

Generating Functions in Engineering and the Applied Sciences

Rajan Chattamvelli VIT University, Vellore, Tamil Nadu

Ramalingam Shanmugam Texas State University, San Marcos, TX

SYNTHESIS LECTURES ON ENGINEERING #37

Morgan & Claypool Publishers

© 2019 by Morgan & Claypool

Generating Functions in Engineering and the Applied Sciences
Rajan Chattamvelli and Ramalingam Shanmugam

www.morganclaypool.com

ISBN: 9781681736389 (paperback)
ISBN: 9781681736396 (ebook)
ISBN: 9781681736402 (hardcover)

DOI 10.2200/S00942ED1V01Y201907ENG037

A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING, Lecture #37
Series ISSN: Print 1939-5221, Electronic 1939-523X

ABSTRACT This is an introductory book on generating functions (GFs) and their applications. It discusses commonly encountered generating functions in engineering and applied sciences, such as ordinary generating functions (OGF), exponential generating functions (EGF), probability generating functions (PGF), etc. Some new GFs, like Pochhammer generating functions for both rising and falling factorials, are introduced in Chapter 2. Two novel GFs, called the "mean deviation generating function" (MDGF) and the "survival function generating function" (SFGF), are introduced in Chapter 3. The mean deviations of a variety of discrete distributions are derived using the MDGF. The last chapter discusses a large number of applications in various disciplines, including algebra, analysis of algorithms, polymer chemistry, combinatorics, graph theory, number theory, reliability, epidemiology, bio-informatics, genetics, management, economics, and statistics. Some background knowledge on GFs is often assumed for courses in analysis of algorithms, advanced data structures, digital signal processing (DSP), graph theory, etc. This background is usually provided by either a course on "discrete mathematics" or "introduction to combinatorics." But GFs are also used in automata theory, bio-informatics, differential equations, DSP, number theory, physical chemistry, reliability engineering, stochastic processes, and so on. Students of these courses may not have exposure to discrete mathematics or combinatorics. This book is written in such a way that even those without prior knowledge can easily follow the chapters and apply the lessons learned in their respective disciplines. The purpose is to give a broad exposure to commonly used techniques of combinatorial mathematics, highlighting applications in a variety of disciplines. Any suggestions for improvement will be highly appreciated. Please send your comments to [email protected], and they will be incorporated in subsequent revisions.

KEYWORDS algebra, analysis of algorithms, bio-informatics, CDF generating functions, combinatorics, cumulants, difference equations, discrete mathematics, economics, epidemiology, finance, genetics, graph theory, management, mean deviation generating function, moments, number theory, Pochhammer generating functions, polymer chemistry, power series, recurrence relations, reliability engineering, special numbers, statistics, strided sequences, survival function, truncated distributions.

Contents

List of Tables  xiii
Glossary of Terms  xv

1  Types of Generating Functions  1
  1.1  Introduction  1
    1.1.1  Origin of Generating Functions  1
    1.1.2  Existence of Generating Functions  2
  1.2  Notations and Nomenclatures  3
    1.2.1  Rising and Falling Factorials  4
    1.2.2  Dummy Variable  4
  1.3  Types of Generating Functions  5
  1.4  Ordinary Generating Functions  5
    1.4.1  Recurrence Relations  7
    1.4.2  Types of Sequences  7
    1.4.3  OGF for Partial Sums  9
  1.5  Exponential Generating Functions (EGF)  11
  1.6  Pochhammer Generating Functions  14
    1.6.1  Rising Pochhammer GF (RPGF)  14
    1.6.2  Falling Pochhammer GF (FPGF)  15
  1.7  Other Generating Functions  16
    1.7.1  Auto-Covariance Generating Function  16
    1.7.2  Information Generating Function (IGF)  16
    1.7.3  Generating Functions in Graph Theory  16
    1.7.4  Generating Functions in Number Theory  17
    1.7.5  Rook Polynomial Generating Function  17
    1.7.6  Stirling Numbers of Second Kind  17
  1.8  Summary  18

2  Operations on Generating Functions  19
  2.1  Basic Operations  19
    2.1.1  Extracting Coefficients  20
    2.1.2  Addition and Subtraction  20
    2.1.3  Multiplication by Non-Zero Constant  20
    2.1.4  Linear Combination  21
    2.1.5  Shifting  21
    2.1.6  Functions of Dummy Variable  23
    2.1.7  Convolutions and Powers  24
    2.1.8  Differentiation and Integration  28
    2.1.9  Integration  30
  2.2  Invertible Sequences  31
  2.3  Composition of Generating Functions  32
  2.4  Summary  32

3  Generating Functions in Statistics  33
  3.1  Generating Functions in Statistics  33
    3.1.1  Types of Generating Functions  34
  3.2  Probability Generating Functions (PGF)  34
    3.2.1  Properties of PGF  39
  3.3  Generating Functions for CDF  40
  3.4  Generating Functions for Survival Functions  42
  3.5  Generating Functions for Mean Deviation  43
  3.6  MD of Some Distributions  45
    3.6.1  MD of Geometric Distribution  45
    3.6.2  MD of Binomial Distribution  45
    3.6.3  MD of Poisson Distribution  47
    3.6.4  MD of Negative Binomial Distribution  47
  3.7  Moment Generating Functions (MGF)  48
    3.7.1  Properties of Moment Generating Functions  49
  3.8  Characteristic Functions  52
    3.8.1  Properties of Characteristic Functions  53
  3.9  Cumulant Generating Functions  54
    3.9.1  Relations Among Moments and Cumulants  54
  3.10  Factorial Moment Generating Functions  56
  3.11  Conditional Moment Generating Functions (CMGF)  58
  3.12  Generating Functions of Truncated Distributions  58
  3.13  Convergence of Generating Functions  59
  3.14  Summary  59

4  Applications of Generating Functions  61
  4.1  Applications in Algebra  61
    4.1.1  Series Involving Integer Parts  62
    4.1.2  Permutation Inversions  62
    4.1.3  Generating Function of Strided Sequences  64
  4.2  Applications in Computing  64
    4.2.1  Merge-Sort Algorithm Analysis  65
    4.2.2  Quick-Sort Algorithm Analysis  66
    4.2.3  Binary-Search Algorithm Analysis  67
    4.2.4  Well-Formed Parentheses  68
    4.2.5  Formal Languages  68
  4.3  Applications in Combinatorics  69
    4.3.1  Combinatorial Identities  69
    4.3.2  New Generating Functions from Old  71
    4.3.3  Recurrence Relations  71
    4.3.4  Towers of Hanoi Puzzle  73
  4.4  Applications in Graph Theory  74
    4.4.1  Graph Enumeration  74
    4.4.2  Tree Enumeration  76
  4.5  Applications in Chemistry  76
    4.5.1  Polymer Chemistry  77
    4.5.2  Counting Isomers of Hydrocarbons  78
  4.6  Applications in Epidemiology  79
    4.6.1  Disease Progression and Containment  81
  4.7  Applications in Number Theory  82
  4.8  Applications in Statistics  83
    4.8.1  Sums of IID Random Variables  83
    4.8.2  Infinite Divisibility  84
    4.8.3  Applications in Stochastic Processes  84
  4.9  Applications in Reliability  85
    4.9.1  Series-Parallel Systems  86
  4.10  Applications in Bioinformatics  87
    4.10.1  Lifetime of Cellular Proteins  87
    4.10.2  Sequence Alignment  87
  4.11  Applications in Genomics  88
  4.12  Applications in Management  89
    4.12.1  Annuity  91
  4.13  Applications in Economics  91
  4.14  Summary  92

Bibliography  93

Index  97

Tables

1.1  Some standard generating functions  6
2.1  Convolution as diagonal sum  24
2.2  Summary of convolutions and powers  26
2.3  Summary of generating functions operations  28
3.1  Summary table of generating functions  35
3.2  MD of geometric distribution  46
3.3  MD of binomial distribution  46
3.4  MD of Poisson distribution  47
3.5  MD of negative binomial distribution  48
3.6  Table of characteristic functions  53
3.7  Summary table of zero-truncated generating functions  59

Glossary

CDFGF  Cumulative Distribution Function GF
CGF    Cumulant Generating Function
ChF    Characteristic Function
CMGF   Conditional Moment Generating Function
EGF    Exponential Generating Function
FCGF   Factorial Cumulant Generating Function
FMGF   Factorial Moment Generating Function
FPGF   Falling Pochhammer Generating Function
GF     Generating Function
HCW    Health Care Worker
IGF    Information Generating Function
MDGF   Mean Deviation Generating Function
MGF    Moment Generating Function
OGF    Ordinary Generating Function
PGF    Probability Generating Function
PMF    Probability Mass Function
RoC    Radius of Convergence
RPGF   Rising Pochhammer Generating Function
SF     Survival Function
SFGF   Survival Function Generating Function
SIR    Susceptible, Infected, Recovered model
SIS    Susceptible, Infected, Susceptible model

CHAPTER 1

Types of Generating Functions

After finishing this chapter, readers will be able to:

• explain the meaning and uses of generating functions.
• describe ordinary generating functions.
• explore exponential generating functions.
• apply generating functions to enumeration problems.
• comprehend Pochhammer generating functions.
• outline some other generating functions.

1.1 INTRODUCTION

A GF is a mathematical technique for concisely representing a known ordered sequence as a simple algebraic function. In essence, it takes a sequence as input and produces a continuous function in one or more dummy (arbitrary) variables as output. The input can be real or complex numbers, matrices, symbols, functions, or variables, including random variables; it depends on the area of application. For example, the input is amino-acid symbols or protein sequences in bioinformatics, random variables in statistics, digital signals in signal processing, the number of infected persons in a population in epidemiology, etc. The output can contain unknown parameters, if any.

1.1.1 ORIGIN OF GENERATING FUNCTIONS

Generating functions (GFs) were first introduced by Abraham de Moivre circa 1730, whose work on them originated in enumeration problems. They became popular through the works of Leonhard Euler, who used them to find the number of divisors of a positive integer (Chapter 4). The literal meaning of enumerate is to count or reckon something: persons or objects, activities, entities, or known processes. Counting the number of occurrences of a discrete outcome is fundamental in enumerative combinatorics. For example, it may be placing objects into urns, selecting entities from a finite population, arranging objects linearly or circularly, etc. Thus, GFs are very useful for exploring discrete problems involving finite or infinite sequences.


Definition 1.1 (Definition of Generating Function) A GF is a simple and concise expression in one or more dummy variables that captures the coefficients of a finite or infinite sequence, and generates a quantity of interest using calculus or algebraic operations, or simple substitutions.

GFs are used in engineering and applied sciences to generate different quantities with minimal work. As they convert practical problems that involve sequences into the continuous domain, calculus techniques can be employed to gain more insight into the problems or sequences. They are used in quantum mechanics, mathematical chemistry, population genetics, bio-informatics, vector differential equations, epidemiology, stochastic processes, etc., and their applications in numerical computing include best uniform polynomial approximations and finding characteristic polynomials. The average complexity of computer algorithms is obtained using GFs that have closed form (Chapter 4). This allows asymptotic estimates to be obtained easily. GFs are also used in polynomial multiplication in symbolic programming languages. In statistics, they are used with discrete distributions to generate probabilities, moments, cumulants, factorial moments, mean deviations, etc. (Chapter 3). Partition functions, Green functions, and Fourier series expansions are other examples from various applied-science fields. GFs are known by different names in these fields. For example, z-transforms used in digital signal processing (DSP), characteristic equations used in econometrics and numerical computing, canonical functions used in some engineering fields, and annihilator polynomials used in analysis of algorithms are all special types of GFs.

Note, however, that the Maclaurin series in calculus, which expands an arbitrary continuously differentiable function around the point zero as f(x) = f(0) + x f′(0) + (x²/2!) f″(0) + ⋯, is not considered a GF in this book because it involves derivatives of a function.

1.1.2 EXISTENCE OF GENERATING FUNCTIONS

GFs are much easier to work with than a long list of numbers or symbols. This does not mean that any sequence can be converted into a GF. For instance, a random sequence may not have a nice and compact GF. Nevertheless, if the sequence follows a mathematical rule (like a recurrence relation, geometric series, or harmonic series), it is possible to express it compactly using a GF. Thus, GFs are extensively used in counting problems, where the primary aim is to count the number of ways a task can be accomplished when other parameters are known. A typical example is counting the objects of a finite collection or arrangement. Explicit enumeration is applicable only when the size (number of elements) is small. When this size becomes large, either computing devices must be used, or alternate mathematical techniques like combinatorics, graph theory, dynamic programming, etc., employed. In addition to the size, the enumeration problem may also involve one or more parameters (usually discrete). This is where GFs become very helpful. Many useful relationships among the elements of the sequence can then be derived from the corresponding GF. If the GF has a closed form, it is possible to obtain a closed form for its coefficients as well. GFs are powerful tools to prove facts and relationships among the coefficients. Although they are called generating "functions," they are not like mathematical functions that have a domain and range. However, most of the GFs encountered in this book are polynomials in the dummy variable.

Convergence of Generating Functions

Convergence of the GF is assumed in some fields for limited values of the parameters or dummy variables. For example, a GF associated with real-valued random variables assumes that the dummy variable t (defined below) is in the range [0, 1], whereas for complex-valued random variables the range is [−1, +1]. Similarly, the GF used in signal processing assumes that t ∈ [−1, +1]. As the input can be complex numbers in some signal-processing applications, the GF can also be complex. A formal power series implies that nothing is assumed about convergence. As shown below, the sequence 1, 1, 1, … has GF F(x) = 1/(1 − x), which converges in the region |x| < 1, while the sub-sequence 1, 0, 1, 0, 1, … has GF F(x) = 1/(1 − x²), which converges for −1 < x < 1. The bound R such that the series converges within it is called the Radius of Convergence (RoC); the series converges for all |x| < R. Thus (1, 1, 1, …) (an infinite sequence) has ordinary GF (OGF) 1/(1 − x), so that the RoC is 1. Similarly, (1, c, c², c³, …) is generated by 1/(1 − cx), so that the RoC is 1/c. The RoC, a finite real number, is given by the ratio 1/R = lim_{n→∞} |a_{n+1}/a_n|, where the vertical bars denote absolute value for real coefficients. It can easily be found by taking the ratio of two successive terms of the series, if the nth term is expressed as a simple mathematical function. Convergence of a GF is often ignored in this book because it is rarely evaluated, if at all, and then only for dummy variable values 0 or 1 (an exception appears in Chapter 4, page 88).
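The ratio formula for the RoC can be illustrated numerically. The following is a minimal Python sketch (the function and variable names are chosen here for illustration, not taken from the book): it applies 1/R = lim |a_{n+1}/a_n| to the coefficients a_n = cⁿ of 1/(1 − cx), for which the RoC should come out as 1/c.

```python
# Illustrative sketch: estimating the radius of convergence from the
# ratio 1/R = lim |a_{n+1}/a_n| of successive series coefficients.

def roc_from_ratio(coeffs):
    """Estimate R from the ratio of the last two coefficients."""
    a_n, a_next = coeffs[-2], coeffs[-1]
    return abs(a_n / a_next)

c = 3.0
coeffs = [c ** n for n in range(20)]   # coefficients of 1/(1 - 3x)
estimate = roc_from_ratio(coeffs)      # should be close to 1/3
```

For geometric coefficients the ratio is constant, so even the last pair of terms already gives the exact answer; for slowly varying coefficients one would take larger n.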

1.2 NOTATIONS AND NOMENCLATURES

A finite sequence will be denoted by placing square brackets around it, assuming that each element is distinguishable; an English letter will be used to identify it when necessary. An infinite sequence will be denoted by simple brackets (parentheses) and named using a Greek letter. Thus, S = [a₀, a₁, a₂, a₃, a₄] denotes a finite sequence, whereas σ = (a₀, a₁, a₂, a₃, …, a_n, …) denotes an infinite sequence. The summation sign (∑) will denote the sum of the elements that follow. The index of summation will either be explicitly specified or denoted implicitly. Thus, ∑_{k=0}^{∞} p_k t^k is an explicit summation in which k varies from 0 to ∞, whereas ∑_{k≥0} p_k t^k implicitly assumes that the summation varies from 0 to an upper limit (which will be clear from the context). An assumption that n is a power of 2 may simplify the derivation in some cases (see Chapter 4, pp. 65–66). Random variables will be denoted by uppercase letters, and particular values of them by lowercase letters.


1.2.1 RISING AND FALLING FACTORIALS

This section introduces a particular type of the product form presented above. These expressions are useful in finding factorial moments of discrete distributions whose probability mass function (PMF) involves factorials or binomial coefficients. In the literature, these are known as Pochhammer's notations for rising and falling factorials. They will be explored in subsequent chapters.

1. Rising Factorial Notation. Factorial products come in two flavors. In the rising factorial, a variable is incremented successively in each iteration. This is denoted as

    x^(j) = x(x + 1)⋯(x + j − 1) = ∏_{k=0}^{j−1} (x + k) = Γ(x + j)/Γ(x).        (1.1)

2. Falling Factorial Notation. In the falling factorial, a variable is decremented successively at each iteration. This is denoted as

    x_(j) = x(x − 1)⋯(x − j + 1) = ∏_{k=0}^{j−1} (x − k) = x!/(x − j)! = j!·C(x, j),        (1.2)

where C(x, j) denotes a binomial coefficient. Writing Equation (1.1) in reverse order gives the relationship x^(j) = (x + j − 1)_(j). Similarly, writing Equation (1.2) in reverse gives the relationship x_(j) = (x − j + 1)^(j).
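The rising and falling factorials, and the reversal identities above, are easy to check computationally. A small Python sketch, assuming nothing beyond the definitions in Equations (1.1) and (1.2) (the function names are illustrative, not from the book):

```python
# Illustrative sketch: rising and falling factorials (Pochhammer notation).

def rising_factorial(x, j):
    """x^(j) = x (x+1) ... (x+j-1); the empty product (j = 0) is 1."""
    result = 1
    for k in range(j):
        result *= x + k
    return result

def falling_factorial(x, j):
    """x_(j) = x (x-1) ... (x-j+1); the empty product (j = 0) is 1."""
    result = 1
    for k in range(j):
        result *= x - k
    return result
```

For example, rising_factorial(3, 4) gives 3·4·5·6 = 360, and the reversal identities can be checked as rising_factorial(x, j) == falling_factorial(x + j − 1, j) for any integers x and j.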

1.2.2 DUMMY VARIABLE

An ordinary GF (OGF) is denoted by F(t) or G(t), where t is the dummy (arbitrary) variable. It can be any variable you like, and the choice depends on the field of application. Quite often, t is used as the dummy variable in statistics, s in signal processing, x in genetics, bio-informatics, and orthogonal polynomials, z in analysis of algorithms, and L in auto-correlation and time-series analysis.¹ An exponential GF is denoted by H(t). As GFs in some fields are associated with variables (like random variables in statistics), these may appear before the dummy variable. For example, G(x, t) denotes an OGF of a random variable X. Some authors attach x as a subscript, as in G_x(t). The dummy variables assume special values (like 0 or 1) to generate a quantity of interest; uniform and absolute convergence at these points is therefore assumed. The GFs used in statistics can be finite or infinite, because they are defined on (sample spaces of) random variables, or on a sequence of random variables. Note that the GF is also defined for bivariate and multivariate data. A bivariate GF has two dummy variables. This book focuses only on univariate GFs.

Although the majority of GFs encountered in this book are of "sum-type" (additive GFs), there also exist GFs of "product-type," called multiplicative GFs. One example is the GF for the number of partitions of an integer (Chapter 4). The GF for the number of partitions of n into distinct parts is p(t) = ∏_{n=1}^{∞} (1 + tⁿ).

¹ Analysis of algorithms is used either to select an algorithm with minimal execution time from a set of possible choices, or one with minimum storage or network-communication requirements. As the attribute of interest is time, the recurrences are developed as T(n), where n is the problem size. This is one of the reasons for choosing a dummy variable other than t.
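The product-type GF for partitions into distinct parts can be expanded by repeated polynomial multiplication. In the hedged Python sketch below (the function name is illustrative, not from the book), each factor (1 + tⁿ) is multiplied into a truncated coefficient list, and the coefficient of t^k then counts the partitions of k into distinct parts.

```python
# Illustrative sketch: expanding p(t) = prod_{n>=1} (1 + t^n) by truncated
# polynomial multiplication. The coefficient of t^k counts partitions of k
# into distinct parts.

def distinct_partition_counts(max_k):
    """Coefficients of prod_{n=1}^{max_k} (1 + t^n), truncated at t^max_k."""
    coeffs = [0] * (max_k + 1)
    coeffs[0] = 1                          # the empty product is 1
    for n in range(1, max_k + 1):
        # multiply the current series by (1 + t^n), keeping degree <= max_k;
        # iterate k downward so each old coefficient is used exactly once
        for k in range(max_k, n - 1, -1):
            coeffs[k] += coeffs[k - n]
    return coeffs
```

For instance, the coefficient of t³ comes out as 2, matching the two partitions 3 = 3 and 3 = 1 + 2 into distinct parts.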

1.3 TYPES OF GENERATING FUNCTIONS

GFs allow us to describe a sequence in as simple a form as possible. Thus, a GF encodes a sequence in compact form, instead of giving the nth term as output. Depending on what we wish to generate, there are different GFs. For example, the moment generating function (MGF) generates the moments of a population, and the probability generating function (PGF) generates the corresponding probabilities. These are specific to each distribution. An advantage is that if the MGF of an arbitrary random variable X is known, the MGF of any linear combination of the form aX + b can be derived easily. This reasoning holds for other GFs too.
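The linear-combination property just mentioned, which for the MGF takes the form M_{aX+b}(t) = e^{bt} M_X(at) (developed in Chapter 3), can be checked directly for a finite discrete random variable. A minimal Python sketch; the distribution and the names mgf, values, probs are invented here for illustration:

```python
import math

# Illustrative sketch: checking M_{aX+b}(t) = exp(b*t) * M_X(a*t) for a
# small (made-up) discrete distribution by evaluating both sides exactly.

def mgf(values, probs, t):
    """MGF E[exp(t X)] of a finite discrete random variable."""
    return sum(p * math.exp(t * x) for x, p in zip(values, probs))

values = [0, 1, 2]          # support of X (invented for illustration)
probs = [0.5, 0.3, 0.2]     # probabilities of X
a, b, t = 2.0, 3.0, 0.1

lhs = mgf([a * x + b for x in values], probs, t)   # MGF of aX + b at t
rhs = math.exp(b * t) * mgf(values, probs, a * t)  # exp(bt) * M_X(at)
```

Both sides agree to floating-point precision, since each term p·e^{t(ax+b)} factors as e^{bt}·p·e^{(at)x}.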

1.4 ORDINARY GENERATING FUNCTIONS

Let  D .a0 ; a1 ; a2 ; a3 ; : : : an ; : : :/ be a sequence of bounded numbers. Then the power series f .x/ D

1 X

an x n

(1.3)

nD0

is called the Ordinary Generating Function (OGF) of  . Here x is a dummy variable, n is the indexing variable (indexvar) and an0 s are known constants. For different values of an , we get different OGF. The set of all possible values of the indexing variable of the summation in (1.3) is called the “index set.” If the sequence is finite and of size n, we have a polynomial of degree n. Otherwise it is a power series. Such a power series has an inherently powerful enumerative property. As they lend themselves to algebraic manipulations, several useful and interesting results can be derived from them as shown in Chapter 2. There exist a one-to-one correspondence between a sequence, and its GF. This can be represented as .a0 ; a1 ; a2 ; a3 ; : : : ; an ; : : :/ ”

1 X

an x n :

(1.4)

nD0

We get f(x) = (1 - x)^{-1} when all a_n = 1, and (1 + x)^{-1} when a_n = -1 for n odd and a_n = +1 for n even. These are represented as

    (1, 1, 1, 1, ..., 1, ...) ↔ (1 - x)^{-1}
    (1, -1, 1, -1, ..., 1, -1, ...) ↔ (1 + x)^{-1}.


1. TYPES OF GENERATING FUNCTIONS

The OGF is (x^n - 1)/(x - 1) when there are a finite number of 1's (say n of them). Thus, (1, 1, 1, 1, 1) has OGF (x^5 - 1)/(x - 1). Similarly, we get (1 - x^2)^{-1} when the even coefficients are a_{2n} = +1 and the odd coefficients are a_{2n+1} = 0. The OGF is called the Maclaurin series of the function on the left-hand side (LHS) when the right-hand side (RHS) has functional values as coefficients. In the particular case when the sum of all the coefficients is 1, it is called a PGF, which is discussed at length in Chapter 3. Consider the set of all positive integers P = (1, 2, 3, 4, 5, ...). This has OGF 1 + 2x + 3x^2 + 4x^3 + ⋯. We know that 1/(1 - x) = 1 + x + x^2 + x^3 + x^4 + ⋯. Differentiate both sides with respect to (w.r.t.) x to get 1/(1 - x)^2 = 1 + 2x + 3x^2 + 4x^3 + ⋯. Thus, the GF is 1/(1 - x)^2 or equivalently (1 - x)^{-2}. This can be represented as

    (1, 2, 3, 4, ..., n, ...) ↔ (1 - x)^{-2} = 1/(1 - x)^2
    (1, -2, 3, -4, ..., (-1)^n (n + 1), ...) ↔ (1 + x)^{-2} = 1/(1 + x)^2.

See Table 1.1 for more examples.

Table 1.1: Some standard generating functions

    Function         Series                        Type    Notation
    1 ± x            1 ± x                         OGF     (1, ±1, 0, 0, ...)
    1 + 2x + 3x^2    1 + 2x + 3x^2                 OGF     (1, 2, 3, 0, 0, ...)
    1/(1 - x)        Σ_{k≥0} x^k                   OGF     (1, 1, 1, 1, ...)
    1/(1 + x)        Σ_{k≥0} (-1)^k x^k            OGF     (1, -1, 1, -1, ...)
    1/(1 - 2x)       Σ_{k≥0} 2^k x^k               OGF     (1, 2, 4, 8, ...)
    1/(1 - x)^2      Σ_{k≥0} (k + 1) x^k           OGF     (1, 2, 3, 4, ...)
    1/(1 - ax)       Σ_{k≥0} a^k x^k               OGF     (1, a, a^2, a^3, ...)
    1/(1 - x^2)      Σ_{k≥0} x^{2k}                OGF     (1, 0, 1, 0, ...)
    exp(x)           Σ_{k≥0} x^k/k!                EGF     (1, 1, 1, ...)
    exp(ax)          Σ_{k≥0} (ax)^k/k!             EGF     (1, a, a^2, a^3, ...)


1.4.1

RECURRENCE RELATIONS

There are in general two ways to represent the n-th term of a sequence: (i) as a function of n, and (ii) as a recurrence relation between the n-th term and one or more of the previous terms. For example, a_n = 1/n gives an explicit expression for the n-th term. Taking the ratio a_n/a_{n-1} = (n - 1)/n = (1 - 1/n) gives a_n = (1 - 1/n) a_{n-1}, which is a first-order recurrence relation [Chattamvelli and Jones, 1996]. A recurrence relation can be defined either for a sequence, or on the parameters of a function (like a density function in statistics). It expresses the n-th term in terms of one or more prior terms. First-order recurrence relations are those in which the n-th term is related to the (n - 1)-th term. It is called a linear recurrence relation if this relation is linear. First-order linear recurrence relations are easy to solve, and they require one initial condition. A second-order recurrence relation is one in which the n-th term is related to the (n - 1)-th and (n - 2)-th terms. It needs two initial conditions to ensure that the ensuing sequence is unique. One classical example is the Fibonacci numbers. In general, an n-th-order recurrence relation expresses each term in terms of the n previous terms. Consider the Bell numbers defined as B_n = Σ_{k=0}^{n-1} C(n-1, k) B_k. This generates 1, 1, 2, 5, 15, 52, 203, etc. Recurrence relations are extensively discussed in Chapter 4 (page 71).
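As a quick computational check (a Python sketch of ours, not part of the text), the Bell number recurrence quoted above can be iterated directly:

```python
from math import comb

def bell_numbers(count):
    """Bell numbers B_0..B_{count-1} via the recurrence
    B_n = sum_{k=0}^{n-1} C(n-1, k) * B_k, with B_0 = 1."""
    B = [1]
    for n in range(1, count):
        B.append(sum(comb(n - 1, k) * B[k] for k in range(n)))
    return B

print(bell_numbers(7))  # [1, 1, 2, 5, 15, 52, 203]
```

The output reproduces the sequence 1, 1, 2, 5, 15, 52, 203 stated in the text.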

1.4.2

TYPES OF SEQUENCES

There are many types of sequences encountered in engineering. Examples are arithmetic sequences, geometric sequences, exponential sequences, harmonic sequences, etc. An alternating sequence is one in which the sign of the coefficients alternates between plus and minus. This is of the form (a_1, -a_2, a_3, -a_4, ...). The sum of an alternating series may be convergent or divergent. Consider Σ_{n=1}^∞ (-1)^{n+1} a_n. Take a_n = 1/n to get the alternating harmonic series Σ_{n=1}^∞ (-1)^{n+1} (1/n) = 1 - 1/2 + 1/3 - 1/4 + ⋯, which is convergent, as the sum is log(2). Similarly, Σ_{n=0}^∞ (-1)^n/(2n + 1) converges to π/4.

A telescopic series is one in which all terms of its partial sum cancel out except the first or last (or both). Consider Σ_{n=1}^∞ 1/[n(n + 1)] = 1/2 + 1/6 + 1/12 + ⋯. Use partial fractions to write 1/[n(n + 1)] = 1/n - 1/(n + 1), so that when the upper limit of the summation is finite, all terms cancel out except the first and last. Consider the geometric sequence [P, Pr, Pr^2, ..., Pr^n]. It is called a geometric progression in mathematics and a geometric sequence in engineering. It also appears in various physical, natural, and life sciences. As examples, the equations of state of gaseous mixtures (like the van der Waals equation and the Virial equation), mean-energy modeling of oscillators used in crystallography and other fields, and geometric growth and depreciation models used in banking and finance can all be modeled as geometric series. Such series also appear in population genetics, insurance, exponential smoothing models in time-series, etc. Here the first term is P, and the rate of progression (called the common ratio) is r. It is possible to represent the sum of any k terms in terms of 3 variables: P (initial term), r (common ratio), and k. An interesting property of the geometric progression is that a_{k-1} a_{k+1} = a_k^2, i.e., the square of the k-th term is the product of the two terms around it. A geometric series can be finite or infinite. It is called an alternating geometric series if the signs alternate. Let S_n = [1, r, r^2, r^3, ..., r^n] denote a finite geometric sequence. Then the sum Σ_{k=0}^n r^k = (r^{n+1} - 1)/(r - 1). Both the numerator and denominator are negative when 0 < r < 1, so that we could write the RHS in the alternate form (1 - r^{n+1})/(1 - r).

Example 1.1 (OGF of alternating geometric sequence) Find the OGF of the alternating geometric sequence (1, -a, a^2, -a^3, a^4, ...).

Solution 1.1 Consider the infinite series expansion (1 + ax)^{-1} = 1 - ax + a^2 x^2 - a^3 x^3 + ⋯, which is the OGF of the given sequence. Here "a" is any nonzero constant.

Example 1.2 (Find the OGF of a_n) Find the OGF of (i) a_n = 2^n + 3^n and (ii) a_n = 2^n + 1/3^n.

Solution 1.2 The OGF is G(x) = Σ_{n=0}^∞ (2^n + 3^n) x^n. Split this into two terms and use the closed-form formula for the geometric series to get G(x) = 1/(1 - 2x) + 1/(1 - 3x) = (2 - 5x)/[(1 - 2x)(1 - 3x)], which is the required GF. In the second case a similar procedure yields G(x) = 1/(1 - 2x) + 1/(1 - x/3), which is the required OGF.
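The closed form in case (i) can be checked numerically (a sketch in Python using the sympy library; the snippet is ours, not part of the book):

```python
import sympy as sp

x = sp.symbols('x')
# Closed form obtained in Example 1.2 for a_n = 2^n + 3^n
G = (2 - 5*x) / ((1 - 2*x)*(1 - 3*x))
s = sp.series(G, x, 0, 6).removeO()
coeffs = [s.coeff(x, n) for n in range(6)]
print(coeffs)  # [2, 5, 13, 35, 97, 275]
```

Each coefficient equals 2^n + 3^n, confirming the partial fraction split.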

Example 1.3 (OGF of odd positive integers) Find the OGF of the sequence 1, 3, 5, 7, 9, 11, ...

Solution 1.3 The n-th term of the sequence is obviously (2n + 1). Write G(x) = Σ_{n=0}^∞ (2n + 1) x^n. Split this into two terms, and take the constant 2 outside the summation to get G(x) = 2 Σ_{n=0}^∞ n x^n + Σ_{n=0}^∞ x^n. The first sum is 2x/(1 - x)^2 and the second one is 1/(1 - x), so that G(x) = 2x/(1 - x)^2 + 1/(1 - x). Take (1 - x)^2 as a common denominator and simplify to get G(x) = (1 + x)/(1 - x)^2 as the required OGF.

Example 1.4 (OGF of perfect squares) Find the OGF for all positive perfect squares (1, 4, 9, 16, 25, ...).

Solution 1.4 Consider the expression x/(1 - x)^2. From pages 2–3 we see that this is the infinite series expansion

    x/(1 - x)^2 = x + 2x^2 + 3x^3 + ⋯ + n x^n + ⋯.                              (1.5)

Differentiate both sides w.r.t. x to get

    (∂/∂x)[x/(1 - x)^2] = 1 + 2^2 x + 3^2 x^2 + ⋯ + n^2 x^{n-1} + ⋯.            (1.6)

If we multiply both sides by x again, we see that the coefficient of x^n on the RHS is n^2. Hence, x (∂/∂x)[x/(1 - x)^2] is the GF that we are looking for. To find the derivative of f = x/(1 - x)^2, take the log and differentiate to get²

    (1/f)(∂f/∂x) = 1/x + 2/(1 - x) = (1 + x)/[x(1 - x)],                        (1.7)

from which ∂f/∂x = (1 + x)/(1 - x)^3. Multiply this expression by x to get the desired GF of the perfect squares of all positive integers as x(1 + x)/(1 - x)^3. This can be represented symbolically as follows:

                (1, 1, 1, 1, ..., 1, ...) ↔ (1 - x)^{-1} = 1/(1 - x)
    Derivative: (1, 2, 3, 4, ..., n, ...) ↔ (1 - x)^{-2} = 1/(1 - x)^2
    Rightshift: (0, 1, 2, 3, 4, ..., n, ...) ↔ x(1 - x)^{-2} = x/(1 - x)^2      (1.8)
    Derivative: (1, 4, 9, 16, ..., n^2, ...) ↔ (∂/∂x)[x/(1 - x)^2] = (1 + x)/(1 - x)^3
    Rightshift: (0, 1, 4, 9, 16, ..., n^2, ...) ↔ x(1 + x)/(1 - x)^3.
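The last correspondence is easy to check by expanding the GF back into a series (a Python/sympy sketch of ours, not from the book):

```python
import sympy as sp

x = sp.symbols('x')
F = x*(1 + x)/(1 - x)**3               # GF derived above for 0, 1, 4, 9, 16, ...
s = sp.series(F, x, 0, 8).removeO()
coeffs = [s.coeff(x, n) for n in range(8)]
print(coeffs)  # [0, 1, 4, 9, 16, 25, 36, 49]
```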

Example 1.5 (OGF of an even sequence) Find the OGF of the sequence (2, 4, 10, 28, 82, ...).

Solution 1.5 Suppose we subtract 1 from each number in the sequence. Then we get (1, 3, 9, 27, 81, 243, ...), which are the powers of 3. Hence, the above sequence is the sum of (1, 1, 1, 1, ...) + (1, 3, 3^2, 3^3, 3^4, ...), with respective OGFs 1/(1 - x) and 1/(1 - 3x). Add them to get 1/(1 - x) + 1/(1 - 3x) = 2(1 - 2x)/[(1 - x)(1 - 3x)] as the answer.

1.4.3

OGF FOR PARTIAL SUMS

If the GF of a sequence is known in compact form (say F(x)), the GF for the sum of the first n terms of that sequence is given by F(x)/(1 - x). A consequence of this is that if we know the OGF of the sum of the first n terms of a sequence, we can get back the original sequence by multiplying it by (1 - x). As multiplying by (1 - x) is the same as dividing by 1/(1 - x), we can state this as follows:

Theorem 1.1 (OGF of G(x)/(1 - x)) Multiplying an OGF by 1/(1 - x) results in the OGF of the partial sums of the coefficients, and dividing by 1/(1 - x) results in the differenced sequence.

Proof. Let G(x) be the OGF of the infinite sequence (a_0, a_1, ..., a_n, ...). By definition G(x) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n + ⋯. Expand (1 - x)^{-1} as a power series 1 + x + x^2 + ⋯, and multiply with G(x) to get the RHS as

    g(x) = (1 - x)^{-1} G(x) = (1 + x + x^2 + x^3 + ⋯)(a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n + ⋯)
         = (Σ_{j=0}^∞ 1·x^j)(Σ_{k=0}^∞ a_k x^k).                                (1.9)

Change the order of summation to get

    Σ_{k=0}^∞ (Σ_{j=0}^k a_j · 1) x^k = Σ_{k=0}^∞ (Σ_{j=0}^k a_j) x^k = a_0 + (a_0 + a_1)x + (a_0 + a_1 + a_2)x^2 + ⋯.   (1.10)

This is the OGF of the partial sums of the given sequence. □

² Note that the n-th derivative of 1/(1 - x) is n!/(1 - x)^{n+1}, and that of 1/(1 - ax)^b is (n + b - 1)! a^n/[(b - 1)! (1 - ax)^{b+n}].

This result has great implications. Computing the sum of several terms of a sequence appears in several fields of the applied sciences. If the GF of such a sequence has a closed form, it is a simple matter of multiplying the GF by 1/(1 - x) to get the new GF for the partial sums; and multiplying by (1 - x) gives the OGF of the differences. Assume that the OGF of a sequence a_n is known in closed form. We seek the OGF of another sequence d_n = a_n - a_{n-1}. As this is a telescopic series, Σ_{k=1}^n d_k = a_n - a_0. It follows that D(x) = Σ_n d_n x^n = (1 - x) A(x), where A(x) is the OGF of the a_n's.

    (a_0, a_1, a_2, a_3, ..., a_n, ...) ↔ F(x)
    (a_0, a_0 + a_1, a_0 + a_1 + a_2, ...) ↔ F(x)/(1 - x).                      (1.11)

Consider the problem of finding the sum of squares of the first n natural numbers, Σ_{k=1}^n k^2. The OGF of n^2 was found in Example 1.4 as x(1 + x)/(1 - x)^3. If this is divided by (1 - x), we should have the OGF of Σ_{k=1}^n k^2. This gives the OGF as x(1 + x)/(1 - x)^4.
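The partial-sum rule just stated can be checked by expanding the divided GF (a Python/sympy sketch, not part of the book):

```python
import sympy as sp

x = sp.symbols('x')
F = x*(1 + x)/(1 - x)**4               # OGF of n^2 divided by (1 - x)
s = sp.series(F, x, 0, 7).removeO()
coeffs = [s.coeff(x, n) for n in range(7)]
print(coeffs)  # [0, 1, 5, 14, 30, 55, 91]
```

The n-th coefficient is 1^2 + 2^2 + ⋯ + n^2, as Theorem 1.1 predicts.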

Example 1.6 (OGF of the harmonic series) Find the OGF of 1 + 1/2 + 1/3 + ⋯ + 1/n.

Solution 1.6 We know that -log(1 - x) = log(1/(1 - x)) = x + x^2/2 + x^3/3 + x^4/4 + ⋯ = Σ_{k=1}^∞ x^k/k. We also know that when an OGF is multiplied by 1/(1 - x), we get a new OGF in which the coefficients are the partial sums of the coefficients of the original one. Thus, the above sequence has OGF -log(1 - x)/(1 - x). As another example, consider the sequence (1/2, (1/3)(1 + 1/2), (1/4)(1 + 1/2 + 1/3), ...). First form the convolution of log(1 + x) and 1/(1 + x) to get log(1 + x)/(1 + x) = x - x^2 (1 + 1/2) + x^3 (1 + 1/2 + 1/3) - ⋯. Now integrate both sides to get the OGF as (1/2)[log(1 + x)]^2. Similarly, (1/2)[log(1 - x)]^2 is the OGF of (1/2, (1/3)(1 + 1/2), (1/4)(1 + 1/2 + 1/3), ...).
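The harmonic-number OGF can be verified term by term (a Python/sympy sketch of ours):

```python
import sympy as sp

x = sp.symbols('x')
F = -sp.log(1 - x)/(1 - x)             # claimed OGF of the harmonic numbers H_n
s = sp.series(F, x, 0, 6).removeO()
coeffs = [s.coeff(x, n) for n in range(6)]
# H_n = 1 + 1/2 + ... + 1/n (with H_0 = 0)
harmonic = [sum(sp.Rational(1, k) for k in range(1, n + 1)) for n in range(6)]
print(coeffs)  # [0, 1, 3/2, 11/6, 25/12, 137/60]
```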

A direct application of the above result is in proving the combinatorial identity Σ_{k=0}^m (-1)^k C(n, k) = (-1)^m C(n - 1, m). First consider the sequence (-1)^k C(n, k), which has OGF (1 - x)^n. According to the above theorem, the partial sums have OGF (1 - x)^n/(1 - x). Cancel out (1 - x) to get (1 - x)^{n-1}, which is the OGF of the RHS. This proves the result.

Theorem 1.2 If F(t) = Σ_{k=0}^∞ a_k t^k, then G(t) = (F(t) + F(-t))/2 is the GF of Σ_{k=0}^∞ a_{2k} t^{2k}.

Proof. The proof follows easily from the fact that F(-t) has coefficients with negative sign when the index is odd, so that those terms cancel out when we sum both. Similarly, G(t) = (F(t) - F(-t))/2 = Σ_{k=0}^∞ a_{2k+1} t^{2k+1}. If F(t) = Σ_{k=0}^m a_k t^k, where the upper limit is m, then G(t) = (1/2)(F(t) + F(-t)) is the GF of Σ_{k=0}^{⌊m/2⌋} a_{2k} t^{2k}, and G(t) = (1/2)(F(t) - F(-t)) is the OGF of Σ_{k=0}^{⌊m/2⌋} a_{2k+1} t^{2k+1}. Here ⌊x⌋ denotes the greatest integer ≤ x (see Chapter 4, § 4.1.1, pp. 62). If F(t) is an OGF, (1 + F(t))^n = Σ_{k=0}^n C(n, k) F(t)^k is also a GF. □
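The even-part extraction in Theorem 1.2 can be illustrated on the OGF of 2^k (a Python/sympy sketch, not part of the book):

```python
import sympy as sp

t = sp.symbols('t')
F = 1/(1 - 2*t)                        # OGF of (1, 2, 4, 8, ...)
G_even = sp.simplify((F + F.subs(t, -t))/2)  # keeps only even-index terms
s = sp.series(G_even, t, 0, 6).removeO()
coeffs = [s.coeff(t, n) for n in range(6)]
print(coeffs)  # [1, 0, 4, 0, 16, 0]
```

Only the even-indexed coefficients 2^{2k} survive, as the theorem states.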

1.5

EXPONENTIAL GENERATING FUNCTIONS (EGF)

The primary reason for using a GF in applied scientific problems is that it represents an entire sequence as a single mathematical function of a dummy variable, which is easy to investigate and manipulate algebraically. Whereas an OGF is used to count coefficients involving combinations, an EGF is used when the coefficients are permutations or functions thereof. The EGF is preferred to the OGF in those applications in which order matters. Examples are counting strings of fixed width formed using the binary digits (0, 1). Although the bit strings 100 and 010 contain two 0's and one 1, they are distinct.

Definition 1.2 The function

    H(x) = Σ_{n=0}^∞ a_n x^n/n!                                                 (1.12)

is called the exponential generating function (EGF). There are two ways to interpret the divisor of the n-th term. Writing it as

    H(x) = Σ_{n=0}^∞ b_n x^n, where b_n = a_n/n!,                               (1.13)

gives an OGF. But it is the usual practice to write it either as (1.12) or as

    H(x) = Σ_{n=0}^∞ a_n e_n x^n, where e_n = 1/n!,                             (1.14)

where we have two independent multipliers. As the coefficients of the exponential function exp(x) are the reciprocals of the factorials of the natural numbers, EGFs are most suited when the terms of a sequence


grow very fast. The EGF is a convenient way to manipulate such infinite series. Consider the expansion of e^x as

    e^x = Σ_{k=0}^∞ a_k x^k,                                                    (1.15)

where a_k = 1/k!. This can be considered as the OGF of the sequence 1/k!, namely the sequence (1, 1, 1/2!, 1/3!, ...). If k! is always associated with x^k, we could alternately write the above as

    e^x = Σ_{k=0}^∞ a_k x^k/k!,                                                 (1.16)

where a_k = 1 for all values of k. As the denominator n! grows fast for increasing values of n, it can be used to scale down functions that grow too fast. For example, the number of permutations, the number of subsets of an n-element set (which has the form 2^n), etc., when multiplied by x^n/n!, will often result in a converging sequence.³ Thus, e^{2x} can be considered as the GF of the number of subsets. These coefficients grow at different rates in practical applications. If this growth is too fast, the multiplier x^n/n! will often ensure that the overall series is convergent. Quite often, both the OGF and EGF may exist for a sequence. As an example, consider the sequence 1, 2, 4, 8, 16, 32, ... with n-th term a_n = 2^n. The OGF is 1/(1 - 2x), whereas the EGF is e^{2x}. Thus, the EGF may converge in many problems where the OGF either is not convergent or does not exist. Consider (1 + x)^n = Σ_{k=0}^n C(n, k) x^k. We could write this as an EGF by expanding C(n, k) = n!/[k!(n - k)!] as

    (1 + x)^n = Σ_{k=0}^n [n!/(n - k)!] (x^k/k!),                               (1.17)

so that a_k = n!/(n - k)! (which is P(n, k), the number of permutations of n things taken k at a time).

When the OGF is of the form 1/(1 + ax), where a is positive or negative, the EGF can be obtained by replacing x by (-sign(a)/a)[1 - exp(-ax)]. As shown above, the OGF of (1, 1, 1, ...) is 1/(1 - x). Replace x by [1 - exp(-x)] to get the EGF as 1/(1 - [1 - exp(-x)]) = exp(x). Writing 1/(1 - x) = Σ_{n=0}^∞ x^n and multiplying the numerator and denominator of each term by n! gives Σ_{n=0}^∞ n! · x^n/n! = 1/(1 - x). This shows that the EGF of (1, 1!, 2!, 3!, ...) is 1/(1 - x). As the number of permutations of size n is n!, this is the GF for permutations. Now consider

    (e^{bt} - e^{at})/(b - a) = t/1! + [(b^2 - a^2)/(b - a)] t^2/2! + [(b^3 - a^3)/(b - a)] t^3/3! + ⋯ + [(b^n - a^n)/(b - a)] t^n/n! + ⋯.   (1.18)

³ The sequence Σ_{n≥0} n! x^n converges only at x = 0.

The general solution to the Fibonacci recurrence is of the form (β^n - α^n)/(β - α), which matches the coefficient of t^n/n! in Equation (1.18) with "a" replaced by α and "b" replaced by β, where α = (1 - √5)/2 and β = (1 + √5)/2. This means that the EGF for the Fibonacci numbers is

    (e^{βt} - e^{αt})/(β - α) = F_0 + F_1 t/1! + F_2 t^2/2! + F_3 t^3/3! + ⋯ + F_n t^n/n! + ⋯,   (1.19)

where F_n is the n-th Fibonacci number. Another reason for the popularity of EGFs is that they have interesting product and composition formulas. See Chapter 2 for details. The EGF has x^n/n! as multiplier. Some researchers have extended it by replacing n! by the n-th Fibonacci number or its factorial F_n!, resulting in the "Fibonential" GF. Several extensions of the EGF are available for specific problems. One of them is the tree-like GF (TGF) defined as F(x) = Σ_{n≥0} a_n n^{n-1} x^n/n!, which has applications in algorithm analysis, coding theory, etc.

Example 1.7 (k-th derivative of an EGF) Prove that D^k H(x) = Σ_{n=0}^∞ a_{n+k} x^n/n!.

Solution 1.7 Consider H(x) = a_0 + a_1 x/1! + a_2 x^2/2! + a_3 x^3/3! + ⋯ + a_k x^k/k! + ⋯. Take the derivative w.r.t. x of both sides. As a_0 is a constant, its derivative is zero. Use the derivative of x^n, namely n·x^{n-1}, for each term to get H′(x) = a_1 + a_2 x/1! + a_3 x^2/2! + ⋯ + a_k x^{k-1}/(k - 1)! + ⋯. Differentiate again (this time a_1, being a constant, vanishes) to get H″(x) = a_2 + a_3 x/1! + a_4 x^2/2! + ⋯ + a_k x^{k-2}/(k - 2)! + ⋯. Repeat this process k times. All terms whose coefficients are below a_k will vanish. What remains is H^{(k)}(x) = a_k + a_{k+1} x/1! + a_{k+2} x^2/2! + ⋯. This can be expressed using summation notation as H^{(k)}(x) = Σ_{n=k}^∞ a_n x^{n-k}/(n - k)!. Using the change of indexvar introduced in Section 1.5 of Shanmugam and Chattamvelli [2016], this can be written as H^{(k)}(x) = Σ_{n=0}^∞ a_{n+k} x^n/n!. Now if we put x = 0, all higher-order terms vanish except the constant a_k.

Example 1.8 (OGF of balls in urns) Find the OGF for the number of ways to distribute n distinguishable balls into m distinguishable urns (or boxes) in such a way that none of the urns is empty.

Solution 1.8 Let u_{n,m} denote the total number of ways. Assume that the i-th urn has n_i balls, so that all urn contents should add up to n. That gives n_1 + n_2 + ⋯ + n_m = n, where each n_i > 0. Let x/1! + x^2/2! + x^3/3! + ⋯ denote the GF for just one urn (where we have omitted the constant term due to our assumption that the urn is not empty, meaning that there must be at least one ball in it). As there are m such urns, the GF for all of them taken together is (x/1! + x^2/2! + x^3/3! + ⋯)^m. The coefficient of x^n is what we are seeking. This is the sum

Σ_{n_1 + n_2 + ⋯ + n_m = n} n!/[n_1! n_2! ⋯ n_m!]. Denote the GF for u_{n,m} by H(x). Then

    H(x) = Σ_{k=m}^n a_k x^k/k! = (x/1! + x^2/2! + x^3/3! + ⋯)^m,               (1.20)

where a_k = u_{k,m} and the lower limit is obviously m (because this corresponds to the case where each urn gets one ball), and the upper limit is n. The RHS is easily seen to be (e^x - 1)^m. Expand it using the binomial theorem to get

    (e^x - 1)^m = e^{mx} - C(m, 1) e^{(m-1)x} + C(m, 2) e^{(m-2)x} - ⋯ + (-1)^m.   (1.21)

Expand each of the terms of the form e^{kx} and collect the coefficients of x^n to get the answer as

    u_{n,m} = m^n - C(m, 1)(m - 1)^n + C(m, 2)(m - 2)^n - ⋯ + (-1)^{m-1} C(m, m - 1) · 1^n,   (1.22)

because the last term in the expansion (1.21) does not contribute to x^n.

Example 1.9 (EGF of Laguerre polynomials) Prove that the Laguerre polynomials are the EGF of (-1)^k C(n, k) for k = 0, 1, 2, ..., n.

Solution 1.9 The Laguerre polynomials are defined as

    L_n(x) = Σ_{k=0}^n (-1)^k [C(n, k)/k!] x^k.                                 (1.23)

Writing this as Σ_{k=0}^n (-1)^k C(n, k) (x^k/k!) shows that the EGF of (-1)^k C(n, k) is the classical Laguerre polynomial.

1.6

POCHHAMMER GENERATING FUNCTIONS

Analogous to the formal power series expansions given above, we could also define Pochhammer GFs as follows. Here the series remains the same, but the dummy variable is of either rising-factorial or falling-factorial type.

1.6.1

RISING POCHHAMMER GF (RPGF)

If (a_0, a_1, ...) is a sequence of numbers, the RPGF is defined as

    RPGF(x) = Σ_{k≥0} a_k x^{(k)},                                              (1.24)

where x^{(k)} = x(x + 1) ⋯ (x + k - 1). Note that this is different from the OGF or EGF obtained when the known coefficients follow the Pochhammer factorial, as in

    G(x) = Σ_{k≥0} a^{(k)} x^k,                                                 (1.25)

where a^{(k)} = a(a + 1) ⋯ (a + k - 1). An example of an EGF of this type is

    (1 - x)^{-b} = Σ_{k=0}^∞ b^{(k)} x^k/k!,                                    (1.26)

which is the EGF of the Pochhammer numbers b^{(k)} (see also the hypergeometric series on page 38).

1.6.2

FALLING POCHHAMMER GF (FPGF)

The dummy variable is of falling-factorial type in this type of GF:

    FPGF(x) = Σ_{k≥0} a_k x_{(k)}.                                              (1.27)

Consider the Stirling numbers of the second kind. The FPGF is x^n because Σ_{k=0}^n S(n, k) x_{(k)} = x^n. The OGF of the Stirling numbers of the first kind is Σ_{k=0}^n s(n, k) x^k = x_{(n)}. The EGF of the Stirling numbers of the first kind is Σ_{n=k}^∞ s(n, k) x^n/n! = (1/k!)(log(1 + x))^k, and that of the Stirling numbers of the second kind is Σ_{n=k}^∞ S(n, k) x^n/n! = (e^x - 1)^k/k!. The falling factorial can also be applied to the constants to get the falling EGF for the Pochhammer numbers b_{(k)}:

    FPGF = Σ_{k=0}^∞ b_{(k)} x^k/k!.                                            (1.28)

This shows that both the sequence terms and the dummy variable can be written using the Pochhammer symbol, which results in four different types of GFs for the OGF and EGF. The falling dummy variable technique was introduced by James Stirling in 1725 to represent an analytic function f(z) in terms of difference polynomials as f(z) = Σ_{k=0}^∞ a_k z_{(k)}. Some examples are z^2 = z(z - 1) + z; z^3 = z(z - 1)(z - 2) + 3z(z - 1) + z; z^4 = z(z - 1)(z - 2)(z - 3) + 6z(z - 1)(z - 2) + 7z(z - 1) + z, etc.

Example 1.10 Find the OGF for the number of ways to color the vertices of a complete graph K_n.

Solution 1.10 Fix any one of the nodes and give it a color; with x colors available there are x choices. Each remaining node is adjacent to all previously colored ones, so the next node has x - 1 choices. Continue arguing like this until a single node is left. Hence, the GF is x(x - 1)(x - 2) ⋯ (x - n + 1), or x_{(n)}, as the required GF.
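The FPGF identity Σ_{k=0}^n S(n, k) x_{(k)} = x^n quoted above can be checked symbolically (a Python/sympy sketch of ours, not part of the book):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
n = 4
# Sum of S(n, k) times the falling factorial x_(k) should collapse to x^n
expr = sum(stirling(n, k) * sp.ff(x, k) for k in range(n + 1))
print(sp.expand(expr))  # x**4
```

Here `sp.ff` is sympy's falling factorial and `stirling` defaults to the second kind.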


1.7

OTHER GENERATING FUNCTIONS

There are many other GFs used in various fields. Some of them are simple modifications of the dummy variable, while others are combinations. A few of them are given below.

1.7.1

AUTO-COVARIANCE GENERATING FUNCTION

Consider the GF

    P(z) = 2a_0 + a_1(z + 1/z) + a_2(z^2 + 1/z^2) + ⋯ + a_k(z^k + 1/z^k) + ⋯,   (1.29)

where the a_k's are the known coefficients. This is used in some time series models. Separate the z and 1/z terms to get a_0 + a_1 z + a_2 z^2 + ⋯ and a_0 + a_1/z + a_2/z^2 + ⋯, which can be written as two separate GFs, G_1(z) + G_2(1/z). See Shishebor et al. [2006] for auto-covariance GFs for periodically correlated autoregressive processes.

Example 1.11 (OGF of auto-correlation) If the auto-correlation sequence of a discrete-time process X[n] is given by R[n] = a^{|n|}, where |a| < 1, find the auto-correlation GF.

Solution 1.11 Split the range into (-∞ to -1) and (0 to ∞) to get the OGF as Σ_{n=-∞}^{-1} a^{-n} t^n + Σ_{n=0}^∞ a^n t^n. This simplifies to (a/t)(1 - a/t)^{-1} + 1/(1 - at) = a/(t - a) + 1/(1 - at).
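The closed form in Solution 1.11 can be checked against a truncated two-sided sum (a Python sketch of ours; the particular values of a and t are hypothetical, chosen so both geometric tails converge):

```python
from fractions import Fraction

# Need |a| < 1 and |a| < |t| < 1/|a| for both sums to converge.
a, t = Fraction(1, 2), Fraction(2, 3)
# Truncated two-sided sum of R[n] t^n = a^{|n|} t^n
total = sum(a**abs(n) * t**n for n in range(-80, 81))
closed = a/(t - a) + 1/(1 - a*t)        # closed form from Solution 1.11
print(float(total), float(closed))
```

The two printed values agree to within the truncation error.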

1.7.2

INFORMATION GENERATING FUNCTION (IGF)

Let p_1, p_2, ..., p_n be the probabilities of a discrete random variable, so that Σ_{k=1}^n p_k = 1. The IGF of a discrete random variable is defined as I(t) = Σ_{k=1}^n p_k^t, where the auxiliary variable t is real or complex depending on the nature of the probability distribution. Differentiate w.r.t. t and put t = 1 to get

    H(p) = -∂I(t)/∂t |_{t=1} = -Σ_{k=1}^n p_k log(p_k),                         (1.30)

which is Shannon's entropy. Differentiate m times and put t = 1 to get

    H^{(m)}(p) = (-1)^m (∂/∂t)^m I(t) |_{t=1} = Σ_{k=1}^n p_k (-log(p_k))^m,    (1.31)

which is the expected value of the m-th power of -log(p).

1.7.3

GENERATING FUNCTIONS IN GRAPH THEORY

Graph theory is a branch of discrete mathematics that deals with relationships (adjacency and incidence) among a finite set of elements. As a graph with n vertices is a subset of the set of C(n, 2) pairs of vertices, there exist a total of 2^{C(n,2)} graphs on n vertices. Hence, the GF for graphs with n vertices is G_n(x) = Σ_{k=0}^{C(n,2)} G_{n,k} x^k, where G_{n,k} is the number of graphs with n vertices and k edges. The total number of simple labeled graphs with n vertices and m edges is C(C(n, 2), m). This in turn results in the EGF Σ_{n=0}^∞ 2^{C(n,2)} x^n/n! = e^{C(x)}, where C(x) is the EGF of connected graphs. These are further discussed in Chapter 4 (pp. 74).

1.7.4

GENERATING FUNCTIONS IN NUMBER THEORY

There are many useful GFs used in number theory. Some of them are discussed in subsequent chapters. Examples are the Fibonacci and Lucas numbers, Bell numbers, Catalan numbers, Lah numbers, Bernoulli numbers, Stirling numbers, Euler's φ(n), etc. The Lah numbers satisfy the recurrence L(n, k) = L(n - 1, k - 1) + (n + k - 1) L(n - 1, k). They have EGF Σ_{n=0}^∞ L(n, k) t^n/n! = (1/k!)(t/(1 - t))^k. Replacing t by -t, this could also be written as (-1)^k (1/k!)(t/(1 + t))^k = Σ_{n=0}^∞ (-1)^n L(n, k) t^n/n!.

1.7.5

ROOK POLYNOMIAL GENERATING FUNCTION

Consider an n × n chessboard. Rooks can be placed in cells such that no two rooks are in the same row or column. Let r_k denote the number of ways to place k such non-attacking rooks on the board. Obviously r_0 = 1 and r_1 = n^2 (because a single rook can be placed anywhere), and r_n = n!. The rook polynomial GF can now be defined as

    R(t) = r_0 + r_1 t + r_2 t^2 + ⋯.                                           (1.32)

The rook polynomial for an n × n chessboard is given by Σ_{k=0}^n k! C(n, k)^2 t^k. There are r_1 = n^2 ways to place one rook. Suppose there are k distinguishable rooks. The first can be placed in n^2 ways, which marks off one row and one column (because rooks attack horizontally and vertically). There are (n - 1)^2 squares remaining, so that the second one can be placed in (n - 1)^2 ways, and so on. As the rooks can be arranged among themselves in k! ways, we get the GF as Σ_{k=0}^n [n^2 (n - 1)^2 (n - 2)^2 ⋯ (n - k + 1)^2/k!] t^k. Multiply and divide by k! and use n(n - 1)(n - 2) ⋯ (n - k + 1)/k! = C(n, k) to get the result.
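The rook polynomial coefficients are straightforward to tabulate (a Python sketch of ours; the helper name is hypothetical):

```python
from math import comb, factorial

def rook_coeffs(n):
    """Coefficients r_k = k! * C(n, k)**2 of the rook polynomial
    for an n x n board."""
    return [factorial(k) * comb(n, k)**2 for k in range(n + 1)]

print(rook_coeffs(3))  # [1, 9, 18, 6]
```

Note that r_1 = n^2 = 9 and r_n = n! = 6, matching the boundary values stated above.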

1.7.6

STIRLING NUMBERS OF SECOND KIND

There are two types of Stirling numbers: the first kind, denoted s(n, k), and the second kind, denoted S(n, k). Consider a set S with n elements. We wish to partition it into k (< n) non-empty subsets. S(n, k) denotes the number of partitions of a finite set of size n into k subsets. It can also be considered as the number of ways to distribute n distinguishable balls into k indistinguishable cells with no cell empty.


The Stirling numbers of the second kind satisfy the recurrence relation S(n, k) = k S(n - 1, k) + S(n - 1, k - 1). The OGF for the Stirling numbers of the second kind is given by

    Σ_{n=0}^∞ S(n, k) x^n = x^k/[(1 - x)(1 - 2x)(1 - 3x) ⋯ (1 - kx)].           (1.33)
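The OGF (1.33) can be checked by expanding the rational function for, say, k = 3 (a Python/sympy sketch of ours, not part of the book):

```python
import sympy as sp

x = sp.symbols('x')
k = 3
# Right-hand side of (1.33): x^k / [(1 - x)(1 - 2x)...(1 - kx)]
G = x**k / sp.Mul(*[1 - j*x for j in range(1, k + 1)])
s = sp.series(G, x, 0, 8).removeO()
coeffs = [s.coeff(x, n) for n in range(8)]
print(coeffs)  # [0, 0, 0, 1, 6, 25, 90, 301]
```

The nonzero coefficients 1, 6, 25, 90, 301 are the known values of S(n, 3) for n = 3, ..., 7.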

A simplified form results when n is represented as a function of k (displaced by another integer):

    Σ_{n=0}^∞ S(m + n, m) x^n = 1/[(1 - x)(1 - 2x)(1 - 3x) ⋯ (1 - mx)].         (1.34)

1.8

SUMMARY

This chapter introduced the basic concepts of GFs. Topics covered include the structure of GFs, types of GFs, ordinary and exponential GFs, etc. The Pochhammer GFs for rising and falling factorials were also introduced. Some special GFs, like auto-covariance GFs, information GFs, and those encountered in number theory and graph theory, were briefly described. Multiplying an OGF by 1/(1 - x) results in the OGF of the partial sums of the coefficients. This result is used extensively in Chapter 3.

CHAPTER

2

Operations on Generating Functions

After finishing the chapter, readers will be able to

• do arithmetic on generating functions.
• find linear combinations of generating functions.
• differentiate and integrate generating functions.
• apply left-shift and right-shift of generating functions.
• find convolutions and powers of generating functions.

2.1

BASIC OPERATIONS

The first chapter explored several simple sequences. As a sequence is indexed by the natural numbers, the best way to express an arbitrary term of a sequence seems to be a closed form as a function of the index (n). For example, a_n = 2n - 1 expresses the n-th term as a function of n. Some of the series encountered in practice cannot be described succinctly as shown above. A recurrence relation connecting successive terms may give us good insights for some sequences. Such a recurrence suffices in most problems, especially those used in updating the terms. But it may be impractical to use a recurrence to calculate the coefficients for very large indices (say the 10-millionth term). Given recurrences for two sequences a_n and b_n, how do we get a recurrence for a_n + b_n or for c·a_n? This is where the merit of introducing a GF becomes apparent. It allows us to derive new GFs from existing or already known ones. An infinite sum is denoted either with explicit enumeration of the range, as Σ_{k=0}^∞ p_k t^k, or as Σ_{k≥0} p_k t^k, where the upper limit is to be interpreted appropriately. Most of the following results are apparent when the coefficients of the power series are represented as sets. For example, if (a_0, a_1, ...) and (b_0, b_1, ...) are, respectively, the coefficients of A(t) and B(t), then (a_0 ± b_0, a_1 ± b_1, ...) are the coefficients of the sum and difference A(t) + B(t) and A(t) - B(t), while (c_0, c_1, ...) are the coefficients of A(t) · B(t), where c_0 = a_0 b_0, c_1 = a_0 b_1 + a_1 b_0, etc., and c_n = a_0 b_n + a_1 b_{n-1} + ⋯ + a_n b_0. Two GFs, say F(t) and G(t), are exactly identical when


each of the coefficients of t^k are equal for all k. This holds good for the OGF, EGF, and the other aforementioned GFs. Let F(t) and G(t), defined as

    F(t) = a_0 + a_1 t + a_2 t^2 + ⋯ = Σ_{k=0}^∞ a_k t^k                        (2.1)
    G(t) = b_0 + b_1 t + b_2 t^2 + ⋯ = Σ_{k=0}^∞ b_k t^k,                       (2.2)

be two arbitrary OGFs with compatible coefficients. Here, a_k (respectively b_k) is the coefficient of t^k in the power series expansion of F(t) (resp. G(t)).

2.1.1

EXTRACTING COEFFICIENTS

Quite often, the GF approach allows us to find a compact representation for the sequence at hand, and to extract the terms of a sequence from its GF using simple techniques (power series expansion, differentiation, etc.). Thus it is a two-way tool. Coefficients can often be extracted easily from a GF. But there are situations where we may have to use other mathematical techniques like partial fractions, differentiation, logarithmic transformations, synthetic division, etc. Suppose a GF is of the form p(x)/q(x), where p(x) is either a polynomial or a constant. If the degree of p(x) is less than that of q(x), and q(x) contains products of functions, we may split the entire expression using partial fractions to simplify the coefficient extraction. If the degree of p(x) is greater than or equal to that of q(x), we may use synthetic division to get an expression of the form r(x) + p1(x)/q(x), where the degree of p1(x) is less than that of q(x), and proceed with the partial fraction decomposition of p1(x)/q(x). These are illustrated in the examples below.
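Partial-fraction-based coefficient extraction can be sketched as follows (a Python/sympy example of ours; the particular GF is hypothetical):

```python
import sympy as sp

x = sp.symbols('x')
G = x / ((1 - x)*(1 - 2*x))            # a rational GF with coefficients 2^n - 1
print(sp.apart(G, x))                  # partial fraction decomposition
s = sp.series(G, x, 0, 7).removeO()
coeffs = [s.coeff(x, n) for n in range(7)]
print(coeffs)  # [0, 1, 3, 7, 15, 31, 63]
```

The decomposition splits G into geometric pieces, so the n-th coefficient can be read off as 2^n - 1.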

2.1.2

ADDITION AND SUBTRACTION

The sum and difference of F(t) and G(t) are defined as

    F(t) ± G(t) = (a_0 ± b_0) + (a_1 ± b_1)t + (a_2 ± b_2)t^2 + ⋯ = Σ_{k=0}^∞ (a_k ± b_k) t^k.   (2.3)

The above result can be extended to any number of GFs as F(t) ± G(t) ± H(t) ± ⋯, where each of them is either an OGF or an EGF. Hence, F(t) - G(t) + H(t) is a valid OGF if each of them is an OGF. This is a special case of the linear combination discussed below. Some authors denote this compactly as (F - G + H)(t). This has applications in several fields like statistics, where we work with sums of independently and identically distributed (IID) random variables.

2.1.3

MULTIPLICATION BY NON-ZERO CONSTANT

This is called a scalar product or scaling, and is defined as cF(t) = c Σ_{k=0}^∞ a_k t^k = Σ_{k=0}^∞ c·a_k t^k = Σ_{k=0}^∞ b_k t^k, where b_k = c·a_k and c is a non-zero constant. This technique allows us to represent a variety of sequences using a constant times a single GF. For example, if there are multiple geometric series that differ only in the common ratio, we could represent all of them using this technique. As the constant c is arbitrary, we could also divide a GF (OGF, EGF, Pochhammer GF, etc.) by a non-zero constant.

2.1.4 LINEAR COMBINATION

Scalar multiplication can be extended to a linear combination of OGFs as follows. Let c and d be two non-zero constants. Then cF(t) + dG(t) = Σ_{k=0}^∞ (c a_k + d b_k) t^k. The constants can be positive or negative, which results in a variety of new GFs. This can also be extended to any number (n ≥ 2) of GFs. As a special case, the equality of two GFs mentioned in the first paragraph can be proved as follows. Suppose F(t) and G(t) are two GFs; assume for simplicity that both are OGFs. Then, taking c = +1 and d = −1, we have F(t) − G(t) = Σ_{k=0}^∞ a_k t^k − Σ_{k=0}^∞ b_k t^k, or equivalently Σ_{k=0}^∞ (a_k − b_k) t^k. Equating this to zero shows that it vanishes only when each a_k equals the corresponding b_k, proving the result. The same argument holds for EGFs, Pochhammer GFs, and other GFs.

Example 2.1 (OGF of Arithmetic Progression) Find the OGF of a sequence (a, a + d, a + 2d, …, a + (n−1)d, …) in arithmetic progression (AP) with initial term a and common difference d.

Solution 2.1 Write the OGF as F(t) = a + (a + d) t + (a + 2d) t^2 + ⋯ + (a + nd) t^n + ⋯.

This can be split into two separate OGFs as

F(t) = a (1 + t + t^2 + ⋯) + dt (1 + 2t + 3t^2 + ⋯ + n t^{n−1} + ⋯).   (2.4)

This gives

F(t) = a/(1 − t) + dt/(1 − t)^2.   (2.5)

2.1.5 SHIFTING

If we have a sequence (a_0, a_1, …), shifting it right by one position means moving everything one place to the right and filling the vacant slot on the left with a zero. This results in the sequence (0, a_0, a_1, …), which corresponds to b_0 = 0, b_n = a_{n−1} for n ≥ 1. Similarly, shifting left means moving everything one place to the left and discarding the very first term on the left, which corresponds to b_n = a_{n+1} for n ≥ 0. For a finite sequence, we may have to fill the vacant right-most positions with zeros. Let F(t) = a_0 + a_1 t + a_2 t^2 + ⋯ be the OGF of (a_0, a_1, …). Then tF(t) is an OGF for (0, a_0, a_1, …). In general, t^m F(t) = a_0 t^m + a_1 t^{m+1} + a_2 t^{m+2} + ⋯. This is the OGF of (0, 0, …, 0, a_0, a_1, a_2, …), where the "zero block" in the beginning has m zeros. Simple algebraic operations show that

(F(t) − a_0 − a_1 t − a_2 t^2 − ⋯ − a_{m−1} t^{m−1}) / t^m = a_m + a_{m+1} t + a_{m+2} t^2 + ⋯,   (2.6)

which is a shift-left by m positions. Shifting is applicable to both OGFs and EGFs, and will be used in the following paragraphs.

Example 2.2 Let G(t) be the OGF of a sequence (a_0, a_1, …). What is the sequence whose OGF is (1 + t) G(t)?

Solution 2.2 Write (1 + t) G(t) = G(t) + t G(t). It follows easily that this is the OGF of (a_0, a_0 + a_1, a_1 + a_2, …), because t G(t) is the aforementioned right-shifted sequence.

Example 2.3 Find the OGF of the sequence (1, 3, 7, 15, 31, …).

Solution 2.3 The kth term is obviously (2^k − 1) for k ≥ 1. Let F(t) denote the OGF of the given sequence. Then F(t) = Σ_{k=1}^∞ (2^k − 1) t^k. Split this into two series. The first one is Σ_{k=1}^∞ 2^k t^k = 2t + 4t^2 + ⋯ = 1/(1 − 2t) − 1. The second one is Σ_{k=1}^∞ t^k = t/(1 − t). Hence, F(t) = 1/(1 − 2t) − 1 − t/(1 − t), which simplifies to (1 − 2t + 2t^2)/[(1 − t)(1 − 2t)] − 1.
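The shift rules above are easy to verify on truncated coefficient lists. A small illustrative sketch (the helper names are mine, not the book's):

```python
def shift_right(a):
    # (a0, a1, ...) -> (0, a0, a1, ...): the OGF becomes t*F(t)
    return [0] + a[:-1]

def shift_left(a):
    # (a0, a1, ...) -> (a1, a2, ...): the OGF becomes (F(t) - a0)/t
    return a[1:] + [0]

def add(a, b):
    # coefficient-wise sum, i.e., F(t) + G(t)
    return [x + y for x, y in zip(a, b)]

# Example 2.2: (1 + t)G(t) is the OGF of (a0, a0 + a1, a1 + a2, ...)
g = [5, 7, 11, 13]
print(add(g, shift_right(g)))  # -> [5, 12, 18, 24]
```

Note that on a truncated list a right shift drops the last stored coefficient, mirroring the finite-sequence remark above.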

Example 2.4 Find the OGF and EGF of the sequence (1^2, 3^2, 5^2, 7^2, …).

Solution 2.4 Let F(t) denote the OGF of the given sequence. The kth term is obviously (2k+1)^2 for k ≥ 0. Hence, F(t) = Σ_{k=0}^∞ (2k+1)^2 t^k. Expand (2k+1)^2 = 4k^2 + 4k + 1 and split into three terms to get F(t) = 4 Σ_{k=0}^∞ k^2 t^k + 4 Σ_{k=0}^∞ k t^k + Σ_{k=0}^∞ t^k. All of these have already been encountered in the previous chapter. Thus, we get F(t) = 4t(1+t)/(1−t)^3 + 4t/(1−t)^2 + 1/(1−t).

Next let H(t) denote the EGF of the given sequence. Then H(t) = Σ_{k=0}^∞ (2k+1)^2 t^k/k!. Expand (2k+1)^2 = 4k^2 + 4k + 1 and split into three terms to get H(t) = 4 Σ_{k=0}^∞ k^2 t^k/k! + 4 Σ_{k=0}^∞ k t^k/k! + Σ_{k=0}^∞ t^k/k!. Write k^2 = k(k−1) + k and split the first term into two EGFs. Then combine the first and second series. This results in

H(t) = 4t^2 Σ_{k=2}^∞ t^{k−2}/(k−2)! + 8t Σ_{k=1}^∞ t^{k−1}/(k−1)! + Σ_{k=0}^∞ t^k/k!.   (2.7)

This simplifies to H(t) = (4t^2 + 8t + 1) e^t.

2.1.6 FUNCTIONS OF DUMMY VARIABLE

The dummy variable in a GF can be assumed to be continuous in an interval. Any type of transformation can then be applied to it within that region. A simple case is the scaling of t as ct in an OGF:

F(ct) = Σ_{k≥0} a_k (ct)^k = Σ_{k≥0} (c^k a_k) t^k,   (2.8)

which is an OGF of {c^k a_k}, where c can be any real number or a fraction of the form p/q. By taking c = 1/2, we get the sequence {a_k/2^k}.

Example 2.5 If F(t) = a_0 + a_1 t + a_2 t^2 + ⋯ is an OGF, express the OGFs (i) (a_0 + a_4 t^4 + a_8 t^8 + a_{12} t^{12} + ⋯) and (ii) (a_0 − a_2 t^2 + a_4 t^4 − ⋯) in terms of F(t).

Solution 2.5 The first sequence retains only those coefficients whose index is a multiple of 4. Using the imaginary constant i = √−1 (so that i^2 = −1, i^4 = +1, and so on), series multisection gives (i) as [F(t) + F(it) + F(−t) + F(−it)]/4. The second sequence has alternating signs on the even-indexed terms, and is generated similarly as [F(it) + F(−it)]/2.

Example 2.6 If F(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + ⋯, express the OGF (a_0 + a_1 t + a_2 t^2 + a_4 t^4 + a_8 t^8 + a_{16} t^{16} + ⋯) in terms of F(t).

Solution 2.6 Writing t^4 = (t^2)^2, t^8 = (t^4)^2, etc., the exponents that appear are exactly the powers t^{2^n}; thus the RHS collects the terms a_{2^n} t^{2^n} of F(t) for n = 0, 1, 2, 3, ….

Example 2.7 Find the OGF of the sequence whose nth term is (n + 1)/2^n for n ≥ 0.

Solution 2.7 Let F(t) denote the OGF:

F(t) = Σ_{n≥0} ((n + 1)/2^n) t^n = Σ_{n≥0} (n + 1) (t/2)^n.   (2.9)

Putting s = t/2 shows that this is (1 − s)^{−2}. Thus, the OGF of the original sequence is (1 − t/2)^{−2}, or 1/(1 − t/2)^2.

Another application of the change of dummy variable is in evaluating sums involving binomial coefficients. Consider the sum Σ_{k=1}^n C(n−1, k−1) a_k. This sum is the coefficient of t^n in the power series expansion in which the dummy variable is changed to t/(1 − t). Symbolically, Σ_{k=1}^n C(n−1, k−1) a_k = [t^n] F(t/(1 − t)), where F(t) is the OGF of {a_k}. Similarly, Σ_{k=1}^n (−1)^{n−k} C(n−1, k−1) a_k = [t^n] F(t/(1 + t)).


2.1.7 CONVOLUTIONS AND POWERS

The literal meaning of convolution is "a thing, form, or shape that is folded in curved or tortuous windings; something complex or difficult to follow." A convolution of GFs is defined for two sequences (usually different, or time-lagged as in DSP) with the same number of elements, as the sum of the products of terms taken in opposite directions. As an example, the convolution of (1, 2, 3) and (4, 5, 6) is 1×6 + 2×5 + 3×4 = 28. Consider the product of two GFs:

Σ_{k=0}^∞ a_k t^k · Σ_{k=0}^∞ b_k t^k = Σ_{k=0}^∞ c_k t^k, where c_k = Σ_{j=0}^k a_j b_{k−j}.   (2.10)

This is called the convolution, which can also be written as c_k = Σ_{i+j=k} a_j b_i. As the order of the OGFs can be exchanged, it is the same as Σ_{j=0}^k a_{k−j} b_j. The coefficients of a convolution all lie on a diagonal that is perpendicular to the main diagonal (Table 2.1). This shows that a convolution of two sequences with compatible coefficients can be represented as the product of their GFs, which has several applications in selection problems, signal processing, and statistics. It can be expressed as

F(t) G(t) = Σ_{k≥0} (a_0 b_k + a_1 b_{k−1} + ⋯ + a_k b_0) t^k.   (2.11)
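The convolution of two coefficient lists, as defined above, can be sketched in a few lines (the helper name `conv` is mine):

```python
def conv(a, b):
    """Coefficients of F(t)*G(t): c_k = sum_{j=0}^{k} a_j * b_{k-j}."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# For (1,2,3) and (4,5,6) the 'opposite directions' sum 1*6 + 2*5 + 3*4 = 28
# shows up as the coefficient of t^2 in the product.
print(conv([1, 2, 3], [4, 5, 6]))  # -> [4, 13, 28, 27, 18]
```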

Suppose there are two disjoint sets A and B. If the GF for selecting items from A is A(t) and from B is B(t), the GF for selecting from the union A ∪ B is A(t) B(t). This can be applied any number of times. Thus, if A(t), B(t), and C(t) are three OGFs with compatible coefficients, the OGF A(t) B(t) C(t) has mth coefficient u_m = Σ_{i+j+k=m} a_i b_j c_k.

Table 2.1: Convolution as diagonal sum

         | b_0        | b_1 x      | b_2 x^2    | b_3 x^3    | b_4 x^4    | …
a_0      | a_0b_0     | a_0b_1 x   | a_0b_2 x^2 | a_0b_3 x^3 | a_0b_4 x^4 | …
a_1 x    | a_1b_0 x   | a_1b_1 x^2 | a_1b_2 x^3 | a_1b_3 x^4 | a_1b_4 x^5 | …
a_2 x^2  | a_2b_0 x^2 | a_2b_1 x^3 | a_2b_2 x^4 | a_2b_3 x^5 | a_2b_4 x^6 | …
a_3 x^3  | a_3b_0 x^3 | a_3b_1 x^4 | a_3b_2 x^5 | a_3b_3 x^6 | a_3b_4 x^7 | …
a_4 x^4  | …          | …          | …          | …          | …          | …

Example 2.8 Find the OGF of the product of (1, 2, 3, 4, …) and (1, 2, 4, 8, 16, …).

Solution 2.8 The first OGF is obviously 1/(1−t)^2 and the second one is 1/(1−2t). The convolution is the product of the OGFs, 1/[(1−t)^2 (1−2t)]. Use partial fractions to break this as A/(1−t) + B/(1−t)^2 + C/(1−2t). Take (1−t)^2 (1−2t) as a common denominator and equate numerators on the LHS and RHS to get A(1−t)(1−2t) + B(1−2t) + C(1−t)^2 = 1. Now equate the constant term and the coefficients of t and t^2 to get the three equations A + B + C = 1, 3A + 2B + 2C = 0, 2A + C = 0. Multiply the first equation by 2 and subtract from the second to get A = −2, from which C = 4 and B = −1. Thus, the OGF of the product is 4/(1−2t) − 2/(1−t) − 1/(1−t)^2, with RoC 0 < |t| < 1/2.

Now consider two EGFs H_1(t) = Σ_{k=0}^∞ a_k t^k/k! and H_2(t) = Σ_{k=0}^∞ b_k t^k/k!. Their product

H_1(t) H_2(t) = Σ_{k=0}^∞ a_k t^k/k! · Σ_{k=0}^∞ b_k t^k/k! = Σ_{k=0}^∞ c_k t^k/k!   (2.12)

has the "exponential convolution"

c_k = Σ_{j=0}^k a_j b_{k−j} k!/(j!(k−j)!) = Σ_{j=0}^k C(k, j) a_j b_{k−j}.   (2.13)
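Equation (2.13) can be checked numerically. A minimal sketch of the exponential (binomial) convolution, with a function name of my own choosing:

```python
from math import comb

def econv(a, b):
    """EGF product coefficients: c_k = sum_j C(k, j) a_j b_{k-j}."""
    n = min(len(a), len(b))
    return [sum(comb(k, j) * a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

# e^t * e^t = e^{2t}: two all-ones sequences convolve to c_k = 2^k
print(econv([1] * 6, [1] * 6))  # -> [1, 2, 4, 8, 16, 32]
```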

As the order of the EGFs can be exchanged, this is the same as c_k = Σ_{j=0}^k C(k, j) a_{k−j} b_j. As convolution is commutative and associative, we may write F(t)G(t) = G(t)F(t) and F(t)G(t)H(t) = G(t)H(t)F(t) = H(t)F(t)G(t), etc. Similarly, the product distributes over addition and subtraction: F(t)[G(t) ± H(t)] = F(t)G(t) ± F(t)H(t). Powers of an OGF are repeated convolutions. Thus, if F(t) = Σ_{k=0}^∞ a_k t^k, then F(t) · F(t) = F(t)^2 = Σ_{k=0}^∞ c_k t^k, where c_k = Σ_{j=0}^k a_j a_{k−j}, which can also be expressed as

[F(t)]^2 = Σ_{k≥0} (a_0 a_k + a_1 a_{k−1} + ⋯ + a_k a_0) t^k.   (2.14)

As an illustration of powers of an OGF, consider the number of binary trees T_n with n nodes, where one unique node is designated as the root. Assume that there are k nodes in the left subtree and n−1−k nodes in the right subtree (so that the total number of nodes is k + 1 + (n−1−k) = n). It can be shown that T_n satisfies the recurrence

T_n = Σ_{k=0}^{n−1} T_k T_{n−1−k}  for n ≥ 1,   (2.15)

because k can take any value from 0 (right-skewed tree) to n−1 (left-skewed tree). The form of the recurrence suggests that it comes from some power (Table 2.2). But as the inner index k varies only from 0 to n−1, it is not an exact power, as shown below. Let T(x) = Σ_{n=0}^∞ T_n x^n. Multiply both sides of (2.15) by x^n and sum over n = 1 to ∞ to get

Σ_{n=1}^∞ T_n x^n = Σ_{n=1}^∞ Σ_{k=0}^{n−1} T_k T_{n−1−k} x^n,   (2.16)


Table 2.2: Summary of convolutions and powers

Type               | Expression                                                           | Coefficient (c_k)
OGF product        | (Σ_{i=0}^∞ a_i t^i)(Σ_{j=0}^∞ b_j t^j) = Σ_{k=0}^∞ c_k t^k          | Σ_{j=0}^k a_j b_{k−j}
OGF power          | (Σ_{i=0}^∞ a_i t^i)^2                                                | Σ_{j=0}^k a_j a_{k−j}
OGF product (of 3) | (Σ_h a_h t^h)(Σ_i b_i t^i)(Σ_j c_j t^j)                              | Σ_{h+i+j=k} a_h b_i c_j, k ≥ 0
EGF product        | (Σ_{i=0}^∞ a_i t^i/i!)(Σ_{j=0}^∞ b_j t^j/j!) = Σ_{k=0}^∞ c_k t^k/k! | Σ_{j=0}^k C(k, j) a_j b_{k−j}
EGF power          | (Σ_{h=0}^∞ a_h t^h/h!)^2                                             | Σ_{j=0}^k C(k, j) a_j a_{k−j}
EGF product (of 3) | (Σ_h a_h t^h/h!)(Σ_i b_i t^i/i!)(Σ_j c_j t^j/j!)                     | Σ_{h+i+j=k} k!/(h! i! j!) a_h b_i c_j

where we have used x instead of t as the dummy variable for convenience. As the LHS index varies from 1 onward, the LHS is T(x) − T_0. Assume that T_0 = 1. Take one x outside the summation on the RHS and write x^{n−1} = x^k x^{n−1−k}. Then the RHS is easily seen to be x(T(x))^2, so that we get T(x) − 1 = x(T(x))^2. Solve this as a quadratic equation in T(x). This gives T(x) = (1 ± √(1 − 4x))/(2x). As T(x) → 1 as x → 0, take the negative sign to get T(x) = (1 − √(1 − 4x))/(2x) as the required OGF for the number of binary trees with n nodes. See Chapter 4 for another application of convolutions to matched parentheses.

Example 2.9 Find the OGF of the Catalan numbers, which satisfy the recurrence C_0 = 1, C_n = C_0 C_{n−1} + C_1 C_{n−2} + ⋯ + C_{n−1} C_0 for n ≥ 1. Prove that Catalan numbers are always integers.

Solution 2.9 Let F(t) denote the OGF of the Catalan numbers. Write the convolution as a sum C_n = Σ_{j=0}^{n−1} C_j C_{n−1−j}. As per row 2 of Table 2.2, this is the coefficient of a power. As done above, multiply by t^n and sum over n = 1 to ∞ to get

Σ_{n=1}^∞ C_n t^n = Σ_{n=1}^∞ Σ_{j=0}^{n−1} C_j C_{n−1−j} t^n.   (2.17)

Write t^n = t^j t^{n−1−j} · t. Then the RHS is easily found to be t F(t)^2. As C_0 = 1, we get F(t) = t F(t)^2 + 1. Solve this as a quadratic in F(t) to get F(t) = (1 ± √(1 − 4t))/(2t). As t → 0, (1 + √(1 − 4t))/(2t) → ∞ (as it is of the form 1/0), but the other root is of the form 0/0, which by L'Hôpital's rule tends to 1. Alternatively, the plus sign results in negative coefficients; but as Catalan numbers are always positive integers, we have to take the minus sign.

Hence, we take F(t) = (1 − √(1 − 4t))/(2t), which is the same as in the previous example. Expand √(1 + t) using the binomial theorem:

√(1 + t) = (1 + t)^{1/2} = Σ_{k=0}^∞ C(1/2, k) t^k = 1 + Σ_{k=1}^∞ C(1/2, k) t^k,   (2.18)

because C(1/2, 0) = 1. Now use C(1/2, k) = 2^{1−2k} (−1)^{k−1} C(2k−2, k−1)/k to get

√(1 + t) = 1 + Σ_{k=1}^∞ 2^{1−2k} (−1)^{k−1} C(2k−2, k−1)/k · t^k.   (2.19)

Replace t by −4t and subtract from 1 to get

1 − √(1 − 4t) = −Σ_{k=1}^∞ 2^{1−2k} (−1)^{k−1} C(2k−2, k−1)/k · (−1)^k 2^{2k} t^k = Σ_{k=1}^∞ 2 C(2k−2, k−1)/k · t^k,

as (−1)^{k−1} (−1)^k = (−1)^{2k−1} = −1 and 2^{1−2k} 2^{2k} = 2. Write (2k − 2) as 2(k − 1) and replace k−1 by k, so that the index varies from 0 to ∞, to get

1 − √(1 − 4t) = Σ_{k=0}^∞ 2 C(2k, k)/(k + 1) · t^{k+1}.   (2.20)

Now divide by 2t to get F(t) = Σ_{k=0}^∞ [1/(k + 1)] C(2k, k) t^k, which is the required OGF. The quantity [1/(n + 1)] C(2n, n) is called the Catalan number. It can also be written as C(2n, n) − C(2n, n+1). As C(2n, n+1) is always less than C(2n, n) (as can be seen from Pascal's triangle), this difference is always positive. Moreover, both of them are integers, showing that Catalan numbers are always integers. The first few of them are 1, 1, 2, 5, 14, 42, 132, 429, ….
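The two descriptions of the Catalan numbers — the convolution recurrence and the closed form just extracted — can be cross-checked numerically. A short sketch (function names are mine):

```python
from math import comb

def catalan_recurrence(n):
    """C_0 = 1, C_m = sum_{j=0}^{m-1} C_j C_{m-1-j}: the convolution recurrence."""
    c = [1]
    for m in range(1, n):
        c.append(sum(c[j] * c[m - 1 - j] for j in range(m)))
    return c

# Closed form from the expansion of (1 - sqrt(1 - 4t))/(2t)
closed = [comb(2 * k, k) // (k + 1) for k in range(8)]
print(catalan_recurrence(8))              # -> [1, 1, 2, 5, 14, 42, 132, 429]
print(closed == catalan_recurrence(8))    # -> True
```

The integer division `//` never truncates here, which is exactly the integrality claim proved above.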

Example 2.10 If m, n, and r are integers, prove the identity Σ_{k=0}^r C(m, k) C(n, r−k) = C(m+n, r).

Solution 2.10 As the LHS is a convolution of binomial coefficients, it suggests an OGF of the form (1 + t)^k for some integer k. We have shown in the last chapter that (1 + t)^m is the OGF of {C(m, k)}. Hence, the OGF of the LHS is (1 + t)^m (1 + t)^n, or equivalently (1 + t)^{m+n}, which generates the RHS.
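This identity (Vandermonde's convolution) is easy to spot-check exhaustively for small arguments; a quick sketch, with a function name of my own:

```python
from math import comb

def vandermonde_lhs(m, n, r):
    # sum_{k=0}^{r} C(m, k) C(n, r-k); comb returns 0 when k exceeds m or n
    return sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))

# Verify sum_k C(m,k) C(n,r-k) = C(m+n,r) on a grid of small values
ok = all(vandermonde_lhs(m, n, r) == comb(m + n, r)
         for m in range(6) for n in range(6) for r in range(m + n + 1))
print(ok)  # -> True
```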


2.1.8 DIFFERENTIATION AND INTEGRATION

A univariate GF can be differentiated or integrated in the dummy variable (which is assumed to be continuous). Similarly, bivariate and higher GFs can be differentiated or integrated in one or more dummy variables, as they are independent. See Table 2.3 for a summary of various operations.

Table 2.3: Summary of generating function operations

Name               | Expression                                      | GF
Add/subtract       | Σ_{k=0}^∞ (a_k ± b_k) t^k                       | F(t) ± G(t)
Scalar multiply    | Σ_{k=0}^∞ c a_k t^k                             | c F(t)
Convolution        | Σ_{k=0}^∞ c_k t^k, c_k = Σ_{j=0}^k a_j b_{k−j}  | F(t) G(t)
Left shift (EGF)   | Σ_{k=0}^∞ a_{k+1} t^k/k!                        | F′(t)
Left shift (OGF)   | Σ_{k=0}^∞ a_{k+1} t^k                           | (F(t) − F(0))/t
Left m-shift (OGF) | Σ_{k=0}^∞ a_{m+k} t^k                           | (F(t) − a_0 − a_1 t − ⋯ − a_{m−1} t^{m−1})/t^m
Index multiply     | Σ_{k=0}^∞ k a_k t^k                             | t F′(t)
Tail sum           | Σ_{k=0}^∞ (Σ_{j=0}^k a_j) t^k                   | F(t)/(1 − t)
Difference (OGF)   | a_0 + Σ_{k=1}^∞ (a_k − a_{k−1}) t^k             | (1 − t) F(t)

Differentiation of OGF

Consider the OGF

F(t) = a_0 + a_1 t + a_2 t^2 + ⋯ = Σ_{k=0}^∞ a_k t^k.   (2.21)

Differentiate w.r.t. t to get

(∂/∂t) F(t) = a_1 + 2a_2 t + 3a_3 t^2 + ⋯ = Σ_{k=1}^∞ (k a_k) t^{k−1},   (2.22)

which is the OGF of (a_1, 2a_2, 3a_3, …), or {b_n = (n + 1) a_{n+1}} for n ≥ 0. This can be stated as follows: if (a_0, a_1, a_2, …) ↔ F(t), then (a_1, 2a_2, 3a_3, …) ↔ (∂/∂t) F(t) = F′(t). Literally, this means that differentiation of an OGF w.r.t. the dummy variable multiplies each term by its index and shifts the whole sequence left one place. Next consider an EGF

H(t) = a_0 + a_1 t/1! + a_2 t^2/2! + ⋯ = Σ_{k=0}^∞ a_k t^k/k!.   (2.23)

Differentiate w.r.t. t to get

(∂/∂t) H(t) = a_1 + a_2 t/1! + a_3 t^2/2! + ⋯ = Σ_{k=1}^∞ a_k t^{k−1}/(k−1)!,   (2.24)

because the k in the numerator cancels with the k! in the denominator, leaving (k−1)!. This shows that differentiation of an EGF is equivalent to a "shift-left" by one position.
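On truncated coefficient lists, both differentiation rules become one-liners. A small sketch (names are mine):

```python
def ogf_derivative(a):
    # (a0, a1, a2, ...) -> (a1, 2*a2, 3*a3, ...): multiply by index, shift left
    return [k * a[k] for k in range(1, len(a))]

def egf_derivative(a):
    # For EGF coefficients, differentiation is exactly a left shift
    return a[1:]

a = [7, 1, 4, 9, 2]
print(ogf_derivative(a))  # -> [1, 8, 27, 8]
print(egf_derivative(a))  # -> [1, 4, 9, 2]
```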

Example 2.11 If F(t) and H(t) denote the OGF and EGF of a sequence (a_0, a_1, a_2, …), find the sequences whose GFs are (i) t F′(t) and (ii) t H′(t).

Solution 2.11 The OGF F′(t) was found above to generate (a_1, 2a_2, 3a_3, …). Multiplying this by t gives t F′(t) = a_1 t + 2a_2 t^2 + 3a_3 t^3 + ⋯. This can be written as a_1 t + a_2 (√2 t)^2 + a_3 (3^{1/3} t)^3 + ⋯, in which the kth term is a_k (k^{1/k} t)^k. Next consider the EGF. As shown above, H′(t) is the EGF of the left-shifted sequence (a_1, a_2, a_3, …). Multiply both sides by t to get

t (∂/∂t) H(t) = a_1 t + a_2 t^2/1! + a_3 t^3/2! + ⋯ = Σ_{k=1}^∞ a_k/(k−1)! t^k,   (2.25)

where we have associated 1/(k−1)! with the series term a_k. This is the OGF of {a_k/(k−1)!}.

Higher-Order Derivatives

Differentiation and integration can be applied any number of times. For example, consider the formal power series f(t) = 1/(1 − t) = Σ_{k=0}^∞ t^k. Take the log of both sides and differentiate to get f′(t)/f(t) = 1/(1 − t), so that f′(t) = 1/(1 − t)^2. Successive differentiation gives f^{(n)}(t) = n!/(1 − t)^{n+1}. A direct consequence of this result is the GF

Σ_{k=0}^∞ C(n+k−1, k) t^k = 1/(1 − t)^n.   (2.26)

Higher-order derivatives are used to find factorial moments of discrete random variables in the next chapter.

Example 2.12 Use differentiation and shift operations to prove that t(1 + t)/(1 − t)^3 is the GF of the squares a_n = n^2.


Solution 2.12 Consider the OGF F(t) = 1/(1 − t)^2 = Σ_{k=0}^∞ (k + 1) t^k. Multiply by t to get t/(1 − t)^2 = Σ_{k=0}^∞ (k + 1) t^{k+1}. Differentiate the LHS to get [(1 − t)^2 + 2t(1 − t)]/(1 − t)^4. The ±2t terms cancel, resulting in (1 − t^2)/(1 − t)^4. Write (1 − t^2) = (1 + t)(1 − t) and cancel one (1 − t) to get (1 + t)/(1 − t)^3. The RHS derivative is Σ_{k=0}^∞ (k + 1)^2 t^k. Multiply the LHS and RHS by t to get the LHS as t(1 + t)/(1 − t)^3. Now the RHS, Σ_{k=0}^∞ (k + 1)^2 t^{k+1}, is the OGF of n^2.
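The result can also be cross-checked with (2.26): 1/(1 − t)^3 has coefficients C(k+2, 2), and multiplying by t + t^2 shifts and adds them, so the coefficient of t^n should be C(n+1, 2) + C(n, 2) = n^2. A quick sketch:

```python
from math import comb

# Coefficient of t^n in t(1+t)/(1-t)^3: a shifted sum of C(k+2, 2) terms,
# namely C(n+1, 2) + C(n, 2), which telescopes to n^2.
coeffs = [comb(n + 1, 2) + comb(n, 2) for n in range(8)]
print(coeffs)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```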

Example 2.13 (mth derivative of OGF) Find the GF of the sequence a_n = C(n+m, m) for m fixed.

Solution 2.13 Consider F(x) = x^{n+m}/m!. The first derivative is F′(x) = (n+m) x^{n+m−1}/m!. Repeat the differentiation m times to get F^{(m)}(x) = (n+m)(n+m−1)⋯(n+1) x^n/m!. This can be written as C(n+m, m) x^n. Thus, the GF is (1/m!)(∂/∂x)^m (1/(1 − x)) = 1/(1 − x)^{m+1}.

Example 2.14 (kth derivative of EGF) Prove that D^k H(t) = Σ_{n=0}^∞ a_{n+k} t^n/n!.

Solution 2.14 Consider H(t) = a_0 + a_1 t/1! + a_2 t^2/2! + a_3 t^3/3! + ⋯ + a_k t^k/k! + ⋯. Take the derivative w.r.t. t of both sides. As a_0 is a constant, its derivative is zero. Use the derivative of t^n, namely n t^{n−1}, for each term to get H′(t) = a_1 + a_2 t/1! + a_3 t^2/2! + ⋯ + a_k t^{k−1}/(k−1)! + ⋯. Differentiate again (this time a_1, being a constant, vanishes) to get H″(t) = a_2 + a_3 t/1! + a_4 t^2/2! + ⋯ + a_k t^{k−2}/(k−2)! + ⋯. Repeat this process k times. All terms whose coefficients are below a_k will vanish. What remains is H^{(k)}(t) = a_k + a_{k+1} t/1! + a_{k+2} t^2/2! + ⋯. This can be expressed using the summation notation introduced in Chapter 1 as H^{(k)}(t) = Σ_{n=k}^∞ a_n t^{n−k}/(n−k)!. Using the change of index variable introduced in Shanmugam and Chattamvelli [2016], this can be written as H^{(k)}(t) = Σ_{n=0}^∞ a_{n+k} t^n/n!. Now if we put t = 0, all higher-order terms vanish except the constant a_k.

2.1.9 INTEGRATION

Integrating F(t) = a_0 + a_1 t + a_2 t^2 + ⋯ = Σ_{k=0}^∞ a_k t^k gives

∫_0^x F(t) dt = a_0 x + a_1 x^2/2 + a_2 x^3/3 + ⋯ = Σ_{k≥1} (a_{k−1}/k) x^k.   (2.27)

Divide both sides by x to get

(1/x) ∫_0^x F(t) dt = (a_0/1) + (a_1/2) x + (a_2/3) x^2 + ⋯,   (2.28)

which is the OGF of the sequence {a_k/(k+1)}_{k≥0}.


Next consider the EGF

H(t) = a_0 + a_1 t/1! + a_2 t^2/2! + a_3 t^3/3! + ⋯ = Σ_{k=0}^∞ a_k t^k/k!.   (2.29)

Integrate term by term to get

∫_0^x H(t) dt = a_0 x + a_1 x^2/(1!·2) + a_2 x^3/(2!·3) + ⋯ = Σ_{k=0}^∞ a_k x^{k+1}/(k+1)!.   (2.30)

Integration of an EGF is equivalent to a shift-right by one position.

Example 2.15 Evaluate the sum Σ_{k=0}^n C(n, k)/(k+1).

Solution 2.15 Consider the OGF Σ_{k=0}^n C(n, k)/(k+1) x^k. With the help of the above result, this can be written as the integral (1/x) ∫_0^x (1 + t)^n dt. As the integral of (1 + t)^n is (1 + t)^{n+1}/(n+1), we get [(1 + x)^{n+1} − 1]/[(n+1) x] as the result.

Example 2.16 Find the OGF of the sequence 1, 1/3, 1/5, 1/7, ….

Solution 2.16 Consider F(x) = 1 + x^2 + x^4 + x^6 + ⋯, which has the closed form (1 − x^2)^{−1}. Integrating term by term gives x + x^3/3 + x^5/5 + x^7/7 + ⋯, which has the desired coefficients. As the LHS is ∫ (1 − x^2)^{−1} dx, either put x = sin(θ), dx = cos(θ) dθ, or use partial fractions to break the integrand into two terms A/(1 − x) + B/(1 + x). This gives A + B = 1 and A − B = 0, from which A = B = 1/2. Thus (1 − x^2)^{−1} = (1/2)[1/(1 − x) + 1/(1 + x)]. Now integrate term-by-term on the RHS to get (1/2)[−log(1 − x) + log(1 + x)] = (1/2) log((1 + x)/(1 − x)), using log(a) − log(b) = log(a/b). This is the OGF of the original sequence.
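The closed form in Example 2.15 can be spot-checked at x = 1, where it reduces to (2^{n+1} − 1)/(n + 1). A quick exact check (the function name is mine):

```python
from fractions import Fraction
from math import comb

def binom_over_kplus1(n):
    """sum_{k=0}^{n} C(n, k)/(k+1), evaluated exactly with rationals."""
    return sum(Fraction(comb(n, k), k + 1) for k in range(n + 1))

# Example 2.15 with x = 1: the integral form gives (2^{n+1} - 1)/(n + 1)
ok = all(binom_over_kplus1(n) == Fraction(2 ** (n + 1) - 1, n + 1) for n in range(12))
print(ok)  # -> True
```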

2.2 INVERTIBLE SEQUENCES

Two GFs F(t) and G(t) are invertible if F(t) G(t) = 1; G(t) is then called the "multiplicative inverse" of F(t). Consider the sequence (a_0, a_1, …) with OGF F(t), and let G(t) be the multiplicative inverse with coefficients (b_0, b_1, …), so that F(t) G(t) = 1. This gives c_0 = a_0 b_0 = 1, from which b_0 = 1/a_0. In general, b_n = −(Σ_{k=1}^n a_k b_{n−k})/a_0. If a_0 ≠ 0, the GF is invertible and the coefficients can be calculated recursively. This shows that a necessary (and in fact sufficient) condition for an OGF to be "invertible" is that the constant term is nonzero.

Example 2.17 Find the inverse of the sequence (1, 1, 1, …).

Solution 2.17 Let G(t) be the inverse with coefficients (b_0, b_1, …). Then a_0 b_0 = 1 implies that b_0 = 1. Similarly, a_0 b_1 + a_1 b_0 = 0 implies that 1·b_1 + 1·1 = 0, or b_1 = −1. All other coefficients are zeros. For example, in b_2 + b_1 + b_0 = 0, put b_0 = 1 and b_1 = −1, so that b_2 = 0. Similarly, all higher coefficients are zeros. Hence, the inverse is (1, −1, 0, 0, …).
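The inversion recurrence b_n = −(Σ_{k=1}^n a_k b_{n−k})/a_0 can be implemented directly. A minimal sketch (the function name is mine):

```python
from fractions import Fraction

def series_inverse(a, n):
    """First n coefficients of G(t) with F(t)G(t) = 1 (requires a[0] != 0).

    b_0 = 1/a_0 and b_k = -(sum_{j=1}^{k} a_j b_{k-j}) / a_0.
    """
    b = [Fraction(1) / Fraction(a[0])]
    for k in range(1, n):
        s = sum(Fraction(a[j]) * b[k - j] for j in range(1, min(k, len(a) - 1) + 1))
        b.append(-s / Fraction(a[0]))
    return b

# Example 2.17: the inverse of (1, 1, 1, ...) is (1, -1, 0, 0, ...)
print([int(x) for x in series_inverse([1] * 8, 6)])  # -> [1, -1, 0, 0, 0, 0]
```

As another check, `series_inverse([1, -1], 5)` inverts 1 − t and returns the all-ones coefficients of 1/(1 − t).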


2.3 COMPOSITION OF GENERATING FUNCTIONS

If F(x) and G(x) are two GFs, the composition is defined as F(G(x)). Note that F(G(x)) is not, in general, the same as G(F(x)). Consider F(x) = 1/(1 − x) and G(x) = x(1 + x). Then F(G(x)) is 1/(1 − x(1 + x)). The general result on composition requires higher powers of GFs. Consider a discrete branching process where the size of the nth generation is given by X_n = Σ_{k=0}^{X_{n−1}} Y_k, where Y_k is the offspring distribution of the kth individual. If all the Y_k's are assumed to be IID, it is easy to see that

P[X_n = k | X_{n−1} = j] = P[Y_1 + Y_2 + ⋯ + Y_j = k].   (2.31)

Using the total probability law of conditional probability, this becomes

P[X_n = k] = Σ_{j=0}^∞ P[Y_1 + Y_2 + ⋯ + Y_j = k] P[X_{n−1} = j].   (2.32)

Let φ_n(t) denote the PGF of the size of the nth generation. Then

φ_n(t) = Σ_k P(X_n = k) t^k.   (2.33)

From this it follows that φ_n(t) = φ_{n−1}(φ(t)), where φ(t) is the common PGF of the offspring distribution.

2.4 SUMMARY

This chapter introduced various operations on GFs. This includes arithmetic operations, linear combinations, left and right shifts, convolutions, powers, change of dummy variables, differentiation and integration. These are applied in various problems in subsequent chapters.

CHAPTER 3

Generating Functions in Statistics

After finishing the chapter, readers will be able to …

• explain probability generating functions.
• describe CDF generating functions.
• explore moment and cumulant generating functions.
• comprehend characteristic functions.
• become acquainted with mean deviation generating functions.
• discuss factorial moment generating functions.

3.1 GENERATING FUNCTIONS IN STATISTICS

GFs are used in various branches of statistics like distribution theory, stochastic processes, etc. A one-to-one correspondence is established between the power series expansion of a GF in one or more auxiliary (dummy) variables and the coefficients of a known sequence. These coefficients differ among the various GFs, as shown below. GFs used in discrete distribution theory are defined on the sample space of a random variable or probability distribution. They can be finite or infinite, depending on the corresponding distribution. Multiple GFs like the PGF, MGF, etc. can be defined on the same random variable, as these generate different quantities, as shown in the following paragraphs. Some of these GFs can be obtained from others using the various transformations discussed in Chapter 2. The GF technique is especially useful in some distributions for investigating inter-relationships and asymptotics. As an example, consider the Poisson distribution with parameter λ. If the average number of occurrences of a rare event in a certain time interval (t, t + dt) is λ, the Poisson probabilities give us the chance of observing exactly k events in the same time interval, if the various events occur independently of each other. The PGF of the Poisson distribution is infinitely differentiable, and it is easy to find factorial moments because the kth derivative of the PGF is λ^k times the PGF. GFs can be used to prove the additivity property of the Poisson, negative binomial, and many other distributions [Shanmugam and Chattamvelli,


2016]. Although the previous chapter used x as the dummy variable in a GF, we will use t as the dummy variable in this chapter because x is regarded as a random variable, and used as a subscript.

3.1.1 TYPES OF GENERATING FUNCTIONS

There are four popular GFs used in statistics—namely, (i) the probability generating function (PGF), denoted by P_x(t); (ii) the moment generating function (MGF), denoted by M_x(t); (iii) the cumulant generating function (CGF), denoted by K_x(t); and (iv) the characteristic function (ChF), denoted by φ_x(t). In addition, there are still others like the factorial moment GF (FMGF), inverse moment GF (IMGF), inverse factorial moment GF (IFMGF), absolute moment GF, as well as GFs for odd moments and even moments separately. These are called "canonical functions" in some fields. The PGF generates the probabilities of a random variable, and is of type OGF. The rising and falling FMGFs are also of type OGF. The MGF generates moments, and is of type EGF. It has further subdivisions as ordinary MGF, central MGF, FMGF, IMGF, and IFMGF. The CGF and ChF are also related to the MGF. Some of these (MGF, ChF, etc.) can also be defined for an arbitrary origin. The CGF is defined in terms of the MGF as K_x(t) = ln(M_x(t)), which when expanded as a polynomial in t gives the cumulants. Note that the logarithm is to the base e (ln). As not every distribution possesses an MGF, the concept is extended to the complex domain by defining the ChF as φ_x(t) = E(e^{itx}). If all of them exist for a distribution, then

P_x(e^t) = M_x(t) = e^{K_x(t)} = φ_x(−it).   (3.1)

This can also be written in the alternate forms P_x(e^{it}) = M_x(it) = e^{K_x(it)} = φ_x(t), or as P_x(t) = M_x(ln(t)) = e^{K_x(ln(t))} = φ_x(−i ln(t)) (see Table 3.1).

3.2 PROBABILITY GENERATING FUNCTIONS (PGF)

The PGF is extensively used in statistics, econometrics, and various engineering fields. It is a compact mathematical expression in one or more dummy variables, along with the unknown parameters, if any, of a distribution. Definition 3.1 A GF in which the coefficients are the probabilities associated with a random variable is called the PGF. As mentioned in Chapter 1, the PGF is not a function that generates probabilities when particular values are plugged in for the dummy variable t . But when expanded as a power series in the auxiliary variable, the coefficient of t k is the probability of the random variable X taking the value k . This is one way the PGF of a random variable can be used to generate probabilities.


Table 3.1: Summary table of generating functions

Abbrev. | Symbol | Definition (E = expectation) | Generates What           | How Obtained (t = dummy variable)
PGF     | P_x(t) | E(t^x)                       | Probabilities            | p_k = (∂^k/∂t^k) P_x(t)|_{t=0} / k!
CDFGF   | F_x(t) | E(t^x/(1 − t))               | Cumulative probabilities | F_k = (1/k!) (∂^k/∂t^k) [P_x(t)/(1 − t)]|_{t=0}
MGF     | M_x(t) | E(e^{tx})                    | Moments                  | μ′_k = (∂^k/∂t^k) M_x(t)|_{t=0}
CMGF    | M_z(t) | E(e^{t(x−μ)})                | Central moments          | μ_k = (∂^k/∂t^k) M_z(t)|_{t=0}
ChF     | φ_x(t) | E(e^{itx})                   | Moments                  | i^k μ′_k = (∂^k/∂t^k) φ_x(t)|_{t=0}
CGF     | K_x(t) | log(E(e^{tx}))               | Cumulants                | κ_k = (∂^k/∂t^k) K_x(t)|_{t=0}
FMGF    | Γ_x(t) | E((1 + t)^x)                 | Factorial moments        | μ_{(k)} = (∂^k/∂t^k) Γ_x(t)|_{t=0}
MDGF    | D_x(t) | 2E(t^x/(1 − t)^2)            | Mean deviation           | See Section 3.5

The probability generating function (PGF) is of type OGF; the MGF and ChF are of type EGF. The MGF need not always exist, but the characteristic function always exists. The falling factorial moment is denoted μ_{(k)} = E(x(x − 1)(x − 2) ⋯ (x − k + 1)).

It is defined as

P_x(t) = E(t^x) = Σ_x t^x p(x) = p(0) + p(1) t + p(2) t^2 + ⋯ + p(k) t^k + ⋯,   (3.2)

where the summation is over the range of X. This means that the PGF of a discrete random variable is the weighted sum of the probabilities of all outcomes k, with weight t^k for each value k in its range. Obviously the RHS is a finite series for distributions with finite range. It usually results in a compact closed-form expression for discrete distributions. It converges for |t| < 1, and the appropriate derivatives exist. Differentiating both sides of (3.2) k times w.r.t. t gives (∂^k/∂t^k) P_x(t) = k! p(k) + terms involving t. If we put t = 0, all higher-order terms that contain t or its higher powers vanish, giving k! p(k). From this, p(k) is obtained as (∂^k/∂t^k) P_x(t)|_{t=0}/k!. If P_x(t) involves powers or exponents, we take the log (to base e) of both sides, differentiate k times, and then use the following result on P_x(t = 1) to simplify the differentiation. The PGF


is immensely useful in deriving key properties of a random variable easily. As shown below, it is related to the CDFGF and MDGF.

Example 3.1 (PGF special values P_x(t = 0) and P_x(t = ±1)) Find P_x(t = 0) and P_x(t = ±1) from the PGF of a discrete distribution.

Solution 3.1 As Σ_k p(k), being the sum of the probabilities, is one, it follows trivially by putting t = 1 in (3.2) that P_x(t = 1) = 1. Put t = 0 in (3.2) to get P_x(t = 0) = p(0), the first probability. Similarly, put t = −1 to get the RHS as [p(0) + p(2) + ⋯] − [p(1) + p(3) + p(5) + ⋯].

Example 3.2 (PGF of arbitrary distribution) Find the PGF of the random variable given below, and obtain its mean.

x:     1    2    3
p(x): 1/2  1/3  1/6

Solution 3.2 Plug in the values directly in

P_x(t) = E(t^x) = Σ_{x=1}^3 t^x p(x)   (3.3)

to get P_x(t) = t/2 + t^2/3 + t^3/6. Differentiate w.r.t. t and put t = 1 to get the mean as 1/2 + 2t/3 + 3t^2/6 |_{t=1} = 1/2 + 2/3 + 3/6 = 5/3.

Example 3.3 (PGF of uniform distribution) Find the PGF of a discrete uniform distribution, and obtain the difference between the sum of even and odd probabilities. Also obtain the mean.

Solution 3.3 Take f(x) = 1/k for x = 1, 2, …, k for simplicity. Then P_x(t) = (1/k)[t^1 + t^2 + ⋯ + t^k]. Take t outside, and apply the formula for a geometric progression to get P_x(t) = (t/k)(t^k − 1)/(t − 1). Take the log, differentiate, and put t = 1 to get the mean as (k + 1)/2.

Example 3.4 (PGF of Poisson distribution) Find the PGF of a Poisson distribution, and obtain the difference between the sum of even and odd probabilities.

Solution 3.4 The PGF of a Poisson distribution is

P_x(t) = E(t^x) = Σ_{x=0}^∞ t^x e^{−λ} λ^x/x! = e^{−λ} Σ_{x=0}^∞ (λt)^x/x! = e^{−λ} e^{λt} = e^{−λ(1−t)}.   (3.4)

Put t = −1 in (3.4) and use the above result to get the desired sum as exp(−λ[1 − (−1)]) = exp(−2λ).
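Solution 3.4 can be verified numerically from the PGF coefficients. A small sketch (the function name is mine; λ = 2 is an arbitrary choice):

```python
from math import exp, factorial

def poisson_pmf_from_pgf(lam, kmax):
    """Coefficients of e^{-lam(1-t)} = e^{-lam} e^{lam t}: p_k = e^{-lam} lam^k / k!."""
    return [exp(-lam) * lam ** k / factorial(k) for k in range(kmax + 1)]

lam = 2.0
p = poisson_pmf_from_pgf(lam, 60)
# P(even) - P(odd) equals the PGF at t = -1, i.e., exp(-2*lam)
diff = sum(pk if k % 2 == 0 else -pk for k, pk in enumerate(p))
print(abs(diff - exp(-2 * lam)) < 1e-12)  # -> True
```

Truncating the series at 60 terms leaves a remainder far below the tolerance used here.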


Example 3.5 (PGF of geometric distribution) Find the PGF of a geometric distribution with PMF f(x; p) = q^x p, x = 0, 1, 2, …, and obtain the difference between the sum of even and odd probabilities.

Solution 3.5 As x takes the values 0, 1, 2, …, ∞, we get the PGF as

P_x(t) = E(t^x) = Σ_{x=0}^∞ t^x q^x p = p Σ_{x=0}^∞ (qt)^x = p/(1 − qt).   (3.5)

Now P [X is even] D q 0 p C q 2 p C    D pŒ1 C q 2 C q 4 C     D p=.1 q 2 / D 1=.1 C q/, and P [X is odd] D q 1 p C q 3 p C    D qpŒ1 C q 2 C q 4 C     D qp=.1 q 2 / D q=.1 C q/. Using the above result, the difference between these must equal the value of Px .t D 1/. Put t D 1 in (3.5) to get p=.1 q. 1// D p=.1 C q/, which is the same as 1=.1 C q/ q=.1 C q/ D p=.1 C q/. There is another version of geometric distribution with range 1–1 with PMF f .xI p/ D q x 1 p [Shanmugam and Chattamvelli, 2016]. In this case the PGF is pt=.1 qt/. Closed-form expressions for Px .t / are available for most of the common discrete disR tributions. They are seldom used for continuous distributions because t x f .x/dx may not be convergent. Example 3.6 (PGF of BINO.n; p/ distribution) Find the PGF of BINO.n; p/ distribution  with PMF f .xI n; p/ D xn p x q n x , and obtain the mean. Solution 3.6 By definition ! n X n x n Px .t/ D E.t / D p q x xD0 x

! n X n t D .pt/x q n x xD0

x x

x

D .q C pt/n :

(3.6)
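The coefficient-extraction view of (3.6) can be checked numerically (a sketch under illustrative parameter values, not part of the text): expanding (q + pt)^n by repeated polynomial multiplication recovers the binomial PMF as the coefficient of t^x, and P_x′(1) recovers the mean np.

```python
from math import comb

n, p = 6, 0.3
q = 1 - p

# build the PGF (q + p t)^n as a coefficient list by repeated polynomial multiplication
coef = [1.0]
for _ in range(n):
    new = [0.0] * (len(coef) + 1)
    for i, c in enumerate(coef):
        new[i] += q * c       # multiply by the constant term q
        new[i + 1] += p * c   # multiply by p t
    coef = new

# coefficient of t^x is the binomial PMF C(n, x) p^x q^(n-x)
for x in range(n + 1):
    assert abs(coef[x] - comb(n, x) * p**x * q**(n - x)) < 1e-12

# mean from P'(1) = sum over x of x * coef[x]
mean = sum(x * c for x, c in enumerate(coef))
assert abs(mean - n * p) < 1e-12
```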

The coefficient of t^x gives the probability that the random variable takes the value x. To find the mean, take the log of both sides. Then log(P_x(t)) = n·log(q + pt). Differentiate both sides w.r.t. t to get P_x′(t)/P_x(t) = n·p/(q + pt). Now put t = 1 and use P_x(t = 1) = 1 to get the RHS as n·p/(q + p) = np, as q + p = 1.

Example 3.7 (PGF of NBINO(k, p) distribution) Find the PGF of the negative binomial distribution NBINO(k, p), and obtain the mean.

Solution 3.7 By definition,

  P_x(t) = E(t^x) = Σ_{x=0}^{∞} C(x + k − 1, x) p^k q^x t^x = Σ_{x=0}^{∞} C(x + k − 1, x) p^k (qt)^x.    (3.7)


3. GENERATING FUNCTIONS IN STATISTICS

As p^k is a constant, this is easily seen to be [p/(1 − qt)]^k. Take the log, differentiate w.r.t. t, and put t = 1 to get E(X) = kq/p. By setting p = 1/(1 + r) and q = 1 − p = r/(1 + r) we get another form of the negative binomial distribution, NBINO(k, 1/(1 + r)), with PMF C(x + k − 1, x)[1/(1 + r)]^k [r/(1 + r)]^x and PGF 1/[1 + r(1 − t)]^k. The PGF of NBINO(k, p) can be derived directly by putting x = tq = t(1 − p) in the expansion of 1/(1 − x)^k and multiplying both sides by p^k.

Example 3.8 (PGF of logarithmic distribution) Find the PGF of the logarithmic(q) distribution, and obtain the mean.

Solution 3.8 By definition,

  P_x(t) = Σ_{x=1}^{∞} q^x t^x/(−x log(p)) = (−1/log(p)) Σ_{x=1}^{∞} (qt)^x/x = log(1 − qt)/log(1 − q),    (3.8)

where q = 1 − p and t < 1/q for convergence. This distribution is a special limiting case of the negative binomial distribution when the zero class is dropped and the parameter k of the negative binomial distribution → 0:

  [k p^k/(1 − p^k)] [ q + (k + 1)q²/2! + (k + 1)(k + 2)q³/3! + ⋯ ].    (3.9)

When k → 0, a factorial term will remain in each numerator which cancels with the denominator, leaving a k in the denominator. For example, (k + 1)(k + 2)q³/3! becomes (0 + 1)(0 + 2)q³/3! = 1·2·q³/3! = q³/3. Divide the numerator and denominator of the term outside the bracket by p^k to get k/(p^{−k} − 1). Use L'Hospital's rule once to get this limit as −1/ln(p). Thus, the negative binomial distribution tends to the logarithmic law in the limit. Differentiate the PGF w.r.t. t to get ∂P_x(t)/∂t = (−1/log(p))·q/(1 − qt). Put t = 1 to get the mean as −q/[log(p)(1 − q)] = −q/[p log(p)].

Example 3.9 (PGF of hypergeometric distribution) Find the PGF of a hypergeometric distribution with PMF C(m, x) C(N − m, n − x)/C(N, n).

Solution 3.9 The PGF is easily obtained as

  P_x(t) = Σ_{x=0}^{m} C(m, x) C(N − m, n − x) t^x / C(N, n)    (3.10)

because the probabilities outside the range (x > m) are all zeros. The hypergeometric series is defined in terms of the rising Pochhammer EGF as F(x; a, b; c) = Σ_k a^{(k)} b^{(k)} x^k/(c^{(k)} k!), where a^{(k)} denotes the rising Pochhammer number. A property of the hypergeometric series is that the ratio of two consecutive terms is a polynomial. More precisely, the ratio of the (k + 1)th to the kth term is a


polynomial in k. If such a polynomial is given, we can easily ascertain the parameters of the hypergeometric function from it. Consider the ratio of two successive terms of the hypergeometric distribution:

  f(k + 1)/f(k) = (k − m)(k − n)/[(k + N − m − n + 1)(k + 1)],    (3.11)

which is always written with "k" as the first variable. Now drop each k, and collect the remaining values to find the corresponding hypergeometric function parameters as F(−m, −n; N − m − n + 1; x). From the PMF we have f(0) = C(N − m, n)/C(N, n). This should be used as a multiplier of the hypergeometric function to get f(x) = [C(N − m, n)/C(N, n)] F(−m, −n; N − m − n + 1; x). From this we get the PGF as [C(N − m, n)/C(N, n)] ₂F₁(−n, −m; N − m − n + 1; t).

Example 3.10 (PGF of power-series distribution) Find the PGF of the power-series distribution, and obtain the mean and variance.

Solution 3.10 The PMF of the power-series distribution is P(X = k) = a_k θ^k/F(θ), where k takes positive integer values. By definition,

  P_x(t) = (1/F(θ)) Σ_{x=1}^{∞} a_x θ^x t^x = (1/F(θ)) Σ_{x=1}^{∞} a_x (θt)^x = F(θt)/F(θ).    (3.12)

Differentiate w.r.t. t to get P_x′(t) = θ F′(θt)/F(θ), from which the mean μ = P_x′(t)|_{t=1} = θ F′(θ)/F(θ). The variance is found using V(X) = E(X(X − 1)) + E(X) − [E(X)]². Consider P_x″(t) = θ² F″(θt)/F(θ). From this the variance is obtained as P_x″(1) + P_x′(1) − [P_x′(1)]² = θ² F″(θ)/F(θ) + θ F′(θ)/F(θ) − [θ F′(θ)/F(θ)]².

3.2.1 PROPERTIES OF PGF

1. P^{(r)}(0)/r! = (1/r!) ∂^r/∂t^r P_x(t)|_{t=0} = P[X = r].
P_x(t) is infinitely differentiable in t for |t| < 1. Differentiate P_x(t) = E(t^x) w.r.t. t r times to get

  ∂^r/∂t^r P_x(t) = E[x(x − 1)⋯(x − r + 1) t^{x−r}] = Σ_{x≥r} [x(x − 1)⋯(x − r + 1) t^{x−r}] f(x).    (3.13)

The first term in this sum is [r(r − 1)⋯(r − r + 1) t^{r−r}] f(x = r) = [r! t⁰] f(x = r) = r! f(x = r). By putting t = 0, every term except the first vanishes, and the RHS becomes r! f(x = r). Thus, ∂^r/∂t^r P_x(t = 0) = r! f(x = r).

2. P^{(r)}(1) = E[X_{(r)}], the r-th falling factorial moment (page 56).
By putting t = 1 in (3.13), the RHS becomes E[x(x − 1)⋯(x − r + 1)], which is the r-th falling factorial moment. This is sometimes called the FMGF (see Section 3.10 on page 56).


3. μ = E(X) = P′(1), and μ′₂ = E(X²) = P″(1) + P′(1).
The first result follows directly from the above by putting r = 1. As X² = X(X − 1) + X, the second result also follows from it.

4. V(X) = P″(1) + P′(1)[1 − P′(1)].
This result follows from the fact that V(X) = E[X²] − E[X]² = E[X(X − 1)] + E[X] − E[X]². Now use the above two results.

5. ∫₀¹ P_x(t) dt = E[1/(X + 1)].
Consider ∫ P_x(t) dt = ∫ E(t^x) dt. Take the expectation operator outside the integral and integrate to get the result. This is the first inverse moment (of X + 1), and holds for positive random variables.

6. P_{cX}(t) = P_X(t^c). This follows by writing t^{cX} as (t^c)^X.

7. P_{X±c}(t) = t^{±c}·P_x(t). This follows by writing t^{X±c} as t^{±c} t^X.

8. P_{cX±d}(t) = t^{±d}·P_X(t^c). This follows by combining the two cases above.

9. P_x(t) = M_X(ln(t)).
From (3.1), we have P_x(e^t) = M_x(t). Write t′ = e^t, so that t = ln(t′), to get the result.

10. P_{(X±μ)/σ}(t) = t^{±μ/σ}·P_X(t^{1/σ}). This is called the change of origin and scale transformation of the PGF. It follows by combining (6) and (7).

11. If X and Y are independent random variables, P_{X+Y}(t) = P_x(t)·P_y(t) and P_{X−Y}(t) = P_x(t)·P_Y(1/t).
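Property 11 can be demonstrated numerically (an illustrative sketch, with parameters of my choosing): multiplying the PGF coefficient lists of two independent variables is the same as convolving their PMFs, which for two binomials with a common p yields the binomial PMF of the sum.

```python
from math import comb

def binom_pmf(n, p):
    return [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

def poly_mul(a, b):
    # product of two PGFs as polynomial (coefficient) multiplication
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p = 0.4
px, py = binom_pmf(3, p), binom_pmf(5, p)

# P_{X+Y}(t) = P_X(t) P_Y(t): the product's coefficients are BINO(8, p) probabilities
pz = poly_mul(px, py)
expected = binom_pmf(8, p)
assert all(abs(a - b) < 1e-12 for a, b in zip(pz, expected))
```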

3.3 GENERATING FUNCTIONS FOR CDF

As the PGF of a random variable generates probabilities, it can be used to generate the sums of left-tail probabilities (CDF).

Definition 3.2 A GF in which the coefficients are the cumulative probabilities associated with a random variable is called the CDF generating function (CDFGF).

We have seen in Theorem 1.1 (page 9) that if F(x) is the OGF of the sequence (a₀, a₁, …, aₙ, …), finite or infinite, then F(x)/(1 − x) is the OGF of the sequence (a₀, a₀ + a₁, a₀ + a₁ + a₂, …). By replacing the aᵢ's by probabilities, we obtain a GF that generates the sums of probabilities:

  G(t) = Σ_{k=0}^{∞} (Σ_{j=0}^{k} p_j) t^k = p₀ + (p₀ + p₁)t + (p₀ + p₁ + p₂)t² + ⋯.    (3.14)

This works only for discrete distributions.

Example 3.11 (CDFGF of geometric distribution) Obtain the CDFGF of a geometric distribution with PMF f(x; p) = q^x p, x = 0, 1, 2, ….

Solution 3.11 The PGF of a geometric distribution is derived in Example 3.5 as p/(1 − qt), from which the CDFGF is obtained as G(t) = p(1 − t)^{−1}/(1 − qt). Expand both (1 − t)^{−1} and (1 − qt)^{−1} as infinite series and combine like powers to get

  G(t) = p[1 + t(1 + q) + t²(1 + q + q²) + t³(1 + q + q² + q³) + ⋯].    (3.15)

Write (1 + q + q² + ⋯ + q^k) as (1 − q^{k+1})/(1 − q), and cancel (1 − q) = p in the numerator to get the CDFGF of the geometric distribution as

  D_x(t) = (1 − q) + t(1 − q²) + t²(1 − q³) + t³(1 − q⁴) + ⋯.    (3.16)

The CDF P[X ≤ k] of the geometric distribution is related to the incomplete beta function as I_p(1, k) for k = 1, 2, 3, …. This is useful to mitigate underflow problems in computer memory when k is very large (when the CDF is sought in the extreme right tail). As I_p(1, k) = 1 − I_q(k, 1), where q = 1 − p, this can also be used to speed up the computation.

Example 3.12 (CDFGF of Poisson distribution) Obtain the CDFGF of a Poisson distribution with f(x; λ) = e^{−λ} λ^x/x!, x = 0, 1, …, ∞.

Solution 3.12 The PGF of the Poisson distribution is found in Example 3.4 as e^{−λ[1−t]}. From this the CDFGF follows easily as e^{−λ(1−t)}/(1 − t). Write the numerator as e^{−λ} e^{λt}. Now expand e^{λt} and (1 − t)^{−1} as infinite series and collect coefficients of like powers to get

  D_x(t) = e^{−λ} + t e^{−λ}[1 + λ] + t² e^{−λ}[1 + λ + λ²/2!] + t³ e^{−λ}[1 + λ + λ²/2! + λ³/3!] + ⋯.    (3.17)

Example 3.13 (CDFGF of negative binomial distribution) Obtain the CDFGF of a negative binomial distribution with PMF f(x; n, p) = C(x + n − 1, x) p^n q^x, x = 0, 1, 2, ….

Solution 3.13 The PGF of the negative binomial distribution is derived in Example 3.7 as [p/(1 − qt)]^n, from which the CDFGF is obtained as G(t) = p^n (1 − t)^{−1}(1 − qt)^{−n}. As the CDF P[X ≤ k] of the negative binomial distribution is I_p(n, k + 1) for k = 0, 1, 2, …, this can be used when k is very large.
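The partial-sum mechanism behind (3.14) is easy to check numerically (an illustrative sketch; the parameter value is mine): dividing a PGF by (1 − t) corresponds to a running cumulative sum of its coefficients, which for the geometric distribution reproduces F(k) = 1 − q^{k+1} as in (3.16).

```python
import itertools

p = 0.3
q = 1 - p
N = 50

pmf = [q**x * p for x in range(N)]       # geometric PMF q^x p
cdf = list(itertools.accumulate(pmf))    # coefficients of P(t)/(1 - t)

# coefficients of the CDFGF are 1 - q^(k+1), as in (3.16)
for k in range(N):
    assert abs(cdf[k] - (1 - q**(k + 1))) < 1e-12
```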


3.4 GENERATING FUNCTIONS FOR SURVIVAL FUNCTIONS

As the PGF of a random variable generates probabilities, it can be used to generate the sums of right-tail probabilities, called the survival function (SF).

Definition 3.3 A GF in which the coefficients are the survival probabilities associated with a random variable is called the SF generating function (SFGF).

First consider a finite distribution with PMF f(x) and range 0 to n. Let a₀, a₁, …, aₙ be the corresponding probabilities (coefficients). As t^n P_x(1/t) reverses the probabilities, t^n P_x(1/t)/(1 − t) is the SFGF. The SF is defined as P[X > k] (where the x = k case is not included) when the CDF is defined as P[X ≤ k].

Example 3.14 (SFGF of BINO(n, p) distribution) Find the SFGF of the BINO(n, p) distribution with PMF f(x; n, p) = C(n, x) p^x q^{n−x}, x = 0, 1, …, n.

Solution 3.14 By definition,

  P_x(t) = E(t^x) = Σ_{x=0}^{n} C(n, x) p^x q^{n−x} t^x = Σ_{x=0}^{n} C(n, x) (pt)^x q^{n−x} = (q + pt)^n.    (3.18)

Using the above result, the SFGF follows as [t^n/(1 − t)](q + p/t)^n. Taking t^n inside as a common factor, this simplifies to (p + qt)^n/(1 − t), which is the SFGF of the binomial distribution.

Example 3.15 (SFGF for geometric distribution) Obtain the SFGF of a geometric distribution with PMF f(x; p) = q^x p, x = 0, 1, 2, …; q = 1 − p.

Solution 3.15 First find P[X > n] to get the SF of the geometric distribution:

  SF(n) = P(X > n) = q^{n+1} p + q^{n+2} p + ⋯.    (3.19)

Take q^{n+1} p as a common factor, and use (1 + q + q² + ⋯) = 1/(1 − q) to get the SF as q^{n+1} p/(1 − q) = q^{n+1}. From this the SFGF follows as

  SF_x(t; p) = q + q² t + q³ t² + ⋯ = q/(1 − qt), where q = 1 − p.    (3.20)

Thus, the SFGF is q/(1 − qt). If the SF is defined as P[X ≥ k], this becomes 1/(1 − qt).

Example 3.16 (SFGF for Poisson distribution) Obtain the SFGF of a Poisson distribution with PMF f(x; λ) = e^{−λ} λ^x/x!, x = 0, 1, …, ∞.

Solution 3.16 The PGF of the Poisson distribution is found in Example 3.4 as e^{−λ[1−t]}. From this the CDFGF follows easily as e^{−λ(1−t)}/(1 − t). Write the SF as SF = 1 − CDF. Multiply both sides by t^n and sum over the range to get

  D_x(t) = 1/(1 − t) − e^{−λ(1−t)}/(1 − t) = [1 − e^{−λ(1−t)}]/(1 − t).    (3.21)

Example 3.17 (SFGF for negative binomial distribution) Obtain the SFGF of a negative binomial distribution with PMF f(x; k, p) = C(x + k − 1, x) p^k q^x, x = 0, 1, 2, ….

Solution 3.17 The PGF of the negative binomial distribution is derived in Example 3.7 as [p/(1 − qt)]^k, from which the SFGF is obtained as G(t) = [1 − p^k (1 − qt)^{−k}](1 − t)^{−1}. As the SF of the negative binomial distribution is I_q(c + 1, k) for c = 0, 1, 2, …, this can be used when c is very large.
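A numerical sketch of the geometric case (mine, with an arbitrary p): the coefficients of the SFGF q/(1 − qt), namely q^{n+1}, should match direct tail sums of the PMF.

```python
p = 0.25
q = 1 - p
N = 60

pmf = [q**x * p for x in range(500)]   # truncation point chosen so the tail is negligible

# direct tail sums P(X > n) versus the SFGF coefficients q^(n+1)
for n in range(N):
    tail = sum(pmf[n + 1:])
    assert abs(tail - q**(n + 1)) < 1e-10
```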

3.5 GENERATING FUNCTIONS FOR MEAN DEVIATION

It is shown on page 40 that P_x(t)/(1 − t) is the CDFGF. A generating function for the MD (MDGF) is found from the CDFGF as follows. Write the CDFGF as

  G_x(t) = g₀ + g₁t + g₂t² + g₃t³ + ⋯,    (3.22)

where g_k denotes the sum of the probabilities up to and including the (k + 1)th, and g₀ = p₀. Divide both sides by (1 − t), and denote G_x(t)/(1 − t) by H_x(t):

  H_x(t) = h₀ + h₁t + h₂t² + h₃t³ + ⋯,    (3.23)

where h_k = g₀ + g₁ + ⋯ + g_k, and h₀ = g₀. This step is valid by the same reasoning given for the CDFGF above, with p_k replaced by g_k. This series is also absolutely convergent, as the partial sums of the probabilities are all less than or equal to one for any discrete distribution. Now consider the MD of a discrete distribution with range [a, b]:

  E|X − μ| = Σ_{x=a}^{b} |x − μ| p(x),    (3.24)

where a is the lower and b the upper limit of the distribution; a could be −∞ and b could be +∞. As the mean of the random variable X is E(X) = μ, it follows that E(X − μ) = 0, where E() is the expectation operator. Split the range of summation from a to ⌊μ⌋, and from ⌈μ⌉ to b:

  E(X − μ) = Σ_{x=a}^{⌊μ⌋} (x − μ) p(x) + Σ_{x=⌈μ⌉}^{b} (x − μ) p(x) = 0.    (3.25)


Here ⌊μ⌋ denotes the floor operator (greatest integer not exceeding μ), and ⌈μ⌉ denotes the ceiling operator (least integer not less than μ); see Chapter 4, §4.1.1, p. 62. Note that ⌈μ⌉ = ⌊μ⌋ + 1 when μ is not an integer. Write (3.25) as Σ_{x=⌈μ⌉}^{b} (x − μ)p(x) = −Σ_{x=a}^{⌊μ⌋} (x − μ)p(x), and put it in (3.24) to get

  E|X − μ| = Σ_{x=a}^{⌊μ⌋} (μ − x)p(x) − Σ_{x=a}^{⌊μ⌋} (x − μ)p(x) = 2 Σ_{x=a}^{⌊μ⌋} (μ − x)p(x).

Apply the summation term by term inside the bracket to get

  E|X − μ| = 2 (μ F(⌊μ⌋) − Σ_{x=a}^{⌊μ⌋} x p(x)).    (3.26)

Sum the terms inside the bracket in reverse order of the index variable as

  Σ_{x=a}^{⌊μ⌋} x p(x) = ⌊μ⌋·p(⌊μ⌋) + (⌊μ⌋ − 1)·p(⌊μ⌋ − 1) + ⋯ + a·p(a).    (3.27)

Collect like terms on the RHS to get

  Σ_{x=a}^{⌊μ⌋} x p(x) = μ F(⌊μ⌋) − Σ_{k=a}^{⌊μ⌋} (μ − k) p(k),    (3.28)

where F(⌊μ⌋) = p(⌊μ⌋) + p(⌊μ⌋ − 1) + ⋯ + p(a). Now substitute in (3.26). The μF(⌊μ⌋) term cancels out, and we get

  E|X − μ| = 2 Σ_{k=a}^{⌊μ⌋} (μ − k) p(k).    (3.29)

Write (3.29) as two summations (assuming for the moment that μ is an integer, so that each weight μ − k is a count of whole terms):

  E|X − μ| = 2 Σ_{x=a}^{⌊μ⌋−1} Σ_{i=a}^{x} p(i),    (3.30)

and substitute Σ_{i=a}^{x} p(i) = F(x) to get

  MD = 2 Σ_{x=a}^{⌊μ⌋−1} F(x),    (3.31)

where μ is the arithmetic mean and F(x) is the CDF. If the mean is not an integer, a correction term 2δF(c), where δ = μ − c and c = ⌊μ⌋, must be added to get the correct MD. This correction term reduces to F(⌊μ⌋) when μ is a half-integer (e.g., 3.5).
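The CDF-sum identity plus the correction term can be verified on small distributions (a self-contained sketch; the two uniform distributions below are mine, chosen so that one has an integer mean and one does not):

```python
def mean_deviation_direct(pmf):
    # MD from the definition (3.24); support is 0, 1, 2, ...
    mu = sum(x * p for x, p in enumerate(pmf))
    return sum(abs(x - mu) * p for x, p in enumerate(pmf))

def mean_deviation_cdf(pmf):
    # MD = 2 * sum_{x=0}^{floor(mu)-1} F(x) + 2*delta*F(floor(mu)), delta = mu - floor(mu)
    mu = sum(x * p for x, p in enumerate(pmf))
    c = int(mu)
    delta = mu - c
    F, cdf = 0.0, []
    for p in pmf:
        F += p
        cdf.append(F)
    return 2 * sum(cdf[:c]) + 2 * delta * cdf[c]

uniform5 = [0.2] * 5     # uniform on 0..4, mean 2 (integer)
uniform4 = [0.25] * 4    # uniform on 0..3, mean 1.5 (half-integer)
for dist in (uniform5, uniform4):
    assert abs(mean_deviation_direct(dist) - mean_deviation_cdf(dist)) < 1e-12
```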


Theorem 3.1 (Mean deviation generating function (MDGF)) The MD of a discrete random variable is twice the coefficient of t^{⌊μ⌋−1} in the power series expansion of (1 − t)^{−2} P_x(t), where μ is the mean, ⌊μ⌋ denotes the integer part of μ, and P_x(t) is the PGF, with a correction term 2δF(⌊μ⌋) added when μ is not an integer. In other words, the MDGF is 2P_x(t)/(1 − t)².

3.6 MD OF SOME DISTRIBUTIONS

This section derives the MD of some discrete distributions using the above theorem. For each of the following distributions, a correction term 2δF(⌊μ⌋) must be added when the mean is not an integer, where δ = μ − ⌊μ⌋ is the fractional part and F() denotes the CDF. The results obtained using the direct definition (3.24) and Theorem 3.1 are then compared to check whether they tally.

3.6.1 MD OF GEOMETRIC DISTRIBUTION

The PGF of the geometric distribution is p/(1 − qt). Expanding (1 − qt)^{−1} as an infinite series we get P_x(t) = p(1 + qt + q²t² + q³t³ + ⋯). The CDFGF of a geometric distribution is obtained using 1 + q + q² + ⋯ + q^k = (1 − q^{k+1})/(1 − q) = (1 − q^{k+1})/p as

  G(t) = (1 − q) + t(1 − q²) + t²(1 − q³) + t³(1 − q⁴) + ⋯,    (3.32)

because the p in the numerator cancels with 1 − q = p in the denominator. Denote (1 − q^{k+1}) by g_k, and obtain the MDGF with coefficients h_k = Σ_{j=0}^{k} g_j = Σ_{j=0}^{k} (1 − q^{j+1}). As the mean of a geometric distribution is q/p, we can simply fetch the coefficient of t^{⌊μ⌋−1} = t^{⌊q/p⌋−1} in H(t) and multiply by 2 to get the MD as 2 Σ_{k=0}^{⌊q/p⌋−1} (1 − q^{k+1}), where ⌊q/p⌋ denotes the integer part. Results are shown in Table 3.2.
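For integer values of the mean q/p (where no correction term is needed) this closed form can be checked directly against Σ|x − μ|p(x). The sketch below is mine; the truncation point, tolerances, and the floating-point guard on ⌊q/p⌋ are implementation choices.

```python
def geometric_md_formula(p):
    # MD = 2 * sum_{k=0}^{floor(q/p)-1} (1 - q^(k+1))
    q = 1 - p
    k_max = int(q / p + 1e-9)   # guard against 0.9/0.1 rounding just below 9
    return 2 * sum(1 - q**(k + 1) for k in range(k_max))

def geometric_md_direct(p, terms=200000):
    q = 1 - p
    mu = q / p
    return sum(abs(x - mu) * q**x * p for x in range(terms))

for p in (0.1, 0.2):    # q/p = 9 and 4: integer means
    assert abs(geometric_md_formula(p) - geometric_md_direct(p)) < 1e-9

# matches Table 3.2: p = 0.1 gives 6.973568802..., p = 0.2 gives 3.2768
assert abs(geometric_md_formula(0.1) - 6.973568802) < 1e-9
assert abs(geometric_md_formula(0.2) - 3.2768) < 1e-12
```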

3.6.2 MD OF BINOMIAL DISTRIBUTION

The PGF of the binomial distribution BINO(n, p) is (q + pt)^n. Hence the MDGF is obtained as 2(q + pt)^n/(1 − t)². The mean is μ = np; let k = ⌊np⌋. Expand the denominator as an infinite series, and collect the coefficient of t^{k−1} to get the MD as

  MD = 2 Σ_{i=0}^{k} (k − i) C(n, i) q^{n−i} p^i,  where k = ⌊np⌋.    (3.33)

Results are shown in Table 3.3.


Table 3.2: MD of geometric distribution

  p      q/p     Equation (3.24)         Theorem 3.1
  0.001  999     735.390849541927765     735.390849541928333
  0.010   99      73.2064682546458982     73.2064682546459409
  0.050   19      15.5052868275794165     15.5052868275794182
  0.100    9       6.9735688020000008      6.9735688020000017
  0.150    5.666   5.1787793749999995      5.1787793749999995
  0.200    4       3.2768000000000006      3.2768000000000006
  0.300    2.333   2.3000000000000003      2.3000000000000003
  0.400    1.5     1.5999999999999996      1.5999999999999996

The results are found to tally up to several decimal places.

Table 3.3: MD of binomial distribution

  n   p    λ = np  Equation (3.24)       Theorem 3.1
  10  0.1   1.0    0.3486784401000001    0.3486784401000001
  10  0.2   2.0    0.4831838208000002    0.4831838208000003
  10  0.3   3.0    0.5603386571999996    0.5603386571999998
  10  0.4   4.0    0.6019743743999999    0.6019743743999999
  15  0.1   1.5    0.3088366981419736    0.3088366981419736
  15  0.3   4.5    0.6121447677514255    0.6121447677514255
  15  0.4   6.0    0.7437513790586878    0.7437513790586879
  25  0.1   2.5    0.4785986584612353    0.4785986584612353
  25  0.2   5.0    0.7840604101089519    0.7840604101089519
  25  0.3   7.5    0.8388488344905607    0.8388488344905606
  25  0.4  10.0    0.9669476321691824    0.9669476321691826
  35  0.1   3.5    0.6067834363029919    0.6067834363029919
  35  0.2   7.0    0.9323498859332381    0.9323498859332382
  35  0.3  10.5    1.0176197430103460    1.0176197430103460
  35  0.4  14.0    1.1475610716258595    1.1475610716258591

The results are found to be correct up to the 15th decimal place.

3.6.3 MD OF POISSON DISTRIBUTION

The PGF of the Poisson distribution P(λ) is e^{λ(t−1)}. According to Theorem 3.1, the MD is the coefficient of t^{⌊λ⌋−1} in the power series expansion of 2e^{λ(t−1)}/(1 − t)². Expanding the denominator as a power series and collecting the coefficient of t^{⌊λ⌋−1} gives

  MD = 2e^{−λ} Σ_{i=0}^{k} (k − i) λ^i/i!,  where k = ⌊λ⌋.    (3.34)
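A check of (3.34) together with the non-integer-mean correction term 2δF(⌊λ⌋) (my sketch; the two λ values and their targets are taken from Table 3.4):

```python
import math

def poisson_md(lam):
    # MD = 2 e^{-lam} sum_{i=0}^{k} (k - i) lam^i / i!, k = floor(lam),
    # plus the correction 2*delta*F(k) when lam is not an integer
    k = math.floor(lam)
    delta = lam - k
    base = 2 * math.exp(-lam) * sum((k - i) * lam**i / math.factorial(i)
                                    for i in range(k + 1))
    cdf_k = math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))
    return base + 2 * delta * cdf_k

assert abs(poisson_md(0.7) - 0.6952194253079733) < 1e-12
assert abs(poisson_md(2.1) - 1.1340689820508654) < 1e-9
```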

Results are shown in Table 3.4.

Table 3.4: MD of Poisson distribution

  λ       Equation (3.24)       Theorem 3.1
   0.70   0.6952194253079733    0.6952194253079733
   1.40   0.9666600986510974    0.9666600986510974
   2.10   1.1340689820508654    1.1340689820508656
   2.80   1.3349024947487851    1.3349024947487851
   3.60   1.5297787134010581    1.5297787134010581
   5.40   1.8664703972214189    1.8664703972214194
   7.50   2.1972574824620423    2.1972574824620423
  10.20   2.5472122622731961    2.5472122622731961
  15.30   3.1252711614667779    3.1252711614667779
  20.40   3.6102016794214631    3.6102016794214631

The results are found to be correct up to the 14th decimal place.

3.6.4 MD OF NEGATIVE BINOMIAL DISTRIBUTION

The PGF of the negative binomial distribution C(x + n − 1, x) p^n q^x is (p/(1 − qt))^n. Hence the MDGF is obtained as 2p^n (1 − qt)^{−n}/(1 − t)². The mean is μ = nq/p; let k = ⌊nq/p⌋. Expanding the denominator as a power series and collecting the coefficient of t^{⌊nq/p⌋−1} gives

  MD = 2p^n Σ_{i=0}^{k} (k − i) C(i + n − 1, n − 1) q^i,  where k = ⌊nq/p⌋.    (3.35)

Results are shown in Table 3.5.


Table 3.5: MD of negative binomial distribution

  n   p     Equation (3.24)        Theorem 3.1
   5  0.05  35.3314245800100437    35.3314245800100224
  10  0.05  49.8663822747720644    49.8663822747720573
  15  0.05  60.9783705724452503    60.9783705724452645
  20  0.05  70.3320661142619059    70.3320661142619201
  25  0.05  78.5660039707308897    78.5660039707308755
   5  0.10  17.7808172669112530    17.7808172669112672
  10  0.10  24.8331820975239204    24.8331820975239133
  15  0.10  30.2315309200687778    30.2315309200687778
  20  0.10  34.7779951761383117    34.7779951761383046
  25  0.10  38.7813037771337150    38.7813037771337221
   5  0.20   8.9992554825852515     8.9992554825852551
  10  0.20  12.2980395873617585    12.2980395873617638
  15  0.20  14.8313565373676042    14.8313565373676024
  20  0.20  16.9677116292828813    16.9677116292828778
  25  0.20  18.8501947636393652    18.8501947636393616
   5  0.30   6.7308110759283917     6.7308110759283926
  10  0.30   8.4480946703729014     8.4480946703729014
  15  0.30   9.6706171799619973     9.6706171799619973
  20  0.30  11.6615242500303573    11.6615242500303555
  25  0.30  12.5086180380679686    12.5086180380679721
   5  0.40   5.0909203660800006     5.0909203660800006
  15  0.40   7.5620265857420819     7.5620265857420819
  25  0.40   9.2857207235338404     9.2857207235338421

The results are found to be correct up to the 13th decimal place.

3.7 MOMENT GENERATING FUNCTIONS (MGF)

The MGF of a random variable is used to generate the moments algebraically. Let X be a discrete random variable defined for all values of x. As e^{tx} has an infinite expansion in powers of x, namely e^{tx} = 1 + (tx)/1! + (tx)²/2! + ⋯ + (tx)^n/n! + ⋯, we multiply both sides by f(x) and take


expectation on both sides to get

  M_x(t) = E(e^{tx}) = Σ_x e^{tx} p(x) if X is discrete;  ∫_{−∞}^{∞} e^{tx} f(x) dx if X is continuous.

In the discrete case this becomes

  M_x(t) = Σ_{x=0}^{∞} e^{tx} f(x) = 1 + Σ_{x=0}^{∞} (tx)/1! f(x) + Σ_{x=0}^{∞} (tx)²/2! f(x) + ⋯.    (3.36)

Replace each of the sums Σ_{x=0}^{∞} x^k f(x) by μ′_k to obtain the following series (which is theoretically defined for all values of t, but depends on the distribution):

  M_x(t) = 1 + μ′₁ t/1! + μ′₂ t²/2! + ⋯ + μ′_k t^k/k! + ⋯.    (3.37)

An analogous result holds for the continuous case by replacing summation with integration. By choosing |t| < 1, the above series can be made convergent for most random variables.

Theorem 3.2 The MGF (page 48) and the PGF are connected as M_x(t) = P_x(e^t), and M_X(t = 0) = P_x(e⁰) = P_x(1) = 1.

Proof. This follows trivially by replacing t by e^t in (3.2). Note that it is also applicable to continuous random variables. Put t = 0, and use e⁰ = 1 to get the second part. ∎

Example 3.18 (MGF of binomial distribution from PGF) If the PGF of BINO(n, p) is (q + pt)^n, obtain the MGF and derive the mean.

Solution 3.18 The MGF can be found from Equation (3.6) by replacing t by e^t. This gives M_x(t) = (q + pe^t)^n. Take the log to get log(M_x(t)) = n·log(q + pe^t). Next, differentiate as above: M_X′(t)/M_x(t) = n·pe^t/(q + pe^t). Put t = 0 to get the mean as np. Take the log again to get log(M_X′(t)) − log(M_x(t)) = log(np) + t − log(q + pe^t). Differentiate again, and denote M_X′(t) simply by M′, etc. This gives M″/M′ − M′/M = 1 − pe^t/(q + pe^t). Put t = 0 throughout and use M′(0) = np and M(0) = 1 to get M″(0)/np − np = 1 − p, or equivalently M″(0) = (q + np)·np. Finally, use σ² = M_X″(0) − [M_X′(0)]² = (q + np)·np − (np)² = npq.
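The moment-extraction property can be illustrated numerically (my sketch; central finite differences stand in for the symbolic derivatives, with step size and tolerances chosen accordingly):

```python
import math

n, p = 12, 0.35
q = 1 - p
M = lambda t: (q + p * math.exp(t)) ** n   # MGF of BINO(n, p)

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)               # ~ M'(0)  = E(X)
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2     # ~ M''(0) = E(X^2)

assert abs(m1 - n * p) < 1e-6               # mean np
assert abs((m2 - m1**2) - n * p * q) < 1e-4 # variance npq
```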

3.7.1 PROPERTIES OF MOMENT GENERATING FUNCTIONS

1. The MGF of an origin-changed variate can be found from the MGF of the original variable as

  M_{X±b}(t) = e^{±bt}·M_x(t).    (3.38)

This follows trivially by writing E[e^{t[x±b]}] as e^{±bt}·E[e^{tx}].


2. The MGF of a scale-changed variate can be found from the MGF of the original variable as

  M_{cX}(t) = M_x(c·t).    (3.39)

This follows trivially by writing E[e^{t(cx)}] as E[e^{(ct)x}].

3. The MGF of an origin-and-scale-changed variate can be found from the MGF of the original variable as

  M_{cX±b}(t) = e^{±bt}·M_x(c·t).    (3.40)

This follows by combining both cases above.

Theorem 3.3 (MGF of a sum) The MGF of a sum of independent random variables is the product of their MGFs. Symbolically, M_{X+Y}(t) = M_x(t)·M_y(t).

Proof. We prove the result for the discrete case. M_{X+Y}(t) = E(e^{t(x+y)}) = E(e^{tx} e^{ty}). If X and Y are independent, we write the RHS as [Σ_x e^{tx} f(x)]·[Σ_y e^{ty} f(y)] = M_x(t)·M_y(t). The proof for the continuous case follows similarly. This result can be extended to any number of pairwise independent random variables. ∎

If X₁, X₂, …, Xₙ are independent and Y = Σ_i X_i, then M_y(t) = Π_i M_{X_i}(t).

Example 3.19 (Moments from M_x(t)) Prove that E(X) = ∂/∂t M_x(t)|_{t=0} and E(X²) = ∂²/∂t² M_x(t)|_{t=0}.
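Theorem 3.3 can be spot-checked numerically (my sketch, with illustrative parameters): for two independent binomials sharing p, the product (q + pe^t)^{n₁}(q + pe^t)^{n₂} equals the MGF of BINO(n₁ + n₂, p) at every t.

```python
import math

p, n1, n2 = 0.3, 4, 7
q = 1 - p

def M(n, t):
    # MGF of BINO(n, p)
    return (q + p * math.exp(t)) ** n

# M_{X+Y}(t) = M_X(t) * M_Y(t) = MGF of BINO(n1 + n2, p), at sample points
for t in (-1.0, -0.3, 0.0, 0.5, 1.2):
    assert abs(M(n1, t) * M(n2, t) - M(n1 + n2, t)) < 1e-9
```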

Solution 3.19 We know that M_x(t) = E(e^{tx}). Differentiating w.r.t. t gives ∂/∂t M_x(t) = ∂/∂t E(e^{tx}) = E(∂/∂t e^{tx}) = E(x e^{tx}), because x is treated as a constant (t is our variable). Putting t = 0 on the RHS we get the result, as e⁰ = 1. Differentiating a second time, we get ∂²/∂t² M_x(t) = ∂/∂t E(x e^{tx}) = E(x² e^{tx}). Putting t = 0 on the RHS we get M_x″(t = 0) = E(x²). Repeated application of this operation allows us to find the k-th moment as M_x^{(k)}(t = 0) = E(x^k). This gives σ² = M_x″(t = 0) − [M_x′(t = 0)]².

Example 3.20 (MGF of NBINO(k, p) distribution) Find the MGF of the negative binomial distribution NBINO(k, p), and obtain the mean.

Solution 3.20 By definition,

  M_x(t) = E(e^{tx}) = Σ_{x=0}^{∞} C(x + k − 1, x) p^k q^x e^{tx} = Σ_{x=0}^{∞} C(x + k − 1, x) p^k (qe^t)^x.    (3.41)

As p^k is a constant, this is easily seen to be [p/(1 − qe^t)]^k. Take the log, differentiate w.r.t. t, and put t = 0 to get E(X) = kq/p. By setting p = 1/(1 + r) and q = 1 − p = r/(1 + r), we get NBINO(k, 1/(1 + r)) with PMF C(x + k − 1, x)[1/(1 + r)]^k [r/(1 + r)]^x and MGF 1/[1 + r(1 − e^t)]^k.


Example 3.21 (MGF of a Poisson distribution) Find the MGF for the central moments of a Poisson distribution. Hence, show that μ_{r+1} = λ[C(r, 1) μ_{r−1} + C(r, 2) μ_{r−2} + ⋯ + C(r, r) μ₀].

Solution 3.21 First consider the ordinary MGF, M_x(t) = E(e^{tx}) = Σ_{x=0}^{∞} e^{tx} e^{−λ} λ^x/x! = e^{−λ} Σ_{x=0}^{∞} (λe^t)^x/x! = e^{−λ} e^{λe^t} = e^{λ(e^t−1)}. As the mean of a Poisson distribution is μ = λ, we use Property 1 to get the central-moment MGF:

  M_{x−λ}(t) = e^{−λt} M_x(t) = e^{λ(e^t−1)−λt} = Σ_{j=0}^{∞} μ_j t^j/j!.    (3.42)

Differentiate w.r.t. t to get

  (d/dt) M_{x−λ}(t) = e^{λ(e^t−1)−λt} (λe^t − λ) = λ(e^t − 1) Σ_{j=0}^{∞} μ_j t^j/j!.    (3.43)

Expand e^t − 1 as an infinite series. The RHS becomes λ Σ_{k=1}^{∞} t^k/k! Σ_{j=0}^{∞} μ_j t^j/j!, which can be written as λ Σ_{k=1}^{∞} Σ_{j=0}^{∞} μ_j t^{j+k}/(j! k!). The LHS is Σ_{r=0}^{∞} μ_{r+1} t^r/r!. Equate coefficients of t^r on both sides to get

  μ_{r+1}/r! = λ[μ_{r−1}/(1!(r−1)!) + μ_{r−2}/(2!(r−2)!) + ⋯ + μ₀/(r! 0!)].

Cross-multiplying and identifying the binomial coefficients, this becomes

  μ_{r+1} = λ[C(r, 1) μ_{r−1} + C(r, 2) μ_{r−2} + ⋯ + C(r, r) μ₀].    (3.44)

Example 3.22 (MGF of power-series distribution) Find the MGF of the power-series distribution.

Solution 3.22 The PMF of the power-series distribution is P(X = k) = a_k θ^k/F(θ), where k takes positive integer values. Then M_x(t) = E(e^{tx}) = Σ_k e^{tk} a_k θ^k/F(θ) = F(θe^t)/F(θ).

Theorem 3.4 If E[|X|^k] exists and is finite, then E[|X|^j] exists and is finite for each j < k.

Proof. We prove the result for the continuous case. The proof for the discrete case follows easily by replacing integration with summation. As E[|X|^k] exists, we have ∫_x |x|^k dF(x) < ∞. Now consider an arbitrary j < k, for which

  ∫_{−∞}^{∞} |x|^j dF(x) = ∫_{−1}^{+1} |x|^j dF(x) + ∫_{|x|>1} |x|^j dF(x).    (3.45)


As j < k, we have |x|^j < |x|^k for |x| > 1. Hence (3.45) becomes

  ∫_{−∞}^{∞} |x|^j dF(x) < ∫_{−1}^{+1} |x|^j dF(x) + ∫_{|x|>1} |x|^k dF(x) ≤ ∫_{−1}^{+1} dF(x) + ∫_{|x|>1} |x|^k dF(x).    (3.46)

The RHS of (3.46) is upper bounded by 1 + E[|X|^k], and is < ∞. This proves that the LHS exists for each j < k. ∎

3.8 CHARACTERISTIC FUNCTIONS

The MGF of a distribution may not always exist. Those cases can be dealt with in the complex domain by finding the expected value of e^{itx}, where i = √(−1), which always exists. Thus, the ChF of a random variable is defined as

  φ_x(t) = E(e^{itx}) = Σ_{x=−∞}^{∞} e^{itx} p_x if X is discrete;  ∫_{−∞}^{∞} e^{itx} f(x) dx if X is continuous.

We have seen above that the ChF, if it exists, can generate the moments. Irrespective of whether the random variable is discrete or continuous, we can expand the ChF as a Maclaurin series:

  φ_x(t) = Σ_{j=0}^{∞} μ′_j (it)^j/j! = φ(0) + t·φ′(0) + (t²/2!)·φ″(0) + ⋯,    (3.47)

which is convergent for an appropriate choice of t (which depends on the distribution). As φ_x(t) in the continuous case can be represented as φ_x(t) = ∫_{−∞}^{∞} e^{itx} dF(x), successive derivatives w.r.t. t give ∫ i^n x^n dF(x) = i^n μ′_n. Define δ^{(n)}(x) as the n-th derivative of the delta function. Then the PMF can be written as an infinite sum:

  f(x) = Σ_{j=0}^{∞} (−1)^j μ′_j δ^{(j)}(x)/j!.    (3.48)

Characteristic functions are used to study various properties of a random variable analytically.

Example 3.23 (ChF of Poisson distribution) Find the ChF of the Poisson distribution.

Solution 3.23 Consider φ_x(t) = E(e^{itx}) = Σ_{k=0}^{∞} e^{itk} e^{−λ} λ^k/k!. Take e^{−λ} outside the summation to get e^{−λ} Σ_{k=0}^{∞} (λe^{it})^k/k! = e^{−λ} e^{λe^{it}} = e^{λ(e^{it}−1)}.

3.8.1 PROPERTIES OF CHARACTERISTIC FUNCTIONS

Characteristic functions are Fourier transforms of the corresponding PMF. As Fourier transforms have an inverse, we can invert the ChF to get the PMF; hence, there is a one-to-one correspondence between the ChF and the PMF. This is especially useful for continuous distributions, as shown below (Table 3.6). There are many simple properties satisfied by the ChF.

1. φ(t)* = φ(−t), φ(0) = 1, and |φ(±t)| ≤ 1. In words, the complex conjugate of the ChF is the same as that obtained by replacing t with −t in the ChF. The assertion φ(0) = 1 follows easily because t = 0 makes e^{itx} equal to 1.

2. φ_{aX+b}(t) = e^{ibt} φ_x(at). This result follows directly from the definition.

3. If X and Y are independent, φ_{aX+bY}(t) = φ_x(at)·φ_y(bt). Putting a = b = 1, we get φ_{X+Y}(t) = φ_x(t)·φ_y(t) if X and Y are independent.

4. φ_x(t) is continuous in t, and convex for t > 0. This means that if t₁ and t₂ are two values of t > 0, then φ_x((t₁ + t₂)/2) ≤ ½[φ_x(t₁) + φ_x(t₂)].

5. ∂^n φ_x(t)/∂t^n |_{t=0} = i^n E(X^n).

Table 3.6: Table of characteristic functions

  Distribution        Density Function                               Characteristic Function
  Bernoulli           p^x (1 − p)^{1−x}                              q + pe^{it}
  Binomial            C(n, x) p^x q^{n−x}                            (q + pe^{it})^n
  Negative binomial   C(x + k − 1, x) p^k q^x                        p^k (1 − qe^{it})^{−k}
  Poisson             e^{−λ} λ^x/x!                                  exp(λ(e^{it} − 1))
  Rectangular         f(x) = Pr[X = k] = 1/N                         (1 − e^{itN})/[N(e^{−it} − 1)]
  Geometric           q^x p                                          p/(1 − qe^{it})
  Logarithmic         q^x/[−x log p]                                 ln(1 − qe^{it})/ln(1 − q)
  Multinomial         n!/Π_{i=1}^{k} x_i! · Π_{i=1}^{k} p_i^{x_i}    (Σ_{j=1}^{k} p_j e^{it_j})^n

Example 3.24 (Symmetric random variables) Prove that the random variable X is symmetric about the origin iff the ChF φ_x(t) is real-valued for all t.

Solution 3.24 Assume that X is symmetric about the origin, so that f(−x) = f(x). Then for a bounded and odd Borel function g(x) we have ∫ g(x) dF(x) = 0. As sin(tx) is odd, this gives ∫ sin(tx) dF(x) = 0. Hence, φ_x(t) = E(e^{itx}) = E[cos(tx)] is real. Also, as φ_{−x}(t) = φ_x(−t) = φ_x(t)* = φ_x(t), F_X(x) and F_{−X}(x) are the same.

Remark 3.1 The characteristic function uniquely determines a distribution. The inversion theorem provides a means to find the PMF from the characteristic function as f(x) = (1/(2π)) ∫_{−∞}^{∞} φ_x(t) e^{−itx} dt.

Uniqueness Theorem Let random variables X and Y have MGFs M_x(t) and M_y(t), respectively. If M_x(t) = M_y(t) for all t, then X and Y have the same probability distribution. This is very similar to the corresponding result for PGFs.

3.9 CUMULANT GENERATING FUNCTIONS

The CGF is slightly easier to work with for the exponential, normal, and Poisson distributions. It is defined in terms of the MGF as K_x(t) = ln(M_x(t)) = Σ_{j=1}^∞ κ_j t^j/j!, where κ_j is the j-th cumulant. This relationship shows that cumulants are polynomial functions of moments (low-order cumulants can also be exactly equal to the corresponding moments). For example, for the general univariate normal distribution with mean μ'_1 = μ and variance μ_2 = σ², the first and second cumulants are, respectively, κ_1 = μ and κ_2 = σ².

Theorem 3.5 Prove that K_{aX+b}(t) = bt + K_x(at).

Proof. K_{aX+b}(t) = log(M_{aX+b}(t)) = log(e^{bt} M_X(at)) = bt + log(M_X(at)) = bt + K_x(at), using log(ab) = log(a) + log(b) and log(e^x) = x. □

Theorem 3.6 The CGF of an origin-and-scale changed variable is K_{(X−μ)/σ}(t) = (−μ/σ)t + K_x(t/σ).

Proof. This follows from the above theorem by setting a = 1/σ and b = −μ/σ. The cumulants can be obtained from moments and vice versa. This holds for cumulants about any origin (including zero) in terms of moments about the same origin. □

3.9.1 RELATIONS AMONG MOMENTS AND CUMULANTS

The central and raw moments are related as μ_k = E(X − μ)^k = Σ_{j=0}^k C(k, j) (−μ)^{k−j} μ'_j [Shanmugam and Chattamvelli, 2016, Ch. 8]. As the CGFs of some distributions are easier to work with, we can find cumulants and use the relationship with moments to obtain the desired moment.

Theorem 3.7 The r-th cumulant can be obtained from the CGF as κ_r = ∂^r K_x(t)/∂t^r evaluated at t = 0.

Proof. We have

K_x(t) = Σ_{r=0}^∞ κ_r t^r/r! = κ_0 + κ_1 t + κ_2 t²/2! + ⋯ .   (3.49)

As done in the case of the MGF, differentiate (3.49) k times and put t = 0 to get the k-th cumulant. □

Example 3.25 (Moments from cumulants) Prove that κ_1 = μ'_1, κ_2 = μ_2 = σ², and κ_3 = μ_3 = E(X − μ)³.

Solution 3.25 We know that K_x(t) = log(M_x(t)), or equivalently M_x(t) = exp(K_x(t)). We expand M_x(t) = 1 + t μ'_1/1! + t² μ'_2/2! + t³ μ'_3/3! + ⋯ and substitute for K_x(t) also to get

Σ_{r=0}^∞ μ'_r t^r/r! = exp( Σ_{r=0}^∞ κ_r t^r/r! ).   (3.50)

Differentiate n times, and put t = 0 to get

μ'_{n+1} = Σ_{j=0}^n C(n, j) κ_{j+1} μ'_{n−j}.   (3.51)

Put n = 0, 1, … to get the desired result. There is another way to get the low-order cumulants. Truncate M_x(t) as 1 + t μ'_1/1! + t² μ'_2/2! + t³ μ'_3/3!. Expand the RHS using log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ⋯, where x = (t/1!) μ'_1 + (t²/2!) μ'_2 + (t³/3!) μ'_3, and collect similar terms to get

K_x(t) = μ'_1 t + (μ'_2 − μ'_1²) t²/2! + (μ'_3 − 3 μ'_1 μ'_2 + 2 μ'_1³) t³/3! + ⋯ .   (3.52)

Compare the coefficients of t^k/k! to get κ_1 = μ'_1, κ_2 = μ'_2 − μ'_1² = σ², and κ_3 = μ'_3 − 3 μ'_1 μ'_2 + 2 μ'_1³ = E(X − μ)³.

Next write M_x(t) as E[1 + tx/1! + (tx)²/2! + ⋯], and expand K_x(t) = log(M_x(t)) as an infinite series to get

Σ_{r=0}^∞ κ_r t^r/r! = E[tx/1! + (tx)²/2! + ⋯] − (E[tx/1! + (tx)²/2! + ⋯])²/2 + ⋯ .   (3.53)

Equate like coefficients of t to get

κ_{n+1} = μ'_{n+1} − Σ_{j=0}^{n−1} C(n, j) κ_{j+1} μ'_{n−j}.   (3.54)
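The conversion in (3.54) is easy to check numerically. The following Python sketch (the function name is ours, not from the text) computes cumulants from raw moments with this recurrence; the normal and Poisson distributions make convenient test cases because their cumulants are known:

```python
from math import comb

def cumulants_from_raw_moments(mu):
    """Convert raw moments mu[r] = E[X^r] (with mu[0] = 1) into cumulants,
    using kappa_{n+1} = mu'_{n+1} - sum_{j=0}^{n-1} C(n,j) kappa_{j+1} mu'_{n-j}."""
    kappa = [0] * len(mu)                       # kappa[0] is unused
    for n in range(len(mu) - 1):
        s = sum(comb(n, j) * kappa[j + 1] * mu[n - j] for j in range(n))
        kappa[n + 1] = mu[n + 1] - s
    return kappa[1:]

# Standard normal: raw moments 1, 0, 1, 0, 3 -> cumulants 0, 1, 0, 0
print(cumulants_from_raw_moments([1, 0, 1, 0, 3]))   # -> [0, 1, 0, 0]
# Poisson(2): raw moments 1, 2, 6, 22 -> every cumulant equals 2
print(cumulants_from_raw_moments([1, 2, 6, 22]))     # -> [2, 2, 2]
```

The first three outputs reproduce κ_1 = μ'_1, κ_2 = μ'_2 − μ'_1², and κ_3 = μ'_3 − 3μ'_1μ'_2 + 2μ'_1³ from (3.52).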


3.10 FACTORIAL MOMENT GENERATING FUNCTIONS

There are two types of factorial moments, known as falling factorial and rising factorial moments. Among these, the falling factorial moments are more popular. The k-th (falling) factorial moment of X is defined as E[X(X−1)(X−2)⋯(X−k+1)] = E[X!/(X−k)!], where k is an integer ≥ 1. It is easier to evaluate for those distributions that have an x! or Γ(x+1) in the denominator (e.g., the binomial, negative binomial, hypergeometric, and Poisson distributions). The factorial moments and ordinary moments are related through the Stirling numbers of the first kind as follows:

X!/(X−r)! = Σ_{j=0}^r s(r, j) X^j  ⇒  μ'_{(r)} = Σ_{j=0}^r s(r, j) μ'_j.   (3.55)

A reverse relationship exists between ordinary and factorial moments using the identity X^r = Σ_{j=0}^r S(r, j) X!/(X−j)!, so that μ'_r = Σ_{j=0}^r S(r, j) μ'_{(j)}, where S(r, j) is the Stirling number of the second kind. The FCGF can analogously be defined as FK_x(t) = ln(FM_x(t)), where FM_x(t) is the FMGF, and ln denotes log to the base e.

There are two ways to get (falling) factorial moments. The simplest way is by differentiating the PGF (see Section 3.2.1 on page 39). As P_x(t) = E(t^x) = E(e^{x log(t)}) = M_X(log(t)), we could differentiate it k times as in Equation (3.13), and put t = 1 to get factorial moments. We define the FMGF as Γ_x(t) = E[(1+t)^x], because if we expand it using the binomial theorem we get

E[(1+t)^x] = E[1 + tx + t² x(x−1)/2! + t³ x(x−1)(x−2)/3! + ⋯].   (3.56)

Obviously Γ_x(t)|_{t=0} = 1 and Γ'_x(t)|_{t=0} = E(X) = μ. Factorial moments are obtained by taking term-by-term expectations on the RHS.

The rising factorial moment is defined as E[X(X+1)(X+2)⋯(X+k−1)] = E[(X+k−1)!/(X−1)!]. An analogous expression can also be obtained for rising factorials using the expansion (1−t)^{−x} = Σ_{k=0}^∞ C(k+x−1, k) t^k. Taking term-by-term expectations as E[(1−t)^{−x}] = E[1 + tx + t² x(x+1)/2! + t³ x(x+1)(x+2)/3! + ⋯], we get the rising FMGF. We could also get rising factorial moments from P_x(t) = E(t^{−x}). Differentiating it once gives P'_x(t) = E(−x t^{−x−1}), from which P'_x(1) = E(−x). Differentiating it r times, we get P^{(r)}_x(t) = E(−x(−x−1)(−x−2)⋯(−x−r+1) t^{−x−r}). Putting t = 1, this becomes

P^{(r)}_x(1) = (−1)^r E[x(x+1)⋯(x+r−1)] = (−1)^r μ^{(r)}.   (3.57)

Both (rising and falling) FMGFs are of OGF type. Replacing the summation by integration gives the corresponding results for continuous distributions. To distinguish between the two, we will denote the falling factorial moment as E(X_{(k)}) or μ_{(k)}, and the rising factorial moment as E(X^{(k)}) or μ^{(k)}. Unless otherwise specified, "factorial moment" will mean the falling factorial moment μ_{(k)}.

Example 3.26 (Factorial moments of the Poisson distribution) Find the k-th factorial moments of the Poisson distribution, and obtain the first two moments.

Solution 3.26 By definition,

μ_{(k)} = Σ_{x=0}^∞ x(x−1)(x−2)⋯(x−k+1) e^{−λ} λ^x/x!
        = e^{−λ} λ^k Σ_{x=k}^∞ λ^{x−k}/(x−k)!
        = e^{−λ} λ^k Σ_{y=0}^∞ λ^y/y! = e^{−λ} λ^k e^λ = λ^k.   (3.58)

Another way is using the PGF. Write the PGF as exp(−λ) exp(λt). Its k-th derivative is easily found as exp(−λ) λ^k exp(λt) = λ^k exp(−λ(1 − t)). Now put t = 1 to get the result. Alternatively, we could obtain the FMGF directly, and get the desired moments:

FMGF_x(t) = E[(1+t)^x] = Σ_{x=0}^∞ (1+t)^x e^{−λ} λ^x/x! = e^{−λ} Σ_{x=0}^∞ [λ(1+t)]^x/x! = e^{λt}.   (3.59)

The k-th factorial moment is obtained by differentiating this expression k times and putting t = 0. We know that the k-th derivative of e^{λt} is λ^k e^{λt}, from which the k-th factorial moment is obtained as λ^k. Putting k = 1 and 2 gives the desired moments.

Theorem 3.8 The factorial moments of a Poisson distribution are related as μ_{(k)} = λ^r μ_{(k−r)}. In particular, μ_{(k)} = λ μ_{(k−1)}.

Proof. This follows easily because μ_{(k)}/μ_{(k−r)} = λ^k/λ^{k−r} = λ^r. □
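The closed form μ_{(k)} = λ^k is easy to confirm by summing the defining series directly; a minimal Python check (the function name is ours):

```python
from math import exp, factorial

def poisson_falling_factorial_moment(lam, k, terms=100):
    """E[X(X-1)...(X-k+1)] for X ~ Poisson(lam), by direct summation
    of the series in (3.58); the tail beyond `terms` is negligible here."""
    total = 0.0
    for x in range(k, terms):
        falling = factorial(x) // factorial(x - k)   # x(x-1)...(x-k+1)
        total += falling * exp(-lam) * lam ** x / factorial(x)
    return total

print(poisson_falling_factorial_moment(3.0, 1))   # close to lam   = 3
print(poisson_falling_factorial_moment(3.0, 2))   # close to lam^2 = 9
```

With k = 1 and 2 this reproduces E(X) = λ and E[X(X−1)] = λ², so E(X²) = λ² + λ.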



Example 3.27 (FMGF of the geometric distribution) Find the FMGF of the geometric distribution.

Solution 3.27 By definition, the FMGF is:

E[(1+t)^x] = Σ_{x=0}^∞ (1+t)^x q^x p = p Σ_{x=0}^∞ [q(1+t)]^x = p/[1 − q(1+t)].   (3.60)

As 1 − q = p, the denominator becomes [1 − q(1+t)] = p − qt. Hence FMGF = p/(p − qt). The k-th factorial moment is obtained by differentiating this expression k times and putting t = 0. We know that the k-th derivative of 1/(at + b) is a^k k! (−1)^k/(at + b)^{k+1}. Hence, the k-th


derivative of 1/[1 − q(1+t)] = 1/(p − qt) is k! q^k/(p − qt)^{k+1}, which at t = 0 equals k! q^k/p^{k+1}, as q = 1 − p. This gives the k-th factorial moment as p · k! q^k/p^{k+1} = k!(q/p)^k. Alternatively, find E(t^x), differentiate it k times, and put t = 1 to get the same result.

Example 3.28 (FMGF of the negative binomial distribution) Find the FMGF of the negative binomial distribution.

Solution 3.28 We have seen on page 37 that the PGF of the negative binomial distribution is (p/(1 − qt))^k. As the PGF and FMGF are related as FMGF(t) = PGF(1 + t), we get the FMGF as (p/(1 − q(1+t)))^k. As 1 − q = p, this can be written as (p/(p − qt))^k or (1 − qt/p)^{−k}.
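The factorial moments in Examples 3.27 and 3.28 can be spot-checked numerically. The sketch below (our code, not from the text) sums the geometric series directly and compares against the closed form k!(q/p)^k:

```python
from math import factorial

def geometric_falling_factorial_moment(p, k, terms=2000):
    """E[X(X-1)...(X-k+1)] for Pr[X = x] = q^x p, x = 0, 1, 2, ..., q = 1 - p,
    by direct summation; the truncated tail is negligible for moderate p."""
    q = 1.0 - p
    total = 0.0
    for x in range(k, terms):
        falling = 1
        for j in range(k):
            falling *= (x - j)                 # x(x-1)...(x-k+1)
        total += falling * q ** x * p
    return total

# Example 3.27 predicts the k-th factorial moment k!(q/p)^k:
print(geometric_falling_factorial_moment(0.5, 2), factorial(2) * (0.5 / 0.5) ** 2)
print(geometric_falling_factorial_moment(0.4, 3), factorial(3) * (0.6 / 0.4) ** 3)
```

Each pair of printed values agrees to within floating-point truncation error.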

3.11 CONDITIONAL MOMENT GENERATING FUNCTIONS (CMGF)

Consider an integer random variable N that takes values ≥ 1. We define a sum of independent random variables as S_N = Σ_{i=1}^N X_i. For a fixed value of N = n, the distribution of the finite sum S_n can be obtained in closed form for many distributions when the variates are independent. The conditional MGF can be expressed as

M_{X|Y}(t) = Σ_x f(x|y) e^{tx}.   (3.61)

Replacing t by "it" gives the corresponding conditional characteristic function. If the variates are mutually independent, this becomes M_{S|N}(t|N) = [M_x(t)]^N.

Example 3.29 (Compound Poisson distribution) Find the MGF of the compound Poisson distribution Y = Σ_{k=1}^N X_k, where N has a Poisson distribution with mean λ and each X_k has a Poisson distribution with mean μ.

Solution 3.29 The PMF of Y is Pr(Y = r) = Σ_{j=0}^∞ Pr(Y | N = j) Pr(N = j). After simplification, this becomes G(t) = exp(−λ[1 − exp(−μ(1 − t))]).
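The compound Poisson GF of Solution 3.29 can be sanity-checked against the known mean E[Y] = E[N]·E[X_k] = λμ by differentiating G numerically at t = 1. A minimal sketch (parameter values are illustrative, ours):

```python
from math import exp

lam, mu = 3.0, 2.0      # illustrative values: N ~ Poisson(lam), X_k ~ Poisson(mu)

def G(t):
    """GF of the compound Poisson sum Y = X_1 + ... + X_N (Solution 3.29)."""
    return exp(-lam * (1.0 - exp(-mu * (1.0 - t))))

h = 1e-6
mean = (G(1 + h) - G(1 - h)) / (2 * h)   # central-difference estimate of G'(1)
print(mean)                              # close to lam * mu = 6
```

Analytically, G'(t) = λμ exp(−μ(1 − t)) G(t), so G'(1) = λμ since G(1) = 1.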

3.12 GENERATING FUNCTIONS OF TRUNCATED DISTRIBUTIONS

The GFs of truncated distributions can easily be found from those of the non-truncated ones. As an example, the classical binomial distribution has PGF (q + pt)^n and MGF (q + pe^t)^n. The corresponding PGF and MGF of the zero-truncated distribution are, respectively, (q + pt)^n/(1 − q^n) and (q + pe^t)^n/(1 − q^n). The PGF of a binomial distribution truncated at k is (q + pt)^n/(1 − Σ_{j=0}^k C(n, j) p^j q^{n−j}). The k-truncated MGF is obtained by replacing each t with exp(t). A similar expression results for other distributions. Thus, a k-truncated Poisson distribution has PGF exp(−λ[1 − t])/[1 − Σ_{j=0}^k exp(−λ) λ^j/j!] (Table 3.7).

Table 3.7: Summary table of zero-truncated generating functions

Distribution        PGF               z-PGF                        z-MGF
Binomial            (q + pt)^n        (q + pt)^n/(1 − q^n)         (q + pe^t)^n/(1 − q^n)
Poisson             exp(−λ[1 − t])    e^{−λ[1−t]}/[1 − e^{−λ}]     e^{−λ[1−e^t]}/[1 − e^{−λ}]
Geometric           p/(1 − qt)        p/[q(1 − qt)]                p/[q(1 − q exp(t))]
Negative binomial   [p/(1 − qt)]^k    [p/(1 − qt)]^k/(1 − p^k)     [p/(1 − q exp(t))]^k/(1 − p^k)

z-PGF is the zero-truncated PGF. Higher-order truncations will bring additional terms in the denominator.
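A zero-truncated GF can be built mechanically from any PGF: since P(0) = Pr[X = 0], removing that point mass and renormalizing yields a function equal to 1 at t = 1. A minimal Python sketch (the helper name is ours):

```python
def zero_truncated_pgf(P, t):
    """Zero-truncated PGF built from a PGF P: remove the mass at zero,
    P(0) = Pr[X = 0], and renormalize so the result equals 1 at t = 1."""
    return (P(t) - P(0.0)) / (1.0 - P(0.0))

# Binomial(n, p): PGF (q + pt)^n, so Pr[X = 0] = q^n
n, p = 5, 0.3
q = 1.0 - p
P_bin = lambda t: (q + p * t) ** n

print(zero_truncated_pgf(P_bin, 1.0))   # close to 1: a proper PGF
print(zero_truncated_pgf(P_bin, 0.0))   # exactly 0: no mass left at zero
```

The same helper applies verbatim to the Poisson, geometric, and negative binomial PGFs of Table 3.7.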

3.13 CONVERGENCE OF GENERATING FUNCTIONS

Properties of GFs are useful in deriving the distributions to which a sequence of GFs converges. Let X_n be a sequence of random variables with ChF φ_{X_n}(t). If lim_{n→∞} φ_{X_n}(t) converges to a unique limit, say φ_x(t), for all points in a neighborhood of t = 0, then that limit determines the unique CDF to which the distributions of the X_n converge. Symbolically,

lim_{n→∞} φ_{X_n}(t) = φ_x(t)  ⇒  lim_{n→∞} F_{X_n}(x) = F(x).   (3.62)

3.14 SUMMARY

This chapter introduced various GFs encountered in statistics. These have wide applications in many other fields including astrophysics, fluid mechanics, spectroscopy, polymer chemistry, bio-informatics, and various engineering fields. Examples are included to illustrate the use of various GFs. The classical PGFs of discrete distributions are extended to get the CDFGF, SFGF, and MDGF. This is then used to find the mean deviation by extracting just one coefficient, [t^{⌊μ⌋−1}] 2P_x(t)/(1 − t)², where μ is the mean. Results are tabulated for the binomial, Poisson, geometric, and negative binomial distributions.

CHAPTER 4

Applications of Generating Functions

After finishing the chapter, readers will be able to

• apply generating functions in algebra.
• explore generating functions in computing.
• comprehend applications of generating functions in chemistry.
• discuss applications in combinatorics.
• explore generating functions in number theory and graph theory.
• describe applications in statistics, genomics, management, etc.

4.1 APPLICATIONS IN ALGEBRA

There are many applications of GFs in algebra. GFs can be used to find the number of solutions to a single linear equation. Consider a simple example of an equation x + y + z = r, where x, y, z are non-negative integers and r is a constant. Consider the OGF (1 + t + t² + t³ + ⋯ + t^r). Then there is a one-to-one correspondence between the number of solutions to the above equation and the coefficient of t^r in the power-series expansion of (1 + t + t² + t³ + ⋯ + t^r)³. This can be greatly simplified if further restrictions are placed on the possible values that x, y, and z can take. As an example, if each variable is restricted to be an integer in the range 0 ≤ {x, y, z} ≤ c, our OGF reduces to (1 + t + t² + t³ + ⋯ + t^c)³. If each of them is restricted to the values {0, 1}, the count becomes [t^r](1 + t)³. In general, if we have an equation x_1 + x_2 + ⋯ + x_n = r where the x_i's take values in {0, 1}, the number of solutions is [t^r](1 + t)^n, which is C(n, r). If the number of non-negative integer solutions is sought, we get the OGF (1 + t + t² + t³ + ⋯ + t^r)^n, in which the coefficient of t^r is C(n + r − 1, r). A similar approach holds for other suitable restrictions.

Next consider a constrained linear equation of the form x + y + z = r, where x is an even integer, y ≥ 0, and 0 ≤ z ≤ 1. As x is allowed to be only even integers, it has OGF 1 + t² + t⁴ + ⋯ = 1/(1 − t²). As y can take any non-negative integer value, it has OGF 1/(1 − t). As z can take values 0 or 1 only, its OGF is (1 + t). Hence, the required number is the coefficient of

t^r in (1/(1 − t²)) · (1/(1 − t)) · (1 + t). Write (1 − t²) = (1 + t)(1 − t) and cancel (1 + t) from numerator and denominator to get F(t) = 1/(1 − t)². The coefficient of t^r is (r + 1).
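The coefficient-extraction arguments above can be automated by multiplying coefficient lists; a short Python sketch (function names are ours):

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of t)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def count_solutions(num_vars, cap, r):
    """Solutions to x_1 + ... + x_num_vars = r with 0 <= x_i <= cap:
    the coefficient of t^r in (1 + t + ... + t^cap)^num_vars."""
    poly = [1]
    for _ in range(num_vars):
        poly = poly_mul(poly, [1] * (cap + 1))
    return poly[r] if r < len(poly) else 0

print(count_solutions(3, 4, 4))   # x + y + z = 4, each in 0..4 -> 15
print(count_solutions(5, 1, 2))   # x_i in {0, 1}: C(5, 2) = 10
```

The second call reproduces the [t^r](1 + t)^n = C(n, r) case discussed above.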

4.1.1 SERIES INVOLVING INTEGER PARTS

The integer part or "floor" of a decimal number is the greatest integer less than or equal to that number. This is denoted by the floor operator ⌊x⌋ in computing and allied fields. Thus, ⌊3.14⌋ = ⌊3⌋ = 3. The least integer greater than or equal to x is called the "ceil," and denoted by the ⌈x⌉ operator. Thus, ⌈3.14⌉ = 4. When x is an integer, the ceil and floor operators return x itself (⌈x⌉ = ⌊x⌋ = x). Otherwise, these satisfy ⌈x⌉ − ⌊x⌋ = 1, ⌈−x⌉ = −⌊x⌋, ⌊−x⌋ = −⌈x⌉. The fractional part is (x − ⌊x⌋) when x is not an integer. Thus, we have ⌊x + 1/2⌋ + ⌊x⌋ = ⌊2x⌋.

Series involving integer parts occur in many problems. Consider a tournament (like tennis) with n teams. If n is even, we could evenly divide the teams into two groups of n/2 each and identify the winner in each game. Those who win in the first round are again paired for the second round, and the process goes on until one final round decides who wins the tournament. Thus, the total number of games played is ⌊(n + 1)/2⌋ + ⌊(n + 1)/2²⌋ + ⌊(n + 1)/2³⌋ + ⋯ + 1. A team that does not have a matching pair when there is an odd number of teams will play with one of the winners, thereby reducing the number of teams by one. For example, if there are 7 teams, there will be (3 + 1) = 4 games in the first round, 2 games in the second, and a final. Several applications in graph theory also use these operators. For instance, the number of edge crossings of a complete bipartite graph K_{m,n}, with m vertices in the first set and n vertices in the other, is given by ⌊n/2⌋⌊(n − 1)/2⌋⌊m/2⌋⌊(m − 1)/2⌋.

Example 4.1 (Sum of series involving integer parts) Find the OGF of ⌊(n + 1)/2⌋, ⌊(n + 2)/2²⌋, ⌊(n + 2²)/2³⌋, …, ⌊(n + 2^k)/2^{k+1}⌋, …, and evaluate the sum.

Solution 4.1 Write F(t) = Σ_{k=0}^∞ ⌊(n + 2^k)/2^{k+1}⌋ t^k as Σ_{k=0}^∞ ⌊n/2^{k+1} + 1/2⌋ t^k. Now use ⌊x + 1/2⌋ = ⌊2x⌋ − ⌊x⌋ to get F(t) = Σ_{k=0}^∞ ⌊n/2^k⌋ t^k − Σ_{k=0}^∞ ⌊n/2^{k+1}⌋ t^k. The coefficients form a telescoping series (Chapter 1), Σ_{k=0}^∞ [⌊n/2^k⌋ − ⌊n/2^{k+1}⌋] t^k. Put t = 1 so that the alternate terms cancel out, giving ⌊n⌋ = n as the answer.
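The telescoping identity in Example 4.1 can be verified mechanically at t = 1; a minimal check (the function name is ours):

```python
def floor_series_sum(n, terms=64):
    """Sum of floor((n + 2**k) / 2**(k+1)) for k = 0, 1, 2, ...;
    by the telescoping argument in Solution 4.1, the total should be n."""
    return sum((n + 2 ** k) // 2 ** (k + 1) for k in range(terms))

print(floor_series_sum(100))   # -> 100
print(floor_series_sum(7))     # -> 7
```

Sixty-four terms suffice because each term vanishes once 2^{k+1} > n + 2^k.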

4.1.2

PERMUTATION INVERSIONS

Collaborative filtering is a technique used by several e-commerce sites to rank the preferences of several customers, and filter out maximum matches among a set of customers. This is then used to suggest potential purchases (unselected items) that a customer may be interested in, by using the purchases made by other similar customers. Thus, a person who buys a programming book gets a list of similar books purchased in the past by other customers who also bought that book. Similarly, some e-com sites have a product rating option in which a customer who purchased an item can give a score (say 5-star to 1-star) using which the site recommends other similar items


that people who have rated it compatibly have liked or purchased. The items can be anything like food, electronics, books, movies, music, etc. But the following discussion assumes that there are n items to be ranked from the same domain. If thousands of rankings are already available, we could come up with an order based on the frequency as (a_1 < a_2 < ⋯ < a_n), where a_1 is the item that got the fewest votes or preferences and a_n is the item with the most. Consider a set of n labeled distinct elements that can be ordered among themselves using a relational operator. There are n! permutations of n elements. This means that we could use numerical comparison when elements are real numbers, modulus comparison for complex numbers, alphabetic comparison when elements are letters or characters, string comparison for strings, distance comparison for geometric objects, and so on. A permutation of the elements is an arrangement of them in a particular order. Suppose we compare all possible pairs of elements except (k, k). As the first element can be compared with (n − 1) other elements, the second with (n − 2) other elements, and so on, until the penultimate element can be compared with the last one, we have a total of 1 + 2 + ⋯ + (n − 1) pairs (j, k) with j ≠ k. This simplifies to m = n(n − 1)/2 = C(n, 2). Out of the m pairs, we call a pair (j, k) an inversion if the corresponding elements are out of order (i.e., the ordering operator returns a FALSE result). The concept of inversion is also used in many other fields like computational biology and genomics, where two DNA sequences are said to be in inversion when there is an alignment of reverse complements of a sequence. Obviously, there is only one permutation without inversions (namely the sorted sequence (a_1 < a_2 < ⋯ < a_n)). Likewise, there is only one permutation with C(n, 2) inversions (the reverse-sorted sequence).
Here j and k are integers (positions of elements), whereas the comparison takes place between the elements present at the respective positions in a permutation. For instance, consider the set S = {D, E, C, A, B}, where each letter denotes an element. As (D, E) is in proper order, it is not an inversion (as also (A, B) in positions (4, 5)). But all others (D, C), (D, A), (D, B) are not in order, so the first 3 inversions are (1, 3), (1, 4), (1, 5). Similarly, (2, 3), (2, 4), (2, 5), (3, 4), and (3, 5) are also inversions. Thus, there are 8 inversions out of 10 possibilities. Clearly, the number of inversions depends on the permutation. It is a measure of the deviation or disagreement from a base case. This means that permutations with just one inversion are much closer to the base case than those with more than one inversion. Permutations with more and more inversions deviate further away from the base case. This has applications in recommendation systems and in computer sorting algorithms like bubble-sort and insertion-sort, which sort an arbitrary array by exchanging data values that are "out-of-order" elements.

A plausible way to count the inversions is to use a divide-and-conquer strategy. First divide the permutation P into P1 of size n − m as (a_1, a_2, …, a_{n−m}) and P2 as (a_{n−m+1}, a_{n−m+2}, …, a_n) of size m. Then we count the inversions in P1 and P2 separately. These are called "intra-block inversions." Afterward, we count the pairs (u, v) such that u is in P1, v is in P2, and (u, v) is an inversion. These are called "inter-block inversions." Counting the inversions is faster if the elements are re-ordered in "base-case order" (which means in increasing or decreasing order for numbers, and in alphabetic order for literals and strings). Inter-inversions can be skipped if both P1 and P2 are sorted and the last element of P1 and the first element of P2 do not form an inversion. In other words, there is no overlap between the boundaries of P1 and P2. As in the case of merge-sort, each sub-permutation (P1 and P2 initially) is further subdivided and the process repeated. Although this technique is useful to count the number of inversions, the following paragraph outlines a GF approach to the same problem.

Denote the number of permutations of n elements with k inversions by I(n, k). For k < n, it satisfies the recurrence relation I(n, k) = I(n − 1, k) + I(n, k − 1). The OGF for the sequence I(n, k) is given as the polynomial product

F(t) = (1 + t)(1 + t + t²)(1 + t + t² + t³) ⋯ (1 + t + t² + ⋯ + t^{n−1}),   (4.1)

which in summed form is Σ_{k=0}^{C(n,2)} I(n, k) t^k. Using the product notation, this can be expressed as F(t) = Π_{k=1}^{n−1} (1 + t + t² + ⋯ + t^k).
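The product form (4.1) can be checked against a brute-force inversion count; a short Python sketch (function names are ours):

```python
from itertools import permutations

def inversion_poly(n):
    """Coefficients of prod_{k=1}^{n-1} (1 + t + ... + t^k); entry k is I(n, k)."""
    poly = [1]
    for k in range(1, n):
        out = [0] * (len(poly) + k)
        for i, c in enumerate(poly):
            for j in range(k + 1):   # multiply by (1 + t + ... + t^k)
                out[i + j] += c
        poly = out
    return poly

def inversion_counts_brute(n):
    """Count permutations of 0..n-1 grouped by their number of inversions."""
    counts = [0] * (n * (n - 1) // 2 + 1)
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        counts[inv] += 1
    return counts

print(inversion_poly(4))                                  # [1, 3, 5, 6, 5, 3, 1]
print(inversion_poly(4) == inversion_counts_brute(4))     # True
```

The coefficients are symmetric, reflecting the bijection between a permutation and its reverse.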

4.1.3 GENERATING FUNCTION OF STRIDED SEQUENCES

Suppose the GF of a sequence (a_0, a_1, a_2, …, a_n, …) is F(t). It was shown in Chapter 2 that we could express the GFs of the odd and even terms (i.e., stride-2 terms) separately using F(t). How do we find the GF of a strided sequence of lag other than 2? For example, what is the GF of Σ_{k≥0} a_{3k} t^{3k} in terms of F(t)? This can be found using roots of unity. Let ω^k = 1 define the k-th roots of unity, with the j-th root denoted by ω_j = exp(2πij/k) for j = 0, 1, …, k − 1, where "i" is the imaginary constant. If k divides n, then ω_j^n = 1 for j = 0, 1, 2, …, (k − 1), so that (1/k) Σ_{j=0}^{k−1} ω_j^n = 1; otherwise (1/k) Σ_{j=0}^{k−1} ω_j^n = 0. From this root-of-unity filter we get the GF of the stride-k terms as (1/k) Σ_{j=0}^{k−1} F(t ω_j).

Example 4.2 (Generating function of strided sequence) Find the GF for Σ_{k=0}^{⌊n/3⌋} (−1)^k C(n, 3k) and obtain a compact expression for the sum, where n is an integer and k is less than ⌈n/3⌉.

Solution 4.2 Consider the cube roots of unity 1, ω, ω². As the given sum involves binomial coefficients, we express the GF as F(t) = (1 + t)^n. Using the above result, G(t) = (1/3)[F(t) + F(tω) + F(tω²)]. To find a closed-form expression for the given sum, we put t = −1 in G(t) to get G(−1) = (1/3)[F(−1) + F(−ω) + F(−ω²)]. Here −ω = (1 − √3 i)/2 and −ω² = (1 + √3 i)/2. Substitute and simplify to get 3^{n/2} (2/3) cos(nπ/6) as the result.
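The closed form in Solution 4.2 can be verified by computing the alternating binomial sum directly; a quick Python check (function names are ours):

```python
from math import comb, cos, pi

def alt_sum(n):
    """sum_{k=0}^{floor(n/3)} (-1)^k C(n, 3k), computed term by term."""
    return sum((-1) ** k * comb(n, 3 * k) for k in range(n // 3 + 1))

def closed_form(n):
    """(2/3) * 3^(n/2) * cos(n*pi/6), from the roots-of-unity filter."""
    return (2.0 / 3.0) * 3.0 ** (n / 2) * cos(n * pi / 6)

for n in (3, 5, 6, 10):
    print(n, alt_sum(n), round(closed_form(n), 6))
```

For instance, n = 5 gives 1 − C(5,3) = −9, and (2/3)·3^{5/2}·cos(5π/6) = −9 as well.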

4.2 APPLICATIONS IN COMPUTING

Computer algorithms are step-by-step methods to solve a practical problem using a computing device. Multiple methods may exist for solving the same problem. One common example is the sorting problem. Many algorithms (like quicksort, heapsort, mergesort, radixsort, insertion sort, bubblesort, etc.) exist for data sorting. There are three primary concerns when choosing an appropriate algorithm: (i) time complexity, (ii) space complexity, and (iii) coding complexity. Although some of the algorithms are easy to code, they may not perform well for very large data sets. One example is the bubble-sort algorithm, which performs poorly as data size increases but may be a reasonable choice when the data size is always small (say < 15). This is where algorithm analysis steps in. If we could represent the complexity of an algorithm in terms of the input size, we could easily decide which algorithm to choose by comparing the complexities of different algorithms. The complexity of an algorithm is expressed as a mathematical function of the input size, which is a single number (the size of the data) in sorting algorithms, but a pair of numbers for 2D matrices (rows × columns) and graphs (nodes × edges). Higher-dimensional problems like 3D bin-packing, n-D matrix multiplication, etc. use more parameters.

4.2.1 MERGE-SORT ALGORITHM ANALYSIS

The merge-sort algorithm uses the divide-and-conquer paradigm. The idea is very simple to understand. Suppose we need to arrange 64 (chosen for convenience, as it is a power of 2) school kids in increasing order of their heights. They are first divided arbitrarily into 2 groups of 32 each, say A and B. Each of the groups is further subdivided into 4 batches of 16 as A1, A2, B1, and B2. This process of subdividing goes on to get 8 batches of 8 each, 16 batches of 4 each, and finally 32 batches of 2 each. Each of the subgroups must be uniquely identified. Now each group of 2 students is arranged in increasing order of height. Next they are regrouped into the same group from which they were separated, to get back 4 students in each batch. As they are already pair-wise arranged, it is a simple matter of merging the two sorted subgroups to get sorted quadruplets. This process of merging is continued until all 64 students are sorted by height. The last merging step involves two groups of 32 students in sorted order. This is the principle of merge-sort. If C(n) denotes the complexity of arranging n entities (n = 64 in our problem), then

C(n) = C(⌈n/2⌉) + C(⌊n/2⌋) + k⌊n/2⌋,   (4.2)

where k is a constant close to 1 that takes care of sublist merging and contingency costs (in case the merging is done by a separate function call), and n ≥ 2 (merge-sort requires a minimum of 2 elements). This recurrence captures the worst-case complexity of sorting n elements irrespective of the nature of the data (whether it is random or sorted). To solve this recurrence, we assume that n is a power of 2. Substitute n = 2^m to get C(2^m) = 2C(2^{m−1}) + k·2^{m−1}, because both ⌈2^m/2⌉ and ⌊2^m/2⌋ are 2^{m−1}. Divide both sides by 2^m to get C(2^m)/2^m = C(2^{m−1})/2^{m−1} + k₁, where k₁ = k/2. If D(m) denotes C(2^m)/2^m, we could write this in terms of D(m) as D(m) = D(m−1) + k₁. Multiply both sides by t^m, and add over m = 1 to ∞ to get

F(t) = Σ_{m=1}^∞ D(m) t^m = t Σ_{m=1}^∞ D(m−1) t^{m−1} + k₁ Σ_{m=1}^∞ t^m.

This becomes F(t)[1 − t] = k₁t/(1 − t). From this, F(t) = k₁t/(1 − t)². The coefficient of t^m is k₁m. As we substituted D(m) = C(2^m)/2^m, we need to extract the coefficient of n = 2^m (and not of m) in the resulting expression, and multiply by n = 2^m. This gives C(n) = n k₁ log₂(n). As k₁ is a constant close to 1, we get the complexity of the merge-sort algorithm as n log₂(n) (see Knuth [2011], Sedgewick and Wayne [2011], and Sedgewick and Flajolet [2013]). This gives us the following lemma.

Lemma 4.1 If the OGF of a nonlinearly transformed recurrence D(m) = C(2^m)/2^m is of the form ct/(1 − bt)², where c is a small constant and b is close to 1, then the complexity of the corresponding algorithm is b^n n log₂(n). Put b = 1 to get the complexity of the merge-sort algorithm as n log₂(n).
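The recurrence (4.2) can be iterated directly to confirm the n log₂(n) growth; a minimal sketch with k = 1 and the base case C(1) = 0 (both our assumptions for the check):

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def C(n):
    """Merge-sort cost recurrence (4.2): C(n) = C(ceil(n/2)) + C(floor(n/2)) + floor(n/2),
    with k = 1 and base case C(1) = 0 (a single element needs no work)."""
    if n <= 1:
        return 0
    return C((n + 1) // 2) + C(n // 2) + n // 2

for m in (4, 10, 16):
    n = 2 ** m
    print(n, C(n) / (n * log2(n)))   # ratio is exactly k/2 = 0.5 for powers of two
```

For n = 2^m the solution is C(n) = (k/2) n log₂(n), matching the coefficient k₁ = k/2 extracted from the OGF.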

4.2.2 QUICK-SORT ALGORITHM ANALYSIS

The quick-sort algorithm, invented by C.A.R. Hoare in 1962, is a popular sorting technique that uses the divide-and-conquer strategy. It divides an array of size n to be sorted into two subsets (sub-arrays) using a pivot (partitioning element) in such a way that all elements to the left of the pivot are less than (or equal to) it, and all elements to the right of it are greater than it. This means that after successful completion of a pass through the data, the pivot will occupy its correct position, so that it can be ignored in subsequent passes. This results in two subarrays whose total size is (n − 1). If they are not already sorted, we apply the same technique by choosing separate pivot elements from within their respective regions, and partition them again. This process is continued until there are no more subarrays left for partitioning. The above discussion assumed that the pivot is an element of the array to be sorted. In fact, the pivot can be any value between the minimum and maximum of the values to be sorted. As an example, it can be the average of 3 randomly chosen data values for numeric data, and the median of three for text or character data. For simplicity, it is assumed in the following discussion that the pivot need not be a value present in the array. If C(n) denotes the average complexity of sorting n elements using quicksort,

C(n) = k₁(n + 1) + (1/n) Σ_{k=1}^{n−1} [C(k) + C(n − k)],   (4.3)

where C(1) = 1, and k₁(n + 1) denotes the cost of partitioning using the pivot.¹ The literal meaning of this expression is that we have to examine all the elements (because the pivot may not belong to the array), with resulting complexity n + 1 to partition it using the pivot,² and the resulting subarrays are of sizes k and (n − k), where k can take any of the values 1 through n − 1. If k = 1, one of the subarrays will be of size 1, and the other of size n − 1 (which is a worst case). Similarly, if k = n − 1, the other sublist is of size 1. Ideally, we would like to partition in such a way that both subarrays are more or less of the same size. But this rarely happens in practice, unless the array to be sorted is already "nearly sorted" and the middle element is chosen as pivot. Split Equation (4.3) into two, and put j = n − k in the second term so that j varies from n − 1 down to one. Due to the symmetry of the resulting sublists, we could write the above as C(n) = k₁(n + 1) + (2/n) Σ_{k=1}^{n−1} C(k). Multiply both sides by n to get

n C(n) = k₁ n(n + 1) + 2 Σ_{k=1}^{n−1} C(k).   (4.4)

¹ Some authors use C(0) = 1, which is meaningless, as the list to be sorted is empty when n = 0.
² The '(n + 1)' term accounts for the possibility of the left and right pointers crisscrossing beyond the meeting point.

D 0:577215166 is Euler’s constant and  is O.1=.2n//, which tends to zero for large n. This gives rise to the following lemma. Lemma 4.2 If the integral of the OGF of a recurrence is of the form c log.1 t/=.1 t/ where c is a small constant, then the complexity of the corresponding algorithm is n log2 .n/.

4.2.3 BINARY-SEARCH ALGORITHM ANALYSIS

Binary search is a popular algorithm for searching for a key (data value) in a sorted array. It works by first finding the middle element (say mid) of the sorted array, and comparing the key with that element. The upper part (values greater than mid) can be totally ignored if the key is less than mid. On the other hand, if the key is greater than mid, we can totally ignore the lower part. We call the resulting part the target block. Then the new middle element of the target block is found. The same process is repeated in each target block. This process is continued to bring down the size of the target block eventually to one. A success code is returned if the key matches that element. Otherwise, we report that the key is not found in the array. Let C(n) denote the complexity of searching for a key in a sorted array of size n. Then C(n) = C(⌊n/2⌋) + 1 for n ≥ 2, which is exact for odd n and an upper bound for even n (because when n is even, the mid element divides the array with n/2 elements in one partition and (n/2) − 1 elements in the other). As done above, let n = 2^m, so that C(⌊n/2⌋) = C(2^{m−1}). Let B(m) = C(2^m), so that B(m) = B(m−1) + 1. Multiply both sides by t^m and sum from 1 to ∞ (because n ≥ 2 ⇒ m ≥ 1) to get the OGF as F(t) = tF(t) + t/(1 − t), from which F(t) = t/(1 − t)². The coefficient of t^m is m = log₂(n). This gives us the following lemma.

Lemma 4.3 If the OGF of a nonlinearly transformed recurrence B(m) = C(2^m) is of the form ct/(1 − t)², where c is a small constant close to 1, then the complexity of the corresponding algorithm is m = log₂(n).

An astute reader will notice that the OGFs obtained for the merge-sort and binary-search algorithms are exactly identical (i.e., ct/(1 − t)²). But the merge-sort algorithm uses the nonlinear transformation B(m) = C(2^m)/2^m, whereas the binary-search algorithm uses B(m) = C(2^m). This is the reason why the complexity of the merge-sort algorithm is O(n lg(n)) and that of binary search is O(lg(n)). The worst-case complexity for the binary-search algorithm occurs when the key is not present in the sorted array.
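The recurrence C(n) = C(⌊n/2⌋) + 1 with C(1) = 1 solves to ⌊log₂(n)⌋ + 1, which a few lines of Python confirm (our code, illustrative only):

```python
from math import floor, log2

def C(n):
    """Binary-search comparison count: C(n) = C(floor(n/2)) + 1, with C(1) = 1."""
    return 1 if n == 1 else C(n // 2) + 1

for n in (1, 2, 15, 16, 1000, 1024):
    print(n, C(n), floor(log2(n)) + 1)   # the two counts coincide
```

The count increases by one exactly when n crosses a power of two, the discrete analogue of the log₂(n) bound.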

4.2.4 WELL-FORMED PARENTHESES

Complex computer program statements in high-level languages are formed using parentheses. This not only reduces the program size but can also result in speedier execution. Each left parenthesis appearing on the right side of a statement (or in a function call) should have a matching right parenthesis. As an example, $r_1 = (-b + \mathrm{sqrt}(b*b - 4*a*c))/(2*a)$ is a valid program statement with matching parentheses.^3 The number of well-formed strings of parentheses can be enumerated using the Catalan number $C(n)$ (which is discussed below). This can also be used in counting products (of real numbers, complex numbers, compatible matrices, etc.) around which parentheses can be put. Thus, if $A, B, C$ are compatible matrices, we could compute the product as $(A B) C$ or $A (B C)$. Similarly, laddered exponents (or repeated exponents) are expressions in which exponentiation is repeated many times. Consider the two-level exponent $4^{3^2}$. It can be evaluated either as $4^{(3^2)} = 262144$ or as $(4^3)^2 = 4096$. This shows that braces can alter the meaning of a repeated-exponent expression. If there are $n+1$ terms in such a laddered expression, there exist $C(n)$ ways to put brackets among them.
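As a quick illustration (a Python sketch, not part of the text), the Catalan numbers can be generated from their convolution recurrence $C(0) = 1$, $C(n) = \sum_{j=0}^{n-1} C(j) C(n-1-j)$ and checked against the closed form $\binom{2n}{n}/(n+1)$:

```python
from math import comb

def catalan(n):
    # Convolution recurrence: C(0) = 1, C(n) = sum_j C(j) C(n-1-j)
    c = [1]
    for m in range(1, n + 1):
        c.append(sum(c[j] * c[m - 1 - j] for j in range(m)))
    return c[n]

for n in range(10):
    assert catalan(n) == comb(2 * n, n) // (n + 1)
print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

For instance, $C(3) = 5$ counts the five ways to fully parenthesize a product of four factors.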

4.2.5 FORMAL LANGUAGES

Consider a formal language L accepted by a deterministic finite automaton, say M. Label the start state as 0. Let $n_k$ denote the number of words in L of length exactly $k$. This is the same as the number of directed paths of length exactly $k$ from 0 (the start state) to a terminal state (any accepting state). Let N denote the transition matrix whose $(i,j)$th entry is the number of transitions (directed edges) from state $i$ to state $j$ of the automaton. A GF for L can be defined as $G(z) = \sum_{k \ge 0} n_k z^k$.

^3 Here sqrt is a library function (e.g., in the math library).

4.3 APPLICATIONS IN COMBINATORICS

GFs are extensively used in enumeration problems, some of which were discussed in Chapters 1 and 2. In this section, we provide another application: proving combinatorial identities [Graham et al., 1994, Grimaldi, 2019].

4.3.1 COMBINATORIAL IDENTITIES

GFs are powerful tools to prove a variety of combinatorial identities. The tools and techniques developed in Chapter 2 can be used to easily prove some of the identities that may otherwise require meticulous arithmetic work. The inversion theorem $a_n = \sum_{k=0}^n \binom{n}{k} b_k \Longleftrightarrow b_n = \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} a_k$, where $a_0$ and $b_0$ are nonzero, may also be needed in some problems.

Example 4.3 Prove that $\sum_{k=0}^n \binom{n}{k} 2^k = 3^n$ for $n \ge 1$.

Solution 4.3 Consider the OGF of $2^k \binom{n}{k}$ as $F(x) = \sum_{k=0}^n \binom{n}{k} 2^k x^k = (2x+1)^n$. Put $x = 1$ to get the result.

Example 4.4 Prove that $\sum_{k=1}^n k \binom{n}{k} = n 2^{n-1}$ for $n \ge 1$.

Solution 4.4 We know that $\binom{n}{k}$ has GF $(1+t)^n$, and that multiplying an OGF by $1/(1-t)$ results in the OGF of the partial sums of its coefficients. In addition, the derivative of an OGF generates the sequence $k a_k$. Start with $F(t) = (1+t)^n$. Then $F'(t) = n(1+t)^{n-1}$ is the OGF of $k \binom{n}{k}$. Multiplying this by $1/(1-t)$ results in the OGF of the partial sums $s_j = \sum_{k=0}^{j} k \binom{n}{k}$ as $n(1+t)^{n-1}/(1-t)$. This is a convolution of the respective OGFs, so $[x^n]\{n(1+x)^{n-1}/(1-x)\}$ is what we are looking for. By the convolution theorem in Chapter 2, this is $n \sum_{k=0}^{n-1} \binom{n-1}{k} = n 2^{n-1}$.
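Both identities are easy to sanity-check numerically; a small Python sketch (illustrative, not part of the text):

```python
from math import comb

# Example 4.3: sum_k C(n,k) 2^k = 3^n; Example 4.4: sum_k k C(n,k) = n 2^(n-1)
for n in range(1, 12):
    assert sum(comb(n, k) * 2 ** k for k in range(n + 1)) == 3 ** n
    assert sum(k * comb(n, k) for k in range(1, n + 1)) == n * 2 ** (n - 1)
print("identities hold for n = 1..11")
```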

Example 4.5 Prove that $n \sum_{k=0}^{m-1} (-1)^k \binom{n}{k}$ is divisible by both $m$ and $\binom{n}{m}$.

Solution 4.5 We know that $(-1)^k \binom{n}{k}$ has the OGF $(1-x)^n$. We have already seen that the partial sums of these coefficients can be obtained by dividing this by $(1-x)$. Thus, the sum in question is the $(m-1)$th coefficient of $(1-x)^n/(1-x) = (1-x)^{n-1}$. This is precisely $(-1)^{m-1} \binom{n-1}{m-1}$. Now consider $n \binom{n-1}{m-1} = n(n-1)!/[(m-1)!(n-m)!]$. Multiply the numerator and denominator by $m$, write the denominator as $m!$, and write the numerator as $m \cdot n!$ to get $m \cdot n!/[m!(n-m)!] = m \binom{n}{m}$. This shows that the given sum is divisible by both $m$ and $\binom{n}{m}$.

Example 4.6 If S is a set with n elements, prove that the total number of ways to choose an even number of elements is equal to the total number of ways to choose an odd number of elements.


Solution 4.6 Mathematically this is equivalent to proving that $\binom{n}{0} + \binom{n}{2} + \binom{n}{4} + \cdots = \binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \cdots = 2^{n-1}$ for $n \ge 1$. Add $\binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \cdots$ to both sides so that the LHS becomes $\sum_{k=0}^n \binom{n}{k}$, which is easily seen to be $2^n$. The RHS is $2[\binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \cdots]$. As the LHS is $2^n$, it follows that $\binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \cdots = 2^{n-1}$.

Example 4.7 Prove that $\binom{n}{0}^2 + \binom{n}{1}^2 + \binom{n}{2}^2 + \cdots + \binom{n}{n}^2 = \binom{2n}{n}$ for $n \ge 1$.

Solution 4.7 As $\binom{n}{r} = \binom{n}{n-r}$, write the LHS as $\binom{n}{0}\binom{n}{n} + \binom{n}{1}\binom{n}{n-1} + \binom{n}{2}\binom{n}{n-2} + \cdots + \binom{n}{n}\binom{n}{0}$. This is the convolution of $(1+x)^n$ with $(x+1)^n$, which has OGF $(1+x)^{2n}$, which generates $\binom{2n}{n}$.

Example 4.8 Prove that $\sum_{j=0}^m \binom{n-k}{j} \binom{k}{m-j} = \binom{n}{m}$ for every $k$ and $m \le n$.

Solution 4.8 Obviously, the expression on the LHS is the coefficient of the $m$th term in the convolution of the two binomial expansions $(1+t)^{n-k}$ and $(1+t)^k$. But the product is $(1+t)^n$, whose $m$th coefficient is $\binom{n}{m}$. This proves the result.

Example 4.9 If $F(t)$ and $G(t)$ are two arbitrary GFs that satisfy the relationship $F(t) = \frac{1}{1-t} G\!\left(\frac{t}{1-t}\right)$, then prove that $G(t) = \frac{1}{1+t} F\!\left(\frac{t}{1+t}\right)$.

Solution 4.9 Change the variable as $s = t/(1-t)$, so that $1+s = 1/(1-t)$ and $t = s/(1+s)$, to get $F(s/(1+s)) = (1+s)G(s)$. Divide both sides by $(1+s)$, and replace $s$ by $t$ to get the result.

Example 4.10 Use the integral $\int_0^1 (1+x)^n \, dx = (2^{n+1}-1)/(n+1)$ to prove that $\sum_{k=0}^n \binom{n}{k}/(k+1) = (2^{n+1}-1)/(n+1)$.

Solution 4.10 Consider the binomial expansion $(1+x)^n = \sum_{k=0}^n \binom{n}{k} x^k$. Integrate both sides w.r.t. $x$ from 0 to 1 to get $\int_0^1 (1+x)^n \, dx = \sum_{k=0}^n \binom{n}{k} \int_0^1 x^k \, dx$. The LHS is $(1+x)^{n+1}/(n+1) \,\big|_0^1 = (2^{n+1}-1)/(n+1)$. The RHS is $\sum_{k=0}^n \binom{n}{k} \, x^{k+1}/(k+1) \,\big|_0^1$. When $x = 0$, every term is zero, and when $x = 1$ we get the LHS of the above.

Example 4.11 Prove the Vandermonde identity $\sum_{j=0}^k \binom{m}{j} \binom{n}{k-j} = \binom{m+n}{k}$ using GFs.

Solution 4.11 We know that $\binom{m}{j}$ is the coefficient of $x^j$ in $(1+x)^m$ and $\binom{m+n}{k}$ is the coefficient of $x^k$ in $(1+x)^{m+n}$. Clearly, the LHS is the convolution of two binomials. Write $(1+x)^m (1+x)^n = (1+x)^{m+n}$. Expand the LHS and use the product of GFs discussed in Chapter 2 to get the result.

4.3.2 NEW GENERATING FUNCTIONS FROM OLD

It was shown in Chapter 2 that a variety of transformations can be applied to existing GFs to get new ones. One simple case is to find the GF of the even and odd terms. If $G(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n + \cdots$, we know that $(G(x) + G(-x))/2$ is the OGF of $a_0 + a_2 x^2 + a_4 x^4 + \cdots + a_{2n} x^{2n} + \cdots$, and $(G(x) - G(-x))/2$ is the OGF of $a_1 x + a_3 x^3 + a_5 x^5 + \cdots + a_{2n+1} x^{2n+1} + \cdots$. This technique can be applied to any OGF or EGF.

Example 4.12 Find the GF and obtain an explicit expression for the $n$th term of a sequence with $a_0 = 1$ and $\sum_{k=0}^n a_k a_{n-k} = 1$.

Solution 4.12 The structure of the terms suggests a convolution (more precisely, the square of a GF). It was shown in Chapter 2 that the $n$th term of the square of a GF is a convolution. Thus, we get $F(t)^2 = \sum_{n \ge 0} t^n = 1/(1-t)$. Take the square root to get $F(t) = (1-t)^{-1/2}$ as the required GF. This can be expanded as an infinite series $F(t) = \sum_{n \ge 0} (-1)^n \binom{-1/2}{n} t^n$. Hence, $a_n = (-1)^n \binom{-1/2}{n}$. This can be simplified as $a_n = (1 \cdot 3 \cdot 5 \cdots (2n-1))/(2^n n!)$.

Example 4.13 If $(a_0, a_1, \ldots)$ is a given sequence, and $b_n = \sum_{k=0}^n \binom{n}{k} a_k$, find the EGF of the sequence $b_n$.

Solution 4.13 Let $H(t)$ denote the EGF of $b_n$, so that $H(t) = \sum_{n=0}^{\infty} b_n t^n/n!$. As the product of two EGFs $A(t)$ and $B(t)$ has coefficients $c_k = \sum_{j=0}^k \binom{k}{j} a_j b_{k-j}$, the sequence $b_n$ arises as the coefficient in the convolution $e^t \cdot A(t)$, where $A(t)$ is the EGF of $(a_n)$, because $e^t$ is the EGF with all coefficients equal to one. Hence $H(t) = e^t A(t)$.
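The closed form in Solution 4.12 can be verified with exact rational arithmetic; the sketch below (an illustration, not from the text) also uses the equivalent form $a_n = \binom{2n}{n}/4^n$:

```python
from fractions import Fraction
from math import comb

# a_n = 1*3*5*...*(2n-1) / (2^n n!), the coefficients of (1-t)^(-1/2)
def a(n):
    num = 1
    for i in range(1, 2 * n, 2):
        num *= i
    den = 2 ** n
    for i in range(1, n + 1):
        den *= i
    return Fraction(num, den)

for n in range(8):
    assert sum(a(k) * a(n - k) for k in range(n + 1)) == 1   # convolution is 1
    assert a(n) == Fraction(comb(2 * n, n), 4 ** n)          # equivalent form
```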

4.3.3 RECURRENCE RELATIONS

Recurrence relations were briefly introduced in Chapter 1. They may be associated with a numerical sequence or with functions having parameters. As examples, $F_n = F_{n-1} + F_{n-2}$ is a recurrence connecting three terms of a sequence, whereas $(x+1) f(x+1; n, p) = (n-x)(p/q) f(x; n, p)$, with $q = 1-p$, is a recurrence relation satisfied by the PMF of a binomial distribution with parameters $n$ and $p$. They are useful tools to find distribution functions, moments, and inverse moments of statistical distributions [Chattamvelli and Jones, 1996]. Unless otherwise stated, a recurrence relation in the rest of this chapter will mean a relation connecting arbitrary terms of a sequence.

Definition 4.1 A recurrence relation is an algebraic expression that relates the $n$th term of an ordered sequence to one or more prior terms, along with a function of the index $n$.

Recurrence relations allow us to compute a new term of a sequence using one or more other known terms. By convention, the term with the highest index is expressed as a linear combination of prior terms. This allows forward computation of terms using known prior values, which may be stored in temporary arrays or simple variables (that are updated in each successive pass).


The order of a recurrence relation is the difference between the maximum index and the minimum index in it. As an example, $a_n = 2a_{n-1} + 3n$ is a first-order recurrence relation. Similarly, $F_n = F_{n-1} + F_{n-2}$ is of order 2 because the $n$th term depends on the $(n-2)$th term. The degree of a recurrence relation is the highest power of the terms involved in it. Thus, $a_n^2 = a_{n-1} \cdot a_{n+1}$ is of degree two. Population growth models use the logistic map defined as $a_{n+1} = \lambda a_n (1 - a_n)$, where $0 < \lambda \le 4$ and $|a_k| \le 1$. Strassen's matrix multiplication algorithm satisfies the recurrence $T(n) = 7\,T(n/2) + c n^2$ for $n > 2$, where $c$ is a constant and $n$ is the size of the square matrices to be multiplied together (usually $n$ is assumed to be a power of 2). The complexity is assumed to be a constant when $n \le 2$. This results in the time complexity $O(n^{\log_2 7})$, which is approximately $O(n^{2.81})$, where $O(\cdot)$ indicates the Big-O notation.


Example 4.14 Find the OGF for the recurrence $c a_n = b a_{n+1} + a_{n+2}$, if $a_0 = 0$ and $a_1 = 1$.

Solution 4.14 Let $F(t)$ be the OGF. Then multiplying both sides by $t^n$ and summing over $n = 0$ to $\infty$ gives $c F(t) = b[F(t) - a_0]/t + [F(t) - a_0 - a_1 t]/t^2$. Solve for $F(t)$ to get $F(t) = [a_0 + t(a_1 + a_0 b)]/(1 + bt - ct^2)$. Put the initial values $a_0 = 0$ and $a_1 = 1$ to get $F(t) = t/(1 + bt - ct^2)$.

Example 4.15 Find the OGF for the Fibonacci numbers defined as $F_n = F_{n-1} + F_{n-2}$, with $F_0 = 1$ and $F_1 = 1$. Hence obtain the OGF for $(F_0, 0, F_2, 0, F_4, 0, \ldots)$.

Solution 4.15 Let $F(t)$ be the OGF $F_0 + F_1 t + F_2 t^2 + \cdots$. Then multiplying both sides of $F_n = F_{n-1} + F_{n-2}$ by $t^n$ and summing over $n = 2$ to $\infty$ gives $\sum_{n=2}^{\infty} F_n t^n = \sum_{n=2}^{\infty} F_{n-1} t^n + \sum_{n=2}^{\infty} F_{n-2} t^n$. The first term on the RHS is $F_1 t^2 + F_2 t^3 + \cdots$; take $t$ as a common factor to get this in the form $t(F(t) - F_0)$. The second term is obviously $t^2 F(t)$. The LHS is $F(t) - F_0 - F_1 t$. Substitute $F_0 = F_1 = 1$ to get $F(t) - 1 - t = t(F(t) - 1) + t^2 F(t)$. Rearrange to get $F(t)[1 - t - t^2] = 1$, from which $F(t) = 1/(1 - t - t^2)$. Factor $1 - t - t^2 = (1 - \alpha t)(1 - \beta t)$, where $\alpha = (1 + \sqrt{5})/2$ and $\beta = (1 - \sqrt{5})/2$. Split into partial fractions as $A/(1 - \alpha t) + B/(1 - \beta t)$ and solve for $A$ and $B$ to get $A = \alpha/\sqrt{5}$ and $B = -\beta/\sqrt{5}$. Thus $1/(1 - t - t^2) = (1/\sqrt{5})\left[\alpha/(1 - \alpha t) - \beta/(1 - \beta t)\right]$. Expanding each factor as a geometric series, we obtain the Fibonacci number as $F_n = \left(\left[(1+\sqrt{5})/2\right]^{n+1} - \left[(1-\sqrt{5})/2\right]^{n+1}\right)/\sqrt{5}$.

As there is a one-to-one correspondence between a linear recurrence and its OGF, one can be obtained from the other using a simple trick: the denominator coefficients of the OGF, with signs negated, are the recurrence coefficients. As an example, if the OGF is $1/(1 + dt + ct^2)$, the recurrence is $a_n = -d a_{n-1} - c a_{n-2}$, and vice versa. The OGF for the alternating Fibonacci sequence $(F_0, 0, F_2, 0, F_4, 0, \ldots)$ is obtained using Theorem 1.2 on page 11 as $(F(t) + F(-t))/2 = \frac{1}{2}\left(1/[1 - t - t^2] + 1/[1 + t - t^2]\right)$. If the initial term $F_0$ is assumed to be zero, the entire sequence gets shifted by one position to the right, resulting in the OGF $F(t) = t/[1 - t - t^2]$.
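The Binet-type formula just derived can be checked against the recurrence itself; a short Python sketch (illustrative only, with $F_0 = F_1 = 1$ as in the text):

```python
# Verify F_n = (alpha^(n+1) - beta^(n+1)) / sqrt(5) against the recurrence.
from math import sqrt

alpha = (1 + sqrt(5)) / 2
beta = (1 - sqrt(5)) / 2

F = [1, 1]
for n in range(2, 30):
    F.append(F[-1] + F[-2])

for n in range(30):
    binet = (alpha ** (n + 1) - beta ** (n + 1)) / sqrt(5)
    assert round(binet) == F[n]
print(F[:6])  # [1, 1, 2, 3, 5, 8]
```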

4.3.4 TOWERS OF HANOI PUZZLE

The Towers of Hanoi (ToH) puzzle involves three identical vertical pegs (rods) and several discs of varying diameters, all stacked on an initial peg (the source peg) in decreasing order of size. The challenge is to move all disks from the source peg (X) to the target peg (Z), using the remaining peg (Y) as an intermediary. The moves must satisfy the following conditions: (i) only one disk can be moved at a time, (ii) at no point in time should


a disk of larger diameter be atop a smaller one on any of the pegs, and (iii) only disks sitting at the top of a peg can be moved. There exists an elegant solution using recursion. Suppose we somehow move the top $n-1$ disks from the source peg X to the intermediate peg Y using the target peg Z as an auxiliary. Then it is a simple matter to move the largest-diameter disk from the source peg X to the target peg Z. Now we have $n-1$ disks still sitting on peg Y. If we re-label the source and intermediate pegs (exchange the roles of X and Y), it should be possible to move the $n-1$ disks from Y to Z using X as auxiliary. If $C(n)$ denotes the minimum number of moves needed to solve the puzzle, we get the recurrence $C(n) = 2C(n-1) + 1$, because $C(n-1)$ moves are needed first to move the top $n-1$ disks from X to Y, and again to move them from Y to Z. Obviously, $C(1) = 1$. Multiply both sides by $t^n$ and sum over $n$ from 1 to $\infty$ (as the number of discs must be at least one) to get $\sum_n C(n) t^n = 2 \sum_n C(n-1) t^n + \sum_n t^n$. Take one $t$ outside the first term on the RHS, and use $\sum_{n=1}^{\infty} t^n = t/(1-t)$, to get $F(t) = 2tF(t) + t/(1-t)$. This gives $F(t) = t/[(1-t)(1-2t)]$. Write this as $A/(1-2t) + B/(1-t)$. Take $(1-t)(1-2t)$ as a common denominator and compare the coefficients to get $A + B = 0$ and $-2B - A = 1$. This results in $A = +1$ and $B = -1$, so that the $n$th term is $C(n) = 2^n - 1$. For $n = 1, 2,$ and 3, the number of moves is 1, 3, and 7, respectively.
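The recursive solution and the closed form $C(n) = 2^n - 1$ can be sketched as follows (a Python illustration using the peg labels X, Y, Z from the text):

```python
# Count the moves made by the standard recursive Towers of Hanoi solution
# and check them against the closed form C(n) = 2^n - 1.
def hanoi(n, src, dst, aux, moves):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)      # move n-1 disks out of the way
    moves.append((src, dst))                # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)      # bring the n-1 disks back on top

for n in range(1, 11):
    moves = []
    hanoi(n, 'X', 'Z', 'Y', moves)
    assert len(moves) == 2 ** n - 1
print([2 ** n - 1 for n in (1, 2, 3)])  # [1, 3, 7]
```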

4.4 APPLICATIONS IN GRAPH THEORY

Graph theory is a branch of discrete mathematics in which inter-relationships (adjacency, incidence, containment, common traits, etc.) between entities are represented graphically: nodes represent entities, and edges connecting them indicate relationships. A labeled graph is one with distinct labels assigned to the vertices (nodes). The labels can be numbers, letters, names, formulas (like chemical formulas), icons, or emojis, and their purpose is to uniquely identify the nodes. Two labeled graphs are isomorphic if there is a one-to-one mapping between their nodes that preserves the labels. A graph without self-loops (edges from a node to itself) or parallel edges is called a simple graph (otherwise it is a multi-graph). All graphs in the following discussion are assumed to be simple graphs. A connected graph is one in which at least one path exists between every pair of nodes; otherwise, it is called an unconnected (or disconnected) graph. By convention, the null graph and the singleton graph (with one node) are considered to be connected. Trees are special types of connected graphs in which there is just one path between any pair of nodes. A disconnected collection of trees has a special name: a forest.

4.4.1 GRAPH ENUMERATION

If there are $n$ vertices available, any subset of the $\binom{n}{2}$ pairs of vertices (possible edges) results in a graph, but not all of them are connected graphs. Consider all possible labeled graphs on $n$ vertices. As there are $\binom{n}{2}$ possible edges for $n$ nodes, there exist $\binom{\binom{n}{2}}{k}$ ways of choosing $k$ edges from those. Summing this over all possible values of $k$ gives the GF for the number of simple labeled graphs with $n$ nodes as $L_n(t) = \sum_{k=0}^{\binom{n}{2}} \binom{\binom{n}{2}}{k} t^k$. This has closed form $(1+t)^{\binom{n}{2}}$. Put $t = 1$ to get the total number of simple labeled graphs with $n$ nodes as $2^{\binom{n}{2}}$. This is the series A006125 in the Online Encyclopedia of Integer Sequences (oeis.org/A006125). Let $C(n)$ denote the number of connected simple labeled graphs with $n$ nodes. It has the GF $G_n(t) = \sum_{k=0}^{\binom{n}{2}} G_{n,k} t^k$, where $G_{n,k}$ is the number of connected simple labeled graphs with $n$ vertices and $k$ edges. Splitting each graph into the connected component containing a fixed vertex (with $k$ vertices) and the remaining $n-k$ vertices yields the recurrence

    C(n) = 2^{\binom{n}{2}} - \sum_{k=1}^{n-1} \binom{n-1}{k-1} C(k) \, 2^{\binom{n-k}{2}}  for n > 2;  C(1) = 1.    (4.5)

This can be rewritten (shifting the summation index) as the recurrence

    C(1) = C(2) = 1;  C(n) = 2^{\binom{n}{2}} - \sum_{k=0}^{n-2} \binom{n-1}{k} \, 2^{\binom{n-k-1}{2}} C(k+1)  for n > 2,    (4.6)

which is obtained by counting the total number of disconnected graphs and subtracting it from $2^{\binom{n}{2}}$, the total number of possible graphs (OEIS A001187). Note that $\binom{n-k-1}{2}$ is to be interpreted as zero when $n-k-1$ is less than 2. Analogous results for directed graphs (digraphs) are obtained by replacing $\binom{n}{2} = n(n-1)/2$ by $n(n-1)$. Thus, there are $2^{n(n-1)}$ labeled digraphs on $n$ nodes. By the symmetry of the binomial coefficients, the total number of labeled digraphs with $m$ edges is the same as the number of digraphs with $n(n-1) - m$ edges. The GF is given by $G(t) = \sum_{k=0}^{n(n-1)} D_{n,k} t^k$, where $D_{n,k}$ is the number of directed graphs with $n$ nodes and $k$ edges [Stanley, 2015]. Chromatic polynomials used in graph coloring are alternating series which can be regarded as a Möbius function, or represented using the Pochhammer falling factorial notation. For a complete graph $K_n$ with $n$ nodes, and $\lambda \ge n$ colors, this takes the simple form

    K_n(\lambda) = \lambda(\lambda - 1)(\lambda - 2) \cdots (\lambda - n + 1),    (4.7)

from which a falling Pochhammer GF for chromatic polynomials follows as

    F_{K_n}(\lambda; t) = \sum_{n=1}^{\infty} (\lambda)_{(n)} \, t^n.    (4.8)

Graph Enumeration Using EGF

The EGF is better suited here because the number of graphs increases rapidly as a function of the node count $n$. Let $C(t)$ denote the EGF for connected labeled graphs, and $D(t)$ denote the EGF for labeled graphs (which need not be connected). Then $D(t)$ is given by $D(t) = \sum_{n=0}^{\infty} 2^{\binom{n}{2}} t^n/n!$. They are related as $D(t) = \exp(C(t) - 1)$. This gives $C(t) = 1 + \ln\left(\sum_{n=0}^{\infty} 2^{n(n-1)/2} t^n/n!\right)$, where $\ln$ denotes log to the base $e$. Expand the infinite series to get $C(t) = 1 + \ln(1 + t + 2t^2/2! + 8t^3/3! + 64t^4/4! + \cdots)$. Now use $\ln(1+x) = x - x^2/2 + x^3/3 - \cdots$ to get $C(t) = 1 + t + t^2/2! + 4t^3/3! + 38t^4/4! + \cdots$. This shows that the number of connected labeled graphs with $k$ vertices is 1, 1, 1, 4, 38, etc., for $k = 0, 1, 2, \ldots$ (OEIS A001187).
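The connected-graph counts 1, 1, 4, 38, ... can be reproduced by subtracting disconnected graphs from $2^{\binom{n}{2}}$, as described above. The following Python sketch uses one standard form of the recurrence (OEIS A001187); it is an illustration, not from the text:

```python
from math import comb

# Number of connected simple labeled graphs on n nodes (OEIS A001187),
# by removing graphs whose component containing node 1 has k < n vertices.
def connected(N):
    C = [0, 1]                       # C[1] = 1; C[0] unused
    for n in range(2, N + 1):
        total = 2 ** comb(n, 2)      # all simple labeled graphs on n nodes
        for k in range(1, n):
            total -= comb(n - 1, k - 1) * C[k] * 2 ** comb(n - k, 2)
        C.append(total)
    return C

print(connected(6)[1:])  # [1, 1, 4, 38, 728, 26704]
```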

4.4.2 TREE ENUMERATION

Enumerating trees can also be done easily using GFs. To illustrate, suppose we need to count the number of binary trees with $n$ vertices,^4 or the number of ways in which a binary tree with $n$ nodes can be drawn on a plane (planar tree). This is given by the Catalan number discussed in Chapter 2 [Wilf, 1994]. Let $C(n)$ denote the number of binary trees with $n$ nodes. There is only one tree (a singleton node) when $n = 1$, so that $C(1) = 1$. Otherwise, count the number of nodes to the left of the root and those to the right. If there are $j$ nodes in the left subtree, there must be $n - 1 - j$ nodes in the right subtree. As $j$ can vary between 0 and $n-1$, we get $C(n) = C(0)C(n-1) + C(1)C(n-2) + \cdots + C(n-1)C(0)$. This is the convolution which is already solved in Chapter 2. Thus, $C(n) = \binom{2n}{n}/(n+1)$. The number of ordered trees on $n$ vertices is $C(n-1)$. The total number of labeled trees on $n$ vertices is $n^{n-2}$; this is known as Cayley's formula. For rooted trees this becomes $n^{n-1}$. The GF for the number of rooted trees with $n$ vertices is $r(t) = t \exp\left(\sum_{k=1}^{\infty} r(t^k)/k\right)$. The GF for counting general trees can be obtained from this using Otter's formula as $T(x) = r(x) - \left[r(x)^2 - r(x^2)\right]/2$.

There are many other interpretations of the Catalan number. Suppose an epidemic is spreading fast in a community, and males and females are equally likely to be infected. If it is known that exactly $n$ males and $n$ females have been infected, the number of infection orderings in which the cumulative number of male patients is always at least as large as the cumulative number of female patients is given by $C(n)$. Another application, on matched parentheses, is discussed on page 68. When two entirely different problems have the same GF, there must exist a bijection between them. Thus, GFs are also used in other branches of science, like the theory of functions in mathematics, enumeration of organic compounds in chemistry, equivalence classes, etc.

4.5 APPLICATIONS IN CHEMISTRY

Organic chemistry is the study of carbon compounds. An organic molecule is an assemblage of distinct atoms in which some atoms are linked to others by "valency bonds." Each single carbon atom has four valencies (four other atoms can be bonded to it). An atom's valence in chemistry is the same as vertex degree in graph theory. The sum of the valencies of all vertices of a connected graph is twice the number of edges. This implies that the number of odd vertices in a connected graph is even. Molecular graphs (also called constitutional graphs) used in chemistry are graphs of chemical structures (the hydrogen atoms are usually suppressed, as they can be inferred from the rest of the graph). Six bondings are possible when two carbon atoms form a direct link (called a C-C link). Hydrocarbons are compounds that contain only carbon and hydrogen atoms. Straight-chain hydrocarbons that have only C-C and C-H bonds are called alkanes (also called paraffins). Chemical graph theory is a branch of mathematical chemistry that deals with applications of graph theory in chemistry (especially organic chemistry, where nodes represent atoms, and edges represent covalent two-electron bonds).

^4 We may enumerate using total nodes, internal nodes, or leaf nodes, or using the out-degree of each non-leaf node.

4.5.1 POLYMER CHEMISTRY

Polymers are high-molecular-weight organic compounds formed by bonding or aggregating many smaller molecules called monomers. Several materials found in nature, such as proteins, starch, and the synthetic fibers used in clothing, are polymers. Plastics of various types, resins, and rubbers are also polymers in wide use. Synthetic polymers account for more than half of the compounds produced by the chemical industry. A few monomer units are subjected to chemical reactions under controlled conditions to form polymers. The kinetic approach uses high-pressure tubular reactors with a catalyst (usually oxygen) to bond the monomers; this is called addition reaction or chain growth. Another approach, in which two or more molecules are combined and a stable small molecule (like water or carbon dioxide) is eliminated, is called condensation reaction or step growth. Isomers are compounds that have an identical formula but different structure or spatial arrangement. Working with polymers mathematically is a challenge due to the variety of structures. The PMF has been used for a long time to study them. The number, weight, and chromatographic probability mass functions are the most popular ones in common use. The molecular weight distribution is usually assumed to follow the log-normal distribution. If the length of a molecule is assumed to follow a discrete distribution with PMF $P(N)$, the corresponding PGF is $F(t) = \sum_{n=0}^{\infty} t^n P(N = n)$.

Example 4.16 Consider a polymer formed using 3 monomers, say P, Q, R. Suppose P and Q can be used any number of times, but R can be used only an even number of times. Find the number of polymers of length $n$ that can be formed using P, Q, and R.

Solution 4.16 We will use an EGF for each of the possible choices. As P and Q can be used any number of times, the EGF for each of them is $\exp(t)$. As R can be used only an even number of times, it has EGF $1 + t^2/2! + t^4/4! + t^6/6! + \cdots$.
As all possible choices are allowed, the EGF for our problem is the product $e^t e^t \left(1 + t^2/2! + t^4/4! + t^6/6! + \cdots\right)$. Using the result in Chapter 2, this can be simplified to $e^{2t}\left(e^t + e^{-t}\right)/2$. Split this into two terms to get $\left(e^{3t} + e^t\right)/2$. Expand each term as an infinite series and collect the coefficient of the $n$th term to get $a_n = (3^n + 1)/2$. If R can be used only an odd number of times, the corresponding EGF is $e^{2t}\left(e^t - e^{-t}\right)/2$, which gives $a_n = (3^n - 1)/2$. Both of them are integers because the last digit of $3^n$ is always 1, 3, 7, or 9, so that addition or subtraction of 1 results in an even integer, which when divided by 2 results in an integer.

Consider the problem of forming polymers using the condensation approach, where one type of monomer is present. The OGF for the total number of ways in which this process can happen is $P_x(m, t) = \left(\sum_{k \ge 1} t^k\right)^m$, where $m$ is the total number of distinct polymers. Taking $t$ as a common factor, this can be written as $\left[t/(1-t)\right]^m$, so that the coefficient of $t^n$ is $\binom{n-1}{m-1}$.
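The count $a_n = (3^n + 1)/2$ can be confirmed by brute force for small $n$ (an illustrative Python sketch, not from the text):

```python
from itertools import product

# Brute-force count of words of length n over {P, Q, R} in which R occurs
# an even number of times; should equal (3^n + 1) / 2.
def count_even_R(n):
    return sum(1 for w in product('PQR', repeat=n) if w.count('R') % 2 == 0)

for n in range(1, 9):
    assert count_even_R(n) == (3 ** n + 1) // 2
print([count_even_R(n) for n in range(1, 5)])  # [2, 5, 14, 41]
```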

4.5.2 COUNTING ISOMERS OF HYDROCARBONS

Alkanes are hydrocarbons with general structure $\mathrm{C}_n\mathrm{H}_{2n+2}$, where C denotes a carbon atom, H denotes a hydrogen atom, subscripts denote the number of atoms of each type, and $n \ge 1$ is an integer.^5 Thus, for $n$ carbon atoms in an alkane there are $2n+2$ hydrogen atoms (examples are $\mathrm{CH}_4$ = methane, $\mathrm{C}_2\mathrm{H}_6$ = ethane, $\mathrm{C}_3\mathrm{H}_8$ = propane, $\mathrm{C}_4\mathrm{H}_{10}$ = butane), so that they are saturated (no free space exists for other atomic bonds). A tree ensues when every atom symbol (carbon or hydrogen) is replaced by a node, and every chemical bond by an undirected edge. Because of saturation, the tree representation (called a molecular graph) assumes that carbon atoms have degree 4 and hydrogen atoms have degree 1. Alcohols are hydroxycarbons with general structure $\mathrm{C}_n\mathrm{H}_{2n+1}\mathrm{OH}$. Thus a special type of tree called a "1-4 tree," in which each node has degree 1 or 4, is used to model such molecules. A GF for alkanes (using unrooted 1-4 trees) is found in three steps. First find a GF for alcohols with $n$ carbon atoms, $2n+1$ hydrogen atoms, and one OH group as

    D(t) = 1 + t + t^2 + 2t^3 + 4t^4 + 8t^5 + 17t^6 + 39t^7 + 89t^8 + 211t^9 + \cdots,    (4.9)

which satisfies the functional equation $D(t) = 1 + (t/6)\left[D(t)^3 + 3D(t^2)D(t) + 2D(t^3)\right]$. Next find a GF for rooted 1-4 trees with the root a carbon atom (valency 4). This GF is given by

    G(t) = t + t^2 + 2t^3 + 4t^4 + 9t^5 + 18t^6 + \cdots,    (4.10)

which satisfies $G(t) = (t/24)\left[D(t)^4 + 6D(t)^2 D(t^2) + 8D(t^3)D(t) + 3D(t^2)^2 + 6D(t^4)\right]$. From this the GF for alkanes is found as

    F(t) = 1 + t + t^2 + t^3 + 2t^4 + 3t^5 + 5t^6 + 9t^7 + 18t^8 + 35t^9 + 75t^{10} + 159t^{11} + \cdots,    (4.11)

which satisfies $F(t) = G(t) + D(t) - \left[D(t)^2 - D(t^2)\right]/2$. There are no isomers for carbon counts $n = 1, 2, 3$, but for $n = 4$ there exist two distinct isomers, and for $n = 5$ there are three isomers. The isomer count increases rapidly thereafter: there are 75 isomers for $n = 10$ and 366319 isomers for $n = 20$.
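The alcohol series (4.9) can be reproduced directly from its functional equation by fixed-point iteration on truncated power series. The following Python sketch (illustrative; exact rational coefficients) recovers the first ten coefficients:

```python
# Reproduce the alcohol-counting series (4.9) from the functional equation
#   D(t) = 1 + (t/6) [ D(t)^3 + 3 D(t^2) D(t) + 2 D(t^3) ],
# using power-series arithmetic truncated at degree N.
from fractions import Fraction

N = 10  # number of coefficients to compute

def mul(a, b):
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def plug(a, k):
    # a(t^k), truncated at degree N
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if i * k < N:
            c[i * k] = ai
    return c

D = [Fraction(0)] * N
D[0] = Fraction(1)
for _ in range(N):  # degree d stabilizes after d iterations
    rhs = [x + 3 * y + 2 * z for x, y, z in
           zip(mul(mul(D, D), D), mul(plug(D, 2), D), plug(D, 3))]
    newD = [Fraction(0)] * N
    newD[0] = Fraction(1)
    for d in range(1, N):
        newD[d] = rhs[d - 1] / 6   # multiply bracket by t/6, add the constant 1
    D = newD

print([int(x) for x in D])  # [1, 1, 1, 2, 4, 8, 17, 39, 89, 211]
```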

Example 4.17 Suppose a compound is to be formed from carbon, hydrogen, and oxygen atoms only. Assume that two of them can be used with a lower-bound restriction (any number of atoms $\ge k$), but one of them (say oxygen) must always be even (either zero, or an even number of atoms). Find a GF for the number of compounds that can be formed if there are a total of $n$ atoms.

Solution 4.17 As oxygen should occur an even number of times, it has EGF $1 + t^2/2! + t^4/4! + \cdots$. This EGF is already discussed in Chapter 1, and it is $(\exp(t) + \exp(-t))/2$. As the other two atoms can occur $k$ or more times, the EGF for each is $t^k/k! + t^{k+1}/(k+1)! + \cdots$. Add the missing $k$ terms (including the constant) to express this in the form $\exp(t) - \sum_{j=0}^{k-1} t^j/j!$. The required EGF is the product of the individual EGFs. This gives $F(t) = \left((\exp(t) + \exp(-t))/2\right)\left(\exp(t) - \sum_{j=0}^{k-1} t^j/j!\right)^2$.

The displaced geometric distribution discussed on page 37 finds applications in polymer chemistry. As an example, suppose a molecule has exactly $k$ monomer residues. Then there are $k-1$ reactions, each with probability $p$, followed by a non-reaction, so that the PMF becomes $f(x) = p^{x-1}(1-p)$. This has PGF $qt/(1-pt)$, where $q = 1-p$.

^5 Alkenes have double bonds and structure $\mathrm{C}_n\mathrm{H}_{2n}$, whereas alkynes have triple bonds and structure $\mathrm{C}_n\mathrm{H}_{2n-2}$.
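The displaced geometric PMF and its PGF can be sanity-checked numerically; a sketch with an illustrative value $p = 0.7$ (not from the text):

```python
# PMF f(x) = p^(x-1) q for x = 1, 2, ..., with q = 1 - p.
# Its PGF is F(t) = q t / (1 - p t), so F(1) = 1 and F'(1) = q/(1-p)^2 = 1/q.
p, q = 0.7, 0.3

total = sum(p ** (x - 1) * q for x in range(1, 2000))
mean = sum(x * p ** (x - 1) * q for x in range(1, 2000))

assert abs(total - 1.0) < 1e-9     # probabilities sum to one
assert abs(mean - 1 / q) < 1e-6    # mean number of residues is 1/q
```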

4.6 APPLICATIONS IN EPIDEMIOLOGY

The spread of infectious diseases among multiple species in a community is modeled using statistical techniques. Certain assumptions are often made in such modeling (transmission distributions are independent and identical, loop-backs do not exist, transmission is not rampant). For simplicity, it is assumed in the following discussion that a single species (humans) is involved (inter-species spread, as in transmission through mosquitoes and insects, is not considered). An outbreak indicates the beginning of an epidemic in a region during a certain time interval, when one or more persons are infected from outside the population under study. A "contact" means that an infected person comes into contact with an uninfected person, who also gets the disease. This contact need not be physical (bodily), as there are diseases spread through contaminated water (cholera), close physical proximity through air (TB, influenza, chickenpox, some viruses like SARS, Chikungunya), sharing or handling of food (through an infected restaurant worker; e.g., Salmonella, Shigella, Campylobacter), items (like beds and sofas), or devices. There are also diseases that spread through multiple media (e.g., influenza through air and through intimate bodily contact). If a person is more likely to infect one of their contacts than to become infected, the spread is called asymmetric, and otherwise symmetric. As an example, health-care workers (HCW), care-givers in community centers, and visitors are more likely to be infected by a patient than the other way around.


direction of spread (for which directed graphs are used). Asymmetry may arise among males and females (as there are diseases with gender susceptibility, in which case two types of nodes are distinguished (say red and black or circle and square), or two separate graphs (one for each gender) are used). The spread can be one-to-one, one-to-many, or many-to-one (many-to-many are not considered as it can be modeled using many-to-one). Edge weights are probabilities (of transmission), contact duration, difference between ages of individuals, etc. Quite often, a special node (called start node that represents who originated the disease) is marked away to model the spread. Such epidemics are called single-source epidemic. But an epidemic may outbreak simultaneously with n individuals with absolutely no connectivity at all (multi-source epidemic). One popular model used in epidemiology is the SIR model in which a population of N individuals is divided into three mutually exclusive groups called Susceptible, Infected, and Removed (recovered (or dead)). Although dead persons cannot infect others, inclusion of them can throw more insight in some models. Moreover, in computer simulations, a dead person will necessitate a node and arcs removal (incident on the node) in the corresponding graph. This can easily be done if the graph is represented as an adjacency or incidence matrix. A person recovered from an infection is either returned back to the susceptible group (this is called SIS model which stands for Susceptible, Infected, Susceptible), or returned to a conditional group (some diseases like chickenpox and smallpox have a property that a recovered person will not get infected either throughout their lifetime, or during a fixed period). There are several quantities that can be modeled if the semi-directed degree distribution, and the probabilities of disease transmission to offspring are known. 
Examples are the number of active infections in a generation, the number of recovered persons during a fixed time period, the number of recovered and dead persons, the proportion of the population infected, etc. The PGF approach is a mathematically sound method used in epidemiology and many other medical fields. It can be used to predict disease emergence, progression, and extinction. This has important implications in health-care policy making, online monitoring and evaluation of disease progression, detecting exceedances of known threshold transmission rates, developing effective plans of action, and feeding information into early warning systems. Prior data on such epidemics may also be used to find the probability that an epidemic will last for a certain period of time before it is contained (brought under control), or to predict patient deaths. To use the GF approach, it is assumed that the rates of transmission are IID random variables. First consider a simple model that pertains to an infected individual. If p_k denotes the probability that an individual (a node) infects k others (k offspring) during a fixed time interval (before recovery or death), we could model it using the PGF F(t) = Σ_{k≥0} p_k t^k, where p_0 is the chance that an infected person does not transmit the disease to anyone. The expected number of infected individuals is then given by F'(t)|_{t=1}. If independence of the random variables is assumed, all disease transmissions caused by an individual can be modeled using the geometric law. The PGF has a closed form in such cases. As mentioned in Chapter 3, the probability that a single individual caused k infections can then be obtained from the PGF as p_k = (1/k!)(∂/∂t)^k F(t)|_{t=0}. As shown in Chapter 3, PGFs can also be used to find higher-order moments and other statistics.
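These two PGF operations (the mean from F'(1), and the probabilities from the coefficients) can be sketched numerically. The geometric offspring law and the parameter a below are illustrative assumptions, not values from the text:

```python
# Assumed geometric offspring law: p_k = (1-a) * a**k, PGF F(t) = (1-a)/(1-a*t).
a = 0.4
pk = [(1 - a) * a**k for k in range(200)]            # truncated PMF
mean_from_pmf = sum(k * p for k, p in enumerate(pk))

# F'(t) at t = 1 via a central finite difference on the closed-form PGF
F = lambda t: (1 - a) / (1 - a * t)
h = 1e-6
mean_from_pgf = (F(1 + h) - F(1 - h)) / (2 * h)

print(round(mean_from_pmf, 6), round(mean_from_pgf, 6))  # both near a/(1-a)
```

The expected offspring count a/(1 − a) comes out the same way from the PMF and from the derivative of the PGF.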

4.6.1 DISEASE PROGRESSION AND CONTAINMENT

If an infected person, or a source (as in a water-borne or non-human-carrier-based epidemic), transmits the disease to m offspring (persons who contract the disease from that person or source), we could consider each of them as independent events that follow the same distribution. Then the progress can be modeled as sums of IID random variables, whose coefficients are found using convolution (Chapter 2). As dividing an OGF by (1 − t) results in an OGF with partial sums (the CDFGF discussed in Chapter 3) as coefficients, we could estimate the probabilities of disease extinction and steadiness, as well as the number of generations (n), using PGF techniques. The constant coefficient in the PGF or CDFGF of the convolution can be used to check for containment of the spread, because the other higher coefficients (in the PGF) become negligible when offspring no longer spread the disease. Assume that all individuals at some offspring generation n stop further disease transmission (p_0 goes up to one). In that case the extinction probability satisfies p* = P(t)|_{t=p*}, and the complementary probability (that the disease establishes itself as a steady pandemic) is 1 − p*, where p* is the solution of p = P(t)|_{t=p}, as p_0 in this case will tend to zero. This is because p* P(t)/(1 − t) (the coefficients are p* times the CDF, because P(t)/(1 − t) is the CDFGF) results in the convolution of the coefficients of our OGF and (1, 1, 1, ...), which when equated to p* forces the convolutional sum to be 1, so that the "leftover probabilities" are zeros. Our aim then is to estimate such a p*. If P(t) has a closed form, this can be obtained either using analytical methods or iterative procedures.
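A minimal sketch of the iterative procedure, assuming a Poisson offspring PGF P(t) = exp(R0(t − 1)); the Poisson choice and the R0 values are illustrative, not prescribed by the text:

```python
import math

def extinction_prob(R0, iters=200):
    """Iterate p_{n+1} = P(p_n) for the assumed PGF P(t) = exp(R0*(t-1))."""
    p = 0.0                              # P(extinct by generation 0)
    for _ in range(iters):
        p = math.exp(R0 * (p - 1.0))
    return p

print(round(extinction_prob(0.8), 4))   # subcritical R0 < 1: certain extinction, 1.0
print(round(extinction_prob(2.0), 4))   # supercritical R0 > 1: smallest root below 1
```

When R0 > 1 the iteration converges to the smallest root of p = P(p) in [0, 1), and 1 − p* is the probability of a sustained outbreak.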
If the in-degrees (number of incoming edges to a node; X) and out-degrees (number of outgoing edges from a node; Y) are distinguished in a directed network, the PGF becomes P(s, t) = Σ_{j,k≥0} p(j, k) s^j t^k, where p(j, k) is the joint PMF that a randomly chosen individual had j contacts before being infected and produces k offspring (i.e., infects k others who do not have the disease). Then the marginal PGF of X (in-degree) is given by P(s, 1), and that of Y (out-degree) is given by P(1, t). They are independent if P(s, t) = P(s, 1) P(1, t) for all values of s and t. If directed and undirected in- and out-degrees are separately considered, we need three dummy variables (one for incoming edges, and two for outgoing directed and undirected edges). The incoming edges can also be considered separately as directed and undirected. An advantage of this approach is that the mean in-degree (as in the case of HCW infections from patients), the mean out-degree (average number of persons to whom an infected person has transmitted the disease), etc., can be found separately.
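The marginal means can be read off a joint PMF numerically by differentiating P(s, 1) and P(1, t); the joint PMF below is made up purely for illustration:

```python
# Hypothetical joint PMF p(j, k) over (in-degree j, out-degree k); sums to 1.
pjk = {(0, 0): 0.1, (1, 0): 0.2, (1, 2): 0.3, (2, 1): 0.25, (2, 3): 0.15}

def P(s, t):
    """Bivariate PGF P(s, t) = sum over (j, k) of p(j, k) * s**j * t**k."""
    return sum(p * s**j * t**k for (j, k), p in pjk.items())

# Marginal means: E[X] = dP(s, 1)/ds at s = 1, E[Y] = dP(1, t)/dt at t = 1
h = 1e-6
mean_in = (P(1 + h, 1) - P(1 - h, 1)) / (2 * h)
mean_out = (P(1, 1 + h) - P(1, 1 - h)) / (2 * h)
print(round(mean_in, 4), round(mean_out, 4))
```

For this PMF both means equal 1.3, as they must in a closed contact network where every incoming edge is someone else's outgoing edge.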



4.7 APPLICATIONS IN NUMBER THEORY

Number theory is a branch of mathematics that studies various properties of numbers (usually positive integers). A partition of an integer n is a sum of positive numbers that add up to n. A combinatorial argument can be used to find the number of ordered partitions (compositions) of n. Write n as a linear list of n 1's with a space in between. For example, 3 = 1 1 1. Obviously, there are n − 1 spaces present. Let a slash character denote a division of the n ones into groups. As there are n − 1 spaces, there exist n − 1 ways to place a single slash. This is the number of ways to select one space from n − 1, which is C(n−1, 1). Similarly, 2 slashes can be placed in C(n−1, 2) ways, and so on. There is just one way to put n − 1 slashes in n − 1 spaces. Adding them up, together with the single arrangement that uses no slash at all, gives C(n−1, 0) + C(n−1, 1) + C(n−1, 2) + ··· + C(n−1, n−1) = (1 + 1)^{n−1} = 2^{n−1}.
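The slash argument can be verified by brute force; `compositions` below is a hypothetical helper name used only for this sketch:

```python
from itertools import product

def compositions(n):
    """Enumerate compositions of n: each of the n-1 gaps holds a slash or not."""
    out = []
    for bits in product([0, 1], repeat=n - 1):   # slash pattern in the gaps
        parts, run = [], 1
        for b in bits:
            if b:                                # slash closes the current group
                parts.append(run)
                run = 1
            else:
                run += 1
        parts.append(run)
        out.append(parts)
    return out

print(len(compositions(5)))   # 2**4 = 16
```

Each slash pattern yields one composition, so the count is exactly 2^{n−1}.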

Consider a positive integer n > 1. Let p(n) denote the number of partitions where repetitions are allowed, and Q(n) denote the number of partitions into distinct parts. By convention, the parts are written in decreasing order (from largest to smallest). Obviously, p(0) = Q(0) = 1. As parts can be repeated in p(n), a 1 can appear any number of times from 0 up to n, so that it can be captured by the power series 1 + t + t^2 + t^3 + ··· + t^n. Similarly, the number of two's can be captured by 1 + t^2 + t^4 + ···, and so on. Using the product rule, the OGF is the product (1 + t + t^2 + t^3 + ··· + t^n)(1 + t^2 + t^4 + ···) ··· (1 + t^n). This is the same as Σ_{n≥0} p(n) t^n = Π_{k≥1} 1/(1 − t^k). This can also be written as Σ_{n≥1} p(n) t^n = 1 + Σ_{k≥1} t^k / [(1 − t)(1 − t^2) ··· (1 − t^k)]. Similarly, the GF for Q(n) is given by

Σ_{n≥0} Q(n) t^n = 1 + Σ_{k≥1} t^{k(k+1)/2} / [(1 − t)(1 − t^2) ··· (1 − t^k)] = 1 + t + Σ_{k≥2} t^k (1 + t)(1 + t^2) ··· (1 + t^{k−1}),

which is convergent for |t| < 1. The GF can be used to prove that the total number of partitions of a positive integer n into distinct parts is the same as the number of partitions of n into odd parts. Here "odd parts" means that all parts are odd integers. For example, 8 = 7 + 1 = 5 + 3 = 5 + 1 + 1 + 1 are partitions into odd parts, whereas 8 = 6 + 2 = 4 + 2 + 2 are partitions into even parts. Let n be an integer ≥ 1. The GF for the number of partitions of n into distinct parts is d(x) = Π_{n≥1} (1 + x^n) (written d(x) here to avoid a clash with p(n) above). Now consider the partitions of n into odd parts. The OGF is o(x) = Π_{n≥1} 1/(1 − x^{2n−1}). Use (a + b)(a − b) = a^2 − b^2, with a = 1 and b = x^n applied to (1 − x^n)(1 + x^n) = 1 − x^{2n}, to get (1 + x^n) = (1 − x^{2n})/(1 − x^n). Now d(x) = Π_{n≥1} (1 − x^{2n})/(1 − x^n). Split this into the product of numerator terms and the product of denominator terms to get d(x) = Π_{n≥1} (1 − x^{2n}) / Π_{n≥1} (1 − x^n). Split the denominator product into an "odd-terms product" and an "even-terms product" as Π_{n≥1} (1 − x^n) = Π_{n≥1} (1 − x^{2n−1}) Π_{n≥1} (1 − x^{2n}). Substitute in the above and cancel Π_{n≥1} (1 − x^{2n}) from numerator and denominator to get d(x) = 1 / Π_{n≥1} (1 − x^{2n−1}) = o(x). This proves the result.
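Euler's distinct-parts/odd-parts identity can also be checked numerically by truncating both products at a fixed degree:

```python
# Compare coefficients of prod (1 + t^k) against prod 1/(1 - t^k), k odd,
# both truncated at degree N. Equality is Euler's partition identity.
N = 40

def polymul(a, b, N):
    """Multiply two coefficient lists, discarding terms above degree N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    c[i + j] += ai * bj
    return c

distinct = [1] + [0] * N
odd = [1] + [0] * N
for k in range(1, N + 1):
    factor = [1 if j in (0, k) else 0 for j in range(N + 1)]   # (1 + t^k)
    distinct = polymul(distinct, factor, N)
    if k % 2 == 1:
        geo = [1 if j % k == 0 else 0 for j in range(N + 1)]   # 1/(1 - t^k)
        odd = polymul(odd, geo, N)

print(distinct[:11])
print(distinct == odd)   # True
```

Truncation is safe here because parts larger than N cannot occur in a partition of any m ≤ N.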

4.8 APPLICATIONS IN STATISTICS

Some of the applications of GFs in statistics appear in Chapter 3. Some more applications are introduced in this section.

4.8.1 SUMS OF IID RANDOM VARIABLES

Sums of independent random variables occur in many practical applications. The PGF in such cases is obtained using convolution, if they are assumed to be identically distributed. Consider a queuing model in which X_k is the service time needed for customer k. If n customers arrive in a fixed time period t, the total service time needed is S_n = Σ_{k=1}^n X_k. In an insurance domain, let X_k denote the number of accidents, deaths, or claims arising from policy k; the total over n policies in a fixed time period t is again S_n = Σ_{k=1}^n X_k. Similarly, let n denote the number of customers who visit an ATM machine for cash withdrawal. If X_k is the amount withdrawn by customer k in a fixed time interval (say 24 hours), then S_n = Σ_{k=1}^n X_k is the total amount dispensed by the ATM. If X_1, X_2, ..., X_n are independent random variables with PGFs p_k(t), the sum S_n = Σ_{k=1}^n X_k has PGF P_{S_n}(t) = p_1(t) p_2(t) ··· p_n(t). When all of them have the same distribution, we get P_{S_n}(t) = [p(t)]^n. If the PGF of the sum of a finite number of random variables is known, we could derive the distribution of the component random variables using independence and uniqueness assumptions. Theorem 4.1 If X_1, X_2, ..., X_N are IID random variables with common PGF G_X(t), where N is an integer-valued random variable with PGF F_N(t) that is independent of the X_i's, then the sum S_N = X_1 + X_2 + ··· + X_N has PGF given by H_{S_N}(t) = F_N(G_X(t)). Proof. By definition, H_{S_N}(t) = E(t^{S_N}) = E_N[E(t^{S_N} | N)]. As N is an integer-valued random variable, this can be written as Σ_{n} E(t^{S_N} | N = n) P(N = n). Now expand S_N to get Σ_{n} E(t^{X_1 + X_2 + ··· + X_n}) P(N = n). This can be written as Σ_{n} [G_X(t)]^n P(N = n) = F_N(G_X(t)). This allows us to write P(S_N = x) = [t^x] Σ_{n} p_n [G_X(t)]^n, where p_n = P(N = n). Differentiate with respect to t and put t = 1 to get E(S_N) = E(N) E(X). Example 4.18 A fair die with faces marked 1–6 is thrown, and the number N that turns up is noted.
Then a coin with probability of heads p is tossed N times. Find the OGF for the number of heads that turn up. Solution 4.18 We know that P(N = k) = 1/6 for k = 1, 2, ..., 6, as the die is fair. This gives the OGF of N as P(t) = Σ_{k=1}^6 t^k/6. As the second part involves N Bernoulli trials, we have a BINO(N, p) distribution with PGF (q + pt)^N. As N is a random variable, we find the PGF of the number of heads as Σ_{n=1}^6 p_n (q + pt)^n = (1/6) Σ_{n=1}^6 (q + pt)^n.
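Example 4.18 can be checked by expanding H(t) = (1/6) Σ_{n=1}^6 (q + pt)^n directly and reading off the PMF; p = 0.3 is an illustrative value:

```python
from math import comb

# Expand the compound PGF term by term: each n contributes (1/6)*(q + p t)^n.
p, q = 0.3, 0.7
pmf = [0.0] * 7                        # number of heads ranges over 0..6
for n in range(1, 7):
    for k in range(n + 1):
        pmf[k] += (1 / 6) * comb(n, k) * p**k * q**(n - k)

mean = sum(k * pk for k, pk in enumerate(pmf))
print(round(sum(pmf), 6), round(mean, 6))   # 1.0, 1.05 (= E[N]*p = 3.5*0.3)
```

The mean 3.5p agrees with E(S_N) = E(N) E(X) from Theorem 4.1.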

4.8.2 INFINITE DIVISIBILITY

A statistical distribution X is called infinitely divisible iff for each positive integer n it can be represented as a sum of n IID random variables, S_n = X_1 + X_2 + ··· + X_n, where the X_k's have a common distribution. Additivity and infinite divisibility are related, because some distributions are infinitely divisible using location parameters. One common example is the normal law: N(μ_1, σ_1²) + N(μ_2, σ_2²) = N(μ_1 + μ_2, σ_1² + σ_2²) if the summands are independent. Other stable distributions (e.g., the Cauchy distribution) are also infinitely divisible by location parameter. The characteristic function of such a distribution has the inherent property that it is the n-th power of some characteristic function, due to the fact that the characteristic function of a sum of IID random variables is the product of the individual characteristic functions. Symbolically, φ(t) = [φ_n(t)]^n. But the converse need not be true. Consider the binomial distribution with characteristic function (q + pe^{it})^n. If n is finite, we could write this as a convolution in only a finite number of ways, because n must also be positive. As an example of infinite divisibility, the characteristic function of a Poisson distribution with parameter λ can be written as exp(λ(e^{it} − 1)) = [exp((λ/n)(e^{it} − 1))]^n, showing that the Poisson distribution is infinitely divisible. Steutel and Van Harn [1979] proved that a distribution with non-negative support (i.e., taking values 0, 1, 2, ...) and p_0 > 0 is infinitely divisible iff it satisfies

(n + 1) p_{n+1} = Σ_{k=0}^n q_k p_{n−k}, for n = 0, 1, 2, ...,  (4.12)

where the q_k are non-negative and Σ_{k≥0} q_k/(k + 1) < ∞. As the RHS represents the coefficients of a convolution of the power series p_0 + p_1 t + p_2 t^2 + ··· + p_n t^n + ··· and q_0 + q_1 t + q_2 t^2 + ··· + q_n t^n + ···, and (n + 1) p_{n+1} is the (n + 1)-th coefficient of (t ∂/∂t) F(t), where F(t) is the PGF of the distribution, we could state the above as follows: "A distribution with PGF F(t) is infinitely divisible if (t ∂/∂t) F(t) can be represented as a convolution of F(t) and G(t), where G(t) is the PGF of the q_k's." As the PGF of the Poisson distribution is F(t) = exp(λ(t − 1)), (t ∂/∂t) F(t) = λt exp(λ(t − 1)), so that G(t) = λt, giving q_1 = λ and q_k = 0 for all other k.
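For the Poisson case, the convolution statement (t ∂/∂t)F(t) = F(t) G(t) with G(t) = λt reduces to the coefficient identity n p_n = λ p_{n−1}, which is easy to verify:

```python
from math import exp, factorial

# Poisson PMF p_n = exp(-lam) * lam**n / n!; the coefficient identity behind
# the infinite-divisibility statement is n * p_n = lam * p_{n-1}.
lam = 2.5
p = [exp(-lam) * lam**n / factorial(n) for n in range(20)]
ok = all(abs(n * p[n] - lam * p[n - 1]) < 1e-12 for n in range(1, 20))
print(ok)   # True
```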

4.8.3 APPLICATIONS IN STOCHASTIC PROCESSES

Consider a population of organisms that undergoes a birth-death process. The model is purely deterministic if reproduction occurs at a constant rate. In practice, however, reproduction is random, as many extraneous factors may affect it. Such processes are called stochastic processes. Assume that h is a short period of time during which a reproduction occurs with probability λh.


Time is measured in discrete units. This will of course depend on the organism under study. The size of the population in the current generation t determines its size in the next generation (t + h). There exist two possibilities if the population count is to reach N at time (t + h): either it is N at time t and no birth occurs during the time interval h, or it is (N − 1) at time t and one birth occurs during the interval h. The probability of more than one birth occurring during the interval h is precluded by choosing h small enough for each species. Hence, the probability of (N − 1) species increasing in size to N in the time interval (t, t + h) is (N − 1)λh. Likewise, the probability of N species increasing in size to N + 1 in the time interval (t, t + h) is Nλh. Then the complementary probability 1 − Nλh denotes the probability of no increase. Denote by p_N(t) the probability that the population is of size N at time t. Then

p_N(t + h) = p_N(t)(1 − Nλh) + p_{N−1}(t)(N − 1)λh.  (4.13)

Divide both sides by h and take the limit h → 0 to get

lim_{h→0} [p_N(t + h) − p_N(t)]/h = ∂p_N(t)/∂t = −Nλ p_N(t) + (N − 1)λ p_{N−1}(t).  (4.14)

The solution is p_N(t) = C(N − 1, n_0 − 1) exp(−λ n_0 t)(1 − exp(−λt))^{N−n_0}, where n_0 = N(0). This is the negative binomial distribution discussed in Chapter 3. As the geometric distribution is a special case of it, we get p_N(t) = exp(−λt)(1 − exp(−λt))^{N−1} when n_0 = 1. The CDFGF and SFGF are also useful in continuous statistical distributions, because the tail areas of central chi-square and beta distributions are related to the sums of tail probabilities of Poisson and binomial distributions, respectively ([Chattamvelli, 2012], [Shanmugam and Chattamvelli, 2016]).
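A quick check of the n_0 = 1 solution (λ and t below are illustrative values): the distribution is geometric with success probability exp(−λt), so the probabilities sum to 1 and the mean population size is exp(λt).

```python
from math import exp

# p_N(t) = exp(-lam*t) * (1 - exp(-lam*t))**(N-1) for N = 1, 2, ...
lam, t = 0.7, 2.0
s = exp(-lam * t)                        # geometric success probability
pN = [s * (1 - s)**(N - 1) for N in range(1, 3000)]   # truncated support

total = sum(pN)
mean = sum(N * p for N, p in enumerate(pN, start=1))
print(round(total, 6), round(mean, 4))   # total 1.0, mean exp(lam*t)
```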

4.9 APPLICATIONS IN RELIABILITY

A component in a system is considered as either working or non-working in reliability theory. Let the probability that it is working be p. Then q = 1 − p denotes the probability of a non-working component. This is called a binary system. An extension in which each component can be perfectly working, partially working, or non-working is called a multi-state system. A system with n identical components⁶ that works only if at least k out of the n of them work is called a k-out-of-n system. Obviously, k is less than n. One example is in internal combustion engines with identical spark plugs. An automobile with six spark plugs may work properly when two of them are down. Some machines may work with lower performance when the number of components goes down further (as in automobiles or multi-processor-based parallel computers), but there is a limit on the number of non-working components in electronics, computer networking, etc. In practical modeling of k-out-of-n systems, it is assumed that all components are independently and identically distributed, because it greatly simplifies the underlying mathematics. The probability that the number of working components is at least k is given by

R(k, n) = Σ_{j=k}^n C(n, j) p^j q^{n−j}.  (4.15)

⁶ Theoretically, the components need not be exactly identical, as in distributed computing or network routing.

This represents the survival probability of a binomial distribution BINO(n, p). A GF for this could be obtained using the SFGF discussed in Chapter 3. Replace k by k + 1 in (4.15) and subtract from it to get the recurrence R(k, n) − R(k + 1, n) = C(n, k) p^k q^{n−k}. This has a representation in terms of the incomplete beta function. Using the symmetry property of the beta distribution, (4.15) can also be represented as

R(k, n) = Σ_{j=k}^n C(j − 1, k − 1) p^k q^{j−k}.  (4.16)
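Equation (4.15), the recurrence, and the alternative form (4.16) can all be verified numerically; p, n, and k below are illustrative values:

```python
from math import comb

def R(k, n, p):
    """Eq. (4.15): probability that at least k of n IID components work."""
    q = 1 - p
    return sum(comb(n, j) * p**j * q**(n - j) for j in range(k, n + 1))

def R_alt(k, n, p):
    """Eq. (4.16): the same quantity via the beta-symmetry form."""
    q = 1 - p
    return sum(comb(j - 1, k - 1) * p**k * q**(j - k) for j in range(k, n + 1))

p, n, k = 0.9, 6, 4
assert abs(R(k, n, p) - R(k + 1, n, p)
           - comb(n, k) * p**k * (1 - p)**(n - k)) < 1e-12
assert abs(R(k, n, p) - R_alt(k, n, p)) < 1e-12
print(round(R(k, n, p), 6))
```

For a 4-out-of-6 system with p = 0.9 the reliability is about 0.984, noticeably higher than a single component's.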

Several variants of this system are in use in various disciplines. As an example, some machine-learning ensemble models are built using several (say n) base models. If the base models have error rate p < 1/2, we could get an estimate of the error made by the ensemble model using independence of the base models. Suppose each model is used for data classification. Then P[ensemble error] = P[n/2 or more models misclassify the data] = Σ_{k=⌈n/2⌉}^n C(n, k) p^k (1 − p)^{n−k}, where p denotes the probability of error made by each of the base models. As another example, consider the "n-version" systems: mission-critical software, firmware, or hardware systems independently developed by different groups of people who do not communicate with each other. Different groups may use different tools and technologies (like programming languages or operating systems in n-version software systems, or processors from different manufacturers in n-version hardware systems). The systems are tested simultaneously, and a voting mechanism is used to decide which of the n systems produced the best output. Assume that all versions have the same reliability r. If a majority-voting mechanism is used, the reliability can be expressed as

R(k, n) = Σ_{j=⌈k⌉}^n C(n, j) r^j (1 − r)^{n−j},  (4.17)

which is structurally similar to Equation (4.15). The same technique is also used in fault-tolerant software programs where different software versions are studied for failure events that are assumed to be independent.

4.9.1 SERIES-PARALLEL SYSTEMS

Suppose a system is composed of several parts in series. This means that the parts are connected in series in the case of hardware systems; in the case of software systems, it means that there are sequential modules to be executed one after another. Assume that each part is comprised of m_i components in parallel. Such parallel components increase the reliability of the system, because if one of them becomes non-functional, the others spring into action and keep the system going. In the case of software systems, this may mean that the same functionality is implemented using several "functions" that use different algorithms or programming languages. If one function throws an exception (a recoverable run-time error), we could call one of the other identical functions. Such systems are called series-parallel systems. Let r_i denote the reliability of a component in part D_i. If there are n single components in series, the reliability of the entire system is Π_{k=1}^n r_k. As each series part is comprised of m_i parallel components, the probability that all of them fail is (1 − r_i)^{m_i}, so the reliability at stage i is 1 − (1 − r_i)^{m_i}. Assume for simplicity that each of the r_i is the same (r). Then the reliability of the entire system is Π_{k=1}^n [1 − (1 − r)^{m_k}]. To find a GF for this expression, take logarithms to convert the product into a sum, log(F(t)) = Σ_{k=1}^n log[1 − (1 − r)^{m_k}] t^k. This has a closed-form expression when either all of the m_i are the same, or they are distributed according to one of the well-known series.
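A sketch of the series-parallel computation, assuming (as argued above) that a parallel stage of m components each with reliability r works with probability 1 − (1 − r)^m; the stage sizes below are illustrative:

```python
def series_parallel(r, ms):
    """Reliability of n parallel stages in series; ms[i] components in stage i."""
    rel = 1.0
    for m in ms:
        rel *= 1 - (1 - r)**m     # stage works unless all m components fail
    return rel

# Sanity checks: a single 1-component stage is just r, and redundancy helps.
assert abs(series_parallel(0.9, [1]) - 0.9) < 1e-12
assert series_parallel(0.9, [2, 2]) > series_parallel(0.9, [1, 1])
print(round(series_parallel(0.9, [2, 3, 2]), 6))
```

Even modest redundancy (2-3 components per stage) pushes the three-stage system reliability from 0.9³ = 0.729 to about 0.979.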

4.10 APPLICATIONS IN BIOINFORMATICS

GFs are used in various places in bioinformatics. As an example, suppose there are restrictions on how many of each type of (A, C, G, T) (where A = Adenine, C = Cytosine, G = Guanine, and T = Thymine) are allowed in a sequence. These can be incorporated into a GF as described in what follows.

4.10.1 LIFETIME OF CELLULAR PROTEINS

Assume that the lifetime of cellular proteins is exponentially distributed with rate λ. If there are n independent protein molecules under observation, the mean time until the first protein degrades is the mean of the minimum of n IID exponential random variables, which is 1/(nλ), because the CDF of the minimum is 1 − exp(−nλx). Let T_n denote the time until the final protein degrades. Then E(T_n) = Σ_{k=1}^n 1/(kλ). Taking 1/λ outside as a constant, this shows that the OGF is F(t) = (1/λ) Σ_{k=1}^n t^k/k, which at t = 1 equals (1/λ)H_n, where H_n is the harmonic number already discussed above. GFs are also used for enumerating the number of k-noncrossing RNA structures.
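The harmonic-number mean can be cross-checked against the inclusion-exclusion formula for the maximum of n IID exponentials; λ and n below are illustrative values:

```python
from math import comb

# E[time until the last of n proteins degrades] two ways:
#  (1) sum of successive minima: sum over k of 1/(k*lam)  ->  H_n / lam
#  (2) inclusion-exclusion on max of n IID Exp(lam) variables
lam, n = 0.5, 8
Hn = sum(1 / k for k in range(1, n + 1))
t_minima = sum(1 / (k * lam) for k in range(1, n + 1))
t_incl_excl = sum((-1)**(k + 1) * comb(n, k) / (k * lam) for k in range(1, n + 1))
print(round(t_minima, 6), round(t_incl_excl, 6))   # both equal H_n / lam
```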

4.10.2 SEQUENCE ALIGNMENT

Sequence alignment is a popular technique in proteomics, bioinformatics, and genomics. A string will denote a sequence of letters from a fixed alphabet. A position-specific scoring matrix (PSSM), also called a profile weight matrix, is often used in these fields (e.g., for amino acids in proteomics, or nucleotide sequences within DNA in genomics) to capture the information contained in multiple alignments. Let W denote a weight matrix whose n columns contain alignment data and whose m rows contain the letters of the alphabet. Then w_kj denotes the score for a letter of type a_k in column j. As an OGF often uses positive coefficients, the PSSM scores may have to be scaled to positive real numbers. GFs used in bioinformatics and genomics are of the weighted type, in which the dummy variable (Chapter 1) is raised to w_kj to get the weighted OGF

F_j(t) = Σ_{k=1}^m p_k t^{w_kj},  (4.18)

where p_k is the probability assigned to the distribution of letters in the string, and the alignments of sequences are assumed to be IID random variables. Assuming independence of letters, the OGF for a PSSM match is given by the product of the individual OGFs as

F(t) = Π_{j=1}^n Σ_{k=1}^m p_k t^{w_kj}.  (4.19)

The coefficient of t^s gives the probability of getting a score s when F(t) is expanded as a power series.
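The product (4.19) can be expanded column by column, accumulating coefficients by score; the integer scores w and letter probabilities p below are made-up values for illustration:

```python
from collections import defaultdict

p = [0.1, 0.2, 0.3, 0.4]                 # letter probabilities (4-letter alphabet)
w = [[1, 0, 2], [0, 2, 1], [3, 1, 0], [1, 1, 1]]   # w[k][j]: score of letter k in column j
ncols = 3

dist = {0: 1.0}                          # GF coefficients, start with F = 1
for j in range(ncols):
    nxt = defaultdict(float)
    for s, prob in dist.items():         # multiply by column j's GF (eq. 4.18)
        for k, pk in enumerate(p):
            nxt[s + w[k][j]] += prob * pk
    dist = dict(nxt)

total = sum(dist.values())
print(round(total, 6))                   # coefficients sum to 1
```

`dist[s]` is then the probability of total score s, i.e., the coefficient of t^s in F(t).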

4.11 APPLICATIONS IN GENOMICS

The PGF can be used for enumerating the number of crossovers in computational genetics. Mapping functions used in genetics are mathematical functions that quantify the relative positions of genetic markers arranged linearly on a map. This avoids the problem of recombination fractions not being additive for two-locus models. The earliest known mapping function is the Haldane map (1919), m = f(x) = −log(1 − 2x)/2 for 0 ≤ x < 0.5, which has the inverse map x = (1 − exp(−2|m|))/2. If M(x) denotes a mapping of physical distances to genetic distances, F(t) denotes the PGF of the distribution of crossovers, λ denotes the mean number of crossovers, and p_k denotes the probability of k crossovers, we could define M(x) in terms of the PGF as

M(x) = (1 − F(1 − 2x/λ))/2.  (4.20)

Assume that the number of crossovers follows the binomial distribution BINO(n, p). Then from Chapter 3, F(t) = (q + pt)^n, where q = 1 − p. Substitute t = 1 − 2x/λ where λ = np, cancel out p, and use p + q = 1 to get the PGF as F(1 − 2x/λ) = (1 − 2x/n)^n. Substitute in (4.20) to get the map M(x) = [1 − (1 − 2x/n)^n]/2 with M(0) = 0. Putting n = 1 gives M(x) = x. This is called complete interference, as there is only one possible crossover. As lim_{n→∞} (1 − a/n)^n = exp(−a), the above expression approaches the well-known Haldane map. Simplified expressions can be obtained when the number of crossovers is odd or even, as discussed in Shanmugam and Chattamvelli [2016, Ch. 6]. The probability of more than k or fewer than k crossovers, where k is a fixed constant < n/2, can also be found. A zero-truncated binomial distribution is used when the probability of "no crossover" is to be ignored. The Catalan numbers discussed in Chapter 2 also find applications in genetics. If crossovers are restricted to adjacent positions only, the number of possible ways in which single crossovers could occur is given by the Catalan number.

Many diversity measures and indices are also used in genomics. One simple example is the gene diversity measure H defined as H = C(1 − Σ_{k=1}^m x_k^2), where n is the number of individuals in a population, m is the number of alleles at a locus, x_k is the frequency of the k-th allele, and C is either (1 − 1/n) or (1 − 1/(2n)), depending on whether the species under consideration is self-fertilizing or not. The multiplier C acts as a small-sample correction factor. When all alleles have equal frequency, we have x_k = 1/m, so that 1 − Σ_{k=1}^m x_k^2 = 1 − Σ_{k=1}^m (1/m)^2 = 1 − m/m^2 = 1 − 1/m. Recurrences can easily be developed for n fixed and m varying, or vice versa. This reduces to the heterozygosity average⁷ when C = 1. Recurrences are also used in equilibrium proportions for adjacent generations, and in inbreeding models of genetics. For example, when alleles are drawn randomly from N individuals in a population, the probability that the same allele is drawn twice is 1/(2N), and the probability that the two alleles drawn are different is 1 − 1/(2N). If f_t denotes the inbreeding coefficient for generation t, the probability that two alleles are identical by descent satisfies the recurrence f_{t+1} = C + (1 − C) f_t, where C = 1/(2N). Subtract both the LHS and RHS from 1 to get 1 − f_{t+1} = (1 − C)(1 − f_t). Put (1 − f_t) = g_t to get g_{t+1} = (1 − C) g_t, which is a geometric progression with common ratio 1 − C, so that it can be solved as in Chapter 2 to get g_t = (1 − C)^t g_0, where g_0 is the initial heterozygosity. Substitute back g_t = (1 − f_t) to get (1 − f_t) = (1 − C)^t (1 − f_0).
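The inbreeding recurrence and its closed form can be compared numerically; the population size N, initial coefficient f_0, and horizon T below are illustrative:

```python
# Inbreeding recurrence f_{t+1} = C + (1 - C) f_t versus the closed form
# 1 - f_t = (1 - C)**t * (1 - f_0), with C = 1/(2N).
N, f0, T = 50, 0.1, 25
C = 1 / (2 * N)

f = f0
for _ in range(T):
    f = C + (1 - C) * f                  # iterate the recurrence

closed = 1 - (1 - C)**T * (1 - f0)       # geometric-progression solution
print(round(f, 8), round(closed, 8))
```

Both routes give the same f_T, confirming the geometric-progression solution.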

4.12 APPLICATIONS IN MANAGEMENT

GFs are not as popular in management science as in other fields (exceptions are the pricing and dynamic market models discussed below). This is perhaps due to the fact that the majority of GFs encountered in the management sciences are simple in structure, for which first- or second-order recurrence relationships exist. This is precisely the reason for the popularity of recurrences in banking and finance, accounting, portfolio management, pricing dynamics, and other fields. A first-order recurrence relation relates an arbitrary term (other than the initial one) in a sequence to the adjacent (previous or next) term in the same sequence. It is called linear if it does not involve square roots, fractional powers, or higher-order terms. In other words, it is of the form A_{n+1} = C A_n + D, where C and D are constants. The initial term A_0 must be known to start the iterations. A first-order linear recurrence is used in flat-rate, unit-cost, and reducing-balance depreciation models. As an example, simple interest can be represented as the first-order recurrence A_{n+1} = A_n + Pr/100 if r is the annual interest rate in percentage, and as A_{n+1} = A_n + Pr/12 if r is a fraction (between 0 and 1) and interest is computed monthly. The divisor must be carefully chosen depending upon whether the interest rate is specified as a percentage or as a fraction, and whether the period is yearly, half-yearly, quarterly, or monthly. Similarly, flat-rate and unit-cost depreciations are linear recurrences A_{n+1} = A_n − Pr/100, where P is the initial value of an asset and A_{n+1} is the asset value after n + 1 years. These are called linear growth and decay models.

A geometric sequence is a set of ordered numbers (real or complex) in which each term (except the first) is obtained by multiplying the previous term by a fixed number, called the common ratio. It can be finite or infinite. Quite often, they are finite in the management sciences. Expressions involving geometric progressions appear in different places in economics and finance. Examples are geometric growth and depreciation models like the present-value models, future-value (or terminal-value) models, dividend discount models, and discounted cash-flow models used in finance, and the price-elasticity-of-demand models used in economics. If P is the principal amount deposited in a compound interest scheme that yields r percent interest per annum, the accumulated amount at the end of n years is given by A = P(1 + r/100)^n. The corresponding recurrence for geometric growth and decay models is A_{n+1} = A_n R, where R = (1 + r/100) for compound interest and r is the interest rate per compounding period. The balance depreciation models⁸ are similar, except that R = (1 − r/100). If A_k denotes the accumulated amount at the end of the k-th year, an OGF for the accumulated amount can be found as F(t) = Σ_{k=1}^n A_k t^k. For convenience, it is assumed that n is very large (tends to ∞), because the GF method does not care even if the sequence goes to infinity, as our aim is just to extract one or two coefficients from it. Substitute P(1 + r/100)^k for A_k, and take P as a constant, to get F(t) = P Σ_{k≥0} (1 + r/100)^k t^k = P/[1 − (1 + r/100)t]. This is a compact expression which is amenable to further operations.

⁷ Different genes that occupy the same locus are called alleles, and an individual in a population with two different alleles for some gene is called heterozygous for that gene.
When this is expanded as an infinite series, the coefficient of t^n gives the accumulated amount at the end of n years. Suppose we wish to determine the growth from the m-th year to the n-th year, where m < n. This is given by [t^n]{P/[1 − (1 + r/100)t]} − [t^m]{P/[1 − (1 + r/100)t]}. It satisfies the recurrence relationship A_n = (1 + r/100) A_{n−1}, with A_0 = P. The linear and geometric models may have to be combined in some problems. Consider a situation where the interest on a loan is compounded continuously but loan repayments are made on a regular basis. This results in a linearly decreasing and geometrically increasing model, which will reduce the balance if the repayment amount is greater than the interest. It can be modeled as A_{n+1} = (1 + r/100) A_n − C, where C is the annual amount repaid. Replace (1 + r/100) by (1 − r/100), and −C by +C, to get a geometrically decaying and linearly increasing model. Both the linear and geometric terms can also be increasing in some problems. Consider an investment scheme in which a fixed amount C is deposited every period (say monthly), and interest on the current balance is also compounded during the same period. This results in the recurrence A_{n+1} = (1 + r/100) A_n + C, where A_0 is the first payment (the principal amount, which can be C). Similarly, if the monthly payment on a loan of initial amount P is C, the balance A_{n+1} satisfies the recurrence A_{n+1} = (1 + r/(12 × 100)) A_n − C, with A_0 = P.

⁸ This also applies to value depreciation models of assets or items that are acquired for use, and undergo wear and tear or decrease in quality due to continuous use or inactivity.
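A numeric sketch comparing the recurrence A_n = (1 + r/100) A_{n−1} with the coefficient [t^n] of F(t) = P/[1 − (1 + r/100)t]; P, r, and n below are illustrative values:

```python
# Accumulated amount under compound interest, two ways.
P, r, n = 1000.0, 5.0, 10
R = 1 + r / 100

A = P
for _ in range(n):
    A = R * A                      # iterate the recurrence A_k = R * A_{k-1}

coeff = P * R**n                   # [t^n] of P/(1 - R t) is P * R**n
print(round(A, 2), round(coeff, 2))   # both 1628.89
```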


4.12.1 ANNUITY

An annuity is a mortgage or trust account into which regular payments are made. The recurrences for annuities and pensions, where a certain amount is invested and regularly withdrawn (usually after a lapse of time), follow the same pattern. Equal amounts of cash or other asset flows occurring as a stream at fixed periods is known as an annuity; if the flows are unequal, it is called a mixed stream (where C will vary). Consider an annuity with fixed interest rate r into which a constant amount C is invested. The future value (FV) at the end of n periods is given by

FV_n = C(R^n + R^{n−1} + ··· + R^1),  (4.21)

where R is (1 + r) or (1 + r/100) as discussed above, and it is assumed that payments are made at the beginning of each period (so that interest accrues on each payment by the end of the period). Note that the initial amount C becomes CR at the end of the first period, so that the initial C itself is not counted. Take R as a common factor on the RHS of (4.21), and use the sum of a geometric progression, to get FV_n = CR(R^n − 1)/(R − 1). This satisfies the recurrence FV_n = FV_{n−1} + CR^n. The net present value (NPV) models (also known as discounted cash-flow models) are very similar, because R is used as a divisor:

NPV = C_n/R^n + C_{n−1}/R^{n−1} + ··· + C_1/R + C_0,  (4.22)

where C_0, C_1, ..., C_n are the cash inflows during time periods 0 through n, and interest is compounded.
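A check of (4.21) against the closed form CR(R^n − 1)/(R − 1); C, r, and n below are illustrative values:

```python
# Future value of an annuity: build FV_n from the recurrence
# FV_k = FV_{k-1} + C * R**k, then compare with the closed form.
C, r, n = 100.0, 6.0, 20
R = 1 + r / 100

fv = 0.0
for k in range(1, n + 1):
    fv += C * R**k                 # payment made k periods before the end

closed = C * R * (R**n - 1) / (R - 1)
print(round(fv, 2), round(closed, 2))   # both 3899.27
```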

4.13 APPLICATIONS IN ECONOMICS
Economic growth and decay models are used to project the future growth of world economies. They are applied either at the entity level (like a country), the sector level (like agriculture, or the industrial output of various industries), or the commodity level. The simplest is the geometric model discussed above, E_{n+1} = R E_n, with initial condition E_0 = K, where E_0 denotes a particular stage in the economy. This may be the beginning of a year, the point before an economic recession sets in, or it may coincide with another related event (like a natural disaster, economic reforms, new rules or laws that affect the economy, changes in governance due to elections, etc.). Here R is an average, (1/n) * sum_{i=1}^{n} r_i, computed from key economic factors like the inflation rate, import-export deficits (balance of trade), foreign exchange rates of various currencies, etc. The magnitude of R decides whether growth or decay occurs. A GF can easily be obtained as before. But R need not remain constant over time, so recurrence relations are much better suited for modeling, as they allow R to vary over time. A combined linear and geometric model can also be used when inflows (like exports, loans, bank contributions) and outflows (imports, loan repayments) are incorporated. The GF approach can be used when derivatives (say of first or second order) are involved


on either side of the equation, as in pricing models. As discussed in Chapter 2, differentiation of a GF results in a left-shift of the coefficients.
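The point that recurrences accommodate a time-varying R can be sketched as follows (a hypothetical illustration; the names are not from the text):

```python
def project_economy(E0, rates):
    """Trajectory [E_0, ..., E_n] under E_{k+1} = R_k * E_k, where
    rates supplies the per-period factors R_0, ..., R_{n-1}; passing
    a constant list reproduces the geometric model E_{n+1} = R*E_n."""
    path = [E0]
    for R in rates:
        path.append(R * path[-1])
    return path
```

Two periods of 10% growth followed by a 10% contraction, for instance, are modeled by rates = [1.1, 1.1, 0.9], something a fixed-R GF cannot express directly.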

4.14 SUMMARY This chapter discussed several applications of GFs in algebra, analysis of algorithms, combinatorics, economics, epidemiology, genetics, graph theory, management, number theory, organic chemistry, reliability theory, and statistics. GFs are powerful tools in the analysis of algorithms, where complexity is expressed in terms of recurrence relations. The chapter also presented several recurrence relations encountered by researchers and professionals in many scientific disciplines.

Bibliography

Bóna, M., Combinatorics of Permutations, 2nd ed., CRC Press, 2012. DOI: 10.1201/b12210.

Chattamvelli, R., Statistical Algorithms, Alpha Science, Oxford, UK, 2012. 85

Chattamvelli, R. and Jones, M. C., Recurrence relations for noncentral density, distribution functions, and inverse moments, Journal of Statistical Computation and Simulation, 52(3), 289–299, 1996. DOI: 10.1080/00949659508811679. 7, 71

Graham, R., Knuth, D. E., and Patashnik, O., Concrete Mathematics, 2nd ed., Addison-Wesley, MA, 1994. 69

Grimaldi, R. P., Discrete and Combinatorial Mathematics: An Applied Introduction, 5th ed., Pearson Education, 2019. 69

Knuth, D. E., The Art of Computer Programming, vol. 1, Addison-Wesley, MA, 2011. 66

Lando, S. K., Lectures on Generating Functions, American Mathematical Society, 2003. DOI: 10.1090/stml/023.

Panjer, H. H., Recursive evaluation of a family of compound distributions, ASTIN Bulletin, vol. 11, pp. 22–26. 67

Sedgewick, R. and Wayne, K., Algorithms, Addison-Wesley, MA, 2011. 66

Sedgewick, R. and Flajolet, P., An Introduction to the Analysis of Algorithms, Addison-Wesley, MA, 2013. 66

Shanmugam, R. and Chattamvelli, R., Statistics for Scientists and Engineers, John Wiley, NY, 2016. DOI: 10.1002/9781119047063. 13, 30, 33, 37, 54, 85, 88

Shishebor, Z., Nematollahi, A. R., and Soltani, A. R., On covariance generating functions and spectral densities of periodically correlated autoregressive processes, Journal of Applied Mathematics and Stochastic Analysis, vol. 2006, Article ID 94746, 2006. https://www.hindawi.com/journals/ijsa/2006/094746/ref/ DOI: 10.1155/jamsa/2006/94746. 16

Stanley, R. P., Algebraic Combinatorics: Walks, Trees, Tableaux, and More, Springer, 2015. 75


Steutel, F. W. and Van Harn, K., Discrete analogues of self-decomposability and stability, Annals of Probability, 7, 893–899, 1979. DOI: 10.1214/aop/1176994950. 84

Wilf, H., generatingfunctionology, Academic Press, 1994. 76

Index

alkanes, 78
annuity, 91
applications
    algebra, 61
    binary search analysis, 67
    bioinformatics, 87
    combinatorics, 69
    divisibility of distributions, 84
    economics, 91
    genomics, 88
    in chemistry, 76
    in computing, 64
    in epidemiology, 79
    in graph theory, 74
    isomers count, 78
    management, 89
    merge-sort analysis, 65
    number theory, 82
    quick-sort analysis, 66
    recurrence relations, 71
    reliability, 85
    tree enumeration, 76
    well-formed parentheses, 68
basic operations, 19
binary search analysis, 67
binary tree, 25
binomial distribution, 53
    mean deviation, 45
Catalan numbers, 26, 27, 76

CDF generating function, 40, 81
ceil operator, 43, 62
CGF, 54
characteristic function, 52
    inversion theorem, 54
    of symmetric distributions, 53
    Poisson distribution, 52
    properties of, 53
    summary table, 53
chromatic polynomial, 75
combinatorics identities, 69
compound interest, 90
convolutions, 24, 69–71
cumulants, 54
divide-and-conquer, 63, 65, 66
epidemiology, 79
    SIR model, 80
exponential generating function, 11
factorial moments, 39
    generating function, 56
    geometric distribution, 57
    negative binomial distribution, 58
falling factorial, 4
Fibonacci numbers, 73
floor operator, 11, 43, 62
generating function
    arithmetic on, 20
    CDF of geometric distribution, 41


    CDF of negative binomial distribution, 41
    CDF of Poisson distribution, 41
    convergence, 3
    convolutions, 24
    definition, 2
    differentiation, 28
    existence, 2
    extract coefficients, 20
    for CDF, 40
    for factorial moments, 56
    for mean deviation, 43, 45
    for probabilities, 34
    for SF of binomial distribution, 42
    for SF of geometric distribution, 42
    for SF of negative binomial distribution, 43
    for SF of Poisson distribution, 42
    for survival function, 42
    in chemistry, 76, 78
    in combinatorics, 69
    in epidemiology, 79
    in graph theory, 74
    in number theory, 82
    in reliability, 85
    in statistics, 33, 83, 84
    integration, 30
    linear combination, 21
    of strided sequence, 64
    partial sum, 9
    scaling, 20
    shifting, 21
    standard, 6
    summary table, 35
    truncated distributions, 58
    types, 34
    types of, 5
geometric distribution, 53
    displaced, 79

    factorial moments, 57
    mean deviation, 45
geometric sequence, 7
graph enumeration, 74
Haldane map, 88
harmonic number, 10, 67
heterozygosity, 89
hypergeometric series, 38
incomplete beta function, 41, 86
infinite divisible distributions, 84
information generating function, 16
inverse
    moments, 40
    multiplicative, 31
    of sequence, 31
isomers, 77, 78
left tail probability generating function, 40
linear recurrence, 71, 72, 90
logarithmic distribution, 53
logistic map, 72
Maclaurin series, 52
mean deviation
    binomial distribution, 45
    correction term, 44, 45
    generating function, 43, 45
    geometric distribution, 45
    negative binomial distribution, 47
    Poisson distribution, 47
merge-sort analysis, 65, 66
MGF, 48
    conditional, 58
    factorial moments, 56
    of Poisson distribution, 51
    properties of, 49
    sum of IID random variables, 50


moment generating function
    binomial distribution, 49
    properties, 49
    summary table, 53
negative binomial distribution
    factorial moments, 58
    mean deviation, 47
notations, 3
    Pochhammer, 4
operations on GF, 19, 20, 23, 28
ordinary generating function, 5
partial fraction, 7, 25, 31
permutation, 11
    inversions, 62
Pochhammer
    generating function, 14, 75
    notation, 4
    number, 38
Poisson distribution, 53, 84
    characteristic function, 52
    conditional MGF, 58
    factorial moments, 57
    mean deviation, 47
polymer chemistry, 77
probability, 32, 89
    generating function, 34
probability generating function
    binomial distribution, 37
    geometric distribution, 37
    hypergeometric distribution, 38
    logarithmic distribution, 38
    negative binomial distribution, 37
    Poisson distribution, 36

    power-series distribution, 39
    properties, 39
    special values, 36
    uniform distribution, 36
quick-sort analysis, 66
radius of convergence, 3
recurrence relations, 7, 71
right tail probability generating function, 42
rising factorial, 4
roots of unity, 64
sequence alignment, 87
series-parallel systems, 86
SIR model, 80
Stirling numbers, 17
    first kind, 56
    second kind, 56
summary table
    moment generating functions, 53
    of characteristic functions, 53
    of generating functions, 35
    operations on GF, 28
    truncated generating functions, 59
survival function
    binomial distribution, 86
    generating function, 42
telescopic series, 7, 62
towers of Hanoi, 73
tree enumeration, 76
truncated distributions, 59, 88
    generating function, 58
