Market-Consistent Prices: An Introduction to Arbitrage Theory (1st ed.). ISBN 303039722X, 9783030397227

Arbitrage Theory provides the foundation for the pricing of financial derivatives and has become indispensable in both financial theory and financial practice.


Table of contents :
Preface
A Glimpse at the Theory
The Scope of the Book
The Distinguishing Features of This Book
Prerequisites
Acknowledgement
Contents
1 Random Variables: Linearity and Order
1.1 Outcomes and Events
1.2 Random Variables
1.3 The Linear Structure
1.4 Linear Functionals
1.5 Convex Sets and Functionals
1.6 The Order Structure
1.7 Monotone Functionals
1.8 Exercises
2 Probabilities and Expectations
2.1 Probability Measures
2.2 Probability Mass Function
2.3 Independent Events
2.4 Expected Value
2.5 Variance and Covariance
2.6 Change of Probability
2.7 The Meaning of Probability
2.8 Exercises
3 Random Variables: Topology and Geometry
3.1 The Norm Structure
3.2 The Topological Structure
3.3 Topology and Order
3.4 Continuous Functionals
3.5 The Inner-Product Structure
3.6 Orthogonality
3.7 Exercises
4 Extensions of Linear Functionals
4.1 Separation Theorems
4.2 Extension Results
4.3 Representation Results
4.4 Exercises
5 Single-Period Financial Markets
5.1 The Elements of the Market
5.2 Portfolios of Securities
5.3 Replicable Payoffs
5.4 Complete Markets
5.5 Changing the Unit of Account
5.6 Exercises
6 Market-Consistent Prices for Replicable Payoffs
6.1 Price and Payoff Arbitrage
6.2 The Law of One Price
6.3 Market-Consistent Prices
6.4 The Pricing Functional
6.5 Exercises
7 Fundamental Theorem of Asset Pricing
7.1 Pricing Extensions
7.2 Pricing Densities
7.3 Pricing Measures
7.4 Exercises
8 Market-Consistent Prices for General Payoffs
8.1 Marketed Bounds
8.2 Market-Consistent Prices
8.3 Characterizing Market-Consistent Prices
8.4 Sub- and Superreplication Prices
8.5 Exercises
9 Random Variables: Information and Measurability
9.1 Partitions and Atoms
9.2 Observable Events
9.3 Refining Partitions
9.4 Measurable Random Variables
9.5 Exercises
10 Conditional Probabilities and Expectations
10.1 Conditional Probabilities
10.2 Expectations Conditional on Events
10.3 Expectations Conditional on Partitions
10.4 Changing the Probability Measure
10.5 Exercises
11 Conditional Linear Functionals
11.1 Conditional Functionals
11.2 Localizations
11.3 Linearity, Convexity, Monotonicity
11.4 Exercises
12 Extensions of Conditional Linear Functionals
12.1 Scalarizations
12.2 Extension Results
12.3 Representation Results
12.4 Exercises
13 Information and Stochastic Processes
13.1 Information Structures
13.2 Stochastic Processes
13.3 Adapted Processes
13.4 Martingale Processes
13.5 Conditional Functionals
13.6 Exercises
14 Multi-Period Financial Markets
14.1 The Elements of the Market
14.2 Trading Strategies
14.3 Replicable Payoffs
14.4 Complete Markets
14.5 Changing the Unit of Account
14.6 Exercises
15 Market-Consistent Prices for Replicable Payoffs
15.1 Price and Payoff Arbitrage
15.2 The Law of One Price
15.3 Market-Consistent Prices
15.4 Pricing Functionals
15.5 Exercises
16 Fundamental Theorem of Asset Pricing
16.1 Pricing Extensions
16.2 Pricing Densities
16.3 Pricing Measures
16.4 Exercises
17 Market-Consistent Prices for General Payoffs
17.1 Marketed Bounds
17.2 Market-Consistent Prices
17.3 Characterizing Market-Consistent Prices
17.4 Sub- and Superreplication Prices
17.5 Exercises
18 Market-Consistent Prices for Payoff Streams
18.1 Payoff Streams
18.2 Terminal-Payoff Equivalents
18.3 Replicable Payoff Streams
18.4 Market-Consistent Prices
18.5 Characterizing Market-Consistent Prices
18.6 Sub- and Superreplication Prices
18.7 Exercises
19 Market-Consistent Prices for American Options
19.1 American Options
19.2 Exercise Strategies
19.3 Marketed Bounds
19.4 Market-Consistent Prices
19.5 Replicable American Options
19.6 Characterizing Market-Consistent Prices
19.7 Sub- and Superreplication Prices
19.8 Market-Consistent Strategies
19.9 Characterizing Market-Consistent Strategies
19.10 Exercises
A Sets and Maps
A.1 Logical Symbols
A.2 Sets and Maps
A.3 Basic Combinatorics
B Vector Spaces
B.1 The Vector Space Axioms
B.2 Linear Subspaces
B.3 Bases and Dimensions
B.4 Linear Maps
C Normed Spaces
C.1 The Normed Space Axioms
C.2 Norm and Topology
C.3 Sequences and Convergence
C.4 Continuous Maps
D Inner-Product Spaces
D.1 The Inner-Product Space Axioms
D.2 Orthogonal Vectors
D.3 Orthogonal Projections
D.4 Riesz Representation
Bibliography
Index
Index of Symbols


Pablo Koch-Medina Cosimo Munari

Market-Consistent Prices: An Introduction to Arbitrage Theory


Pablo Koch-Medina Department of Banking and Finance University of Zurich Zurich, Switzerland

Cosimo Munari Department of Banking and Finance University of Zurich Zurich, Switzerland

ISBN 978-3-030-39722-7
ISBN 978-3-030-39724-1 (eBook)
https://doi.org/10.1007/978-3-030-39724-1

Mathematics Subject Classification: 60-01, 91-01, 91G20

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

To Madeleine and Anaïs To Ambra, Agostino, Antonella, Nicola

Preface

This book provides a comprehensive and rigorous introduction to the mathematics of arbitrage theory. The subject matter of arbitrage theory as considered here is the valuation of financial contracts given that a finite number of them, the so-called basic securities, are traded in a market. Economic agents are assumed to be rational, in a sense to be specified further below, and to have unlimited access to the market. Since agents are also free to transact any financial contract privately, i.e., outside of the market, it is natural to ask the following question: Does the existence of the market for basic securities imply a range of prices outside of which it would be “irrational”, either for the buyer or the seller, to privately transact a given financial contract?

The purpose of arbitrage theory is to answer this question and to characterize the range of "rational" prices in various ways. Since the publications, in the 1970s and early 1980s, of the influential papers by Black and Scholes [3] and Merton [19] on option pricing and by Harrison and Kreps [13] and Harrison and Pliska [14] highlighting the general underlying principle of "no-arbitrage", arbitrage theory has established itself as a fundamental pillar not only in finance theory but, being the basis for the pricing of derivative instruments, also in financial practice. But what is the essence of arbitrage theory?

A Glimpse at the Theory

To get a general sense of arbitrage theory, we look at the simplest setting: a market with only two dates. In this setting, the basic underlying ideas can be made clear appealing only to "common sense" and avoiding excessive formalism.

Financial Contracts and Economic Agents Uncertainty at the terminal date is modelled by a finite set Ω whose elements represent the various potential terminal states of the economy. A financial contract specifies for each state ω ∈ Ω a terminal payment X(ω). The mapping X : Ω → R is called the terminal payoff of the contract. From now on we identify contracts with their payoffs. Agents are assumed to exhibit a very mild form of rationality: They prefer more to less. To see what this means for us, consider two financial contracts with payoffs X and Y such that

• X(ω) ≥ Y(ω) for every ω ∈ Ω,
• X(ω) > Y(ω) for some ω ∈ Ω.

In this case, we say that X is strictly better than Y and that Y is strictly worse than X. Our minimal rationality assumption boils down to these two rules: When confronted with the choice of buying X or Y for the same price, an agent will always prefer buying X and, similarly, when confronted with the choice of selling X or Y for the same price, an agent will always prefer selling Y.

The Financial Market and Replicable Payoffs The basic securities are represented by their (typically) uncertain payoffs and their known initial prices. Buying and selling can be executed in arbitrary quantities, with no restrictions on borrowing, and with no transaction costs. Under these assumptions, a unit of a basic security can be bought and sold at a unique price, its market price. By buying and selling basic securities at the initial date, agents can set up portfolios which generate a terminal payoff. The value of a portfolio is the cost of buying or selling the required amounts of the individual basic securities. Thus, market activity is very simple: Agents trade at the initial date to set up a portfolio of basic securities and wait until the terminal date to collect the corresponding payoff.

A payoff X that is generated by a portfolio of basic securities is special in that it can be "produced" or "replicated" by setting up the corresponding portfolio. For this reason, we call such a payoff X a replicable payoff and any portfolio replicating it a replicating portfolio for X. For convenience, we assume that the payoffs of the basic securities are linearly independent, so that every replicable payoff X has a unique replicating portfolio. In particular, there is a unique "replication cost", corresponding to the value of the replicating portfolio for X. We denote this cost by π(X) and refer to it as the market-consistent price of X. Note that only the basic securities can be directly transacted in the market, which is why we prefer to speak of the "market-consistent" rather than the "market" price of a replicable payoff. Nevertheless, it still makes sense to say that agents can "buy", respectively "sell", a replicable payoff X in the market. By this we just mean that they can access the market to set up a replicating portfolio for X, respectively −X.

The "No-arbitrage" Assumption A portfolio with zero value whose terminal payoff is strictly better than the zero payoff is said to be an arbitrage opportunity. Clearly, acquiring such a portfolio is desirable for every agent: It doesn't cost anything to set it up, it is no worse than the zero payoff in all states, and it is strictly better than the zero payoff in at least one state. Hence, if such a portfolio existed in an efficient market, it could only be short lived: There would be unlimited demand for it and market forces would drive prices to levels at which it would cease to be an arbitrage opportunity. A market is said to satisfy the no-arbitrage assumption if no arbitrage opportunities exist. This assumption can be articulated in terms of market-consistent prices for replicable payoffs: Any replicable payoff X that is strictly better than the zero payoff must have a strictly-positive market-consistent price π(X). Clearly, the no-arbitrage assumption represents an idealization that can only hold approximately in reality. In spite of this qualification, assuming the absence of arbitrage leads to a powerful theory with countless important theoretical and practical applications.

Market-Consistent Prices We now assume that there is life outside the market, so that two agents may contemplate transacting any given payoff bilaterally. At first sight, one might say that, given that the transaction is private, it is fully up to the seller and buyer to agree on whichever price they want. However, the no-arbitrage assumption implies that, for every conceivable payoff, there is a range of prices outside of which either the buyer or the seller should refuse to transact because they could get a better deal by directly trading in the market.

At the risk of stating the obvious, let us first try to understand the range of prices at which it would not be foolish to transact a replicable payoff X outside the market. In this case, it should be clear that neither buyer nor seller should consider transacting X at any price other than its market-consistent price π(X). Indeed, transacting at a higher price would not make sense for the buyer because it would be better to spend π(X) to buy X directly in the market and obtain the exact same payoff at a lower price. Similarly, transacting at a lower price would be foolish for the seller because it is clearly better to sell X directly in the market for the higher amount π(X). Hence, for a replicable payoff, the only price that is "consistent" with the market is its market price.

If every conceivable payoff is replicable, then the above discussion settles the question about the range of prices at which any given payoff can be transacted. However, in a general market, we need to consider the case where buyer and seller are contemplating transacting a nonreplicable payoff X at a price p.

The Buyer's Perspective Our rationality assumption implies that if a replicable payoff Z that is strictly better than X has a market-consistent price π(Z) ≤ p, then any agent would prefer buying Z directly in the market and would refuse to buy X for the price p. On the other hand, if all replicable payoffs Z that are strictly better than X have a market-consistent price that is higher than p, then buying X for the price p is not at odds with our minimal rationality assumption. In this case, we say that p is a market-consistent buyer price for X: At a market-consistent buyer price, the buyer cannot do better by trading directly in the market.

The Seller's Perspective In the same vein, our rationality assumption implies that if a replicable payoff Z that is strictly worse than X has a market-consistent price π(Z) ≥ p, then any agent would prefer selling Z directly in the market and would not consider selling X for the price p. However, if every replicable payoff Z that is strictly worse than X has a market-consistent price that is lower than p, then selling X for the price p would not contradict our rationality assumption. In this case, we say that p is a market-consistent seller price for X: At a market-consistent seller price, the seller cannot do better by trading directly in the market.

It follows from the preceding discussion that, if a transaction is to be attractive for both parties, the only prices that should be considered are prices that are simultaneously market-consistent buyer and seller prices. Such prices are called market-consistent prices for X. While a replicable payoff X has a unique market-consistent price, one can show that, whenever X is not replicable, the set of market-consistent prices is an open interval.

It is important to note that our minimal rationality assumption requires only that agents prefer more to less. In general, however, an agent will have preferences over arbitrary payoffs X and Y even when neither of the two payoffs is strictly better or worse than the other. Thus, whether a buyer and a seller ultimately agree to transact a given payoff at one of its market-consistent prices will depend on their broader individual preferences. Our theory only tells us that, if they do transact at a market-consistent price, it will not be foolish from a market perspective.

In the literature, market-consistent prices are usually called arbitrage-free or no-arbitrage prices and are often described as the prices that could be assigned to a payoff without introducing arbitrage opportunities in the market. We refer to the discussion in Chaps. 7 and 8 to see why this interpretation may be problematic.
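To make the preceding discussion concrete, the following sketch (our own illustration with made-up numbers, not an example from the book) sets up a two-date market with three states and two basic securities in Python, assuming numpy and scipy are available. It recovers the unique replicating portfolio and market-consistent price of a replicable payoff by solving a linear system, and it computes the endpoints of the open interval of market-consistent prices for a nonreplicable payoff as sub- and superreplication prices via linear programming.

```python
# A minimal numerical sketch of the ideas above (hypothetical numbers).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],
              [1.0, 1.0],
              [1.0, 0.5]])   # payoffs: rows = states, columns = basic securities
q = np.array([0.95, 1.0])    # initial market prices of the basic securities

# A replicable payoff (one bond plus one stock): its unique replicating
# portfolio phi solves A @ phi = X.
X = np.array([3.0, 2.0, 1.5])
phi, *_ = np.linalg.lstsq(A, X, rcond=None)
assert np.allclose(A @ phi, X)                     # X is indeed replicable
print("replicating portfolio:", phi)               # [1. 1.]
print("market-consistent price pi(X):", q @ phi)   # 1.95

# A nonreplicable payoff: a call on the stock with strike 1, payoff (1, 0, 0).
C = np.maximum(A[:, 1] - 1.0, 0.0)

# Superreplication price: cheapest portfolio whose payoff dominates C.
sup_res = linprog(q, A_ub=-A, b_ub=-C, bounds=[(None, None)] * 2)
# Subreplication price: most expensive portfolio whose payoff is dominated by C.
sub_res = linprog(-q, A_ub=A, b_ub=C, bounds=[(None, None)] * 2)

# The market-consistent prices of C form the open interval between the two:
print("interval of market-consistent prices:", (-sub_res.fun, sup_res.fun))
# approximately (0.05, 0.35)
```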

The Scope of the Book

The study of market-consistent prices is the main concern of arbitrage theory, and the concept runs as a red thread throughout the entire book. In a first step, we look at market-consistent prices in the context of a single-period financial market, like the one we have just discussed. In a second step, we move to multi-period financial markets. These are markets where trading is allowed at a finite number of dates and financial contracts may mature not only at the terminal date but also at intermediate dates. In multi-period markets we study market-consistent prices for various types of financial contracts, including contracts delivering fixed payments over multiple dates (payoff streams) as well as contracts where the buyer is presented with a "basket" of payoffs with different maturities and, at every date, is given the option either to "exercise" and receive the payoff maturing at that date, giving up all other payoffs in the basket, or to delay exercise to a later date (American options).

The Distinguishing Features of This Book

There are a few books around that deal with arbitrage theory in discrete time, see, e.g., Pliska [20], Duffie [6], Shreve [22], Elliott and Kopp [8], van der Hoek and Elliott [23], Cutland and Roux [5] for finite state spaces and Föllmer and Schied [9] for general state spaces. In fact, originally, we had not intended to add to this list, but had embarked on writing a second, extended edition of Koch-Medina and Merino [16]. However, when it became clear that the result would be quite different not only from the first edition of that book, but also from the rest of the literature, we decided to write a completely new book. We believe there is novelty both in the scope of the material we cover, which also contains new results (especially on multi-period models and American options), and in the way that standard material is presented. Here are some of the features of our book that may be worth highlighting:

• We provide a comprehensive coverage of arbitrage theory in single- and multi-period models. The two parts are independent of each other and each is preceded by the relevant mathematical background. This will give flexibility to readers and lecturers basing courses on this book. Moreover, a reader having the necessary mathematical background can simply focus on the financial chapters and use the mathematical chapters for easy reference.

• Working in a finite-state economy allows us to cover all the relevant topics in a rigorous way with minimal requirements on the mathematical background of the reader. Throughout the book we are careful to provide a clear and intuitive financial motivation for the more technical mathematical notions.

• We base our approach on a systematic study of pricing functionals and their properties. Consequently, we put more emphasis on the linear algebraic structure of the theory than other authors, who may prefer to highlight the probabilistic aspects. We believe that this gives greater conceptual clarity to our story and helps highlight the financial meaning of many results that would otherwise appear to be purely technical. This is especially true in the case of multi-period models, where we explicitly consider pricing functionals at intermediate dates, as opposed to only at the initial date as in most presentations of the topic. The price to pay for such clarity is a certain degree of notational complexity compared with the standard approach, where pricing functionals and their domains are not explicitly introduced.

• In line with our linear algebraic angle, the Fundamental Theorem of Asset Pricing is presented as a result about the existence of strictly-positive linear extensions of the pricing functionals. The more common formulations in terms of so-called equivalent martingale measures or state price densities are obtained as simple consequences of representation theorems for strictly-positive linear functionals. The advantage of this approach is that it requires working with linear algebra only and that the concepts of an equivalent martingale measure and a state price density clearly emerge as instances of the same unifying approach. To the best of our knowledge, this approach has been fully pursued in the literature only in the case of single-period models.

• Our treatment of American options is completely new and comprehensive. It relies on our results about pricing functionals at intermediate dates. In particular, contrary to the standard textbooks on the topic, the use of advanced results from the theory of stochastic processes is not required. This significantly reduces the complexity of the topic, especially for American options in incomplete markets, and hopefully makes the theory accessible to a wider audience. Some of the observations may also be surprising for the expert reader. For instance, we explain why the market puts zero value on the "optionality" of American options and why replicable American options should never be bought. In the literature, much emphasis has been put on the pricing of American options, and the question of when the owner of such an option should consider exercising has not been explicitly addressed. We fill this gap by introducing the notion of a market-consistent exercise strategy. This is of particular relevance in the context of nonreplicable American options, where we show, for instance, that the exercise strategies that are used for pricing are not necessarily the same strategies that an owner should consider adopting to actually exercise.

• Each chapter concludes with a set of exercises. While some of them are simple and meant to test familiarity with the new concepts, others are more demanding and aim either at pointing out alternative routes to some of the results or at exploring complementary material.

Prerequisites

We assume of the reader only a working knowledge of linear algebra and calculus at the advanced undergraduate level. All additional mathematical tools that are required are fully developed in the text. Because the mathematical prerequisites are minimal, this book can be used by a wide variety of readers. It can be read by mathematics students seeking an introduction to mathematical finance and by finance students seeking an elementary but rigorous introduction to the subject. In spite of its elementary level, we believe this book can also appeal to mathematicians seeking to understand how the main financial ideas in arbitrage theory translate into rigorous mathematical questions.

Acknowledgement

We would like to thank Gregor Svindland for valuable discussions on many of the topics dealt with in this book, especially on the approach to American options. We would also like to thank Sarah Goob and Sabrina Hoecklin from the editorial team at Birkhäuser for their assistance during the preparation of this manuscript. The material in this book has been classroom tested at the University of Zurich. We thank the participants of these courses for their encouraging feedback.

Zurich, Switzerland
Pablo Koch-Medina
Cosimo Munari


1 Random Variables: Linearity and Order

The topic of this book requires us to deal with quantities, such as future payments or prices, whose value is not known in advance. To this effect, we first need to develop the basic mathematics to model uncertainty. This chapter is devoted to introducing the notion of a sample space, corresponding to the set of possible outcomes of a situation of uncertainty, and of a random variable, a quantity that is contingent on those outcomes. We equip the set of random variables with the structure of an ordered vector space. This structure, in particular, allows us to treat the notion of convexity which pervades much of mathematical finance. Although the main topic of our book is mathematical finance, much of the theory of random variables originated with the study of games of chance, a fact that is reflected in most of the examples with which we illustrate the basic theory.

1.1 Outcomes and Events

We consider situations of uncertainty that resolve into any of a finite number of possible outcomes. Here, the expression "situation of uncertainty" is a loose and general term used to capture a real-world situation whose actual realization is not known in advance but becomes known, or, as we also say, is revealed, only once uncertainty has resolved. For instance, one could think of the outcome of a game of chance, but also of the state of the weather, or the state of the economy at a future date. Sometimes we find it more suggestive to refer to a situation of uncertainty as a "random experiment", i.e., an experiment whose possible outcomes are known, but there is uncertainty as to which of them will actually materialize.

Definition 1.1.1 (Sample Space, Event) A (finite) sample space is a (finite) set Ω whose elements are interpreted as the possible outcomes of a random experiment. A set E ⊂ Ω is called an event, and a singleton {ω}, with ω ∈ Ω, is called an elementary event. The set of all events is denoted by E(Ω) or simply E if there is no ambiguity about the underlying sample space.

The typical examples of a sample space are those associated with games of chance such as tossing a coin and rolling a die.

Example 1.1.2 (Tossing a Coin) Tossing a coin is the basic example of a game of chance. Here, the sample space is naturally Ω = {H, T}, where H stands for "heads" and T for "tails". If we toss the coin m times, the sample space becomes

Ω = {H, T}^m = {(a_1, . . . , a_m) ; a_1, . . . , a_m ∈ {H, T}},

where the first element of the string (a_1, . . . , a_m) ∈ Ω represents the outcome of the first trial, the second element the outcome of the second trial, and so on.

Example 1.1.3 (Rolling a Die) Another classical example of a random experiment is rolling a die. If the die has six faces, the sample space can be naturally represented by Ω = {1, 2, 3, 4, 5, 6}. If we roll the die m times, the sample space becomes

Ω = {1, 2, 3, 4, 5, 6}^m = {(a_1, . . . , a_m) ; a_1, . . . , a_m ∈ {1, 2, 3, 4, 5, 6}},

where the first element of the string (a_1, . . . , a_m) ∈ Ω represents the outcome of the first trial, the second element the outcome of the second trial, and so on.
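As a side note, such product sample spaces are easy to enumerate programmatically; here is a minimal sketch in Python (our own illustration, not from the book):

```python
# Building the sample spaces of Examples 1.1.2 and 1.1.3 with itertools.product.
from itertools import product

coin_once = {"H", "T"}                       # tossing a coin once
coin_m = set(product("HT", repeat=3))        # m = 3 tosses: {H, T}^3
die_m = set(product(range(1, 7), repeat=2))  # rolling a die m = 2 times

print(len(coin_m), len(die_m))               # 8 36
```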

1.2 Random Variables

Standing Assumption In the remainder of this chapter we consider a random experiment represented by a finite sample space Ω = {ω_1, . . . , ω_K}.

Every real-valued quantity whose value depends on the possible outcomes of the random experiment can be naturally modelled by a real-valued function defined on our sample space, which in this context is called a random variable.

Definition 1.2.1 (Random Variable) Any function X : Ω → R is called a random variable (on Ω). The set of random variables is denoted by L(Ω) or simply L if there is no ambiguity about the underlying sample space.

Example 1.2.2 (Constant Random Variable) A random variable X ∈ L is said to be constant, or deterministic, if there exists an a ∈ R such that X(ω_1) = · · · = X(ω_K) = a. In this case, we also write X = a. In particular, any scalar a ∈ R can be interpreted as the constant random variable taking the value a. The constant random variable taking the value 0 is called the zero or the null random variable.

Example 1.2.3 (Indicator) Consider an event E ∈ E. The indicator of E is the random variable 1_E ∈ L defined by

1_E(ω) := 1 if ω ∈ E, and 1_E(ω) := 0 if ω ∈ E^c.

In words, the random variable 1_E detects to which of the events E or E^c each element of Ω belongs by assigning to it the value 1 or 0, respectively. In particular, note that 1_Ω = 1 and 1_∅ = 0. For every outcome ω ∈ Ω we write 1_ω instead of 1_{ω}.
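The following small sketch (ours, not from the book) renders Definition 1.2.1 and Example 1.2.3 in Python, representing random variables as plain functions on a finite sample space:

```python
# Random variables as functions on a finite sample space, and indicators.
sample_space = [1, 2, 3, 4, 5, 6]            # rolling a die once

def indicator(E):
    """Return the indicator random variable 1_E as a function on Omega."""
    return lambda omega: 1 if omega in E else 0

even = {2, 4, 6}
one_even = indicator(even)
print([one_even(w) for w in sample_space])                      # [0, 1, 0, 1, 0, 1]
print([indicator(set(sample_space))(w) for w in sample_space])  # 1_Omega = 1
print([indicator(set())(w) for w in sample_space])              # 1_emptyset = 0
```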

1.3 The Linear Structure

In this section we show how to transfer the natural algebraic operations on real numbers to corresponding operations on random variables. For a map f : R^m → R and random variables X_1, . . . , X_m ∈ L, we denote by f(X_1, . . . , X_m) the random variable in L defined by

f(X_1, . . . , X_m)(ω) := f(X_1(ω), . . . , X_m(ω)).

The composition of functions and random variables encompasses many useful special cases, including sums, products, and quotients. We define sums and products for two random variables, but it should be clear how to extend these definitions to an arbitrary finite number of random variables.

Definition 1.3.1 (Operations) Let X, Y ∈ L and consider two subsets A, B ⊂ L. Define a function f : R^2 → R by f(x, y) = x + y. The random variable f(X, Y) ∈ L is called the sum of X and Y, and we write X + Y := f(X, Y). Similarly, the sum of A and B is defined by

A + B := {X + Y ; X ∈ A, Y ∈ B}.

For a fixed a ∈ R consider the function f : R → R given by f(x) = ax. The random variable f(X) ∈ L is called the product of X by a, and we write aX := f(X). Moreover, we set −X := (−1)X. The dilation of A by a is defined by

aA := {aX ; X ∈ A}.

The previous operation can be generalized in the following way. Consider the function f : R^2 → R defined by f(x, y) = xy. The random variable f(X, Y) ∈ L is called the product of X and Y, and we write XY := X · Y := f(X, Y). Moreover, we set X^2 := X · X.

We can also consider the composition of random variables with functions that are defined on a proper subset of R^m. For instance, define the function f : R × (0, ∞) → R by f(x, y) = x/y. If Y(ω) > 0 for every ω ∈ Ω, then the random variable f(X, Y) ∈ L is called the quotient of X and Y, and we write

X/Y := f(X, Y).
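A brief sketch in Python (our own illustration) of how these pointwise operations look when random variables on a finite sample space are stored as arrays of their values:

```python
# Operations on random variables are defined pointwise:
# f(X, Y)(omega) = f(X(omega), Y(omega)).
import numpy as np

omega = ["HH", "HT", "TH", "TT"]             # two coin tosses
X = np.array([2.0, 1.0, 1.0, 0.0])           # number of heads
Y = np.array([1.0, 1.0, 1.0, 1.0])           # a constant random variable

print(X + Y)             # pointwise sum: [3. 2. 2. 1.]
print(3 * X)             # product by a scalar
print(X * Y)             # pointwise product
print(X / Y)             # quotient, defined because Y(omega) > 0 everywhere
print(np.maximum(X, Y))  # another composition f(X, Y) with f(x, y) = max(x, y)
```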

When equipped with the two operations "sum of two random variables" and "product of a random variable by a scalar", the set L becomes a vector space over R. The explicit verification of this simple fact is left to the reader. It is useful to know that the indicators of the elementary events of Ω constitute a basis for L. In particular, the dimension of L is equal to the cardinality of Ω. We refer to Appendix B for a general review of vector spaces.

Theorem 1.3.2 The set B = {1_{ω_1}, . . . , 1_{ω_K}} is a vector space basis for L. In particular, we have dim(L) = K.

Proof Note that for every random variable X ∈ L we can write

X = ∑_{k=1}^{K} X(ω_k) 1_{ω_k}.

This shows that B spans the entire L. To prove that B is linearly independent, suppose that

∑_{k=1}^{K} a_k 1_{ω_k} = 0

for a_1, . . . , a_K ∈ R. Then, for every h ∈ {1, . . . , K} we must have

a_h = ∑_{k=1}^{K} a_k 1_{ω_k}(ω_h) = 0.

This shows that B is a linearly independent set and concludes the proof. ∎

Remark 1.3.3 The preceding result allows us to apply the whole arsenal of results from linear algebra to the set of random variables L just as if we were working on R^K. In effect, to some extent, this is precisely what one does when working with random variables defined on a finite sample space. Indeed, one can verify that the map F : L → R^K defined by setting F(X) = (X(ω_1), . . . , X(ω_K)) is a vector space isomorphism, i.e., a one-to-one correspondence that preserves the vector space operations; see Exercise 1.8.2. This means that, from the point of view of linear algebra, the vector spaces L and R^K are indistinguishable.
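A small sketch (ours) illustrating Theorem 1.3.2 and the isomorphism of Remark 1.3.3: under the identification F, the indicators 1_{ω_k} become the standard basis vectors of R^K, and the coordinates of X in this basis are simply its values.

```python
# Every random variable decomposes uniquely in the indicator basis:
# X = sum_k X(omega_k) 1_{omega_k}.
import numpy as np

K = 4
X = np.array([5.0, -2.0, 0.0, 3.0])   # F(X) = (X(omega_1), ..., X(omega_K))
indicators = np.eye(K)                # row k is the coordinate vector of 1_{omega_k}

reconstruction = sum(X[k] * indicators[k] for k in range(K))
assert np.allclose(reconstruction, X)
```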

1.4 Linear Functionals

In this section we focus on real-valued maps defined on sets of random variables.

Definition 1.4.1 (Functional) Let M be a subset of L. A functional (on M) is a map π : M → R.

Further below we study several special classes of functionals. At the present level of generality we observe that the set of functionals carries a very natural vector space structure.

Definition 1.4.2 (Operations) Let M be a subset of L. For two functionals π : M → R and ρ : M → R, we define the sum of π and ρ as the new functional π + ρ : M → R given by

(π + ρ)(X) := π(X) + ρ(X).

Similarly, for all π : M → R and a ∈ R we define the product of π by a as the new functional aπ : M → R given by

(aπ)(X) := aπ(X).

From now on we only consider functionals defined on a linear subspace of L, i.e., on a subset of L that is itself a vector space with respect to the algebraic operations introduced above; see Sect. B.2. A particularly important class of functionals defined on linear subspaces consists of those functionals that preserve the vector space operations.

Definition 1.4.3 (Linear Functional) Let M be a linear subspace of L. We say that π : M → R is a linear functional if for all X, Y ∈ M and a ∈ R it satisfies:

(1) π(X + Y) = π(X) + π(Y) (additivity).
(2) π(aX) = aπ(X) (homogeneity).

The defining properties of a linear functional can be extended to arbitrary linear combinations. This basic result will be repeatedly used without explicit reference throughout the book. We refer to Sect. B.4 for a review of other standard properties of linear functionals.

Proposition 1.4.4 Let M be a linear subspace of L. If π : M → R is a linear functional, then for all X_1, . . . , X_m ∈ M and all a_1, . . . , a_m ∈ R we have

π(∑_{i=1}^{m} a_i X_i) = ∑_{i=1}^{m} a_i π(X_i).

Proof We proceed by induction on m ∈ N.

Base Step The statement is obvious if m = 1 by homogeneity.

Induction Step Assume that the statement is true for m ∈ N and consider m + 1 random variables. For convenience, set X = ∑_{i=1}^{m} a_i X_i. By our induction hypothesis,

π(X) = ∑_{i=1}^{m} a_i π(X_i).

Hence, we can use linearity to obtain that

π(∑_{i=1}^{m+1} a_i X_i) = π(X + a_{m+1} X_{m+1}) = π(X) + a_{m+1} π(X_{m+1}) = ∑_{i=1}^{m+1} a_i π(X_i).

This concludes the induction argument. ∎

Remark 1.4.5 (Constructing Linear Functionals) Let M be a linear subspace of L and assume that {X_1, . . . , X_m} is a basis of M. A linear functional π : M → R is uniquely determined by the values it assigns to the elements X_1, . . . , X_m. This follows by noting that every X ∈ M can be written as X = ∑_{i=1}^{m} a_i X_i for unique scalars a_1, . . . , a_m ∈ R, and that π(X) = ∑_{i=1}^{m} a_i π(X_i) by linearity. In particular, two linear functionals defined on M are equal when they coincide on the elements of any basis of M. This allows us to define a linear functional on M by simply specifying the values π(X_1), . . . , π(X_m).

Remark 1.4.6 Let M be a linear subspace of L. The vector space of all functionals on M is infinite dimensional unless M = {0}. The set of linear functionals defined on M is a linear subspace of the space of all functionals defined on M. Moreover, the dimension of this linear subspace is equal to the dimension of M; see Proposition B.4.3.
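A minimal sketch in Python (our own illustration) of the construction in Remark 1.4.5: with the indicator basis of L, specifying the values π(1_{ω_k}) determines π on all of L, and evaluating π reduces to a dot product.

```python
# A linear functional on L = R^K is pinned down by its values on a basis.
import numpy as np

pi_on_basis = np.array([0.2, 0.3, 0.5])   # chosen values pi(1_{omega_k})

def pi(X):
    # pi(X) = sum_k X(omega_k) * pi(1_{omega_k})
    return float(np.dot(pi_on_basis, X))

X = np.array([10.0, 0.0, -4.0])
Y = np.array([1.0, 1.0, 1.0])
# Additivity and homogeneity hold by construction:
assert np.isclose(pi(2 * X + Y), 2 * pi(X) + pi(Y))
```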

1.5 Convex Sets and Functionals

Convexity and other related concepts arise naturally in many applications, not only in mathematical finance. We recall here the concepts in the context of our space L and establish some important results about convex sets and convex functionals. Albeit elementary, these results are seldom found in standard linear algebra textbooks.

Convex Sets and Cones

A subset of L is said to be convex whenever it contains the entire segment connecting any two of its elements.

Definition 1.5.1 (Convex Set) A set A ⊂ L is said to be convex whenever

X, Y ∈ A, a ∈ [0, 1] ⇒ aX + (1 − a)Y ∈ A.

It is useful to know that convexity is preserved under the vector space operations and also under arbitrary intersections. The simple proof is left as an exercise.

Proposition 1.5.2

(i) If A, B ⊂ L are convex, then A + B is convex.
(ii) If A ⊂ L is convex and a ∈ R, then aA is convex.
(iii) If I is an arbitrary index set and A_i ⊂ L is convex for every i ∈ I, then ⋂_{i∈I} A_i is convex.

If A is an arbitrary nonempty subset of L, we may consider the family of all convex subsets containing A. This is a nonempty family, since the full space L is a convex set that contains A. By the preceding proposition, the intersection of all sets in this family is a convex set which is clearly not empty, since it contains A. This intersection is, by construction, the smallest convex set containing A.

Definition 1.5.3 (Convex Hull) The convex hull of a nonempty set A ⊂ L is the smallest convex subset of L containing A. The convex hull of A is denoted by co(A).

It should be clear that a set is convex if and only if it coincides with its convex hull. The convex hull can be characterized in a more direct way. To this end we need the notion of a convex combination.

Definition 1.5.4 (Convex Combination) A random variable X ∈ L is a convex combination of the random variables X_1, . . . , X_m ∈ L if there exist scalars a_1, . . . , a_m ∈ [0, 1] that add up to 1 and satisfy

X = ∑_{i=1}^{m} a_i X_i.

Proposition 1.5.5 For a set A ⊂ L the following statements are equivalent: (a) A is convex. (b) A contains all convex combinations of its elements. Proof It is clear that (b) implies (a). Conversely, assume that A is convex. We need to  prove that, if X1 , . . . , Xm ∈ A and a1 , . . . , am ∈ [0, 1] add up to 1, then m i=1 ai Xi ∈ A. We prove this by induction on m ∈ N. Base Step It is obvious that the statement is true if m = 1.


Induction Step Assume that the statement is true for m ∈ N and consider m + 1 random variables. If a_{m+1} = 1, then the statement is clearly true. Hence, assume that a_{m+1} ∈ [0, 1) and set

X = ∑_{i=1}^m (ai / (1 − a_{m+1})) Xi.

Being a convex combination of m elements of A, the random variable X belongs to A by the induction hypothesis. Hence, by convexity, we obtain

∑_{i=1}^{m+1} ai Xi = (1 − a_{m+1}) X + a_{m+1} X_{m+1} ∈ A.

This proves that (a) implies (b) and concludes the proof of the equivalence.

The convex hull of a subset of L can now be characterized as the family of all convex combinations of elements in the set.

Proposition 1.5.6 For every set A ⊂ L we have

co(A) = {∑_{i=1}^m ai Xi ; a1, . . . , am ∈ [0, 1], ∑_{i=1}^m ai = 1, X1, . . . , Xm ∈ A, m ∈ N}.

Proof Let us denote by C the right-hand side above. We first establish that C is convex. Indeed, if X = ∑_{i=1}^m ai Xi and Y = ∑_{i=1}^n bi Yi belong to C and a ∈ [0, 1], we set

ci = a ai if i ∈ {1, . . . , m}, and ci = (1 − a) b_{i−m} if i ∈ {m + 1, . . . , m + n},

and

Zi = Xi if i ∈ {1, . . . , m}, and Zi = Y_{i−m} if i ∈ {m + 1, . . . , m + n}.

It is easy to verify that c1, . . . , c_{m+n} belong to [0, 1] and add up to 1. Consequently,

aX + (1 − a)Y = ∑_{i=1}^{m+n} ci Zi ∈ C.


This proves that C is convex. Since C clearly contains A, we have co(A) ⊂ C. Moreover, since co(A) is convex and all elements of A are also elements of co(A), Proposition 1.5.5 immediately implies that C ⊂ co(A). This yields that co(A) = C, as asserted.

A subset of L is said to be a cone if it contains the entire ray passing through any of its elements.

Definition 1.5.7 (Cone) A set A ⊂ L is said to be a cone whenever

X ∈ A, a ∈ [0, ∞) ⇒ aX ∈ A.

Similarly to convexity, conicity is preserved by the vector space operations and by taking unions and intersections. The simple proof is again left as an exercise.

Proposition 1.5.8

(i) If A, B ⊂ L are cones, then A + B is a cone.
(ii) If A ⊂ L is a cone and a ∈ R, then aA is a cone.
(iii) If I is an arbitrary index set and Ai ⊂ L is a cone for every i ∈ I, then ⋂_{i∈I} Ai and ⋃_{i∈I} Ai are cones.

The preceding result allows us to define the smallest cone containing a given set following the same line of argument as for the convex hull.

Definition 1.5.9 (Conical Hull) The conical hull of a set A ⊂ L is the smallest cone in L containing A. The conical hull of A is denoted by cone(A).

Clearly, a set is a cone if and only if it coincides with its conical hull. The conical hull admits a very simple direct characterization.

Proposition 1.5.10 For every set A ⊂ L we have

cone(A) = {aX ; a ∈ [0, ∞), X ∈ A}.

Proof Let us denote by C the right-hand side above. It is clear that C ⊂ cone(A). On the other hand, it is immediate to see that C is a cone containing A, so that cone(A) ⊂ C. It follows that cone(A) = C as desired.

We are mainly interested in convex cones. It is useful to know that, for a cone, being convex is equivalent to being closed under addition in the sense specified below; see also Exercise 1.8.4.


Proposition 1.5.11 For a cone A ⊂ L the following statements are equivalent:

(a) A is convex.
(b) A is closed under addition, i.e., X, Y ∈ A ⇒ X + Y ∈ A.

Proof Assume A is closed under addition and take X, Y ∈ A and a ∈ [0, 1]. By conicity, aX ∈ A and (1 − a)Y ∈ A. Since A is closed under addition, aX + (1 − a)Y ∈ A. It follows that A is convex. Conversely, assume A is convex and take X, Y ∈ A. Convexity implies that Z = (1/2)X + (1/2)Y ∈ A and conicity that X + Y = 2Z ∈ A. It follows that A is closed under addition.

The following result is a corollary of the representations for the convex and the conical hulls in Propositions 1.5.6 and 1.5.10, respectively.

Proposition 1.5.12 For every set A ⊂ L the following statements hold:

(i) If A is convex, then cone(A) is the smallest convex cone in L containing A.
(ii) If A is a cone, then co(A) is the smallest convex cone in L containing A.

Proof To prove (i), we only need to show that cone(A) is convex. To this end, take X, Y ∈ A and a, b ∈ [0, ∞). If a = b = 0, then we have aX + bY = 0 ∈ cone(A). Otherwise, set

Z = (a/(a + b)) X + (b/(a + b)) Y

and observe that Z ∈ A by convexity of A. As a result, aX + bY = (a + b)Z ∈ cone(A) by Proposition 1.5.10. It follows from Proposition 1.5.11 that cone(A) is convex.

To prove (ii), we only have to show that co(A) is a cone. To this end, take X ∈ co(A) and a ∈ [0, ∞). By Proposition 1.5.6 the random variable X is a convex combination of elements of A of the form X = ∑_{i=1}^m ai Xi. Since aXi ∈ A for every i ∈ {1, . . . , m} by conicity of A, Proposition 1.5.6 shows that

aX = ∑_{i=1}^m ai (aXi) ∈ co(A).

This proves that co(A) is a cone.

Constructing the smallest convex cone containing a given set A ⊂ L can be achieved in two ways: By taking the conical hull of the convex hull of A or, equivalently, by taking the convex hull of the conical hull of A.
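Before stating the formal result, here is a small numerical sketch of this two-way construction (it is formalized in the proposition that follows): every nonnegative combination of elements of a finite set A arises by scaling a convex combination, i.e., by passing through co(A). The Python code and sample data are our own illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    A = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

    for _ in range(1000):
        a = rng.uniform(0.0, 5.0, size=len(A))       # nonnegative weights
        X = sum(ai * Xi for ai, Xi in zip(a, A))     # element of cone(co(A))
        s = a.sum()
        if s > 0:
            # Convex combination with weights ai/s, hence an element of co(A).
            Z = sum((ai / s) * Xi for ai, Xi in zip(a, A))
            assert np.allclose(X, s * Z)             # X = s*Z with s >= 0, Z in co(A)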


Proposition 1.5.13 For every set A ⊂ L we have cone(co(A)) = co(cone(A)) and

cone(co(A)) = {∑_{i=1}^m ai Xi ; a1, . . . , am ∈ [0, ∞), X1, . . . , Xm ∈ A, m ∈ N}.

Proof It follows from Proposition 1.5.12 that cone(co(A)) is the smallest convex cone containing co(A). Since any convex cone containing A must contain co(A), we infer that cone(co(A)) is the smallest convex cone containing A. Similarly, co(cone(A)) is the smallest convex cone containing cone(A). Since any convex cone containing A must contain cone(A), it follows that co(cone(A)) is the smallest convex cone containing A. This proves that cone(co(A)) = co(cone(A)). The last identity is a consequence of Propositions 1.5.6 and 1.5.10. We leave the details as an exercise.

Convex Functionals

We have already discussed linear functionals. We now introduce another important class of functionals, which extends in a natural way the classical notion of a convex function.

Definition 1.5.14 (Convex Functional) Let M be a linear subspace of L. We say that a functional π : M → R is convex if for all X, Y ∈ M and a ∈ [0, 1] we have

π(aX + (1 − a)Y) ≤ aπ(X) + (1 − a)π(Y).

We say that π is concave if −π is convex.

The inequality defining convexity can be extended to arbitrary convex combinations, as shown by the next result.

Proposition 1.5.15 Let M be a linear subspace of L. If π : M → R is a convex functional, then for all X1, . . . , Xm ∈ M and all a1, . . . , am ∈ [0, 1] adding up to 1 we have

π(∑_{i=1}^m ai Xi) ≤ ∑_{i=1}^m ai π(Xi).

Proof We prove the assertion by induction on m ∈ N.

Base Step The statement is obvious if m = 1.

Induction Step Assume that the statement is true for m ∈ N and consider m + 1 random variables. If a_{m+1} = 1, then the assertion is clearly true. Hence, we may assume that a_{m+1} ∈ [0, 1). In this case, set

X = ∑_{i=1}^m (ai / (1 − a_{m+1})) Xi.

Then, by the induction hypothesis,

π(X) ≤ ∑_{i=1}^m (ai / (1 − a_{m+1})) π(Xi).

Since

∑_{i=1}^{m+1} ai Xi = (1 − a_{m+1}) X + a_{m+1} X_{m+1},

it follows from the convexity of π that

π(∑_{i=1}^{m+1} ai Xi) ≤ (1 − a_{m+1}) π(X) + a_{m+1} π(X_{m+1}) ≤ ∑_{i=1}^{m+1} ai π(Xi).

This concludes the induction argument.

In the sequel we will also encounter functionals that are "almost" linear in the following sense.

Definition 1.5.16 (Sublinear Functional) Let M be a linear subspace of L. We say that a functional π : M → R is sublinear if the following properties are satisfied for all X, Y ∈ M and a ∈ [0, ∞):

(1) π(X + Y) ≤ π(X) + π(Y) (subadditivity).
(2) π(aX) = aπ(X) (positive homogeneity).

We say that π is superadditive, respectively superlinear, whenever −π is subadditive, respectively sublinear.

We show that the inequality defining subadditivity can be extended to arbitrary sums of random variables.


Proposition 1.5.17 Let M be a linear subspace of L. If π : M → R is a subadditive functional, then for all X1, . . . , Xm ∈ M we have

π(∑_{i=1}^m Xi) ≤ ∑_{i=1}^m π(Xi).

Proof The assertion can be easily established by induction on m ∈ N.

Base Step The inequality is obvious if m = 1.

Induction Step Assume the inequality holds for m ∈ N and consider m + 1 random variables. Setting X = X1 + · · · + Xm, it is easy to see that

π(∑_{i=1}^{m+1} Xi) = π(X + X_{m+1}) ≤ π(X) + π(X_{m+1}) ≤ ∑_{i=1}^{m+1} π(Xi)

by the subadditivity of π and our induction hypothesis.

It is easy to verify that sublinear functionals are automatically convex. Moreover, a convex functional that is positively homogeneous is automatically sublinear. The easy proof is left as an exercise.

Proposition 1.5.18 Let M be a linear subspace of L. Then, for every positively homogeneous functional π : M → R the following statements are equivalent:

(a) π is convex.
(b) π is subadditive.

In particular, π is convex whenever it is sublinear.

Proof First, assume that π is convex and take X, Y ∈ M. Then, it follows from positive homogeneity and convexity that

π(X + Y) = 2π((1/2)X + (1/2)Y) ≤ 2((1/2)π(X) + (1/2)π(Y)) = π(X) + π(Y).

This proves that π is subadditive. Hence, (a) implies (b). Conversely, assume that π is subadditive and take X, Y ∈ M. Then, for every a ∈ [0, 1] we have

π(aX + (1 − a)Y) ≤ π(aX) + π((1 − a)Y) = aπ(X) + (1 − a)π(Y)

by subadditivity and positive homogeneity. This shows that π is convex and proves that (b) implies (a).
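A concrete example of a sublinear functional on L is the best-state evaluation π(X) = max over ω of X(ω). The following Python fragment, a sketch with randomly generated data of our own, spot-checks subadditivity and positive homogeneity; since π is not additive, it is sublinear without being linear.

    import numpy as np

    def pi(X):
        # pi(X) = max over the states of the sample space; subadditive and
        # positively homogeneous, hence sublinear (and convex by Prop. 1.5.18).
        return float(np.max(X))

    rng = np.random.default_rng(1)
    for _ in range(1000):
        X, Y = rng.normal(size=5), rng.normal(size=5)
        a = rng.uniform(0.0, 10.0)
        assert pi(X + Y) <= pi(X) + pi(Y) + 1e-12   # subadditivity
        assert abs(pi(a * X) - a * pi(X)) < 1e-9    # positive homogeneity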


Remark 1.5.19 The class of convex functionals is not a linear subspace of the vector space of all functionals defined on M. However, it is easy to see that it is a convex cone in the latter space. The same is true for the class of sublinear functionals.

1.6 The Order Structure

Just as operations on real numbers can be carried over to operations on random variables, the order between real numbers has a natural counterpart in the world of random variables. For our later study it is useful to allow equalities and inequalities to hold only locally on certain events, and not only globally on the entire sample space.

Definition 1.6.1 (Equalities and Inequalities) For every event E ∈ E and all random variables X, Y ∈ L we write:

(i) X = Y on E whenever X(ω) = Y(ω) for every ω ∈ E.
(ii) X ≠ Y on E whenever X(ω) ≠ Y(ω) for some ω ∈ E.
(iii) X ≥ Y on E whenever X(ω) ≥ Y(ω) for every ω ∈ E.
(iv) X ⪈ Y on E whenever X ≥ Y on E and X ≠ Y on E.
(v) X > Y on E whenever X(ω) > Y(ω) for every ω ∈ E.

The converse inequalities X ≤ Y, X ⪇ Y, and X < Y are defined in a similar way. If E = Ω, then we typically omit any reference to E. In addition, we set:

(vi) {X = Y} := {ω ∈ Ω ; X(ω) = Y(ω)}.
(vii) {X ≥ Y} := {ω ∈ Ω ; X(ω) ≥ Y(ω)}.
(viii) {X > Y} := {ω ∈ Ω ; X(ω) > Y(ω)}.

The corresponding sets {X ≤ Y} and {X < Y} are defined similarly.

Definition 1.6.2 (Positive Random Variable) Let E ∈ E and consider a random variable X ∈ L. We say that:

(i) X is positive on E whenever X ≥ 0 on E.
(ii) X is nonzero positive on E whenever X ⪈ 0 on E.
(iii) X is strictly positive on E whenever X > 0 on E.

We say that X is (nonzero/strictly) negative on E if −X is (nonzero/strictly) positive on E. We omit any reference to E in the case where E = Ω.
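On a finite sample space these local equalities and inequalities reduce to pointwise comparisons. The Python fragment below is a minimal sketch, with our own encoding of events as boolean masks over the states.

    import numpy as np

    # States indexed 0..K-1; an event is a boolean mask over the states.
    X = np.array([1.0, 2.0, 0.0, 3.0])
    Y = np.array([1.0, 1.0, 0.0, 5.0])
    E = np.array([True, True, True, False])      # an event E

    geq_on_E = bool(np.all(X[E] >= Y[E]))        # X >= Y on E
    strict_on_E = bool(np.all(X[E] > Y[E]))      # X > Y on E
    event_X_geq_Y = X >= Y                       # the event {X >= Y} as a mask

    print(geq_on_E, strict_on_E, event_X_geq_Y)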


Proposition 1.6.3 For all X, Y, Z ∈ L the following statements hold:

(i) X ≥ X (reflexivity).
(ii) X ≥ Y and Y ≥ X imply X = Y (antisymmetry).
(iii) X ≥ Y and Y ≥ Z imply X ≥ Z (transitivity).

The above result shows that L is a so-called partially ordered set. The above partial order is compatible with the algebraic operations on random variables in the sense specified below. The easy proof is left as an exercise.

Proposition 1.6.4 Let E ∈ E and assume that X, Y ∈ L satisfy X ≥ Y on E. Then, the following statements hold:

(i) X + Z ≥ Y + Z on E for every Z ∈ L.
(ii) aX ≥ aY on E for every a ∈ [0, ∞).
(iii) XZ ≥ YZ on E for every Z ∈ L such that Z ≥ 0 on E.

The preceding proposition shows that the vector space L equipped with the relation "≥" is, in fact, what is called an ordered vector space. The set of positive random variables is immediately seen to be a convex cone, which is appropriately called the positive cone.

Definition 1.6.5 (Positive Cone) The positive cone (of L) is the convex cone

L+ := {X ∈ L ; X ≥ 0}.

Maxima and Minima

A key and very useful property of sets of real numbers is that their supremum and infimum always exist. We now discuss a generalization of these concepts to random variables. We proceed in line with our general construction principle.

Definition 1.6.6 (Maximum, Minimum) Consider the map f : R² → R defined by setting f(x, y) = max{x, y}. For all X, Y ∈ L the random variable f(X, Y) ∈ L is called the maximum of X and Y, and we write

max{X, Y} := X ∨ Y := f(X, Y).

Similarly, let f : R² → R be given by f(x, y) = min{x, y}. The random variable f(X, Y) is called the minimum of X and Y, and we write

min{X, Y} := X ∧ Y := f(X, Y).


The following result establishes a useful link between the maximum and the minimum of two random variables. The simple verification is left as an exercise.

Proposition 1.6.7 For all random variables X, Y ∈ L we have

min{X, Y} = − max{−X, −Y}.

Consider two random variables X, Y ∈ L. The following proposition shows that the maximum of X and Y is the "smallest" among those random variables that dominate both X and Y. Similarly, the minimum of X and Y is the "largest" among those random variables that are dominated by X and Y. We omit the easy proof.

Proposition 1.6.8 For all random variables X, Y ∈ L the following statements hold:

(i) max{X, Y} ≥ X, max{X, Y} ≥ Y, and for every Z ∈ L we have

Z ≥ X, Z ≥ Y ⇒ Z ≥ max{X, Y}.

(ii) min{X, Y} ≤ X, min{X, Y} ≤ Y, and for every Z ∈ L we have

Z ≤ X, Z ≤ Y ⇒ Z ≤ min{X, Y}.

Suprema and Infima

It should be clear how to define maxima and minima for a finite number of random variables and how to extend the above results to this more general setting. Here, we discuss how to generalize the notions of a maximum and of a minimum to an arbitrary family of random variables.

Definition 1.6.9 (Supremum, Infimum) Let C ⊂ L be a family of random variables. The map sup_{X∈C} X : Ω → R ∪ {∞} defined by

(sup_{X∈C} X)(ω) := sup{X(ω) ; X ∈ C}

is called the supremum of C. The map inf_{X∈C} X : Ω → R ∪ {−∞} defined by

(inf_{X∈C} X)(ω) := inf{X(ω) ; X ∈ C}

is called the infimum of C.
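For a finite family, the supremum and infimum of Definition 1.6.9 are just componentwise maxima and minima of the corresponding vectors. A minimal sketch in Python (sample data our own), which also checks the link recorded in Proposition 1.6.11 below:

    import numpy as np

    # A finite family C of random variables on a 3-state sample space (one per row).
    C = np.array([[1.0, 0.0, 2.0],
                  [0.0, 3.0, 1.0],
                  [2.0, 1.0, 0.0]])

    sup_C = C.max(axis=0)   # (sup_{X in C} X)(omega) = sup of the values at omega
    inf_C = C.min(axis=0)   # (inf_{X in C} X)(omega) = inf of the values at omega

    # The link of Proposition 1.6.11: inf C = -sup(-C).
    assert np.array_equal(inf_C, -np.max(-C, axis=0))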

Given that we have restricted random variables to have real values, the infimum and supremum of a family of random variables may not themselves be random variables, since


they could attain the values ∞ or −∞, respectively. The next proposition provides a simple criterion to establish when a supremum or an infimum is indeed a random variable.

Proposition 1.6.10 Let C ⊂ L be a nonempty family of random variables. Then, the following statements hold:

(i) The supremum of C is a random variable if and only if there exists a random variable Z ∈ L such that Z ≥ X for every X ∈ C.
(ii) The infimum of C is a random variable if and only if there exists a random variable Z ∈ L such that Z ≤ X for every X ∈ C.

The link between the supremum and the infimum of a family of random variables is recorded in the next proposition, which can be seen as a generalization of Proposition 1.6.7. The easy verification is left as an exercise.

Proposition 1.6.11 For every family of random variables C ⊂ L we have

inf_{X∈C} X = − sup_{X∈C} (−X).

Consider a family of random variables C ⊂ L. One may wonder whether there exists a sequence (Xn) ⊂ C that converges pointwise to the supremum of C, i.e.,

Xn(ω) → sup_{X∈C} X(ω)

for every ω ∈ Ω. This is, however, not true in general, as our next example shows.

Example 1.6.12 Consider the sample space Ω = {ω1, ω2} and set C = {Xn ; n ∈ N} where for every n ∈ N

Xn = 1_{ω1} if n is odd, and Xn = 1_{ω2} if n is even.

The supremum of C is the constant random variable 1. However, no sequence of elements of C can converge pointwise to 1, since every sequence contains infinitely many elements which are 0 either at ω1 or at ω2.

The next proposition shows a sufficient condition for a family of random variables to admit a sequence converging to its supremum, respectively infimum. For this to be the case, the family has to be sufficiently "well ordered" in the sense specified below; see also Exercise 1.8.10.


Definition 1.6.13 (Directed Family) A family of random variables C ⊂ L is said to be directed upward if for all X, Y ∈ C there exists Z ∈ C such that Z ≥ max{X, Y}, and directed downward if for all X, Y ∈ C there exists Z ∈ C such that Z ≤ min{X, Y}.

Proposition 1.6.14 For every family C ⊂ L of random variables the following statements hold:

(i) If C is directed upward, then there exists a sequence (Xn) ⊂ C such that for every ω ∈ Ω we have that Xn(ω) → sup_{X∈C} X(ω).
(ii) If C is directed downward, then there exists a sequence (Xn) ⊂ C such that for every ω ∈ Ω we have that Xn(ω) → inf_{X∈C} X(ω).

Proof In view of Proposition 1.6.11, it suffices to establish (i). To this effect, we start by showing that for any X1, . . . , Xm ∈ C there exists Z ∈ C such that Z ≥ max{X1, . . . , Xm}. We establish this by induction on m.

Base Step The assertion is obvious if m = 1.

Induction Step Assume the assertion holds for m ∈ N and consider m + 1 random variables. By the induction assumption, there exists Z ∈ C such that Z ≥ max{X1, . . . , Xm}. Moreover, there also exists W ∈ C satisfying W ≥ max{Z, X_{m+1}}. But then we clearly have

W ≥ max{max{X1, . . . , Xm}, X_{m+1}} = max{X1, . . . , X_{m+1}}.

This concludes the induction argument.

Now, by the definition of the supremum of real numbers, for each ω ∈ Ω we can find a sequence (X_n^ω) ⊂ C such that

X_n^ω(ω) → sup_{X∈C} X(ω).

As a consequence of the above induction argument, for every n ∈ N there exists a random variable Xn ∈ C such that

Xn ≥ max{X_n^{ω1}, . . . , X_n^{ωK}}.


Since for every ω ∈ Ω and every ε ∈ (0, ∞) there exists n(ω, ε) ∈ N such that

0 ≤ sup_{X∈C} X(ω) − Xn(ω) ≤ sup_{X∈C} X(ω) − X_n^ω(ω) ≤ ε

whenever n ≥ n(ω, ε) by the definition of (X_n^ω), we conclude that (Xn) converges pointwise to the supremum of C.

1.7 Monotone Functionals

As will become apparent as the book progresses, functionals that respect the order structure are at the heart of arbitrage theory.

Definition 1.7.1 ((Strictly) Increasing Functional) Let M be a linear subspace of L. We say that a functional π : M → R is increasing if

X, Y ∈ M, X ≥ Y ⇒ π(X) ≥ π(Y).

In this case, we say that π is strictly increasing if

X, Y ∈ M, X ⪈ Y ⇒ π(X) > π(Y).

We say that π is (strictly) decreasing if −π is (strictly) increasing.

For a linear functional, being increasing is equivalent to assigning positive values to positive random variables.

Proposition 1.7.2 Let M be a linear subspace of L and let π : M → R be a linear functional. Then, the following statements hold:

(i) π is increasing if and only if π(X) ≥ 0 for every X ∈ M with X ≥ 0.
(ii) π is strictly increasing if and only if π(X) > 0 for every X ∈ M with X ⪈ 0.

If M = L, then the following statements hold:

(iii) π is increasing if and only if π(1_ω) ≥ 0 for every ω ∈ Ω.
(iv) π is strictly increasing if and only if π(1_ω) > 0 for every ω ∈ Ω.

Proof We prove the equivalences for increasing linear functionals. The strictly increasing case can be established by the same argument.


We first establish (i). To prove the "if" implication, assume that π(X) ≥ 0 for every X ∈ M such that X ≥ 0. Now, take X, Y ∈ M such that X ≥ Y. Since X − Y ≥ 0, the linearity of π yields π(X) − π(Y) = π(X − Y) ≥ 0, or, equivalently, π(X) ≥ π(Y). This shows that π is increasing. To establish the "only if" implication, assume that π is increasing. Then, for every X ∈ M with X ≥ 0 we clearly have π(X) ≥ π(0) = 0 by linearity.

Next, we focus on (iii). In view of item (i), we only have to show the "if" implication. To this end, assume that π(1_ω) ≥ 0 for every ω ∈ Ω. Take X ∈ L such that X ≥ 0, i.e., X(ω) ≥ 0 for every ω ∈ Ω. Hence, the linearity of π yields

π(X) = ∑_{ω∈Ω} X(ω) π(1_ω) ≥ 0.

It follows from (i) that π is increasing. This delivers the desired implication.

In view of the above result, the following terminology has become standard for linear functionals.

Definition 1.7.3 ((Strictly) Positive Functional) Let M be a linear subspace of L. A linear functional π : M → R is said to be positive whenever it is increasing and strictly positive whenever it is strictly increasing.
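On M = L a linear functional is determined by the weights w(ω) = π(1_ω) (see Remark 1.4.5), so Proposition 1.7.2 turns positivity into a sign condition on these weights. A minimal Python sketch with weights of our own choosing:

    import numpy as np

    w = np.array([0.2, 0.5, 0.3])          # w[k] = pi(1_{omega_k})

    def pi(X):
        # Linear functional determined by its values on the indicators 1_omega.
        return float(w @ X)

    is_positive = bool(np.all(w >= 0))          # increasing (Prop. 1.7.2 (iii))
    is_strictly_positive = bool(np.all(w > 0))  # strictly increasing (Prop. 1.7.2 (iv))
    print(is_positive, is_strictly_positive)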

1.8 Exercises

In all exercises below we consider a finite sample space Ω = {ω1, . . . , ωK}.

Exercise 1.8.1 Prove that the sum of random variables and the product of a random variable by a scalar introduced in Sect. 1.3 satisfy the vector space axioms listed in Appendix B, i.e., prove that:

(1) X + Y = Y + X (commutative law)
(2) (X + Y) + Z = X + (Y + Z) (associative law)
(3) X + 0 = X (existence of a zero vector)
(4) X + (−X) = 0 (existence of an inverse)
(5) a(X + Y) = aX + aY (distributive law for sum)
(6) (a + b)X = aX + bX (distributive law for addition of scalars)


(7) (ab)X = a(bX) (distributive law for product of scalars)
(8) aX = X whenever a = 1 (normalization of product)

for all random variables X, Y, Z ∈ L and all scalars a, b ∈ R.

Exercise 1.8.2 Show that the map F : L → R^K defined by setting

F(X) = (X(ω1), . . . , X(ωK))

is a vector space isomorphism, i.e., a one-to-one correspondence that is linear (with respect to the linear structure on L introduced in Sect. 1.3 and to the canonical linear structure on R^K).

Exercise 1.8.3 Prove the following statements about convex sets:

(i) If A, B ⊂ L are convex, then A + B is convex.
(ii) If A ⊂ L is convex and a ∈ R, then aA is convex.
(iii) If I is an arbitrary index set and Ai ⊂ L is convex for every i ∈ I, then ⋂_{i∈I} Ai is convex.
(iv) If I is an arbitrary index set and Ai ⊂ L is convex for every i ∈ I, then ⋃_{i∈I} Ai need not be convex.

Show that statements (i) to (iii) remain true if one replaces convex sets by cones. Show also that an arbitrary union of cones is still a cone.

Exercise 1.8.4 We say that a set A ⊂ L is closed under addition if X + Y ∈ A for all X, Y ∈ A. Consider the following properties:

(i) A is convex and 0 ∈ A.
(ii) A is closed under addition and 0 ∈ A.
(iii) A is a cone.

Show that, under any of the above properties, the remaining two properties are equivalent.

Exercise 1.8.5 Let M be a linear subspace of L and consider a functional π : M → R. The graph of π is the set

graph(π) := {(X, a) ∈ M × R ; π(X) = a} ⊂ L × R.

The epigraph of π is the set

epi(π) := {(X, a) ∈ M × R ; π(X) ≤ a} ⊂ L × R.


(i) Show that π is convex if and only if epi(π) is convex.
(ii) Show that π is subadditive if and only if epi(π) is closed under addition.
(iii) Show that π is positively homogeneous if and only if epi(π) is a cone.

Exercise 1.8.6 Let M be a linear subspace of L and consider the following properties of a functional π : M → R:

(i) π is convex and π(0) = 0.
(ii) π is subadditive and π(0) = 0.
(iii) π is positively homogeneous.

Combine Exercises 1.8.4 and 1.8.5 to show that, under any of the above properties, the remaining two properties are equivalent.

Exercise 1.8.7 Let E ∈ E and X, Y ∈ L be fixed and assume that X ≥ Y on E. Prove the following statements:

(i) X + Z ≥ Y + Z on E for every Z ∈ L.
(ii) aX ≥ aY on E for every a ≥ 0.
(iii) XZ ≥ YZ on E for every Z ∈ L such that Z ≥ 0 on E.

Replace ≥ by ⪈ and > and establish the corresponding inequalities.

Exercise 1.8.8 Let C ⊂ L be a family of random variables.

(i) Prove that the supremum of C is finite valued if and only if there exists a random variable Z ∈ L such that Z ≥ X for every X ∈ C.
(ii) Prove that the infimum of C is finite valued if and only if there exists a random variable Z ∈ L such that Z ≤ X for every X ∈ C.

Exercise 1.8.9 Prove that for every family of random variables C ⊂ L we have

inf_{X∈C} X = − sup_{X∈C} (−X).

Exercise 1.8.10 Let Ω = {ω1, ω2} and consider the set C = {Xn ; n ∈ N} defined by

Xn = (1 − 1/n) 1_{ω1} + 1_{ω2} if n is odd, and Xn = 1_{ω1} + (1 − 1/n) 1_{ω2} if n is even.

Prove that the supremum of C is the constant random variable 1 and that Xn(ω) → 1 for every ω ∈ Ω, even though C is not directed upward. This shows that the conditions in


Proposition 1.6.14 are only sufficient for a supremum of a family of random variables to be the pointwise limit of a sequence of random variables from that family.

Exercise 1.8.11 Let M be a linear subspace of L and let π : M → R be a (nonzero) positive linear functional. Prove that π(X) > 0 for every strictly positive random variable X ∈ M.

Exercise 1.8.12 Let M be a linear subspace of L containing a strictly positive random variable. For a linear functional π : M → R define the kernel of π by

ker(π) = {X ∈ M ; π(X) = 0}.

Prove that the following statements are equivalent:

(a) π is strictly positive.
(b) ker(π) ∩ L+ = {0}.

2 Probabilities and Expectations

In this chapter we introduce the language of probability theory on finite sample spaces. A probability measure assigns to each possible outcome of a random experiment a number that is interpreted as its "likelihood" of occurrence. We explore elementary properties of probability measures and introduce the important concepts of expectation and variance of a random variable. In the last section we also give a very brief account of the important discussions about the meaning of probability.

2.1 Probability Measures

The notion of a probability measure provides a formalization of the intuitive procedure of assigning to each event in Ω a "likelihood" of occurrence. Here, two events E, F ∈ E are said to be disjoint if E ∩ F = ∅.

Definition 2.1.1 (Probability Measure) Let Ω be a finite sample space. A probability measure (on Ω) is any map P : E → [0, 1] satisfying the following axioms:

(1) P(Ω) = 1 (normalization).
(2) P(E ∪ F) = P(E) + P(F) for all disjoint events E, F ∈ E (additivity).

For every ω ∈ Ω we simply write P(ω) instead of P({ω}). The support of P is the event supp(P) ∈ E defined by

supp(P) := {ω ∈ Ω ; P(ω) > 0}.

The couple (Ω, P) is called a (finite) probability space.


For a probability measure P : E → [0, 1] and for every event E ∈ E we interpret the number P(E) as the “probability” or “likelihood” that E will occur. A “likelihood” 0 is interpreted as impossibility of occurrence and a “likelihood” 1 as certainty of occurrence. For this reason we sometimes say that E is an impossible event whenever P(E) = 0 and that E is a sure event whenever P(E) = 1. In the final section of this chapter we discuss briefly different philosophical takes on the concept of probability. In this book we often develop intuition about the probabilistic concepts we introduce using the “frequentist” approach, which views the probability of an event as the limit of its relative frequency in a large number of trials. While appropriate for some situations, this approach poses severe conceptual challenges in others, including applications to economics and finance. Our use of it as a vehicle of illustration has a purely pragmatic character: Most concepts admit a very intuitive description within the frequentist interpretation of probability.

Standing Assumption Throughout the remainder of this chapter we fix a finite probability space (Ω, P) with Ω = {ω1, . . . , ωK}.

Remark 2.1.2 Note that we do not assume that supp(P) = Ω, which may call for a justification. In the case that supp(P) ≠ Ω, i.e., when there are nonempty impossible events, we could define a new sample space Ω′ = supp(P) and a probability measure P′ by setting P′(E) = P(E) for every E ∈ E(Ω′). The new probability measure would have full support, i.e., supp(P′) = Ω′, and the new probability space (Ω′, P′) would differ from (Ω, P) only in that we have eliminated all the (nonempty) impossible events. All in all this appears to be a reasonable thing to do since, when modelling a random phenomenon with a finite number of outcomes, it seems natural to focus only on those outcomes that have a real chance of occurrence. However, the additional generality afforded by considering probability measures that allow for impossible events turns out to be useful when studying conditional probabilities in Chap. 10.

m 

i=1

 Ei

=

m  i=1

P(Ei ).


Proof We proceed by induction on m ∈ N.

Base Step The assertion is obviously true for m = 1.

Induction Step Assume the assertion holds for m ∈ N and take m + 1 pairwise disjoint events. Set E = E1 ∪ · · · ∪ Em. Since E and E_{m+1} are disjoint, the additivity property of P implies that P(E ∪ E_{m+1}) = P(E) + P(E_{m+1}). By the induction hypothesis,

P(⋃_{i=1}^{m+1} Ei) = P(E) + P(E_{m+1}) = ∑_{i=1}^m P(Ei) + P(E_{m+1}) = ∑_{i=1}^{m+1} P(Ei).

This concludes the proof.

Remark 2.1.4 (Constructing Probability Measures) The preceding result shows that a probability measure is completely determined by the probabilities of the elementary events. To see this, note that every nonempty event E ∈ E can be written as a union of pairwise disjoint events as E = ⋃_{ω∈E} {ω}. Then, the additivity property of P implies

P(E) = ∑_{ω∈E} P(ω).

In particular,

∑_{ω∈Ω} P(ω) = P(Ω) = 1.

Hence, the vector (P(ω1), . . . , P(ωK)) ∈ [0, 1]^K fully determines P. Conversely, every vector (p1, . . . , pK) ∈ [0, 1]^K with components adding up to 1 induces a probability measure on Ω.
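In computational terms, Remark 2.1.4 says that a probability measure on a finite sample space is nothing but a vector of masses, and additivity computes P(E) by summation. A minimal Python sketch (the masses and events are our own sample values):

    import numpy as np

    p = np.array([0.1, 0.2, 0.3, 0.4])   # (P(omega_1), ..., P(omega_K)), sums to 1
    assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)

    def P(E):
        # E is a boolean mask selecting the states of the event; P(E) = sum of masses.
        return float(p[E].sum())

    E = np.array([True, False, True, False])
    F = np.array([False, True, False, False])
    assert np.isclose(P(E | F), P(E) + P(F))   # additivity for disjoint E, F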

The next result lists some elementary but fundamental properties of probability measures, which are freely used throughout the book.

Proposition 2.1.5 For all events E, F ∈ E the following statements hold:

(i) P(E \ F) = P(E) − P(E ∩ F).
(ii) P(E ∪ F) = P(E) + P(F) − P(E ∩ F).
(iii) P(E^c) = 1 − P(E).
(iv) If E ⊂ F, then P(E) ≤ P(F).


Proof To prove assertion (i), note that E can be expressed as the disjoint union of E ∩ F and E ∩ F^c. Since E \ F = E ∩ F^c, using the additivity property of P, we get

P(E) = P(E ∩ F) + P(E \ F).

Assertion (ii) follows from the fact that we can write E ∪ F as the disjoint union of E and F \ E. Hence, using (i), we obtain

P(E ∪ F) = P(E) + P(F \ E) = P(E) + P(F) − P(E ∩ F).

In the same vein, we can write Ω as the disjoint union of E and E^c, so that

1 = P(Ω) = P(E) + P(E^c),

proving assertion (iii). Finally, if E is a subset of F, we can write F as the disjoint union of E and F \ E, so that

P(E) ≤ P(E) + P(F \ E) = P(F).

This yields assertion (iv) and concludes the proof.

2.2 Probability Mass Function

A random variable is a function that depends on the possible outcomes of our random experiment. It is therefore of interest to use our knowledge of the underlying probability law to obtain information about the likelihood of a random variable taking a particular value. This information is summarized in the probability mass function.

Definition 2.2.1 (Probability Mass Function) Let X ∈ L. The probability mass function of X (under P) is the function PX : R → [0, 1] defined by

PX(x) := P({X = x}).

It is important to be aware that the same random variable may have different probability mass functions under different probability measures.

Example 2.2.2 Consider the sample space Ω = {H, T} associated with tossing a coin, where H stands for "heads" and T for "tails". If the coin is "fair", the appropriate


probability measure P is defined by P(H) = P(T) = 1/2. The probability measure for an "unfair" coin will, however, be different. For instance, the probability measure Q defined by Q(H) = 1/3 and Q(T) = 2/3 represents a coin for which the probability of "tails" occurring is twice as large as that of "heads". For the random variable X = 1_H we have

PX(1) = 1/2 ≠ 1/3 = QX(1).

Hence, the probability mass functions of X under the above two probability measures are different.

It is also important to realize that a random variable is not fully characterized by its probability mass function, i.e., two different random variables may have the same probability mass function.

Example 2.2.3 Consider the probability space (Ω, P) associated with tossing a "fair" coin, i.e., Ω = {H, T} and P(H) = P(T) = 1/2. Setting X = 1_H and Y = 1_T, it is clear that PX = PY, even though X ≠ Y.

The probability mass function assigns to each of the conceivable values that a random variable may attain the probability of it being observed once uncertainty is resolved. As a result, two random variables may have the same probability mass function even though they are defined on different probability spaces.

Example 2.2.4 Consider the probability space (Ω1, P) associated with tossing a "fair" coin, i.e., Ω1 = {H, T} and P(H) = P(T) = 1/2. Further, consider a second probability space (Ω2, Q) associated with rolling a "fair" die, i.e., Ω2 = {1, 2, 3, 4, 5, 6} and Q(1) = · · · = Q(6) = 1/6. It is immediate that the random variables X = 1_H defined on Ω1 and Y = 1_{1,2,3} defined on Ω2 are such that PX = QY.

In view of the above example, the following definition is meaningful.

Definition 2.2.5 (Identically-Distributed Random Variables) Let (Ω1, P) and (Ω2, Q) be two finite probability spaces. Two random variables X ∈ L(Ω1) and Y ∈ L(Ω2) are identically distributed (under P and Q) if PX = QY.
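Example 2.2.2 can be reproduced mechanically: the same random variable has different probability mass functions under different measures. A small Python sketch with our own encoding of the coin space:

    # States: index 0 = "heads", index 1 = "tails".
    P = [0.5, 0.5]                    # fair coin
    Q = [1/3, 2/3]                    # unfair coin
    X = [1.0, 0.0]                    # X = 1_H

    def pmf(X, measure, x):
        # P_X(x) = probability of the event {X = x}.
        return sum(m for m, v in zip(measure, X) if v == x)

    print(pmf(X, P, 1.0), pmf(X, Q, 1.0))   # 0.5 versus 0.3333...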


The term "identically distributed" comes from the fact that the information about the probability with which a random variable attains a value can also be conveyed through the (cumulative) distribution function introduced in Exercise 2.8.2. We prefer to use probability mass functions because they seem more natural in the context of finite probability spaces.

The following result tells us that it is always possible to construct a random variable on an appropriate probability space that has a pre-specified probability mass function.

Proposition 2.2.6 Assume that a1, . . . , am ∈ R are pairwise distinct and consider p1, . . . , pm ∈ (0, 1) adding up to 1. Then, there exist a probability space (Ω, P) and a random variable X ∈ L(Ω) such that

PX(x) = pk if x = ak for k ∈ {1, . . . , m}, and PX(x) = 0 if x ∉ {a1, . . . , am}.

Proof Set Ω = {a1, . . . , am} and define a probability measure P on the sample space Ω by setting P(ak) = pk for k ∈ {1, . . . , m}; see Remark 2.1.4. Then, PX has the desired form if we take the random variable on Ω given by X = ∑_{k=1}^m ak 1_{a_k}.

It is important to realize that the above result does not state that we can always construct a random variable with a prescribed probability mass function on every probability space, but only on some probability space. As the following example shows, this is because finite probability spaces are not sufficiently "rich", in the sense that we cannot find events having an arbitrary prescribed probability of occurrence; see also Exercise 2.8.3.

Example 2.2.7 Consider the probability space (Ω, P) associated with tossing a "fair" coin, i.e., Ω = {H, T} and P(H) = P(T) = 1/2. It is easy to see that, for any random variable X defined on Ω, the only values that PX can possibly take are 0, 1/2, and 1.

Bernoulli and Binomial Random Variables

Bernoulli and binomial random variables form a class of random variables that are characterized by their probability mass functions. They are often encountered in applications.

Definition 2.2.8 (Bernoulli and Binomial Random Variables) Consider two parameters p ∈ [0, 1] and m ∈ N. A random variable X ∈ L is said to be binomial with parameters (p, m) if its probability mass function is given by

PX(x) = \binom{m}{x} p^x (1 − p)^{m−x} if x ∈ {0, . . . , m}, and PX(x) = 0 if x ∉ {0, . . . , m}.

In this case, we sometimes write X ∼ Bin(p, m). A binomial random variable with parameters (p, 1) is said to be Bernoulli with parameter p, in which case the probability


mass function simplifies to

PX(x) = p if x = 1, PX(x) = 1 − p if x = 0, and PX(x) = 0 if x ∉ {0, 1}.

In this case, we sometimes write X ∼ Ber(p).

Remark 2.2.9 Let p ∈ [0, 1] and m ∈ N be fixed. Recall that, by the binomial formula recorded in Proposition A.3.6, it holds that

∑_{k=0}^m \binom{m}{k} p^k (1 − p)^{m−k} = 1.

This shows that the above definition is correct.

Binomial random variables are ubiquitous in applications. Here, we describe a standard game of chance where they appear naturally.

Example 2.2.10 (The Multiple-Urn Game) Consider the following random experiment. We are given m identical urns, each containing K balls numbered 1, . . . , K. We draw simultaneously from each urn one ball and record the outcome in a suitable list. The natural sample space for this random experiment is the set of all such possible lists, i.e., Ω = {1, . . . , K}^m. In particular, note that card(Ω) = K^m by Proposition A.3.1. We assume that the probability of drawing any particular list is the same for all lists. Accordingly, we define a probability measure P by

P(E) = card(E) / K^m.

Assume now that in each of the m urns the first N balls are white while the last K − N balls are black. Define a random variable X ∈ L by setting

X(a1, . . . , am) = card({i ∈ {1, . . . , m} ; ai ∈ {1, . . . , N}}).

The random variable X counts the number of white balls in each list of Ω. Now, take any k ∈ {0, . . . , m}. We want to determine the number of lists containing exactly k white balls or, equivalently, the cardinality of the event {X = k}. Note that, by Proposition A.3.4, there are \binom{m}{k} ways to select k spots in a list containing m entries. For each of them, by


Proposition A.3.2, we have N^k possible combinations of white balls and (K − N)^{m−k} possible combinations of black balls. Hence,

card({X = k}) = \binom{m}{k} N^k (K − N)^{m−k}.

It follows that

P({X = k}) = \binom{m}{k} (N/K)^k ((K − N)/K)^{m−k}.

If we set p = N/K, we can equivalently write

P({X = k}) = \binom{m}{k} p^k (1 − p)^{m−k}.

Note also that {X = x} = ∅ for every x ∈ R \ {0, . . . , m}. This shows that X is binomial with parameters (p, m), where p is the percentage of white balls in each urn and m is the number of urns.

Limitations of Focusing Exclusively on the Probability Mass Function

If all one wants to emphasize about a random variable is the probability with which it takes the various possible values, one can focus exclusively on its probability mass function and completely ignore the underlying probability space. For this reason, it is often the case in the probability literature and, particularly, in the statistical literature that authors work with random variables by specifying their probability mass function (or more commonly their cumulative distribution function as introduced in Exercise 2.8.2), but not the underlying probability space. Whether or not this is justified depends on the particular problem at hand.

Assume a person buys insurance paying a fixed amount c ∈ (0, ∞), the so-called insurance cover, if his or her house burns down. In this case, the natural sample space is Ω1 = {B, NB}, where B stands for "the house burns" and NB for "the house does not burn". We assume the corresponding probability measure is defined by P(B) = p, P(NB) = 1 − p for some p ∈ (0, 1). The insurance payment can thus be represented by the random variable X ∈ L(Ω1) with probability mass function

PX(x) = p if x = c, and PX(x) = 1 − p if x = 0.


Assume the person can buy a lottery ticket that is completely unrelated to the house burning or not, but pays the same amount c with the same probability as the insurance contract. In this case, the natural sample space is Ω2 = {W, NW}, where W stands for "the ticket is a winner" and NW for "the ticket is not a winner". The corresponding probability measure is assumed to satisfy Q(W) = p, Q(NW) = 1 − p for the same p. Hence, the payoff of the lottery can be represented by a random variable Y ∈ L(Ω2) with the same probability mass function as before, i.e.,

QY(x) = p if x = c, and QY(x) = 1 − p if x = 0.

However, in spite of X and Y being identically distributed, it is highly unlikely that the person buying insurance will be indifferent between the random payments represented by X and Y , since his or her primary interest is in receiving the insurance cover c in the event that the house burns down. On the other hand, the insurance company may view this contract in an entirely different way and may be only interested in the probability of having to pay a given amount regardless of which specific event triggered the payment. Similarly, when gambling by tossing a fair coin, one may be indifferent between a bet paying one EUR if we get heads and nothing if we get tails, and a bet paying nothing if we get heads and one EUR if we get tails. In these situations of “gambling” type, we are only interested in the probability of having a particular reward, but not in how that reward was generated, i.e., we may care only about the probability mass of the gamble.

2.3 Independent Events

The notion of independence plays a central role in probability theory. In this section we investigate some of the different guises in which it can be formulated. Intuitively speaking, two events E, F ∈ E are said to be independent if the occurrence or non-occurrence of one event has no impact whatsoever, in probabilistic terms, on the occurrence or non-occurrence of the other event. In other words, the occurrence of E should not affect the relative frequency with which F occurs, and vice versa. This implies that we should have

P(E ∩ F)/P(E) = P(F) and P(E ∩ F)/P(F) = P(E),

provided that both E and F are not impossible.


Definition 2.3.1 (Independent Events) We say that two events E, F ∈ E are independent (under P) if P(E ∩ F) = P(E)P(F).

Example 2.3.2 The state space corresponding to rolling a die twice can be naturally described by Ω = {1, 2, 3, 4, 5, 6}². Assume the die is fair, so that P(ω) = 1/36 for every ω ∈ Ω. Now, let E be the event "rolling the first time a 6" and F the event "rolling the second time a 2", i.e.,

E = {(6, 1), . . . , (6, 6)}, F = {(1, 2), . . . , (6, 2)}.

In this case, we clearly have P(E) = P(F) = 1/6. Moreover, E ∩ F = {(6, 2)} implies that P(E ∩ F) = 1/36. Therefore,

P(E ∩ F) = P(E)P(F)

and, thus, E and F are independent under P. This corresponds to our intuition, which tells us that getting a 6 in the first trial should not influence the outcome of the second trial.

It is clear from the definition that independence between events is a notion where the particular probability measure plays a critical role.

Example 2.3.3 Let Ω = {1, 2, 3, 4} and consider the following two probability measures P and Q on Ω:

P(1) = P(2) = P(3) = P(4) = 1/4, Q(1) = Q(2) = Q(3) = 1/8, Q(4) = 5/8.

Consider the events E = {1, 2} and F = {2, 3}. Then, we easily see that

P(E ∩ F) = 1/4 = P(E)P(F), Q(E ∩ F) = 1/8 ≠ 1/16 = Q(E)Q(F).

Hence, E and F are independent under P, but not under Q.
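Example 2.3.3 amounts to the product test of Definition 2.3.1 applied measure by measure. A short Python sketch (our own encoding of the events and measures):

    P = {1: 1/4, 2: 1/4, 3: 1/4, 4: 1/4}
    Q = {1: 1/8, 2: 1/8, 3: 1/8, 4: 5/8}
    E, F = {1, 2}, {2, 3}

    def prob(measure, event):
        return sum(measure[w] for w in event)

    def independent(measure, E, F):
        # Definition 2.3.1: E and F are independent iff P(E & F) = P(E) P(F).
        return abs(prob(measure, E & F) - prob(measure, E) * prob(measure, F)) < 1e-12

    print(independent(P, E, F), independent(Q, E, F))   # True, False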

The following proposition formalizes the natural intuition according to which independence should be preserved if we pass to complement events.

Proposition 2.3.4 For all events E, F ∈ E the following statements are equivalent:

(a) E and F are independent under P.
(b) E and F^c are independent under P.
(c) E^c and F are independent under P.
(d) E^c and F^c are independent under P.


Proof Assume that (a) holds. Since F = (E^c ∩ F) ∪ (E ∩ F), we have

P(E^c ∩ F) = P(F) − P(E ∩ F) = P(F) − P(E)P(F) = P(E^c)P(F)

by additivity and independence. This yields (b). By the same argument, it is now clear that (b) implies (d), which implies (c), which in turn implies (a), concluding the proof of the equivalence.

The notion of independence can be extended to arbitrary families of events.

Definition 2.3.5 (Independent Family of Events) We say that E1, . . . , Em ∈ E are independent (under P) if, for every subset I ⊂ {1, . . . , m}, we have

P(⋂_{i∈I} Ei) = ∏_{i∈I} P(Ei).

We say that C1, . . . , Cm ⊂ E are independent (under P) if E1, . . . , Em are independent under P for all E1 ∈ C1, . . . , Em ∈ Cm.

Clearly, the events of an independent family are also pairwise independent. However, the converse is not true: Pairwise independence does not suffice to ensure the independence of a family of events. The following examples are meant to dispel some common misunderstandings about independence.

Example 2.3.6 Let Ω = {1, 2, 3, 4} and let P(1) = · · · = P(4) = 1/4. Consider the events

E1 = {1, 2}, E2 = {1, 3}, E3 = {1, 4}.

Then, one easily verifies that E1, E2, E3 are pairwise independent under P, but

P(E1 ∩ E2 ∩ E3) ≠ P(E1)P(E2)P(E3).

Thus, while independence of a family of events implies pairwise independence, the converse is not true.

The next example shows that for some given events E1, . . . , Em ∈ E to be independent under P it does not suffice to require that

P(⋂_{i=1}^m Ei) = ∏_{i=1}^m P(Ei).


Example 2.3.7 Let Ω = {1, 2, 3, 4, 5, 6} and let P(1) = · · · = P(6) = 1/6. Consider the events

E1 = {1, 2, 3}, E2 = {2, 4, 6}, E3 = {1, 2, 4, 5}.

It is easy to check that P(E1 ∩ E2 ∩ E3) = P(E1)P(E2)P(E3). However, we have P(E1 ∩ E2) ≠ P(E1)P(E2), showing that E1, E2, E3 are not independent under P.
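Definition 2.3.5 quantifies over every subset of indices, and this is exactly how independence of a family should be tested in code. The following Python sketch (helper names our own) reproduces the verdicts of Examples 2.3.6 and 2.3.7 above.

    from itertools import combinations

    def prob(p, event):
        return sum(p[w] for w in event)

    def independent_family(p, events):
        # Check the product rule for every subset I of indices with card(I) >= 2.
        for r in range(2, len(events) + 1):
            for I in combinations(events, r):
                inter = set.intersection(*map(set, I))
                lhs = prob(p, inter)
                rhs = 1.0
                for E in I:
                    rhs *= prob(p, E)
                if abs(lhs - rhs) > 1e-12:
                    return False
        return True

    # Example 2.3.6: pairwise independent, yet not an independent family.
    p1 = {w: 1/4 for w in (1, 2, 3, 4)}
    print(independent_family(p1, [{1, 2}, {1, 3}, {1, 4}]))                  # False
    # Example 2.3.7: the product rule holds for all three events, but not pairwise.
    p2 = {w: 1/6 for w in (1, 2, 3, 4, 5, 6)}
    print(independent_family(p2, [{1, 2, 3}, {2, 4, 6}, {1, 2, 4, 5}]))      # False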

The independence criterion for complements recorded in Proposition 2.3.4 can be extended to arbitrary families of events. This will be an immediate consequence of the next lemma.

Lemma 2.3.8 Let E1, . . . , Em ∈ E. If E1, . . . , Em are independent under P, then E1, . . . , E_{m−1}, E_m^c are also independent under P.

Proof Let us first define the events

Fi = Ei if i ∈ {1, . . . , m − 1}, and Fm = E_m^c.

Assume that I is a nonempty subset of {1, . . . , m}. If m ∉ I, then clearly

P(⋂_{i∈I} Fi) = P(⋂_{i∈I} Ei) = ∏_{i∈I} P(Ei) = ∏_{i∈I} P(Fi)

by the independence of E1, . . . , E_{m−1}. Now let m ∈ I and define for convenience

E = ⋂_{i∈I, i≠m} Ei.


Then, using that E = (E ∩ Em) ∪ (E ∩ E_m^c), it is easy to see that

P(⋂_{i∈I} Fi) = P(E ∩ E_m^c) = P(E) − P(E ∩ Em) = P(E) − P(E)P(Em) = P(E)P(E_m^c) = ∏_{i∈I} P(Fi),

where we have used the additivity property of P in the second equality and the independence of E1, . . . , Em in the third and the last equalities. In conclusion, F1, . . . , Fm are independent under P. This concludes the proof.

Proposition 2.3.9 Let E1, . . . , Em ∈ E and define for I ⊂ {1, . . . , m}

Fi = Ei if i ∈ I, and Fi = E_i^c if i ∉ I.

If E1, . . . , Em are independent under P, then so are F1, . . . , Fm.

Proof Set J = {1, . . . , m} \ I. We establish the assertion by induction on card(J).

Base Step If card(J) = 0, then the assertion is obvious.

Induction Step Assume that the assertion holds whenever card(J) = n for some n ∈ {0, . . . , m − 1} and assume that J contains n + 1 elements. We can write J as the union of two disjoint sets J1 and J2 such that card(J1) = n and card(J2) = 1. By the induction assumption, the family of events consisting of Ei for i ∈ I ∪ J2 and E_i^c for i ∈ J1 is independent under P. Then, it follows immediately from Lemma 2.3.8 that the family consisting of Ei for i ∈ I and E_i^c for i ∈ J is also independent under P. This concludes the induction argument.

Independent Random Variables

The concept of independence for events can be naturally transferred to the world of random variables.

Definition 2.3.10 (Independent Random Variables) We say that X1, . . . , Xm ∈ L are independent (under P) whenever the events {X1 = a1}, . . . , {Xm = am} are independent under P for all a1, . . . , am ∈ R.

The following criterion for the independence of indicator functions is an immediate consequence of this definition.


Proposition 2.3.11 For all events E1, . . . , Em ∈ E the following statements are equivalent:

(a) 1_{E_1}, . . . , 1_{E_m} are independent under P.
(b) E1, . . . , Em are independent under P.

Proof Assume that 1_{E_1}, . . . , 1_{E_m} are independent under P and take a nonempty index set I ⊂ {1, . . . , m}. Then, by the definition of independence, we obtain

P(⋂_{i∈I} Ei) = P(⋂_{i∈I} {1_{E_i} = 1}) = ∏_{i∈I} P(1_{E_i} = 1) = ∏_{i∈I} P(Ei),

proving that E1, . . . , Em are independent under P. Hence, (a) implies (b). Conversely, assume that E1, . . . , Em are independent under P and take arbitrary a1, . . . , am ∈ R. Moreover, take a nonempty set I ⊂ {1, . . . , m}. If aj ∉ {0, 1} for some j ∈ I, then clearly

P(⋂_{i∈I} {1_{E_i} = ai}) = 0 = ∏_{i∈I} P(1_{E_i} = ai).

Hence, assume that ai ∈ {0, 1} for every i ∈ I. In this case, we find two disjoint sets I0, I1 ⊂ I such that I0 ∪ I1 = I and

ai = 0 if i ∈ I0, and ai = 1 if i ∈ I1.

Then, by Proposition 2.3.9,

P(⋂_{i∈I} {1_{E_i} = ai}) = P(⋂_{i∈I0} {1_{E_i} = 0} ∩ ⋂_{i∈I1} {1_{E_i} = 1})
= P(⋂_{i∈I0} E_i^c ∩ ⋂_{i∈I1} Ei)
= ∏_{i∈I0} P(E_i^c) ∏_{i∈I1} P(Ei)
= ∏_{i∈I0} P(1_{E_i} = 0) ∏_{i∈I1} P(1_{E_i} = 1)
= ∏_{i∈I} P(1_{E_i} = ai).

This proves that 1_{E_1}, . . . , 1_{E_m} are independent under P, so (b) implies (a).


Example 2.3.12 (Sums of Independent Bernoulli Random Variables) Assume that E1, . . . , Em ∈ E are independent under P and satisfy P(E1) = · · · = P(Em) = p for some p ∈ (0, 1). Then, the random variable X ∈ L defined by

X = ∑_{i=1}^m 1_{E_i}

is a binomial random variable with parameters (p, m). To show this, note first that X can only take the values 0, . . . , m. Now, fix k ∈ {0, . . . , m} and consider the collection Ck ⊂ E(Ω) consisting of all events that can be expressed as the intersection of k events out of E1, . . . , Em and of the complements of the remaining m − k events. Then, it follows from Proposition 2.3.9 that P(E) = p^k (1 − p)^{m−k} for every E ∈ Ck. Note that the number of events in Ck coincides with the number of different ways of successively drawing k balls from an urn containing m balls if the order in which the balls are drawn does not matter and after each draw we do not replace the chosen ball in the urn. Hence, we have

card(Ck) = \binom{m}{k}

by Proposition A.3.4. It is clear that, for every state ω ∈ Ω, we have X(ω) = k if and only if ω belongs to precisely k events among E1, . . . , Em, and so

{X = k} = ⋃_{E∈Ck} E.

Since the elements of Ck are pairwise disjoint by construction, the additivity of P yields

P({X = k}) = ∑_{E∈Ck} P(E) = \binom{m}{k} p^k (1 − p)^{m−k}.

This shows that X is a binomial random variable with parameters (p, m).
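A Monte Carlo sanity check of the preceding example (the parameters and sample sizes are our own): simulating independent events with common probability p and counting how many occur reproduces the binomial mass function.

    import numpy as np
    from math import comb

    rng = np.random.default_rng(42)
    p, m, trials = 0.4, 5, 200_000

    # Each row simulates the indicators 1_{E_1}, ..., 1_{E_m}; X is their sum.
    X = (rng.random((trials, m)) < p).sum(axis=1)

    for k in range(m + 1):
        empirical = np.mean(X == k)
        exact = comb(m, k) * p**k * (1 - p) ** (m - k)
        print(k, round(empirical, 4), round(exact, 4))   # should be close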

2.4 Expected Value

Consider a random experiment that can be repeated under the same conditions as often as we wish. Moreover, consider a random variable X ∈ L that can take the pairwise distinct values a1, . . . , am ∈ R. If we repeat the experiment a large number of times, we expect to observe that X takes the value a ∈ R with a relative frequency equal to P(X = a). Hence, if we consider the ratio between the sum of the observed realizations of X and the number of trials, i.e., the average of the observed realizations over the number of trials, we would expect this number to be roughly equal to the sum

∑_{i=1}^m P({X = ai}) ai.

The above sum can thus be interpreted as the "average value" of the observed realizations of X. This leads us to the fundamental concept of the expected value of a random variable.

Definition 2.4.1 (Expected Value) Let X ∈ L. The expected value, or expectation, or mean, of X (with respect to P) is the number EP[X] ∈ R defined by

EP[X] := ∑_{ω∈Ω} P(ω) X(ω).
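On a finite sample space the expected value is a single weighted sum, i.e., a dot product. A minimal Python sketch with our own sample numbers:

    import numpy as np

    p = np.array([0.2, 0.3, 0.5])     # (P(omega_1), P(omega_2), P(omega_3))
    X = np.array([10.0, -4.0, 6.0])

    E_P = float(p @ X)                # E_P[X] = sum over omega of P(omega) X(omega)
    print(E_P)                        # 0.2*10 - 0.3*4 + 0.5*6 = 3.8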

As the following simple example shows, the expected value of a random variable need not belong to the range of values the random variable may actually take. Hence, the expected value is not the most likely value a random variable may take, but only the average of the values we can expect to observe in a large number of trials. The example also illustrates the obvious fact that the expected value of a random variable critically depends on the reference probability measure.

Example 2.4.2 Consider the probability space (Ω, P) associated with tossing a "fair" coin, i.e., Ω = {H, T} and P(H) = P(T) = 1/2. The random variable X = 1_H records whether heads show up or not. It is readily seen that

EP[X] = (1/2) X(H) + (1/2) X(T) = 1/2.

This is consistent with our intuition: If we toss the coin a large number of times, then we expect that roughly half of the time "heads" will turn up and the other half "tails". Hence, we expect that our random variable will deliver 1 half of the time and 0 the other half. In other words, we expect the average observed value of X to be roughly equal to 1/2. On


the other hand, if the coin is unfair and the corresponding probability measure is given by Q(H) = 1/3 and Q(T) = 2/3, then we have

EQ[X] = (1/3) X(H) + (2/3) X(T) = 1/3.

This is also intuitively clear since, when tossing the coin a large number of times, we would expect "heads" to turn up roughly one third of the time. Incidentally, note that neither 1/2 nor 1/3 are values that X can ever attain.

We start by stating the following simple but useful result on expectations of indicator functions.

Proposition 2.4.3 For every event E ∈ E we have EP[1_E] = P(E).

Proof Since 1_E = 1 on E and 1_E = 0 on E^c, the additivity of P yields

EP[1_E] = ∑_{ω∈Ω} P(ω) 1_E(ω) = ∑_{ω∈E} P(ω) = P(E).

The expectation functional is linear, i.e., the expectation of a sum coincides with the sum of the individual expectations, and the expectation of a scalar multiple coincides with the scalar multiple of the expectation. This fundamental property will be used throughout the book without explicit reference.

Proposition 2.4.4 The functional EP : L → R is linear, i.e., the following statements hold for all X, Y ∈ L and a ∈ R:

(i) EP[X + Y] = EP[X] + EP[Y].
(ii) EP[aX] = aEP[X].

Proof Clearly, EP[0] = 0. Moreover,

EP[aX + Y] = ∑_{ω∈Ω} P(ω)(aX(ω) + Y(ω)) = a ∑_{ω∈Ω} P(ω)X(ω) + ∑_{ω∈Ω} P(ω)Y(ω) = aEP[X] + EP[Y].

The above statements follow by setting a = 1 and Y = 0, respectively.


It is easy to see that, in general, the expectation of a product of two random variables is not equal to the product of the individual expectations. However, equality holds if the random variables are independent.

Proposition 2.4.5 For all random variables X, Y ∈ L that are independent under P we have EP[XY] = EP[X]EP[Y].

Proof Let X take the values x1 < · · · < xm and Y take the values y1 < · · · < yn so that

X = ∑_{i=1}^m xi 1_{E_i}, Y = ∑_{j=1}^n yj 1_{F_j},

where Ei = {X = xi} for i ∈ {1, . . . , m} and Fj = {Y = yj} for j ∈ {1, . . . , n}. Now, note that

XY = ∑_{i=1}^m ∑_{j=1}^n xi yj 1_{E_i} 1_{F_j} = ∑_{i=1}^m ∑_{j=1}^n xi yj 1_{E_i ∩ F_j}.

Then, it follows from independence that

EP[XY] = ∑_{i=1}^m ∑_{j=1}^n P(Ei ∩ Fj) xi yj = ∑_{i=1}^m ∑_{j=1}^n P(Ei)P(Fj) xi yj = EP[X]EP[Y].

Next, we show that expectations preserve the order between random variables.

Proposition 2.4.6 The following statements hold for all random variables X, Y ∈ L:

(i) EP[X] ≥ EP[Y] whenever X ≥ Y on supp(P).
(ii) EP[X] > EP[Y] whenever X ⪈ Y on supp(P).

In particular, the linear functional EP : L → R is positive. If supp(P) = Ω, then it is also strictly positive.

Proof Assume that X ≥ Y on supp(P). Then,

EP[X] = ∑_{ω∈supp(P)} P(ω)X(ω) ≥ ∑_{ω∈supp(P)} P(ω)Y(ω) = EP[Y].

The above inequality is clearly strict if X(ω) > Y(ω) for some ω ∈ supp(P).

The next result shows that the expectation of a random variable is fully determined by its probability mass function.


Proposition 2.4.7 Let f : R → R be given and assume that X ∈ L takes the pairwise distinct values a1, . . . , am ∈ R. Then,

EP[f(X)] = ∑_{i=1}^m PX(ai) f(ai).

In particular, we have

EP[X] = ∑_{i=1}^m PX(ai) ai.

Proof Set Ei = {X = ai} for every i ∈ {1, . . . , m} and note that

f(X) = ∑_{i=1}^m f(ai) 1_{E_i}.

Propositions 2.4.3 and 2.4.4 immediately yield

EP[f(X)] = ∑_{i=1}^m f(ai) EP[1_{E_i}] = ∑_{i=1}^m f(ai) P(Ei).

This proves the first statement. The second statement follows immediately by taking f (x) = x for every x ∈ R.  A simple consequence of the above proposition is that two random variables, possibly defined on different probability spaces but with the same probability mass function, have the same expectation. Corollary 2.4.8 Let X and Y be random variables defined on the probability spaces (1 , P) and (2 , Q), respectively. If PX = QY , then for every f : R → R we have EP [f (X)] = EQ [f (Y )]. In particular, EP [X] = EQ [Y ]. Armed with the preceding results, we can now calculate the expected values of Bernoulli and binomial random variables. Example 2.4.9 (Expectation of a Bernoulli) For a Bernoulli random variable X ∈ L with parameter p ∈ [0, 1] we have EP [X] = p.


To see this, recall that X takes the values 0 and 1 with probabilities 1 − p and p. Hence, by Proposition 2.4.7, E_P[X] = P(X = 1) = p. □

Example 2.4.10 (Expectation of a Binomial) For a binomial random variable X ∈ L with parameters (p, m), with p ∈ [0, 1] and m ∈ N, we have E_P[X] = pm. To see this, recall that X takes the values 0, ..., m only, with probabilities

P(X = k) = \binom{m}{k} p^k (1 − p)^{m−k}

for k ∈ {0, ..., m}. Hence, Proposition 2.4.7 yields that

E_P[X] = Σ_{k=0}^m \binom{m}{k} p^k (1 − p)^{m−k} k.

The result is now a direct consequence of Proposition A.3.6. □
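The finite sums in Proposition 2.4.7 are directly computable. The following small Python check (our own sketch, not from the text) evaluates E_P[f(X)] from a binomial pmf and confirms the value pm numerically.

    from math import comb

    def binomial_pmf(p, m):
        # P(X = k) = C(m, k) p^k (1-p)^(m-k), for k = 0, ..., m
        return [comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]

    def expect_pmf(pmf, f=lambda x: x):
        # E_P[f(X)] = sum over k of P_X(a_k) f(a_k)  (Proposition 2.4.7)
        return sum(q * f(k) for k, q in enumerate(pmf))

    p, m = 0.3, 10
    pmf = binomial_pmf(p, m)
    assert abs(sum(pmf) - 1.0) < 1e-12           # pmf sums to one
    assert abs(expect_pmf(pmf) - p * m) < 1e-12  # E_P[X] = pm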

2.5 Variance and Covariance

Consider two random variables X, Y ∈ L and note that we can always express X “in terms of” Y as X = Y + (X − Y). The term X − Y can be interpreted as the “deviation” of X from Y. In this section we discuss how to measure the size of this deviation by means of the so-called mean square error.

Definition 2.5.1 (Mean Square Error) Let X, Y ∈ L be fixed. The mean square error between X and Y (with respect to P) is defined by

MSE_P[X, Y] := E_P[(X − Y)^2].



Since deviations are squared, this measure weighs large deviations more heavily than small ones, regardless of their sign. Other measures of deviation could be defined, for instance, by E_P[|X − Y|^p] for some p ∈ (1, ∞). However, the choice p = 2 has become customary and has the clear advantage of allowing the use of powerful Hilbert space methods, as we will see later in Chap. 3. In the previous section we defined the expected value of a random variable X ∈ L. The variance of X is the mean square error between X and its expected value E_P[X] and, hence, is a measure of the deviation of X from its mean.


Definition 2.5.2 (Variance) Let X ∈ L. The variance of X (with respect to P) is the number VAR_P[X] ∈ R defined by

VAR_P[X] := MSE_P[X, E_P[X]] = E_P[(X − E_P[X])^2].



The following example provides a good illustration of the notion of the variance.

Example 2.5.3 Consider the sample space Ω = {H, T} associated with tossing a coin and the probability measure P specified by P(H) = P(T) = 1/2. To every n ∈ N we associate the random variable X_n ∈ L defined by X_n = n·1_H − n·1_T. The random variables X_n have the same expectation,

E_P[X_n] = (1/2)·n + (1/2)·(−n) = 0,

for every n ∈ N. However, in spite of having the same mean, the values of the above random variables wander further away from the mean as n increases. This is captured by the variance, which reads

VAR_P[X_n] = E_P[X_n^2] = (1/2)·n^2 + (1/2)·(−n)^2 = n^2

for every n ∈ N. □



For a random variable X ∈ L we may consider the mean square error between X and a constant random variable. This mean square error is smallest when we choose the constant random variable E_P[X]. In particular, this means that VAR_P[X] coincides with the smallest mean square error between X and a constant random variable, which provides an alternative interpretation of the expected value of a random variable.

Proposition 2.5.4 For every random variable X ∈ L we have

VAR_P[X] = MSE_P[X, E_P[X]] = inf_{c∈R} MSE_P[X, c].

Proof Define the function f : R → R by setting f(c) = MSE_P[X, c]. Then,

f(c) = E_P[(X − c)^2] = E_P[X^2] − 2c·E_P[X] + c^2

for every c ∈ R. The assertion follows since this quadratic function attains its minimum at c = E_P[X]. □
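As a quick numeric sanity check (our own sketch), one can scan constants c and confirm that the mean square error E_P[(X − c)^2] is minimized at c = E_P[X]:

    import numpy as np

    P = np.array([0.25, 0.25, 0.5])
    X = np.array([1.0, 3.0, -2.0])

    mean = float(P @ X)

    def mse(c):
        # MSE_P[X, c] = E_P[(X - c)^2]
        return float(P @ (X - c) ** 2)

    grid = np.linspace(-5, 5, 10001)
    c_best = grid[np.argmin([mse(c) for c in grid])]

    # The grid minimizer agrees with E_P[X] up to the grid resolution,
    # and the minimal MSE equals VAR_P[X] = E_P[X^2] - E_P[X]^2.
    assert abs(c_best - mean) < 1e-3
    assert abs(mse(mean) - (float(P @ X**2) - mean**2)) < 1e-12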


The following result provides a useful and simple formula for calculating the variance of a random variable.

Proposition 2.5.5 For every random variable X ∈ L we have

VAR_P[X] = E_P[X^2] − E_P[X]^2.

Proof The assertion follows by setting m = E_P[X] and noting that

VAR_P[X] = E_P[(X − m)^2] = E_P[X^2 − 2mX + m^2] = E_P[X^2] − m^2. □

We can apply the preceding result to immediately determine the variance of an indicator function.

Proposition 2.5.6 For every event E ∈ E we have VAR_P[1_E] = P(E)(1 − P(E)).

Proof Recall from Proposition 2.4.3 that E_P[1_E] = P(E). As 1_E^2 = 1_E, we infer from Proposition 2.5.5 that

VAR_P[1_E] = E_P[1_E] − E_P[1_E]^2 = P(E) − P(E)^2 = P(E)(1 − P(E)). □

The next proposition lists additional properties of the variance.

Proposition 2.5.7 The following statements hold for all X ∈ L and a, b ∈ R:

(i) VAR_P[aX + b] = a^2 VAR_P[X].
(ii) VAR_P[X] ≥ 0.
(iii) VAR_P[X] = 0 if and only if X is constant on supp(P).

Proof Set m = E_P[X]. Assertion (i) follows by noting that

VAR_P[aX + b] = E_P[(aX + b − E_P[aX + b])^2] = E_P[(aX − am)^2] = a^2 E_P[(X − m)^2] = a^2 VAR_P[X].

To prove assertions (ii) and (iii), note first that (X − m)^2 ≥ 0, so that VAR_P[X] ≥ 0 by Proposition 2.4.6. By the same proposition, VAR_P[X] = 0 if and only if (X − m)^2 = 0 on supp(P), which is equivalent to X = m on supp(P). □

Remark 2.5.8 The functional VAR_P : L → R is easily seen to be convex, so that for all X, Y ∈ L and a ∈ [0, 1] we have

VAR_P[aX + (1 − a)Y] ≤ a VAR_P[X] + (1 − a) VAR_P[Y].


However, VAR_P is neither linear nor sublinear. Indeed, for any nonconstant random variable X ∈ L, Proposition 2.5.7 implies that

VAR_P[2X] = 4 VAR_P[X] ≠ 2 VAR_P[X],

showing that VAR_P is neither additive nor homogeneous. □



We would now like to derive a formula for the variance of a sum of two random variables. It seems plausible that the variance of this sum will depend on the degree of “resonance” between the two random variables, i.e., on the degree to which the deviations of these random variables from their respective expected values tend to amplify or attenuate each other. This degree of “resonance” is captured by the concept of covariance.

Definition 2.5.9 (Covariance) Let X, Y ∈ L. The covariance between (or of) X and Y (with respect to P) is defined by

COV_P[X, Y] := E_P[(X − E_P[X])(Y − E_P[Y])].



Before providing the formula for the variance of the sum of random variables, it is useful to establish the following simple reformulation of the covariance.

Proposition 2.5.10 For all random variables X, Y ∈ L we have

COV_P[X, Y] = E_P[XY] − E_P[X]E_P[Y].

Proof For ease of notation set m_X = E_P[X] and m_Y = E_P[Y]. Then

COV_P[X, Y] = E_P[(X − m_X)(Y − m_Y)] = E_P[XY] − m_X m_Y − m_X m_Y + m_X m_Y = E_P[XY] − m_X m_Y,

which proves the desired formula. □



We can now establish the announced formula for the variance of the sum of two random variables. Its generalization to the sum of multiple random variables is left as an exercise; see Exercise 2.8.12.

Proposition 2.5.11 For all random variables X, Y ∈ L we have

VAR_P[X + Y] = VAR_P[X] + VAR_P[Y] + 2 COV_P[X, Y].


Proof Set m_X = E_P[X] and m_Y = E_P[Y] and use Propositions 2.5.5 and 2.5.10 to obtain

VAR_P[X + Y] = E_P[(X + Y)^2] − (m_X + m_Y)^2
 = E_P[X^2] + 2E_P[XY] + E_P[Y^2] − m_X^2 − 2m_X m_Y − m_Y^2
 = VAR_P[X] + VAR_P[Y] + 2(E_P[XY] − m_X m_Y)
 = VAR_P[X] + VAR_P[Y] + 2 COV_P[X, Y],

proving the desired claim. □



It follows from the above formula that the variance of a sum of random variables does not coincide with the sum of the variances of the individual random variables unless the covariance term vanishes. This brings us to our next definition.

Definition 2.5.12 (Uncorrelated Random Variables) We say that two random variables X, Y ∈ L are uncorrelated (under P) if COV_P[X, Y] = 0.

It is an immediate consequence of Proposition 2.5.11 that the variance of a sum of uncorrelated random variables is equal to the sum of the variances of the individual random variables.

Proposition 2.5.13 For all random variables X, Y ∈ L that are uncorrelated under P we have

VAR_P[X + Y] = VAR_P[X] + VAR_P[Y].

The next proposition shows that independent random variables are always uncorrelated. In particular, it follows that the variance of the sum of two independent random variables equals the sum of the individual variances.

Proposition 2.5.14 Let X, Y ∈ L. If X and Y are independent under P, then they are also uncorrelated under P.

Proof Assume that X and Y are independent under P. Then, it follows immediately from Propositions 2.4.5 and 2.5.10 that

COV_P[X, Y] = E_P[XY] − E_P[X]E_P[Y] = E_P[XY] − E_P[XY] = 0,

i.e., X and Y are uncorrelated under P. □



The next example shows that the converse of the preceding result does not hold, i.e., uncorrelated random variables need not be independent.


Example 2.5.15 Let Ω = {ω_1, ω_2, ω_3} and set P(ω_1) = P(ω_2) = P(ω_3) = 1/3. Moreover, consider the random variables

X = 2·1_{ω_1},  Y = 1_{ω_1} + 2·1_{ω_2}.

Then, X and Y are uncorrelated, but not independent under P. To see this, note first that

E_P[XY] = 2/3 = E_P[X]E_P[Y],

showing that COV_P[X, Y] = 0 by Proposition 2.5.10. Hence, X and Y are uncorrelated under P. On the other side, it is immediate that

P({X = 2} ∩ {Y = 2}) = 0 ≠ 1/9 = P({X = 2})P({Y = 2}).

This implies that X and Y are not independent under P. □
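The arithmetic of this example is easy to verify mechanically; the following Python sketch (ours, not the book's) recomputes the covariance and the joint probability:

    import numpy as np

    P = np.array([1/3, 1/3, 1/3])      # P(w1), P(w2), P(w3)
    X = np.array([2.0, 0.0, 0.0])      # X = 2 * 1_{w1}
    Y = np.array([1.0, 2.0, 0.0])      # Y = 1_{w1} + 2 * 1_{w2}

    E = lambda Z: float(P @ Z)

    cov = E(X * Y) - E(X) * E(Y)
    assert abs(cov) < 1e-12            # uncorrelated

    p_joint = float(P @ ((X == 2) & (Y == 2)))
    p_prod = float(P @ (X == 2)) * float(P @ (Y == 2))
    assert p_joint == 0.0 and abs(p_prod - 1/9) < 1e-12  # but not independent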



Two random variables sharing the same probability mass function have the same variance. This is an immediate consequence of the fact that expectations depend only on the probability mass function of the underlying random variables.

Proposition 2.5.16 Let X and Y be random variables defined on the probability spaces (Ω_1, P) and (Ω_2, Q), respectively. If P_X = Q_Y, then VAR_P[X] = VAR_Q[Y].

Proof Note that E_P[X] = E_Q[Y] and E_P[X^2] = E_Q[Y^2] due to Corollary 2.4.8. Hence, the statement follows immediately from Proposition 2.5.5. □

We conclude this section by computing the variance of Bernoulli and binomial random variables.

Example 2.5.17 (Variance of a Bernoulli) For a Bernoulli random variable X ∈ L with parameter p ∈ [0, 1] we have VAR_P[X] = p(1 − p). To see this, note that X = 1_{X=1} and use Proposition 2.5.6 to obtain

VAR_P[X] = P({X = 1})(1 − P({X = 1})) = p(1 − p). □




Example 2.5.18 (Variance of a Binomial) For a binomial random variable X ∈ L with parameters (p, m), with p ∈ [0, 1] and m ∈ N, we have VAR_P[X] = p(1 − p)m. To see this, recall from Example 2.4.10 that E_P[X] = pm. Moreover, by Proposition A.3.6, we have

E_P[X^2] = Σ_{k=0}^m \binom{m}{k} p^k (1 − p)^{m−k} k^2 = p^2 m^2 − p^2 m + pm.

We can now conclude that

VAR_P[X] = E_P[X^2] − E_P[X]^2 = p(1 − p)m. □
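Again, the closed form can be checked numerically; here is a short Python sketch (our own) computing E_P[X^2] from the binomial pmf and comparing with p(1 − p)m:

    from math import comb

    p, m = 0.3, 10
    pmf = [comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]

    mean = sum(q * k for k, q in enumerate(pmf))             # equals p m
    second_moment = sum(q * k**2 for k, q in enumerate(pmf))

    variance = second_moment - mean**2                       # Proposition 2.5.5
    assert abs(variance - p * (1 - p) * m) < 1e-12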

2.6 Change of Probability

We will encounter many circumstances where we need to simultaneously consider multiple probability measures defined on the same probability space. It will prove useful to know that, under certain conditions, the expectation of a random variable under one probability measure can be expressed as an expectation under another probability measure. Whether this is feasible or not depends on how the impossible events under the two probabilities are related to one another.

Definition 2.6.1 (Dominated/Equivalent Probability) Consider two probability measures P and Q. We say that Q is dominated by P whenever

P(ω) = 0 ⇒ Q(ω) = 0 for every ω ∈ Ω,

or equivalently supp(Q) ⊂ supp(P). In this case, we write Q ≪ P. We say that P and Q are equivalent whenever Q is dominated by P and vice versa, or equivalently supp(P) = supp(Q). In this case, we write Q ∼ P.

Radon-Nikodym densities allow us to switch between expectations under different probability measures.


Definition 2.6.2 (Radon-Nikodym Density) Consider a probability measure Q and assume that Q is dominated by P. A random variable D ∈ L satisfying

E_Q[X] = E_P[DX] for every X ∈ L

is called a Radon-Nikodym density for Q with respect to P.

Remark 2.6.3 The notion of a Radon-Nikodym density only makes sense if Q is dominated by P. To see this, let D ∈ L be a Radon-Nikodym density for Q with respect to P. Then, for every ω ∈ supp(Q) we must have

Q(ω) = E_Q[1_ω] = E_P[D·1_ω] = P(ω)D(ω),

showing that ω ∈ supp(P) must hold as well. This shows that Q is dominated by P.

The next result provides a characterization of Radon-Nikodym densities with respect to P and shows that every probability measure dominated by P admits a Radon-Nikodym density. Radon-Nikodym densities need not be unique, but they are whenever P has full support. Here, for every probability measure Q we denote by dQ/dP the random variable in L defined by

dQ/dP := Σ_{ω∈supp(P)} (Q(ω)/P(ω)) 1_ω.

Proposition 2.6.4 For every probability measure Q such that Q is dominated by P and for every random variable D ∈ L the following statements are equivalent:

(a) D is a Radon-Nikodym density for Q with respect to P.
(b) D = dQ/dP on supp(P).

In particular, if supp(P) = Ω, then dQ/dP is the unique Radon-Nikodym density for Q with respect to P.

Proof Assume first that (a) holds. Then, for every ω ∈ supp(P) we must have

Q(ω) = E_Q[1_ω] = E_P[D·1_ω] = P(ω)D(ω),

showing that (b) holds. Conversely, assume that (b) holds and note that

E_P[DX] = Σ_{ω∈supp(P)} P(ω) (Q(ω)/P(ω)) X(ω) = Σ_{ω∈supp(Q)} Q(ω)X(ω) = E_Q[X]


for every X ∈ L, where we used that supp(Q) ⊂ supp(P) in the second equality. This establishes that (a) holds and concludes the proof. □

The last result of this section, which is sometimes called the Radon-Nikodym Theorem, provides a useful characterization of (strictly) positive random variables with normalized expectation in terms of Radon-Nikodym densities.

Theorem 2.6.5 (Radon-Nikodym Theorem) For every random variable D ∈ L the following statements are equivalent:

(a) D is positive on supp(P) and E_P[D] = 1.
(b) There exists a probability measure Q such that Q is dominated by P and

D = dQ/dP on supp(P).

In this case, D is strictly positive on supp(P) if and only if Q is equivalent to P.

Proof Assume first that (b) holds and note that D is clearly positive on supp(P) and satisfies

E_P[D] = Σ_{ω∈supp(P)} P(ω)D(ω) = Σ_{ω∈supp(P)} P(ω) (Q(ω)/P(ω)) = Σ_{ω∈supp(Q)} Q(ω) = Q(Ω) = 1,

where we used that supp(Q) ⊂ supp(P) in the third equality. This establishes (a). To prove the converse implication, assume that (a) holds and define a map Q : E → [0, 1] by setting Q(E) = E_P[D·1_E]. We claim that Q is a probability measure dominated by P. First, note that Q(Ω) = E_P[D·1_Ω] = E_P[D] = 1. Moreover, take two disjoint events E, F ∈ E and note that 1_{E∪F} = 1_E + 1_F. Then, we infer that

Q(E ∪ F) = E_P[D(1_E + 1_F)] = E_P[D·1_E] + E_P[D·1_F] = Q(E) + Q(F),

proving additivity. Hence, Q is indeed a probability measure. Since

Q(ω) = E_P[D·1_ω] = P(ω)D(ω)


for every ω ∈ Ω, it follows that supp(Q) ⊂ supp(P), so that Q is dominated by P. Now, note that

E_Q[X] = Σ_{ω∈Ω} E_P[D·1_ω] X(ω) = Σ_{ω∈Ω} P(ω)D(ω)X(ω) = E_P[DX]

for every X ∈ L, so that D is a Radon-Nikodym density of Q with respect to P. As a result, Proposition 2.6.4 implies that (b) holds and concludes the proof of the equivalence. The last assertion is a direct consequence of (b). □
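A change of probability is just a reweighting on a finite space. The sketch below (our own illustration, with arbitrarily chosen measures) builds dQ/dP and checks the identity E_Q[X] = E_P[(dQ/dP)·X]:

    import numpy as np

    P = np.array([0.5, 0.3, 0.2])    # reference measure, full support
    Q = np.array([0.25, 0.25, 0.5])  # dominated by P (here even equivalent)

    D = Q / P                        # dQ/dP on supp(P)
    assert abs(float(P @ D) - 1.0) < 1e-12  # E_P[D] = 1, and D > 0

    X = np.array([2.0, -1.0, 3.0])
    lhs = float(Q @ X)               # E_Q[X]
    rhs = float(P @ (D * X))         # E_P[D X]
    assert abs(lhs - rhs) < 1e-12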

2.7 The Meaning of Probability

We conclude this chapter with a brief excursion into the philosophy of probability. Attaching a rigorous meaning to the term “likelihood” or “probability” is not as straightforward as it may seem at first sight. For instance, consider the following questions:

• What is the probability that, when tossing a coin, heads will show up?
• What is the probability of a particular political party winning the next elections?

Attaching a meaning to “probability” in the first question seems straightforward. Intuitively, one would say that, provided the coin is fair, the probability of heads showing up should be 50%, which means that if we toss the coin a large number of times we expect heads to show up in 50% of the cases. In the second question it is less clear what we mean, since we are dealing with a non-repeatable situation. The meaning of probability has occupied philosophers and scientists for a long time and there seems to be no conclusive answer. We provide a brief (and incomplete) account of the three major views on probability.

Frequency Theory The sample space consists of the outcomes of a random experiment that can be repeated under the same conditions as often as we wish. In this context, the probability of an event is interpreted as the relative frequency with which we would expect to observe its occurrence in a large number of trials. This interpretation requires assuming that there is something in the “objective” or “physical” nature of the experiment that reveals itself through repeated trials. This type of interpretation is typically associated with the Austrian philosopher Richard von Mises in the 1920s, although forms of it had already been put forward by British empiricists in the middle of the nineteenth century.

Propensity Theory One of the problems of the frequency approach is that it rules out the possibility of assigning a probability to singular events. This was viewed as problematic, because singular situations were encountered in many areas where assigning a probability seemed desirable. The philosopher Karl Popper was, in particular, concerned with the fact


that the frequency theory did not allow one to assign probabilities to situations in quantum mechanics that could be conceived as single events. This led him to put forward the interpretation of the probability of an outcome of a one-off experiment as a measure of the “physical” or “objective” disposition, or propensity, inherent in the experimental setup to generate that particular outcome. In a repeatable experiment the propensity of an event would be responsible for the relative frequencies observed in a long series of trials. Propensity theory drops the repeatability of an experiment, but continues to assume an objective notion of “propensity”.

Subjective Theory The subjective theory, associated with Frank Ramsey and Bruno de Finetti in the 1920s and 1930s, drops every pretense of objectivity and interprets probability as a degree of belief of a particular rational agent. The qualifier “rational” implies that the degrees of belief about various events are coherent in that they conform to the axioms of probability. In this view, consistency is only required within the beliefs of a particular individual, but not across individuals.

For convenience, we often illustrate definitions and results in probability theory using the language of frequency theory. However, to understand this book it is not necessary to take a particular view on what is the most appropriate interpretation. Nevertheless, we feel that when applying probability theory to finance, as in this book, it is worthwhile being aware of the controversies around the interpretation of probability theory. The above account is far from complete or adequate and is only meant to raise awareness and to whet the appetite for this interesting and important topic. For a comprehensive treatment we direct the reader to the monograph by Gillies [10] and the references therein. We also refer to Hacking [11] for a study of the development of the notion of probability in the modern era.

2.8 Exercises

In all exercises below we assume that (Ω, P) is a finite probability space.

Exercise 2.8.1 Show that for any two events E, F ∈ E the probability that the terminal outcome belongs to exactly one of them is given by

P((E ∪ F) \ (E ∩ F)) = P(E) + P(F) − 2P(E ∩ F).

Exercise 2.8.2 (Distribution Function) Let X ∈ L. The (cumulative) distribution function of X (with respect to P) is the function F_X : R → [0, 1] defined by

F_X(x) := P({X ≤ x}).


Assume that X takes the values a_1 < ··· < a_m and prove that the following statements hold:

(i) The distribution function can be recovered from the probability mass function as follows: F_X(x) = 0 if x ∈ (−∞, a_1); F_X(x) = Σ_{i=1}^j P_X(a_i) if x ∈ [a_j, a_{j+1}) with j ∈ {1, ..., m − 1}; and F_X(x) = 1 if x ∈ [a_m, ∞).

(ii) The probability mass function can be recovered from the distribution function as follows: P_X(x) = F_X(x) if x ∈ (−∞, a_1]; P_X(x) = F_X(x) − F_X(a_i) if x ∈ (a_i, a_{i+1}] with i ∈ {1, ..., m − 1}; and P_X(x) = F_X(x) − F_X(a_m) if x ∈ (a_m, ∞).

Exercise 2.8.3 Let a_1, ..., a_m ∈ R be pairwise distinct and take p_1, ..., p_m ∈ (0, 1) summing up to 1. Then, the following statements are equivalent:

(a) There exists a random variable X ∈ L such that P_X(x) = p_k if x = a_k for k ∈ {1, ..., m}, and P_X(x) = 0 if x ∉ {a_1, ..., a_m}.
(b) There exist pairwise disjoint events E_1, ..., E_m ∈ E such that P(E_k) = p_k for every k ∈ {1, ..., m}.

Exercise 2.8.4 Prove that for a random variable X ∈ L the following statements are equivalent:

(a) X is a Bernoulli random variable.
(b) X = 1_E for some event E ∈ E.

Exercise 2.8.5 For any two E, F ∈ E establish the following statements:

(i) If either P(E) = 1 or P(E) = 0, then E and F are independent under P.
(ii) If E ∩ F = ∅, then E and F are independent under P if and only if P(E)P(F) = 0.


Exercise 2.8.6 (Independent Experiments) Let (Ω_i, P_i) be a finite probability space for every i ∈ {1, ..., n} and set Ω = Ω_1 × ··· × Ω_n. Each element of Ω represents a sequence of outcomes of n random experiments. Given ω ∈ Ω and i ∈ {1, ..., n}, consider the event

E_i(ω) = Ω_1 × ··· × Ω_{i−1} × {ω_i} × Ω_{i+1} × ··· × Ω_n.

The event E_i(ω) corresponds to the event “the outcome of the ith experiment is ω_i”. Now, consider a probability measure P on the sample space Ω. We say that the random experiments are independent (under P) whenever the following conditions are satisfied:

(1) E_1(ω), ..., E_n(ω) are independent under P for every ω ∈ Ω.
(2) P(E_i(ω)) = P_i(ω_i) for all ω ∈ Ω and i ∈ {1, ..., n}.

Condition (2) is equivalent to assuming that the probability that ω_i occurs in the ith experiment does not depend on whether we view the ith experiment as a single experiment or as part of the combined experiment. Show that the function P : E → [0, 1] defined by

P(E) = Σ_{ω∈E} Π_{i=1}^n P_i(ω_i)

is the unique probability measure under which the random experiments are independent in the above sense.

Exercise 2.8.7 (Compound Bernoulli Experiments) Let (Ω_0, P_0) be the probability space specified by Ω_0 = {0, 1} and

P_0(0) = 1 − p, P_0(1) = p

for some p ∈ (0, 1). This situation is usually referred to as a (simple) Bernoulli experiment. Here, the outcome 1 is interpreted as “success” and the outcome 0 as “failure”. Now, consider a compound experiment in which the above simple Bernoulli experiment is repeated n times in an independent manner. The natural sample space for the compound experiment is given by the Cartesian product Ω = Ω_0^n. Note that each ω ∈ Ω is nothing but an n-tuple of “successes” and “failures”. The number of successes observed in the string ω can therefore be described by

S_n(ω) = Σ_{i=1}^n ω_i.


(i) Deduce from Exercise 2.8.6 that the function P : E → [0, 1] defined by

P(E) = Σ_{ω∈E} p^{S_n(ω)} (1 − p)^{n−S_n(ω)}

is the unique probability measure making the simple Bernoulli experiments independent.

(ii) For every i ∈ {1, ..., n} the random variable Z_i : Ω → R defined by Z_i(ω) = ω_i keeps track of whether the outcome of the ith experiment was a success, in which case Z_i(ω) = 1, or a failure, in which case Z_i(ω) = 0. Show that the random variables Z_1, ..., Z_n are independent under P.

Exercise 2.8.8 Show that two (by necessity non-independent) random variables X, Y ∈ L may satisfy E_P[XY] ≠ E_P[X]E_P[Y].

Exercise 2.8.9 (Jensen Inequality) Let f : R → R be a convex function. Prove that for every random variable X ∈ L we have

f(E_P[X]) ≤ E_P[f(X)].

Show that we have equality if and only if there exist a, b ∈ R such that f(x) = ax + b for all x ∈ [min_{ω∈Ω} X(ω), max_{ω∈Ω} X(ω)].

Exercise 2.8.10 Show that the functional VAR_P : L → R is convex, i.e., for all X, Y ∈ L and a ∈ [0, 1] we have

VAR_P[aX + (1 − a)Y] ≤ a VAR_P[X] + (1 − a) VAR_P[Y].

Exercise 2.8.11 Prove that for every random variable X ∈ L and every a ∈ R we have COV_P(X, a) = 0.

Exercise 2.8.12 Prove that for all X_1, ..., X_m ∈ L we have

VAR_P[Σ_{i=1}^m X_i] = Σ_{i=1}^m VAR_P[X_i] + 2 Σ_{1≤i<j≤m} COV_P[X_i, X_j].


In particular, if X_1, ..., X_m are independent under P, then

VAR_P[Σ_{i=1}^m X_i] = Σ_{i=1}^m VAR_P[X_i].

Exercise 2.8.13 Let Ω = {ω_1, ..., ω_5} and

P(ω_1) = ··· = P(ω_4) = 1/4 and P(ω_5) = 0.

(i) Define a probability measure Q by setting

Q(ω_1) = 1/4, Q(ω_2) = 3/4, Q(ω_3) = Q(ω_4) = Q(ω_5) = 0.

Note that Q is dominated by P and determine all the Radon-Nikodym densities for Q with respect to P.

(ii) Define a random variable D ∈ L by setting D = 2·1_{ω_1} + 1_{{ω_3, ω_4, ω_5}}. Determine all the probability measures Q such that D is a Radon-Nikodym density for Q with respect to P.

3 Random Variables: Topology and Geometry

We have already equipped the set of random variables defined on a given sample space with the structure of an ordered vector space. Once a probability measure is specified, one can define a family of norms, the so-called p-norms, on the space of random variables. One of these norms, namely the 2-norm, arises from an inner product. This additional structure allows us to introduce a variety of powerful topological notions such as convergence and continuity, as well as geometrical notions such as orthogonality. To carry out this program we need to assume that the underlying probability space does not admit nonempty impossible events. Although we develop most of the material on normed and inner-product spaces we need in our specific context, the appendix contains a brief review of the abstract theory.

Standing Assumption We fix a finite probability space (Ω, P) with Ω = {ω_1, ..., ω_K}. In addition, we assume that supp(P) = Ω.

3.1 The Norm Structure

We start by introducing a family of standard norms on the space of random variables.



Definition 3.1.1 (p-Norm) Let X ∈ L. The p-norm of X is defined by

‖X‖_p := (E_P[|X|^p])^{1/p} if p ∈ [1, ∞), and ‖X‖_∞ := max_{ω∈Ω} |X(ω)|.

The ∞-norm is also called the maximum norm.



Our first result shows that the above expressions are indeed norms on the space of random variables. As a result, when equipped with any of the above norms, the space L becomes a normed vector space. We refer to Appendix C for a general review of finitedimensional normed spaces. Theorem 3.1.2 For every p ∈ [1, ∞] the functional  · p : L → R is a norm, i.e., for all X, Y ∈ L and a ∈ R the following properties hold: (1) (2) (3) (4)

Xp ≥ 0 (positivity). Xp = 0 if and only if X = 0 ( discrimination). aXp = |a| Xp ( positive homogeneity). X + Y p ≤ Xp + Y p ( triangle inequality).

Moreover, for every X ∈ L we have Xp → X∞ as p → ∞. Proof We leave the proof that  · ∞ is a norm as an exercise. So, take p ∈ [1, ∞). In this case, the positivity and positive homogeneity axioms are clearly satisfied by virtue of the linearity of expectations and Proposition 2.4.6. The discrimination axiom also follows from Proposition 2.4.6 since supp(P) =  by our standing assumption. Finally, to establish the triangle inequality, take two nonzero random variables X, Y ∈ L. Since the function f : R → R defined by f (x) = |x|p is convex, p p p         X Y   ≤ a  X  + (1 − a) Y  = 1 a + (1 − a)      X Y p p Xp p Y p p p for every a ∈ [0, 1] by positive homogeneity. Setting a = Xp /(Xp + Y p ) and taking the pth root yields     Y X   ≤ 1, +  X + Y  Xp + Y p p p p which, by positive homogeneity, yields the triangle inequality X + Y p ≤ Xp + Y p .


To prove the last assertion, take an arbitrary nonzero X ∈ L and note that

‖X‖_p / ‖X‖_∞ = ( Σ_{ω∈Ω} P(ω) |X(ω)|^p / ‖X‖_∞^p )^{1/p}

for every p ∈ [1, ∞). Moreover, consider the nonempty event E = {ω ∈ Ω ; |X(ω)| = ‖X‖_∞}. It is immediate to see that

P(E)^{1/p} = ( Σ_{ω∈E} P(ω) )^{1/p} ≤ ( Σ_{ω∈Ω} P(ω) |X(ω)|^p / ‖X‖_∞^p )^{1/p} ≤ 1.

We conclude by noting that P(E)^{1/p} → 1 as p → ∞. □

Remark 3.1.3 The assumption supp(P) = Ω is essential for ‖·‖_p to be a norm when p ∈ [1, ∞). Indeed, assume that P(ω) = 0 for some ω ∈ Ω. In this case, we would have ‖1_ω‖_p = 0 for every p ∈ [1, ∞) even though 1_ω is a nonzero random variable, violating the discrimination axiom of norms. The maximum norm, however, is a norm even when supp(P) ≠ Ω. Indeed, as is immediately seen, the maximum norm does not depend in any way on the probability P.

A key result in the theory of normed vector spaces, see Proposition C.1.2, states that in a finite-dimensional vector space all norms are equivalent. Nonetheless, it is instructive to provide a direct proof of the equivalence of the family of norms we have just introduced.

Proposition 3.1.4 Let p, q ∈ [1, ∞]. Then, ‖·‖_p and ‖·‖_q are equivalent norms on L, i.e., there exist constants c_{p,q}, C_{p,q} ∈ (0, ∞) such that for every X ∈ L we have

c_{p,q} ‖X‖_q ≤ ‖X‖_p ≤ C_{p,q} ‖X‖_q.

Proof It is clearly sufficient to show that we can find c_p ∈ (0, ∞) such that

c_p ‖X‖_∞ ≤ ‖X‖_p ≤ ‖X‖_∞

for every X ∈ L. Note first that for every X ∈ L we have

‖X‖_p^p = Σ_{ω∈Ω} P(ω)|X(ω)|^p ≤ ‖X‖_∞^p Σ_{ω∈Ω} P(ω) = ‖X‖_∞^p.


This yields the right-hand side inequality above. To prove the left-hand side inequality, define c_p = min_{ω∈Ω} P(ω)^{1/p}. For every X ∈ L take ω ∈ Ω such that |X(ω)| = ‖X‖_∞ and note that

‖X‖_p^p ≥ P(ω)|X(ω)|^p ≥ c_p^p ‖X‖_∞^p. □
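In a finite-dimensional setting these norms are easy to compute directly. The Python sketch below (our own, with an arbitrarily chosen space) evaluates ‖X‖_p for several p and illustrates both the two-sided bound from the proof above and the convergence ‖X‖_p → ‖X‖_∞.

    import numpy as np

    P = np.array([0.2, 0.5, 0.3])
    X = np.array([1.0, -4.0, 2.5])

    def p_norm(P, X, p):
        if p == np.inf:
            return float(np.max(np.abs(X)))
        return float((P @ np.abs(X) ** p) ** (1.0 / p))

    sup_norm = p_norm(P, X, np.inf)
    for p in [1, 2, 4, 16, 64, 256]:
        np_ = p_norm(P, X, p)
        c_p = float(np.min(P) ** (1.0 / p))
        # c_p ||X||_inf <= ||X||_p <= ||X||_inf (Proposition 3.1.4)
        assert c_p * sup_norm - 1e-12 <= np_ <= sup_norm + 1e-12

    # ||X||_p approaches the maximum norm as p grows (Theorem 3.1.2)
    assert abs(p_norm(P, X, 256) - sup_norm) < 0.05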

As we will see shortly, the importance of the equivalence of the above norms is that all topological notions such as closedness and openness of sets, convergence of sequences, and continuity of functionals turn out to be independent of the particular norm with respect to which they are defined. Hence, whenever proving topological properties, one can choose the norm that is most convenient to work with. We will use this to our advantage in several instances.

Distance and Convergence Recall that on a normed vector space there is a natural way to define the distance between two elements.

Definition 3.1.5 (Distance, Convergence, Limit) Let X, Y ∈ L. For every p ∈ [1, ∞] the p-distance between X and Y is the number

d_p(X, Y) := ‖X − Y‖_p.

Let (X_n) ⊂ L be a sequence. We say that (X_n) converges in the p-norm to X if d_p(X_n, X) → 0. In this case, X is said to be the p-limit of (X_n) and we write X_n →_p X.

Definition 3.1.6 (Ball, Bounded Set) Let X ∈ L and take p ∈ [1, ∞]. The p-ball of radius r ∈ (0, ∞) centered at X is the set

B_r^p(X) := {Y ∈ L ; d_p(X, Y) ≤ r}.

A set A ⊂ L is said to be p-bounded if there exists r ∈ (0, ∞) such that A ⊂ B_r^p(0). We say that a sequence (X_n) ⊂ L is p-bounded if the set {X_n ; n ∈ N} is p-bounded.


Remark 3.1.7 (i) It is readily verified that every convergent sequence admits a unique limit, so that the notion of a limit is well defined, and that, in this case, every subsequence converges to the same limit; see Proposition C.3.3. (ii) Let p ∈ [1, ∞]. Using the triangle inequality, it is easy to see that a set A ⊂ L is p-bounded if and only if A ⊂ B_r^p(X) for some X ∈ L and r ∈ (0, ∞). (iii) Since all the p-norms are equivalent, it is clear that the notions of convergence for sequences and boundedness for sets do not depend on the particular p. This allows us to say “a sequence (X_n) ⊂ L converges to a random variable X ∈ L”, written X_n → X, whenever we have convergence in one and, hence, in all of the p-norms. Similarly, we say “a set A ⊂ L is bounded” if it is bounded in one and, hence, in all of the p-norms.

Since Ω is a finite set, the convergence of a sequence of random variables is equivalent to pointwise convergence or, as is sometimes said, to “convergence ω by ω”; see also Proposition C.3.4.

Proposition 3.1.8 For a sequence (X_n) ⊂ L and a random variable X ∈ L the following statements are equivalent:

(a) X_n → X.
(b) X_n(ω) → X(ω) for every ω ∈ Ω.

Proof That (a) implies (b) follows immediately from the fact that 0 ≤ |X_n(ω) − X(ω)| ≤ ‖X_n − X‖_∞ for every ω ∈ Ω. Similarly, that (b) implies (a) follows from

‖X_n − X‖_1 = Σ_{ω∈Ω} P(ω) |X_n(ω) − X(ω)|. □

The vector space operations in L are continuous in the following sense. These assertions are immediate consequences of the pointwise characterization of convergence and will be used without further reference.


Proposition 3.1.9 For sequences (Xn ), (Yn ) ⊂ L and (an ) ⊂ R and for X, Y ∈ L and a ∈ R the following statements hold: (i) If Xn → X and Yn → Y , then Xn + Yn → X + Y . (ii) If an → a and Xn → X, then an Xn → aX. (iii) If Xn → X and Yn → Y , then Xn Yn → XY .

3.2 The Topological Structure

We start by recalling two fundamental topological concepts.

Definition 3.2.1 (Open/Closed Set) Let p ∈ [1, ∞] and consider a set A ⊂ L. We say that A is p-open if for every X ∈ A we have B_r^p(X) ⊂ A for some r ∈ (0, ∞). We say that A is p-closed whenever the complement A^c is p-open.

Remark 3.2.2 It is clear from Proposition 3.1.4 that the notions of open and closed sets do not depend on the particular choice of p, so that we can simply say “open set” or “closed set”, dropping the index p in our notation.

We now provide a sequential characterization of the closedness of a set, which holds in general normed spaces. In view of its importance, we include a proof.

Proposition 3.2.3 For a set A ⊂ L the following statements are equivalent:

(a) For every sequence (X_n) ⊂ L and every X ∈ L we have: (X_n) ⊂ A, X_n → X ⇒ X ∈ A.
(b) A is closed.

Proof To show that (a) implies (b), assume that A contains the limit of all convergent sequences with elements in A. If A is not closed, then we find a random variable X ∈ A^c such that B_r^1(X) ∩ A ≠ ∅ for every r ∈ (0, ∞). Take X_n ∈ B_{1/n}^1(X) ∩ A for every n ∈ N. Since clearly X_n → X, our assumption would imply that X ∈ A, which is however impossible because X belongs to A^c. Hence, the set A must be closed and the desired implication holds. To show that (b) implies (a), assume that A is closed and take a sequence (X_n) ⊂ A and X ∈ L such that X_n → X. Note that, for every r ∈ (0, ∞), we have ‖X_n − X‖_1 ≤ r, or equivalently X_n ∈ B_r^1(X), for n ∈ N sufficiently large. This implies that B_r^1(X) ∩ A ≠ ∅ for every r ∈ (0, ∞). Since A^c is open by assumption, X cannot belong to A^c (otherwise some ball B_r^1(X) would be contained in A^c), and it follows that X must belong to A, proving the desired implication. □


We record a variety of useful properties of open and closed sets. The first result tells us that openness is preserved by unions and, similarly, closedness is preserved by intersections. The verification is left as an exercise.

Proposition 3.2.4 Let I be an arbitrary index set and for every i ∈ I take a set A_i ⊂ L. The following statements hold:

(i) If A_i is open for every i ∈ I, then ⋃_{i∈I} A_i is also open. If, in addition, I is finite, then ⋂_{i∈I} A_i is also open.
(ii) If A_i is closed for every i ∈ I, then ⋂_{i∈I} A_i is also closed. If, in addition, I is finite, then ⋃_{i∈I} A_i is also closed.

The second result shows that every linear subspace of L is closed. This fact is true in every finite-dimensional normed space and will be repeatedly used in the sequel without explicit reference. Given its importance, we include a proof.

Proposition 3.2.5 Every linear subspace of L is closed.

Proof Let M be a linear subspace of L and assume that dim(M) = M. If M = 0, then M = {0} and it is clear that M is closed. Hence, assume that M > 0. In this case, we find M linearly independent random variables Z_1, ..., Z_M ∈ M and it follows from Proposition B.3.6 that there exist Z_{M+1}, ..., Z_K ∈ L such that {Z_1, ..., Z_K} is a basis for L. In particular, for every X ∈ L we can write

X = Σ_{i=1}^K a_i(X) Z_i

for unique coefficients a_1(X), ..., a_K(X) ∈ R by Proposition B.3.4. Now, take a sequence (X_n) ⊂ M and assume that X_n → X for some X ∈ L. By Proposition C.3.4,

a_1(X_n) → a_1(X), ..., a_K(X_n) → a_K(X).

Since a_i(X_n) = 0 for every i ∈ {M + 1, ..., K} and every n ∈ N, we infer that a_i(X) = 0 for every i ∈ {M + 1, ..., K} as well. But then

X = Σ_{i=1}^M a_i(X) Z_i ∈ M,

showing that M is closed by Proposition 3.2.3. □




Compact Sets Another fundamental topological notion is compactness.

Definition 3.2.6 (Compact Set) Let p ∈ [1, ∞] and take A ⊂ L. We say that A is p-compact if every sequence (X_n) ⊂ A has a subsequence that converges in the p-norm to some X ∈ A.

Remark 3.2.7 (i) Proposition 3.1.4 implies that compactness does not depend on the particular choice of p and we may say “compact set” without any reference to p. (ii) What we have called a “compact” set is in fact a “sequentially compact” set. In normed spaces, however, compactness and sequential compactness are equivalent notions; see Proposition C.3.6.

Compactness will play a critical role in the sequel. Here, we single out two fundamental results related to compactness. The first result provides a useful characterization of compactness that holds because L is finite-dimensional. We assume known the fact that every bounded sequence of real numbers has a convergent subsequence, which is typically referred to as the Bolzano-Weierstrass Theorem.

Proposition 3.2.8 For every set A ⊂ L the following statements are equivalent:

(a) A is compact.
(b) A is closed and bounded.

In particular, every bounded sequence in L admits a convergent subsequence.

Proof First, assume that A is compact. It follows immediately from Proposition 3.2.3 that A is closed. If A were not bounded, then we would find a sequence (X_n) ⊂ A such that ‖X_n‖_∞ > n for every n ∈ N. However, this is not possible because every subsequence of (X_n) would be unbounded and, as such, would not converge. Hence, A is bounded. This shows that (a) implies (b). Conversely, assume that A is closed and bounded and take a sequence (X_n) ⊂ A. Note that, by boundedness, there exists an r ∈ (0, ∞) such that ‖X_n‖_∞ ≤ r for every n ∈ N. This implies that |X_n(ω_i)| ≤ r for every i ∈ {1, ..., K} and every n ∈ N. We construct a convergent subsequence using induction.

Base Step Since (X_n(ω_1)) is a bounded sequence of real numbers, we find a subsequence (X_n^{(1)}) of (X_n) and a_1 ∈ R such that X_n^{(1)}(ω_1) → a_1.


Induction Step Assume that for k ∈ {1, ..., K − 1} we have found a subsequence (X_n^{(k)}) of (X_n) and numbers a_1, ..., a_k ∈ R such that

X_n^{(k)}(ω_i) → a_i   (3.1)

for every i ∈ {1, ..., k}. Since (X_n^{(k)}(ω_{k+1})) is a bounded sequence of real numbers, there exist a subsequence (X_n^{(k+1)}) of (X_n^{(k)}) and a_{k+1} ∈ R such that X_n^{(k+1)}(ω_{k+1}) → a_{k+1}. Since (X_n^{(k+1)}) is a subsequence of (X_n^{(k)}), it follows from (3.1) that X_n^{(k+1)}(ω_i) → a_i for every i ∈ {1, ..., k + 1}.

The above induction argument delivers a subsequence (X_n^{(K)}) of (X_n) and numbers a_1, ..., a_K ∈ R such that X_n^{(K)}(ω_i) → a_i for every i ∈ {1, ..., K}. Now, define a random variable X ∈ L by setting X(ω_1) = a_1, ..., X(ω_K) = a_K. It is clear that X_n^{(K)}(ω) → X(ω) for every ω ∈ Ω. Proposition 3.1.8 now implies that X_n^{(K)} → X, concluding the proof. □

The second result establishes a sufficient condition for the closedness of the sum of two closed sets.

Proposition 3.2.9 Let A and B be closed subsets of L. If B is compact, then A + B is closed.

Proof Consider two sequences (X_n) ⊂ A and (Y_n) ⊂ B and assume that X_n + Y_n → Z for a Z ∈ L. By compactness, (Y_n) admits a subsequence (Y_n') such that Y_n' → Y for some Y ∈ B. Recall from the definition of a subsequence that Y_n' = Y_{α(n)} for every n ∈ N, where α : N → N is a suitable function. Setting X_n' = X_{α(n)} for every n ∈ N, it is easy to see that X_n' → Z − Y. Since A is closed, we infer from Proposition 3.2.3 that Z − Y ∈ A, which yields Z ∈ A + B. Another application of Proposition 3.2.3 shows that A + B is closed. □


Remark 3.2.10 It is worth pointing out that the above closedness result is not true if we do not ask for compactness of B; see Exercise 3.7.7. Results ensuring the closedness of the sum of two closed sets are rare and very useful.

3.3 Topology and Order

We have already seen that the order introduced on L is compatible with the vector space operations, making L into an ordered vector space. The next proposition shows that the order is also compatible with the topology on L, which makes L into an ordered normed space, i.e., a normed space equipped with an order structure for which the positive cone L_+ is closed.

Proposition 3.3.1 For all sequences (X_n), (Y_n) ⊂ L and random variables X, Y ∈ L,

X_n → X, Y_n → Y, X_n ≥ Y_n for every n ∈ N ⇒ X ≥ Y.

In particular, the positive cone L_+ is closed.

The positive cone can be “generated” by a compact set. This property will play a fundamental role in our study of extensions of linear functionals in the next chapter.

Proposition 3.3.2 Let B = {1_{ω_1}, ..., 1_{ω_K}} be the canonical basis of L. Then, co(B) is a compact set and

L_+ = cone(co(B)) = {aX ; a ∈ [0, ∞), X ∈ co(B)}.

Proof The inclusion “⊃” is clear since L_+ is a convex cone containing B. To establish the converse inclusion, take an arbitrary nonzero X ∈ L_+ and set

λ = Σ_{k=1}^K X(ω_k) and λ_k = X(ω_k)/λ for k ∈ {1, ..., K}.

Clearly, λ ∈ (0, ∞) and λ_1, ..., λ_K ∈ [0, 1] add up to 1. Since

X = λ Σ_{k=1}^K λ_k 1_{ω_k},

it follows that the inclusion “⊂” holds.


It remains to prove that co(B) is compact. To this end, take any X ∈ co(B) and note that 0 ≤ X ≤ 1. This immediately yields ‖X‖_∞ ≤ 1 and establishes that co(B) is bounded. To show compactness it is then sufficient to prove that co(B) is closed. Take a sequence (X_n) ⊂ co(B) converging to some X ∈ L. Note that, for every n ∈ N, the numbers X_n(ω_1), ..., X_n(ω_K) must belong to [0, 1] and add up to 1. Since X_n(ω) → X(ω) for every ω ∈ Ω, we conclude that X(ω_1), ..., X(ω_K) also belong to [0, 1] and add up to 1. Since X = Σ_{k=1}^K X(ω_k) 1_{ω_k}, it follows that X ∈ co(B), establishing that co(B) is closed by Proposition 3.2.3 and concluding the proof. □

3.4 Continuous Functionals

In this section we recall the fundamental notion of continuity and show that every convex functional defined on a space of random variables is automatically continuous.

Definition 3.4.1 (Continuous Functional) Let M be a linear subspace of L. We say that a functional π : M → R is continuous if for every sequence (X_n) ⊂ M and every X ∈ M we have

X_n → X ⇒ π(X_n) → π(X).



We recall first one of the key results in analysis, the so-called Weierstrass Theorem, stating that a continuous function always attains a maximum and a minimum on a compact set.

Proposition 3.4.2 (Weierstrass Theorem) Let M be a linear subspace of L. For every continuous functional π : M → R and every compact set A ⊂ M there exist two random variables X_max, X_min ∈ A such that

π(X_max) = sup_{X∈A} π(X), π(X_min) = inf_{X∈A} π(X).

Proof Set π(A) = {π(X) ; X ∈ A}. By compactness, every sequence (X_n) ⊂ A admits a subsequence (X_n') that converges to some X ∈ A. Since π(X_n') → π(X) by continuity, we infer that π(A) is a compact subset of R. Then, π(A) has a maximum and a minimum, concluding the proof. □

Before proving our announced result on the continuity of convex functionals, we first show that convex functionals are bounded on bounded sets.


Lemma 3.4.3 Let M be a linear subspace of L and let π : M → R be a convex functional. Then, for every r ∈ (0, ∞) there exists c_r ∈ (0, ∞) such that

sup_{X∈M_r} |π(X)| ≤ c_r,   (3.2)

where M_r = {X ∈ M ; ‖X‖_∞ ≤ r}.

Proof Fix r ∈ (0, ∞). We break the proof into three steps.

Step 1 Let {X_1, ..., X_M} be a basis of M. For every X ∈ M there exist unique numbers a_1(X), ..., a_M(X) ∈ R such that

X = Σ_{i=1}^M a_i(X) X_i.   (3.3)

It follows from Proposition C.3.4 that for every index i ∈ {1, ..., M} the functional π_i : M → R given by π_i(X) = a_i(X) is continuous. Since M_r is compact, Proposition 3.4.2 shows that there exists a constant n_r ∈ (0, ∞) such that

max_{i∈{1,...,M}} sup_{X∈M_r} |a_i(X)| = max_{i∈{1,...,M}} sup_{X∈M_r} |π_i(X)| ≤ n_r.   (3.4)

Step 2 We show that every random variable in M_r can be expressed as a convex combination of random variables belonging to the finite subset of M given by

S_r = {0, n_r M X_1, ..., n_r M X_M, −n_r M X_1, ..., −n_r M X_M}.

To see this, take X ∈ M_r and set

b_i = |a_i(X)|/(n_r M) for i ∈ {1, ..., M}, and b_0 = 1 − Σ_{j=1}^M |a_j(X)|/(n_r M).


It is not difficult to show that b_i ∈ [0, 1] for every i ∈ {0, ..., M} and that these coefficients add up to 1. In particular, b_0 is nonnegative because

Σ_{j=1}^M |a_j(X)|/(n_r M) ≤ Σ_{j=1}^M n_r/(n_r M) = Σ_{j=1}^M 1/M = 1,

by virtue of (3.4). It now suffices to observe that (3.3) is equivalent to

X = b_0·0 + Σ_{i∈I} b_i (n_r M X_i) + Σ_{i∈J} b_i (−n_r M X_i),

where the index sets I and J are given by I = {i ∈ {1, ..., M} ; a_i(X) ≥ 0} and J = {i ∈ {1, ..., M} ; a_i(X) < 0}. This shows that X is a convex combination of elements of S_r.

Step 3 We prove the desired claim. Note that S_r is finite and set

m_r = max_{X∈S_r} |π(X)|.

By Step 2, we can write every X ∈ M_r as X = Σ_{i=1}^m a_i X_i with X_1, ..., X_m ∈ S_r and a_1, ..., a_m ∈ [0, 1] adding up to 1. Then, Proposition 1.5.15 yields

π(X) ≤ Σ_{i=1}^m a_i π(X_i) ≤ Σ_{i=1}^m a_i m_r = m_r.   (3.5)

Moreover, using convexity, we have

π(0) = π((1/2)X + (1/2)(−X)) ≤ (1/2)π(X) + (1/2)π(−X),

which implies

π(X) ≥ 2π(0) − π(−X) ≥ 2π(0) − m_r,   (3.6)

where we have used that −X also belongs to M_r, so that (3.5) also holds if we replace X by −X. Combining (3.5) with (3.6) immediately delivers (3.2) with constant c_r given, e.g., by m_r + 2|π(0)|. □


We can now prove that every convex functional defined on a space of random variables is continuous.

Proposition 3.4.4 Let M be a linear subspace of L and let π : M → R be a convex functional. Then, for every p ∈ [1, ∞] and every r ∈ (0, ∞) there exists a constant c_{r,p} ∈ (0, ∞) such that for all X, Y ∈ M_r we have

|π(X) − π(Y)| ≤ c_{r,p} ‖X − Y‖_p,   (3.7)

where M_r = {X ∈ M ; ‖X‖_p ≤ r}. In particular, π is continuous.

Proof By Proposition 3.1.4 we can focus on the case p = ∞. Fix r ∈ (0, ∞). Take X, Y ∈ M_r with X ≠ Y and define Z ∈ M by

Z = Y + r (Y − X)/‖Y − X‖_∞.

Note that Z ∈ M_{2r} by the triangle inequality and that

Y = (‖Y − X‖_∞/(r + ‖Y − X‖_∞)) Z + (r/(r + ‖Y − X‖_∞)) X.

Thanks to the convexity of π,

π(Y) ≤ (‖Y − X‖_∞/(r + ‖Y − X‖_∞)) π(Z) + (r/(r + ‖Y − X‖_∞)) π(X).

Consequently, there exists a constant c ∈ (0, ∞) such that

π(Y) − π(X) ≤ (‖Y − X‖_∞/(r + ‖Y − X‖_∞)) π(Z) − (‖Y − X‖_∞/(r + ‖Y − X‖_∞)) π(X) ≤ ‖Y − X‖_∞ (π(Z) − π(X))/r ≤ (c/r) ‖Y − X‖_∞,

where we used Lemma 3.4.3 to obtain the last inequality. Exchanging the roles of X and Y we get (3.7) with c_{r,∞} = c/r. The continuity of π follows from (3.7). □

In the case of a sublinear functional, a stronger continuity result holds. Indeed, in this case, one can choose the constant in (3.7) to be independent of r.


Corollary 3.4.5 Let M be a linear subspace of L and let π : M → R be a sublinear functional. Then, for every p ∈ [1, ∞] there exists c_p ∈ (0, ∞) such that for all X, Y ∈ M

|π(X) − π(Y)| ≤ c_p ‖X − Y‖_p.   (3.8)

Proof By Proposition 3.4.4, there exists a constant c_p ∈ (0, ∞) such that |π(X)| ≤ c_p for every X ∈ M_1 = {X ∈ M ; ‖X‖_p ≤ 1}. Now, for X, Y ∈ M with X ≠ Y set

X' = X/‖X − Y‖_p and Y' = Y/‖X − Y‖_p.

The subadditivity of π implies

π(X') = π(X' − Y' + Y') ≤ π(X' − Y') + π(Y').

Since X' − Y' ∈ M_1, we infer that π(X') − π(Y') ≤ π(X' − Y') ≤ c_p. The same inequality clearly holds if we exchange the roles of X and Y. As a result of the positive homogeneity of π, we infer that

|π(X) − π(Y)|/‖X − Y‖_p = |π(X') − π(Y')| ≤ c_p.

This proves (3.8) for arbitrary X, Y ∈ M such that X ≠ Y. Since (3.8) is clearly true for all X, Y ∈ M such that X = Y, the proof is complete. □

Remark 3.4.6 The continuity property established in (3.7) is called local Lipschitz continuity and, similarly, property (3.8) is called (global) Lipschitz continuity. It follows from the above results that, being linear, the expectation functional is Lipschitz continuous and, being convex, the variance functional is locally Lipschitz continuous.
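The following Python sketch (our own, not the book's) illustrates the remark numerically: on a ball of radius r in the maximum norm, the variance functional moves by at most a multiple of ‖X − Y‖_∞. The constant 4r used below is a crude bound we derive ourselves from |VAR_P[X] − VAR_P[Y]| ≤ E_P[|X² − Y²|] + |E_P[X]² − E_P[Y]²|.

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([0.25, 0.25, 0.25, 0.25])

    E = lambda Z: float(P @ Z)
    var = lambda Z: E(Z**2) - E(Z)**2

    r = 5.0          # radius of the ball in the maximum norm
    C = 4.0 * r      # a crude local Lipschitz constant valid on that ball

    for _ in range(1000):
        X = rng.uniform(-r, r, size=4)
        Y = rng.uniform(-r, r, size=4)
        gap = abs(var(X) - var(Y))
        assert gap <= C * np.max(np.abs(X - Y)) + 1e-9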

3.5 The Inner-Product Structure

The 2-norm introduced at the beginning of this chapter has a special property that makes it particularly appealing: It is a norm arising from an inner product on L. It therefore opens up the possibility of exploiting the rich geometrical structure of inner-product spaces. For a brief review of finite-dimensional inner-product spaces we refer to Appendix D.


Definition 3.5.1 (Inner Product) Let X, Y ∈ L. The inner product of X and Y (with respect to P) is defined by

(X, Y)_P := E_P[XY].

The next result shows that the above expression indeed defines an inner product on the space of random variables.

Proposition 3.5.2 The map (·,·)_P : L × L → R is an inner product, i.e., the following properties hold for all X, Y, Z ∈ L and a ∈ R:

(1) (X, X)_P ≥ 0 (positivity).
(2) (X, X)_P = 0 if and only if X = 0 (discrimination).
(3) (X, Y)_P = (Y, X)_P (symmetry).
(4) (X + Z, Y)_P = (X, Y)_P + (Z, Y)_P (additivity).
(5) (aX, Y)_P = a(X, Y)_P (homogeneity).

Moreover, for every X ∈ L we have ‖X‖_2 = √((X, X)_P).

Proof Recall that supp(P) = Ω by our standing assumption. The positivity axiom as well as the discrimination axiom of inner products follow from Proposition 2.4.6. Moreover, the symmetry axiom is evidently satisfied and the additivity and homogeneity axioms are immediate consequences of the linearity of expectations. □
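Concretely, on a finite space this inner product is a weighted dot product. A small Python sketch (ours, with arbitrarily chosen data):

    import numpy as np

    P = np.array([0.2, 0.3, 0.5])

    def inner(X, Y):
        # (X, Y)_P = E_P[XY]
        return float(P @ (X * Y))

    X = np.array([1.0, -1.0, 2.0])
    Y = np.array([0.5, 2.0, -1.0])
    Z = np.array([3.0, 0.0, 1.0])

    # The 2-norm is the norm induced by the inner product.
    assert abs(np.sqrt(inner(X, X)) - float(P @ X**2) ** 0.5) < 1e-12

    # Additivity and homogeneity in the first argument:
    assert abs(inner(2 * X + Z, Y) - (2 * inner(X, Y) + inner(Z, Y))) < 1e-12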



 EP [X2 ] EP [Y 2 ].

(3.9)


The inequality is strict if and only if X and Y are linearly independent.

Proof If X = 0 or Y = 0 all assertions are trivial, so assume that X and Y are both nonzero. In this case, note that X^2 and Y^2 have a strictly positive expectation by Proposition 2.4.6, and

( |X|/√(E_P[X^2]) − |Y|/√(E_P[Y^2]) )^2 = X^2/E_P[X^2] + Y^2/E_P[Y^2] − 2|XY|/(√(E_P[X^2]) √(E_P[Y^2])) ≥ 0.

Clearly, this implies that

|XY| ≤ (1/2) ( √(E_P[Y^2]/E_P[X^2]) X^2 + √(E_P[X^2]/E_P[Y^2]) Y^2 ).

Passing to expectations we obtain

E_P[|XY|] ≤ (1/2) ( √(E_P[X^2]) √(E_P[Y^2]) + √(E_P[X^2]) √(E_P[Y^2]) ) = √(E_P[X^2]) √(E_P[Y^2]).

The inequality (3.9) follows once we note that |E_P[XY]| ≤ E_P[|XY|] holds. It is easy to see that, if Y = aX for some a ∈ R, then we have equality in (3.9). Indeed, in this case

|E_P[XY]| = |a| E_P[X^2] = √(E_P[X^2] E_P[Y^2]).

Finally, assume there is equality in (3.9) and set

a = √(E_P[Y^2]/E_P[X^2]).

We claim that either Y = aX or Y = −aX holds. Indeed, assume this is not the case and take b ∈ {a, −a}. Then, it follows from Proposition 2.4.6 that

0 < E_P[(bX − Y)^2] = b^2 E_P[X^2] + E_P[Y^2] − 2b E_P[XY],

leading to 2b E_P[XY] < b^2 E_P[X^2] + E_P[Y^2]. As a consequence,

2a |E_P[XY]| < a^2 E_P[X^2] + E_P[Y^2].


However, plugging in the value of a yields

2 √(E_P[Y^2]/E_P[X^2]) |E_P[XY]| < 2 E_P[Y^2]

or, equivalently, |E_P[XY]| < √(E_P[X^2]) √(E_P[Y^2]), contradicting the assumed equality in (3.9).
0, X(ω_1)X(ω_2) ≥ 1}. Show that A and B are closed, but their sum A + B is not.

Exercise 3.7.8 Let M be a linear subspace of L such that M ∩ L_+ = {0}. Show that M + L_+ is closed.

Exercise 3.7.9 Consider a set A ⊂ L. Show that the following statements are equivalent:

(a) For every family {A_i ; i ∈ I} of open subsets of L, with arbitrary index set I, we have

A ⊂ ⋃_{i∈I} A_i ⇒ A ⊂ ⋃_{h∈{1,...,m}} A_{i_h} for some i_1, ..., i_m ∈ I.

(b) A is compact.


Exercise 3.7.10 Prove that for a functional π : L → R the following statements are equivalent:

(a) {X ∈ L ; π(X) ∈ I} is open for every open interval I ⊂ R.
(b) π is continuous.

Exercise 3.7.11 Prove the following statements:

(i) ‖X + Y‖_2^2 + ‖X − Y‖_2^2 = 2(‖X‖_2^2 + ‖Y‖_2^2) for all X, Y ∈ L.
(ii) ‖X + Y‖_2^2 = ‖X‖_2^2 + ‖Y‖_2^2 for all orthogonal X, Y ∈ L.

Exercise 3.7.12 Let M be a linear subspace of L. Prove that the orthogonal complement

M^⊥ = {X ∈ L ; E_P[XY] = 0 for every Y ∈ M}

is a linear subspace of L and that the orthogonal projection P_M : L → M satisfies the following properties:

(i) P_M is linear.
(ii) X − P_M(X) ∈ M^⊥ for every X ∈ L.
(iii) E_P[XY] = E_P[P_M(X)Y] for all X ∈ L and Y ∈ M.

4 Extensions of Linear Functionals

In this chapter we provide a variety of extension and representation results for linear functionals defined on spaces of random variables. The basic extension result states that every linear functional defined on a proper subspace of random variables can be extended to the entire space of random variables preserving linearity. The main representation result is a version of the classical Riesz representation, which states that every linear functional defined on a space of random variables can be represented in terms of an expectation. We pay special attention to linear functionals that are strictly positive because the corresponding extension and representation results play a fundamental role in the study of financial markets.

Standing Assumption We fix a finite probability space (Ω, P) with Ω = {ω_1, ..., ω_K}. In addition, we assume that supp(P) = Ω.

4.1 Separation Theorems

In this section we discuss a variety of “separation” results that constitute the key ingredient in proving the extension and representation theorems for linear functionals in the following sections.



To introduce the concept of “separation”, consider a linear functional ψ : L → R. For every a ∈ R we define the hyperplane generated by ψ at the level a as

H_a(ψ) := {X ∈ L ; ψ(X) = a}.

The positive halfspace generated by ψ at the level a is defined as

H_a^+(ψ) := {X ∈ L ; ψ(X) ≥ a},

and the negative halfspace generated by ψ at the level a as

H_a^−(ψ) := {X ∈ L ; ψ(X) ≤ a}.

The two halfspaces H_a^+ and H_a^− can be seen as being separated by the linear functional ψ in the following sense: If X is a random variable in H_a^+ and Y a random variable in H_a^−, then ψ(Y) ≤ a ≤ ψ(X). In the same vein, it makes sense to view two subsets A and B of L as being separated by ψ precisely when they lie on opposite sides of some hyperplane generated by ψ. Therefore, two sets of random variables A and B are said to be separated by ψ whenever

sup_{X∈A} ψ(X) ≤ inf_{X∈B} ψ(X) or sup_{X∈B} ψ(X) ≤ inf_{X∈A} ψ(X).

Similarly, we say that A and B are strictly separated by ψ whenever

sup_{X∈A} ψ(X) < inf_{X∈B} ψ(X) or sup_{X∈B} ψ(X) < inf_{X∈A} ψ(X).

We focus on the separation of convex sets. The first separation result states that a closed convex set not containing 0 can be strictly separated from the set {0}.

Proposition 4.1.1 Let A ⊂ L be a closed convex set with 0 ∉ A. Then, there exists a linear functional ψ : L → R such that

inf_{X∈A} ψ(X) > 0.

Proof For every r ∈ (0, ∞) consider the set A_r = A ∩ B_r^2(0) and take r large enough so that A_r is nonempty. For instance, we could take r = ‖X‖_2 for some X ∈ A. Note that the ball B_r^2(0) is compact. Being a closed subset of a compact set, A_r is also compact. Since the 2-norm is a continuous function, it attains a minimum on A_r by virtue of the


Weierstrass Theorem, i.e., one can find Z ∈ A_r such that

E_P[Z^2] = ‖Z‖_2^2 = min_{X∈A_r} ‖X‖_2^2 = min_{X∈A_r} E_P[X^2].

By the definition of A_r, we have ‖Z‖_2 ≤ r. Since ‖X‖_2 > r for every X ∈ A \ A_r, it follows that

E_P[Z^2] = min_{X∈A} E_P[X^2].

Now, consider the linear functional ψ : L → R defined by ψ(X) = E_P[ZX]. We show that ψ has the required property. First, observe that for every a ∈ (0, 1] and every X ∈ A we have Z + a(X − Z) = aX + (1 − a)Z ∈ A by the convexity of A. This yields

ψ(Z) = E_P[Z²] ≤ E_P[(Z + a(X − Z))²] = ψ(Z) + 2aψ(X − Z) + a²E_P[(X − Z)²],

whence

ψ(X − Z) + (a/2) E_P[(X − Z)²] ≥ 0.

Since a was an arbitrary number in (0, 1], we infer that ψ(X − Z) ≥ 0. Recalling that X was an arbitrary element of A, we conclude that

inf_{X∈A} ψ(X) ≥ ψ(Z).

It remains to recall that A does not contain the zero random variable and observe that ψ(Z) = E_P[Z²] > 0 by the strict positivity of expectations recorded in Proposition 2.4.6. □

The second separation result is an easy corollary of the preceding proposition and establishes that two disjoint closed convex sets can always be strictly separated if one of them is compact. This result is typically known as the Hahn-Banach Separation Theorem.

Theorem 4.1.2 (Hahn-Banach Separation Theorem) Let A ⊂ L be a closed convex set and B ⊂ L a compact convex set with A ∩ B = ∅. Then, there exists a linear functional ψ : L → R such that

sup_{X∈A} ψ(X) < inf_{X∈B} ψ(X).

Proof Set C = B − A and note that 0 ∉ C because A and B are disjoint. It is easy to see that C is convex. Moreover, being the sum of the compact set B and the closed set −A, it is also closed by Proposition 3.2.9. Hence, by Proposition 4.1.1, there exist a linear functional ψ : L → R and a constant c ∈ (0, ∞) such that

ψ(Y − X) ≥ c > 0 for all X ∈ A and Y ∈ B.

This implies that

ψ(X) < ψ(X) + c/2 < ψ(X) + c ≤ ψ(Y)

for all X ∈ A and Y ∈ B. Consequently,

sup_{X∈A} ψ(X) < sup_{X∈A} ψ(X) + c ≤ inf_{Y∈B} ψ(Y),

proving the desired inequality. □
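To make the construction in the proof of Proposition 4.1.1 concrete, here is a minimal numerical sketch in Python (not part of the original text). All data are invented: a three-state space, a probability P, and a set A given as the convex hull of three random variables chosen so that 0 ∉ A. The minimal-norm point Z is approximated with the Frank-Wolfe method, a standard scheme for minimizing a convex function over a convex hull; the resulting ψ(X) = E_P[ZX] then separates A strictly from 0.

    import numpy as np

    # Invented three-state space and reference probability P
    P = np.array([0.5, 0.25, 0.25])

    def inner(x, y):
        # the inner product (X, Y)_P = E_P[XY]
        return float(np.sum(P * x * y))

    # A = convex hull of these random variables; chosen so that 0 does not lie in A
    V = np.array([[ 2.0, -1.0,  1.0],
                  [-1.0,  2.0,  1.0],
                  [ 1.0,  1.0, -0.5]])

    # Frank-Wolfe iteration for Z = argmin of E_P[X^2] over A (the proof's minimizer)
    Z = V.mean(axis=0)                    # start at the barycentre of A
    for t in range(5000):
        scores = V @ (P * Z)              # linearized objective evaluated at each vertex
        s = V[np.argmin(scores)]          # vertex minimizing the linearization
        Z += (2.0 / (t + 2.0)) * (s - Z)  # standard Frank-Wolfe step size

    psi = lambda X: inner(Z, X)           # candidate separating functional psi(X) = E_P[ZX]

    # a linear functional attains its infimum over a polytope at a vertex
    print("inf of psi over A:", min(psi(v) for v in V))   # strictly positive
    print("psi(0):", psi(np.zeros(3)))                    # equals 0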

A useful special case of the Hahn-Banach Separation Theorem is obtained by focusing on the separation between a linear subspace and a convex compact set. Recall that, given a functional ψ : L → R, the kernel of ψ is the set ker(ψ) := {X ∈ L ; ψ(X) = 0}. The same notation is used for functionals defined on linear subspaces of L; see Definition B.4.4.

Theorem 4.1.3 Let M be a linear subspace of L and let B ⊂ L be a compact convex set with M ∩ B = ∅. Then, there exists a linear functional ψ : L → R such that M ⊂ ker(ψ) and

inf_{X∈B} ψ(X) > 0.

Proof Since M is closed and convex, it follows from Theorem 4.1.2 that

sup_{X∈M} ψ(X) < inf_{X∈B} ψ(X)

for a suitable linear functional ψ : L → R. To conclude the proof, we only need to prove that M ⊂ ker(ψ). To this end, take Y ∈ M and note that

aψ(Y) = ψ(aY) < inf_{X∈B} ψ(X)

must hold for every a ∈ R, which is only possible if ψ(Y) = 0. It follows that M is indeed contained in ker(ψ). □

Corollary 4.1.4 Let M be a linear subspace of L and take X ∈ L \ M. Then there exists a linear functional ψ : L → R such that M ⊂ ker(ψ) and ψ(X) = 1.

Proof Since {X} is a compact convex set that is disjoint from M, it follows immediately from Theorem 4.1.3 that there exists a linear functional ψ : L → R such that M ⊂ ker(ψ) and ψ(X) > 0. Now, define a linear functional ϕ : L → R by setting

ϕ(Y) = ψ(Y)/ψ(X).

It is clear that M ⊂ ker(ϕ) and ϕ(X) = 1, concluding the proof. □

4.2 Extension Results

As we will see later, an important part of mathematical finance has to do with the ability to “extend” a strictly-positive linear functional defined on a given subspace of L to a functional defined on the whole of L so that linearity and strict positivity are preserved. This section is devoted to introducing the notion of an extension and to showing that we can always ensure the existence of linear and strictly positive extensions.

Standing Assumption Throughout the whole section we assume that M ⊂ L is a proper linear subspace of L. In particular, dim(M) < K.

Definition 4.2.1 (Extension) Consider a functional π : M → R and let N be a linear subspace of L such that M ⊂ N. Any functional ψ : N → R satisfying ψ(X) = π(X) for every X ∈ M is called an extension of π (to N). □

Extension of Linear Functionals

We start by pointing out a useful way of establishing whether a functional defined on L is a linear extension of a given linear functional defined on M.


Proposition 4.2.2 Let π : M → R be a linear functional and let X ∈ M \ ker(π). Then, for a linear functional ψ : L → R the following statements are equivalent:

(a) ψ is an extension of π.
(b) ψ(X) = π(X) and ker(π) ⊂ ker(ψ).

Proof It is clear that (a) implies (b). To prove the converse implication, assume that (b) holds and observe that M = ker(π) + span(X). This is because the dimension of ker(π) is equal to dim(M) − 1 by Proposition B.4.7 and X ∉ ker(π). Hence, for every Y ∈ M we find Z ∈ ker(π) and a ∈ R such that Y = Z + aX. Our assumptions then imply that

ψ(Y) = ψ(Z) + aψ(X) = aψ(X) = aπ(X) = π(Z) + aπ(X) = π(Y),

proving that ψ is an extension of π. □

Extending a linear functional π : M → R to a linear functional defined on the whole space L is easy and may be achieved in different ways. One possibility is to use the Hahn-Banach Theorem. Even though there are more direct ways to construct linear extensions, see Exercise 4.4.1, the proof below provides a nice way to practice the application of this important result. Here, we set

E_π(L) := {linear extensions of π to L}.

Theorem 4.2.3 For every linear functional π : M → R the following statements hold:

(i) π admits a linear extension to L.
(ii) For every X ∈ L we have

{ψ(X) ; ψ ∈ E_π(L)} = {π(X)} if X ∈ M, and {ψ(X) ; ψ ∈ E_π(L)} = R if X ∉ M.

In particular, π admits infinitely many linear extensions to L.

Proof We first establish (i). If π is the zero functional on M, then the zero functional on L is an obvious linear extension of π to L. If π is not the zero functional, then we can find U ∈ M such that π(U) = 1 by linearity. Since ker(π) is a linear subspace of L and U ∉ ker(π), we can apply Corollary 4.1.4 to obtain a linear functional ψ : L → R such that ker(π) ⊂ ker(ψ) and ψ(U) = 1. It follows from Proposition 4.2.2 that ψ is a linear extension of π to the entire space L.

Next, we focus on (ii). The desired equality is trivial when X belongs to M, hence assume that X ∈ L \ M. In this case, we can apply Corollary 4.1.4 to obtain a linear functional ϕ : L → R such that M ⊂ ker(ϕ) and ϕ(X) = 1. Now, take any linear extension ψ ∈ E_π(L) and a ∈ R and consider the linear functional ψ_a = ψ + aϕ. It is clear that ψ_a belongs to E_π(L) and satisfies ψ_a(X) = ψ(X) + a for every a ∈ R. This concludes the proof. □

Remark 4.2.4 (Extensions to Subspaces) Let π : M → R be a linear functional and let N be a linear subspace of L containing M. It is clear that, by restricting to N any linear extension of π to the full space L, we obtain a linear extension of π defined on N. Hence, all the results of this section remain valid if we replace L by any subspace N containing the subspace M. □

Extension of Strictly-Positive Linear Functionals

In this section we take up the study of strictly-positive linear extensions and work under the following additional assumption.

Standing Assumption Throughout the remainder of this section we assume that M contains a strictly positive random variable.

The following lemma establishes an interesting “openness” property of strictly-positive linear functionals that will be used to prove the main extension result of this section.

Lemma 4.2.5 Let ϕ : L → R be a linear functional. Then, for every strictly-positive linear functional ψ : L → R there exists a ∈ (0, ∞) such that ψ + bϕ is strictly positive for every b ∈ (−a, a).

Proof Set ψ_b = ψ + bϕ for b ∈ R. Note that ψ(1_ω) > 0 for every ω ∈ Ω. Hence, since Ω is a finite set, we can find a ∈ (0, ∞) such that

ψ_b(1_ω) = ψ(1_ω) + bϕ(1_ω) > 0

for every ω ∈ Ω and every b ∈ (−a, a). By Proposition 1.7.2, this means that ψ_b is strictly positive for every b ∈ (−a, a). □

We can now prove that every strictly-positive linear functional π : M → R admits infinitely many extensions to a strictly-positive linear functional defined on the entire space of random variables L. Here, we set

E_π⁺(L) := {strictly-positive linear extensions of π to L}.


Theorem 4.2.6 Let π : M → R be a strictly-positive linear functional. Then the following statements hold:

(i) π admits a strictly-positive linear extension to L.
(ii) For every X ∈ L \ M the set {ψ(X) ; ψ ∈ E_π⁺(L)} is a (nonempty) bounded open interval.

In particular, π admits infinitely many strictly-positive linear extensions to L.

Proof To establish (i), let B = {1_ω1, . . . , 1_ωK} be the canonical basis of L and recall from Proposition 3.3.2 that its convex hull co(B) is compact. Since π is strictly positive and every element of co(B) is nonzero positive, ker(π) ∩ co(B) = ∅. Therefore, by Theorem 4.1.3, there exists a linear functional ϕ : L → R satisfying ker(π) ⊂ ker(ϕ) and

inf_{X∈co(B)} ϕ(X) > 0.

We infer from Proposition 1.7.2 that ϕ is strictly positive. Take now a nonzero positive U ∈ M and note that π(U) > 0 as well as ϕ(U) > 0 by strict positivity. As a result, we can define a strictly-positive linear functional ψ : L → R by setting

ψ(X) = (π(U)/ϕ(U)) ϕ(X).

Since, clearly, ψ(U) = π(U) and ker(π) ⊂ ker(ψ), Proposition 4.2.2 shows that ψ is a strictly-positive linear extension of π to L.

Next, we focus on (ii). For convenience, for every X ∈ L \ M we set

I(X) = {ψ(X) ; ψ ∈ E_π⁺(L)}.

To show that I(X) is an interval, take p, q ∈ I(X) and note that p = ϕ(X) and q = ψ(X) for some ϕ, ψ ∈ E_π⁺(L). For every a ∈ [0, 1], the functional aϕ + (1 − a)ψ clearly belongs to E_π⁺(L) and therefore

ap + (1 − a)q = aϕ(X) + (1 − a)ψ(X) ∈ I(X).

This shows that I(X) is a convex subset of R and, thus, an interval. To prove that I(X) is bounded, take a strictly positive U ∈ M and n ∈ N large enough so that −nU ≤ X ≤ nU. Then, for every ψ ∈ E_π⁺(L), we have

−nπ(U) = ψ(−nU) ≤ ψ(X) ≤ ψ(nU) = nπ(U)

by monotonicity. Consequently, I(X) ⊂ [−nπ(U), nπ(U)], establishing boundedness. Finally, we need to establish that I(X) is open. To this end, apply Corollary 4.1.4 to the linear subspace M and the compact convex set {X} to obtain a linear functional ϕ : L → R such that M ⊂ ker(ϕ) and ϕ(X) = 1. Let p ∈ I(X) and assume that p = ψ(X) for some ψ ∈ E_π⁺(L). For every b ∈ R the linear functional ψ + bϕ is easily seen to belong to E_π(L). Moreover, by Lemma 4.2.5, the functional ψ + bϕ is strictly positive for b ∈ (−a, a) for some a ∈ (0, ∞). Therefore,

p + b = ψ(X) + bϕ(X) ∈ I(X)

for every b ∈ (−a, a), and so (p − a, p + a) ⊂ I(X). This establishes that I(X) is open. □

In the spirit of Theorem 4.2.3, we would like to complement the above existence result with a characterization of the range of values that the strictly-positive linear extensions of a functional π : M → R can take on a given random variable outside of M. As a preliminary observation, note that, in contrast to simple linear extensions, a strictly-positive linear extension ψ : L → R cannot take arbitrary values on a random variable X ∈ L \ M. Indeed, we must have

π(W) = ψ(W) < ψ(X) < ψ(Z) = π(Z)

for all W, Z ∈ M such that W < X < Z. This motivates the following definition.

Definition 4.2.7 Let π : M → R be a strictly-positive linear functional. For X ∈ L and p ∈ R we say that p satisfies the π-compatibility condition at X whenever the following conditions are satisfied:

(1) p > π(Z) for every Z ∈ M such that Z < X.
(2) p < π(Z) for every Z ∈ M such that Z > X.

The set of all such p’s is denoted by Π(X). Moreover, we set

π⁻(X) := inf Π(X) and π⁺(X) := sup Π(X). □

We now show that the values attained by the strictly-positive linear extensions of a given functional correspond precisely to the numbers that satisfy the above compatibility condition.

Theorem 4.2.8 Let π : M → R be a strictly-positive linear functional. Then, for every X ∈ L we have

Π(X) = {ψ(X) ; ψ ∈ E_π⁺(L)}.


In particular, the following statements hold:

(i) π⁻(X) = inf{ψ(X) ; ψ ∈ E_π⁺(L)}.
(ii) π⁺(X) = sup{ψ(X) ; ψ ∈ E_π⁺(L)}.

Proof The inclusion “⊂” follows from the discussion preceding Definition 4.2.7. To show the inclusion “⊃”, take an arbitrary p ∈ Π(X). Assume first that X ∈ M and take a strictly-positive random variable U ∈ M. Then, for every n ∈ N we easily see that X − (1/n)U < X < X + (1/n)U, so that

π(X) − (1/n)π(U) = π(X − (1/n)U) < p < π(X + (1/n)U) = π(X) + (1/n)π(U).

Letting n → ∞ yields p = π(X) and establishes the desired inclusion.

Now, assume that X ∈ L \ M and set N = M + span(X). Define a functional ϕ : N → R by setting ϕ(Z + aX) = π(Z) + ap. Note that ϕ is well defined. Indeed, for all Z, W ∈ M and a, b ∈ R such that Z + aX = W + bX we must have Z − W = (b − a)X and, hence, Z = W and a = b because X does not belong to M, so that ϕ(Z + aX) = ϕ(W + bX). It is clear that ϕ is linear and satisfies ϕ(X) = p. We claim that ϕ is strictly positive. To see this, take a nonzero positive Y ∈ N. Then Y = Z + aX for suitable Z ∈ M and a ∈ R. If a = 0, then ϕ(Y) = π(Z) > 0 by the strict positivity of π. If a > 0, then −(1/a)Z < X and thus

−(1/a)π(Z) = π(−(1/a)Z) < p = ϕ(X),

where the strict inequality holds because p satisfies the π-compatibility condition at X. Rearranging delivers ϕ(Y) > 0. Similarly, if a < 0, then −(1/a)Z > X and thus

ϕ(X) = p < π(−(1/a)Z) = −(1/a)π(Z),

where the strict inequality again holds because p satisfies the π-compatibility condition at X. Rearranging and bearing in mind that a < 0 yields ϕ(Y) > 0. This shows that ϕ is strictly positive. To conclude the proof of the inclusion, it suffices to observe that any strictly-positive linear extension of ϕ to the full space L, which exists by Theorem 4.2.6, is automatically a strictly-positive linear extension of π assigning to X the value p. □

The preceding results allow us to easily derive a variety of properties of the bounds introduced in Definition 4.2.7.


Proposition 4.2.9 Let π : M → R be a strictly-positive linear functional. Then, the following statements hold:

(i) For every X ∈ L we have π⁺(X) = −π⁻(−X).
(ii) π⁻ : L → R is a superlinear increasing extension of π such that:
    (a) π⁻(X) < 0 for every X ∈ L such that X < 0.
    (b) π⁻(X + Z) = π⁻(X) + π(Z) for all X ∈ L and Z ∈ M.
(iii) π⁺ : L → R is a sublinear increasing extension of π such that:
    (a) π⁺(X) > 0 for every X ∈ L such that X > 0.
    (b) π⁺(X + Z) = π⁺(X) + π(Z) for all X ∈ L and Z ∈ M.
(iv) For every X ∈ M we have π⁻(X) = π⁺(X) = π(X) and Π(X) = {π(X)}.
(v) For every X ∈ L \ M we have π⁻(X) < π⁺(X) and Π(X) = (π⁻(X), π⁺(X)).

Proof To prove (i), it suffices to note that, by Theorem 4.2.8, for every X ∈ L we have

π⁻(−X) = inf_{ψ∈E_π⁺(L)} ψ(−X) = inf_{ψ∈E_π⁺(L)} {−ψ(X)} = − sup_{ψ∈E_π⁺(L)} ψ(X) = −π⁺(X).

Next, we focus on (ii). It follows from Theorem 4.2.6 that π⁻ is finitely valued. By Theorem 4.2.8,

π⁻(X) = inf_{ψ∈E_π⁺(L)} ψ(X)

for every X ∈ L. This immediately shows that π⁻ is an extension of π satisfying

π⁻(X + Y) = inf_{ψ∈E_π⁺(L)} {ψ(X) + ψ(Y)} ≥ inf_{ψ∈E_π⁺(L)} ψ(X) + inf_{ψ∈E_π⁺(L)} ψ(Y) = π⁻(X) + π⁻(Y)

for all X, Y ∈ L as well as

π⁻(aX) = inf_{ψ∈E_π⁺(L)} {aψ(X)} = a inf_{ψ∈E_π⁺(L)} ψ(X) = aπ⁻(X)

for all X ∈ L and a ∈ [0, ∞). Hence, π⁻ is superlinear. To establish monotonicity, take X, Y ∈ L such that X ≥ Y and note that ψ(X) ≥ ψ(Y) for every ψ ∈ E_π⁺(L). Then, we see that

π⁻(X) = inf_{ψ∈E_π⁺(L)} ψ(X) ≥ inf_{ψ∈E_π⁺(L)} ψ(Y) = π⁻(Y),

showing that π⁻ is increasing. To prove that (a) holds, take X ∈ L such that X < 0 and note that ψ(X) < 0 for every ψ ∈ E_π⁺(L). Consequently,

π⁻(X) = inf_{ψ∈E_π⁺(L)} ψ(X) < 0.

Finally, to prove that (b) holds, take X ∈ L and Z ∈ M and observe that

π⁻(X + Z) = inf_{ψ∈E_π⁺(L)} {ψ(X) + π(Z)} = inf_{ψ∈E_π⁺(L)} ψ(X) + π(Z) = π⁻(X) + π(Z).

The proof of (iii) can be obtained by following the lines of the proof of item (ii). Alternatively, it follows by combining (i) and (ii). Assertion (iv) is a direct consequence of Theorem 4.2.8. Finally, assertion (v) follows by combining Theorem 4.2.6 and Theorem 4.2.8. □

We conclude by showing a special representation of the bounds introduced in Definition 4.2.7.

Proposition 4.2.10 Let π : M → R be a strictly-positive linear functional. Then, for every X ∈ L the following statements hold:

(i) π⁻(X) = sup{π(Z) ; Z ∈ M, Z ≤ X}.
(ii) π⁺(X) = inf{π(Z) ; Z ∈ M, Z ≥ X}.

Moreover, the supremum in (i) and the infimum in (ii) are attained.

Proof It follows from Theorem 4.2.8 that

π⁻(X) ≥ sup{π(Z) ; Z ∈ M, Z ≤ X},   (4.1)
π⁺(X) ≤ inf{π(Z) ; Z ∈ M, Z ≥ X}.   (4.2)

If X ∈ M, then item (iv) in Proposition 4.2.9 implies

π⁻(X) = π(X) ≤ sup{π(Z) ; Z ∈ M, Z ≤ X},
π⁺(X) = π(X) ≥ inf{π(Z) ; Z ∈ M, Z ≥ X}.

In view of (4.1) and (4.2), this shows the desired assertions. Now, assume that X ∈ L \ M. It follows from item (v) in Proposition 4.2.9 that neither π⁻(X) nor π⁺(X) can satisfy the π-compatibility condition at X. Note that for all W, Z ∈ M with W ≤ X ≤ Z we have

π(W) ≤ π⁻(X) < π⁺(X) ≤ π(Z)

again by item (v) in Proposition 4.2.9 and by (4.1) and (4.2). Then, the violation of the π-compatibility condition implies that π⁻(X) = π(W) for some W ∈ M with W ≤ X and, similarly, π⁺(X) = π(Z) for some Z ∈ M with Z ≥ X. In view of (4.1) and (4.2), this delivers the desired assertions and concludes the proof. □

Remark 4.2.11 (Extensions to Subspaces) Let π : M → R be a strictly-positive linear functional and let N be a linear subspace of L containing M. It is clear that, by restricting to N any strictly-positive linear extension of π to the full space L, we automatically obtain a strictly-positive linear extension of π to the subspace N. Hence, all the results of this section remain valid if we replace L by any subspace N containing M. □
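Proposition 4.2.10 makes the bounds computable: each of π⁻(X) and π⁺(X) is a linear program over the coefficients of Z ∈ M. The following Python sketch illustrates this; all market data are invented, and the spanning vectors of M together with the values of π on them are assumptions of the example. It uses scipy.optimize.linprog.

    import numpy as np
    from scipy.optimize import linprog

    P = np.array([0.5, 0.3, 0.2])             # invented three-state space
    gens = np.array([[1.0, 1.0, 1.0],         # spanning vectors of M (rows)
                     [2.0, 1.0, 0.0]])
    pi_vals = np.array([0.95, 1.10])          # values of pi on the spanning vectors (assumed)

    X = np.array([1.5, 0.5, 1.0])             # random variable to be bounded

    free = [(None, None)] * len(gens)         # the coefficients are unconstrained

    # pi^-(X) = sup { pi(Z) ; Z in M, Z <= X }: maximize pi_vals @ a s.t. gens^T a <= X
    lo = linprog(-pi_vals, A_ub=gens.T, b_ub=X, bounds=free)
    # pi^+(X) = inf { pi(Z) ; Z in M, Z >= X }: minimize pi_vals @ a s.t. gens^T a >= X
    hi = linprog(pi_vals, A_ub=-gens.T, b_ub=-X, bounds=free)

    print("pi^-(X) =", -lo.fun)
    print("pi^+(X) =", hi.fun)

One can check that the functional defined by these values is strictly positive on M, so the hypotheses of the proposition are met; by Proposition 4.2.9(v), the open interval between the two printed numbers is exactly Π(X) when X ∉ M.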

4.3 Representation Results

Standing Assumption Throughout the whole section we assume that M is a linear subspace of L (which may coincide with the entire L).

In Chap. 3 we have equipped L with the inner product (·, ·)_P : L × L → R defined by (X, Y)_P = E_P[XY]. It is immediate that, for every fixed random variable D ∈ L, the functional π : L → R given by π(X) = (D, X)_P is linear. One important result in the theory of inner-product spaces is the so-called Riesz Representation Theorem, see Proposition D.4.1, which tells us that every linear functional can be “represented” by such a “density” D ∈ L. As we will see, this result is at the center of much of mathematical finance. It is insightful to provide a direct proof in our specific context and obtain an explicit description of the “representing density”. This will also allow us to introduce the required terminology and point out certain subtleties that have to do with the fact that we will often be interested in representations of functionals that are defined on the linear subspace M rather than the whole of L. We start with the notion of a Riesz density.

Definition 4.3.1 (Riesz Density) Let π : M → R be a linear functional. We say that a random variable D ∈ L is a Riesz density for π if

π(X) = E_P[DX]

for every random variable X ∈ M. □

In our setting, the Riesz representation theorem for linear functionals defined on L takes the following form.

Theorem 4.3.2 (Riesz Representation) Every linear functional π : L → R admits a unique Riesz density D ∈ L, given by

D = Σ_{ω∈Ω} (π(1_ω)/P(ω)) 1_ω.

Proof Consider the random variable D ∈ L defined above and take an arbitrary X ∈ L. Using the linearity of π we see that

E_P[DX] = Σ_{ω∈Ω} P(ω) (π(1_ω)/P(ω)) X(ω) = π(Σ_{ω∈Ω} X(ω)1_ω) = π(X).

This shows that D is a Riesz density for π. To see that D is the only Riesz density for π, take any Riesz density D′ ∈ L for π and note that

P(ω)D(ω) = π(1_ω) = E_P[D′1_ω] = P(ω)D′(ω)

for every ω ∈ Ω. Since we have assumed that P(ω) > 0 for every ω ∈ Ω, we infer that D′ must coincide with D, as needed. □
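On a finite space the density of Theorem 4.3.2 is directly computable. A minimal Python sketch, with an invented probability P and an invented linear functional π on L identified with R³:

    import numpy as np

    P = np.array([0.5, 0.3, 0.2])                  # invented state probabilities

    def pi(X):
        # an arbitrary linear functional on L (coefficients made up)
        return 2.0 * X[0] - 1.0 * X[1] + 0.5 * X[2]

    # D(omega) = pi(1_omega) / P(omega), as in Theorem 4.3.2
    indicators = np.eye(3)                         # the payoffs 1_omega as rows
    D = np.array([pi(e) for e in indicators]) / P

    X = np.array([1.0, 4.0, -2.0])                 # any test random variable
    print(np.sum(P * D * X), pi(X))                # E_P[DX] agrees with pi(X)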


Next, we focus on functionals defined on a proper subspace of L and show that they admit an infinity of Riesz densities.

Theorem 4.3.3 Assume that M ≠ L and consider a linear functional π : M → R. For a random variable D ∈ L the following statements are equivalent:

(a) D is a Riesz density for π.
(b) D is the Riesz density for a linear extension of π to L.

In particular, π admits infinitely many Riesz densities.

Proof It is clear that (b) implies (a). Conversely, let D ∈ L be a Riesz density for π and define the functional ψ : L → R by setting ψ(X) = E_P[DX]. Then, by the definition of a Riesz density, ψ is a linear extension of π with (unique) Riesz density D. This shows that (a) implies (b). Since π admits infinitely many linear extensions to L by Theorem 4.2.3, we conclude that π admits infinitely many Riesz densities. □

We complement the above statement by showing that a linear functional defined on M always admits a unique Riesz density belonging to the space M. Recall that P_M denotes the orthogonal projection onto M.

Theorem 4.3.4 Assume that M ≠ L and consider a linear functional π : M → R. Then, for every Riesz density D ∈ L for π the following statements hold:

(i) P_M(D) is a Riesz density for π and P_M(D) ∈ M.
(ii) P_M(D) = P_M(D′) for every Riesz density D′ ∈ L for π.

In particular, π admits a unique Riesz density belonging to M.

Proof Since D − P_M(D) ∈ M⊥ by the definition of orthogonal projection, we immediately see that

E_P[P_M(D)X] = E_P[DX] − E_P[(D − P_M(D))X] = E_P[DX] = π(X)

for every X ∈ M. Hence, P_M(D) is a Riesz density for π. Moreover, P_M(D) belongs to M by definition. In fact, this is the only Riesz density in M. To see this, assume D′ ∈ M is a Riesz density for π and note that

E_P[(D′ − P_M(D))X] = π(X) − π(X) = 0

for every X ∈ M. This implies that D′ − P_M(D) belongs to M⊥. Since D′ − P_M(D) also belongs to M and M ∩ M⊥ = {0} by Proposition 3.6.3, we conclude that D′ = P_M(D). This shows that π admits a unique Riesz density in M and concludes the proof. □
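The unique Riesz density inside M can be computed without first extending π: writing the density as a combination Σ_j c_j m_j of spanning vectors m_j of M, the conditions E_P[density · m_k] = π(m_k) form a Gram system in the coefficients c. A Python sketch with invented data; by Theorem 4.3.4 the result equals P_M(D) for whatever Riesz density D one starts from.

    import numpy as np

    P = np.array([0.5, 0.3, 0.2])            # invented probabilities
    gens = np.array([[1.0, 1.0, 1.0],        # linearly independent spanning vectors of M
                     [2.0, 1.0, 0.0]])
    pi_vals = np.array([0.95, 1.10])         # values of pi on the spanning vectors (assumed)

    # Gram matrix G_jk = (m_j, m_k)_P = E_P[m_j m_k]
    G = gens @ np.diag(P) @ gens.T
    c = np.linalg.solve(G, pi_vals)          # coefficients of the density in the basis of M
    D_M = c @ gens                           # the unique Riesz density lying in M

    # check: E_P[D_M m] = pi(m) on every spanning vector m of M
    print(gens @ (P * D_M), pi_vals)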


Representations and Strict Positivity

We now focus on the Riesz representation of strictly-positive linear functionals.

Standing Assumption Throughout the remainder of this section we assume that M contains a strictly-positive random variable.

For a linear functional, strict positivity can be characterized in terms of the strict positivity of Riesz densities.

Theorem 4.3.5 For every linear functional π : M → R the following statements are equivalent:

(a) π is strictly positive.
(b) π admits a strictly-positive Riesz density.

In this case, π admits a unique strictly-positive Riesz density if M = L and infinitely many strictly-positive Riesz densities if M ≠ L.

Proof That (b) implies (a) follows directly from the strict positivity of expectations established in Proposition 2.4.6. To prove that (a) implies (b), assume that π is strictly positive and let ψ : L → R be a strictly-positive linear extension of π, which exists by Theorem 4.2.6. Moreover, let D_ψ ∈ L be the unique Riesz density for ψ. Note that D_ψ is also a Riesz density for π and that

P(ω)D_ψ(ω) = E_P[D_ψ 1_ω] = ψ(1_ω) > 0

for every ω ∈ Ω. Since P(ω) > 0 for every ω ∈ Ω by our standing assumption, we conclude that D_ψ is strictly positive. If M = L, then D_ψ is the unique Riesz density for π. Otherwise, π admits infinitely many strictly-positive Riesz densities, each being the Riesz density of one of the infinitely many strictly-positive linear extensions of π to L. This follows from Theorem 4.2.8. □

Remark 4.3.6 Let π : M → R be a strictly-positive linear functional and assume that M ≠ L.

(i) We know from Theorem 4.3.3 that π has an infinity of Riesz densities. However, Theorem 4.3.5 does not say that all of them are strictly positive. In fact, we claim that π always admits Riesz densities that are not strictly positive. To see this, take X ∈ L \ M and a ∈ R \ (π⁻(X), π⁺(X)). It follows from Theorem 4.2.3 that π has a linear extension ψ : L → R such that ψ(X) = a. Note that ψ cannot be strictly positive due to Theorem 4.2.8. Then, the unique Riesz density of ψ is a Riesz density for π that is not strictly positive by Theorem 4.3.5.
(ii) It follows from Theorem 4.3.4 that π admits a unique Riesz density in M. As shown by Exercise 4.4.9, it may happen that this special Riesz density is not strictly positive (in fact, it may even fail to be positive). In other words, the strict positivity of π cannot be characterized in terms of the strict positivity of its unique Riesz density in M. □

4.4 Exercises

In all exercises below we assume that (Ω, P) is a finite probability space such that supp(P) = Ω.

Exercise 4.4.1 Let M be a proper linear subspace of L and consider a linear functional π : M → R.

(i) Use Proposition B.3.6 to show that there exists a linear subspace N of L such that L = M + N and M ∩ N = {0}.
(ii) Let ϕ : N → R be a linear functional. Show that the functional ψ : L → R defined by ψ(X) = π(X_M) + ϕ(X_N), where X_M ∈ M and X_N ∈ N satisfy X_M + X_N = X, is a (well-defined) linear extension of π.
(iii) Deduce that π admits infinitely many linear extensions to L.

This provides an alternative proof of Theorem 4.2.3.

Exercise 4.4.2 Let M be a proper linear subspace of L containing a strictly-positive random variable and let π : M → R be a strictly-positive linear functional. Prove that:

(i) π⁻ is neither linear nor strictly increasing; in particular, X > 0 need not imply π⁻(X) > 0.
(ii) π⁺ is neither linear nor strictly increasing; in particular, X < 0 need not imply π⁺(X) < 0.

Exercise 4.4.3 Let M be a proper linear subspace of L containing a strictly-positive random variable U, let π : M → R be a strictly-positive linear functional such that π(U) = 1, and let X ∈ L.


(i) Use the representation in Proposition 4.2.10 to show that

π⁻(X) = sup{a ∈ R ; X − aU ∈ L₊ − ker(π)},
π⁺(X) = inf{a ∈ R ; aU − X ∈ L₊ − ker(π)}.

(ii) Show that L₊ − ker(π) is closed; see Exercise 3.7.7.
(iii) Deduce that the supremum and the infimum in (i) are attained.
(iv) Deduce that π⁻(X) = π(W) and π⁺(X) = π(Z) for some W, Z ∈ M such that W ≤ X ≤ Z.

This provides an alternative way to establish the attainability result in Proposition 4.2.10.

Exercise 4.4.4 Let M be a proper linear subspace of L containing a strictly-positive random variable and let π : M → R be a strictly-positive linear functional satisfying

sup_{X∈A} |π(X)| < ∞

for some set A ⊂ L. Prove that the following statements are equivalent:

(a) There exists c ∈ R such that inf_{X∈A} X ≥ c.
(b) There exists c ∈ R such that sup_{X∈A} X ≤ c.
(c) A is bounded.

Exercise 4.4.5 Let M be a proper linear subspace of L containing a strictly-positive random variable, let π : M → R be a strictly-positive linear functional, and let X ∈ L.

(i) Use Proposition 4.2.10 to show that there exist sequences (W_n), (Z_n) ⊂ M such that W_n ≤ X ≤ Z_n for every n ∈ N and π(W_n) → π⁻(X), π(Z_n) → π⁺(X).
(ii) Use Exercise 4.4.4 to show that (W_n) and (Z_n) are bounded.
(iii) Deduce that π⁻(X) = π(W) and π⁺(X) = π(Z) for some W, Z ∈ M such that W ≤ X ≤ Z.

This provides an alternative way to establish the attainability result in Proposition 4.2.10.

Exercise 4.4.6 Let M be a linear subspace of L and let π : M → R be a strictly-positive linear functional. Show that, if M does not contain a strictly-positive element, then one can always find two random variables X, Y ∈ L such that π⁻(X) = −∞ and π⁺(Y) = ∞.


Exercise 4.4.7 Let M be a proper linear subspace of L and let π : M → R be a linear functional. Show that there exists a one-to-one correspondence between the set of linear extensions of π to the space L and the set of Riesz densities for π.

Exercise 4.4.8 Let M be a proper linear subspace of L and let π : M → R be a strictly-positive linear functional. Show that there exists a one-to-one correspondence between the set of strictly-positive linear extensions of π to the space L and the set of strictly-positive Riesz densities for π.

Exercise 4.4.9 Let Ω = {ω1, ω2, ω3} and P(ω1) = P(ω2) = P(ω3) = 1/3. Consider the random variables

X1 = 1, X2 = 3·1_ω1 − 1_ω2 + 1_ω3, D = 1_ω1 − 1_ω2.

Set M = span(X1, X2) and consider the linear functional π : M → R defined by π(X1) = 1, π(X2) = 0.

(i) Prove that π is strictly positive.
(ii) Characterize all the Riesz densities of π.
(iii) Show that D is the only Riesz density of π belonging to M.

Exercise 4.4.10 Let M be a linear subspace of L containing the constant random variable 1. Prove that for a linear functional π : M → R the following statements are equivalent:

(a) There exists a probability measure Q that is equivalent to P and such that dQ/dP is a Riesz density for π.
(b) There exists a probability measure Q that is equivalent to P and such that for every X ∈ M we have π(X) = E_Q[X].
(c) π is strictly positive and π(1) = 1.

Show that, if any of the above statements holds, then there exists a unique probability measure as above if M = L and infinitely many otherwise.

5 Single-Period Financial Markets

With this chapter we begin our study of financial markets. We consider a single-period economy where future uncertainty is modelled by a finite number of possible outcomes. At the initial date, agents can buy or sell a finite number of basic securities for a fixed price. Each of these securities entitles them to a terminal payoff, which depends on the future prevailing state of the economy. Through their trading activity agents set up portfolios that generate a payoff. If a payoff can be generated in this way it is said to be replicable. The market is said to be complete if every conceivable payoff is replicable. In some sense, this chapter is mainly meant to establish terminology and can be viewed as a dictionary between the language of mathematical finance and that of linear algebra.

5.1 The Elements of the Market

This section describes the basic model of a single-period financial market. For conciseness we sometimes refer to the “financial market” simply as the “market”. The assumptions and notation introduced here will be adhered to throughout the next sections and chapters.

The Single-Period Economy

We consider an economy with two dates, which are respectively called:

• the initial date (t = 0).
• the terminal date (t = 1).


At the initial date the state of the economy is known, but at the terminal date the economy can be in any one of K different states, which are represented by the different elements of a sample space Ω = {ω1, . . . , ωK}. In this context, the elements of Ω are often referred to as the (terminal) states of the economy. We also fix a reference probability measure P satisfying supp(P) = Ω. In other words, we have P(ω) > 0 for every ω ∈ Ω, so that each state of the economy is possible. The probability P is often interpreted as an “objective” probability measure assigning to each state its “true” probability of occurrence and which is recognized as such by all agents. However, one may also think of P as being a “subjective” probability measure representing the beliefs of a particular agent. Be that as it may, most of our results do not require any probability measure to be singled out. In fact, the only thing that is needed is that agents agree that each state in Ω is a possible future state of the economy. This is reflected in the requirement supp(P) = Ω. The explicit choice of a probability measure will only matter in Chap. 7 and in Chap. 8 when we discuss certain representations of the pricing functional to be introduced later on.

The Unit of Account

In our economy agents can make and receive payments at both the initial and the terminal date. We assume that all payments are expressed in a common unit of account or numéraire. A numéraire is a tradeable security or currency that is used as a unit of account. This means that all payments are expressed in units of the numéraire. The tradeable security or currency is not per se a numéraire, but becomes the numéraire once we start using it as a unit of account. Having a common unit of account allows us to compare and add up payments. Typically, the unit of account is a currency such as EUR, USD, or YEN, and for now it suffices to think of the accounting unit as a fixed currency which we call the numéraire currency. However, it is also possible, and sometimes convenient, to express payments in number of units of a traded security or portfolio of traded securities. We explore these examples as well as the process of converting payments from one unit of account into another in Sect. 5.5.

Payments at the Initial Date

A payment at date 0 is represented by a real number, which has to be interpreted as a number of units of the chosen unit of account. Payments can be made or received. In mathematical finance one allows payments that are made or received to be both positive and negative. The convention is as follows. Let p ∈ R denote a payment that an agent makes. If p is positive, then the agent actually pays the amount p, while if p is negative the agent actually receives the amount −p. Similarly, if p denotes a payment that an agent receives, then p being positive means that the agent actually receives the amount p and being negative that the agent pays −p. Hence, making a payment of p is the same as receiving a payment of −p. This convention, which at first may sound peculiar or confusing, turns out to be extremely useful.

Payments at the Terminal Date

A payment at date 1 is allowed to depend on the prevailing state of the economy. For this reason, one often says that payments at the terminal date are state contingent. The natural way to represent a state-contingent payment is by a random variable X ∈ L. For each state ω ∈ Ω, the number X(ω) ∈ R represents a payment contingent on the state ω. Here, too, we distinguish between a payment that is made or received and follow the same rule for interpreting the sign of a payment as for payments at the initial date. Namely, if X(ω) is positive, then the agent pays the amount X(ω), while, if X(ω) is negative, the agent receives the amount −X(ω). Similarly, if X(ω) denotes a payment that an agent receives, then X(ω) being positive means that the agent receives the amount X(ω) and being negative that the agent pays the amount −X(ω).

Financial Contracts

A financial contract is any agreement between two agents that entitles one party, referred to as the buyer or the owner, to receive from the other party, called the seller or the issuer, a state-contingent payment at the terminal date in exchange for a payment at the initial date. The buyer pays a price p ∈ R in exchange for the financial contract. Prices are understood as payments made by the buyer. In line with our convention, if p is positive, then the buyer pays the amount p to the seller. However, if p is negative, then the buyer actually obtains the amount −p from the seller. The payoff of a financial contract is represented by a random variable X ∈ L. For each ω ∈ Ω, the number X(ω) ∈ R represents the payoff, expressed in the fixed unit of account, to be received at the terminal date. In line with our convention, the buyer is entitled to obtain a payment of X(ω) whenever X(ω) is positive and is committed to effecting a payment of −X(ω) whenever X(ω) is negative. A positive payoff, i.e., a payoff that is positive in every future state of the economy, is also called a claim. Every random variable can be viewed as the payoff of a given financial contract. The set of payoffs is denoted by X, i.e.,

X := {payoffs of all financial contracts}.

Even though the set X formally coincides with the set of random variables L, we use a new symbol for the space of payoffs because the elements of X carry a specific interpretation: They are payoffs in a particular unit of account. The set of claims is denoted by X₊.


Note that we have distinguished between the owner and the issuer of a contract as a matter of convenience, i.e., to make clear who pays the price and who receives the payoff. If a financial contract has price p ∈ R and payoff X ∈ X and we follow this logic, one can say that one party to the contract “owns” X and pays the price p and the other party “owns” −X and pays the price −p.

The Financial Market

We assume that at date 0 agents are able to buy or sell a finite number of financial instruments with positive payoff, called the basic securities. Throughout the entire study of one-period economies, we stipulate the following fundamental assumptions about the functioning of the market:

There are No Trading Constraints The basic securities are infinitely divisible, i.e., can be traded in arbitrarily small or large quantities. Moreover, “short-selling” is allowed, i.e., agents may sell securities even though they do not own them. In this case, the seller needs to “deliver” the security to the buyer at the terminal date by paying an amount corresponding to the security’s payoff.

The Market Is Frictionless The price at which a security can be bought or sold is the same. Moreover, the price per unit of a given security does not depend on the traded volume. In summary, this means that, at the initial date, there is a unique market price at which each basic security can be bought or sold and, for every λ ∈ R, the cost of buying λ units of a basic security equals λ times the unit price.

All Agents Are Price Takers We assume that each investor can buy or sell as many securities as desired without changing their market price. In other words, agents cannot influence the price of traded securities through their trading activity.

Remark 5.1.1 (Short-Selling) Short-selling can be viewed as a borrowing agreement where the repayment is according to the payoff of the basic security that has been sold short. When speculating, short-selling can be used to bet on the value of the security falling: After short-selling a security, the seller invests the proceeds in some other security. If at the terminal date the payoff of the purchased security is higher than the payoff of the security that has been sold short, then the agent makes a profit; otherwise, he or she suffers a loss. □

The Basic Securities

We consider N ∈ N basic traded securities. Given the above assumptions, a full description of the market requires only the specification of the initial price and of the terminal payoff for each of the basic securities. For each i ∈ {1, . . . , N}, the ith basic security is represented by a pair

S^i = (S^i_0, S^i_1) ∈ R × X,

where S^i_0 is the price of the ith security at date 0 and S^i_1 is its payoff at date 1. We always assume that each security has a strictly-positive price and a nonzero positive payoff and that for each state of the economy there is a basic security with nonzero payoff in that state:

(1) S^i_0 > 0 for every i ∈ {1, . . . , N}.
(2) S^i_1 > 0 for every i ∈ {1, . . . , N}.
(3) For every ω ∈ Ω there exists i ∈ {1, . . . , N} such that S^i_1(ω) > 0.

Since all payoffs are positive, condition (3) can be equivalently formulated as:

(3′) For every ω ∈ Ω we have Σ_{i=1}^N S^i_1(ω) > 0.

Example 5.1.2 (Risk-Free Security) In mathematical finance it is standard to assume that one of the basic securities, say S^1, is risk free in the sense that its terminal payoff S^1_1 is a strictly-positive constant. In this case, one can always write S^1_1 = (1 + r)S^1_0 for some r ∈ (−1, ∞), which is interpreted as the interest rate paid by the security. Note that condition (3) above is automatically satisfied when one of the basic securities is risk free. □

Visualizing Basic Securities

In the context of our examples we often adopt a tree-like diagram of the following type to introduce a basic security:

            ω1: 8
           /
    5 ----- ω2: 2
           \
            ω3: 4

This means that we work in a sample space with three states, Ω = {ω1, ω2, ω3}, and we consider a security S with price S0 = 5 and payoff S1 = (S1(ω1), S1(ω2), S1(ω3)) = (8, 2, 4).


As we will see, the above graphical description is especially useful in the context of multi-period financial markets.

Basic Securities in Practice

The abstract description of the financial market we have provided above is all we need for our purposes. Needless to say, that description of securities is very simplified, reducing them to their payoffs and stripping them of any additional features such as, say, voting rights. To convey a flavour of what a concrete market may look like, we give a brief description of the market for securities that are issued by a government or a corporation. For a more thorough introduction to the universe of financial securities we refer to the vast literature on the subject.

Shares One of the main sources of financing for corporations is the issuance of shares or stock. These are securities that give the shareholder the right to participate in the profits of the firm. One can think of the profit being equally distributed amongst the shareholders through dividends that are paid every year. The payoffs associated with a share are risky. This is because the amount of profits a firm generates depends on a variety of unforeseeable factors. If the share entitles to only one large dividend payment at the terminal date, we say it is a non-dividend-paying share. On the other hand, if dividends are paid at intermediate dates, we speak of a dividend-paying share. Of course, in a one-period model, this distinction is not necessary. In our modelling assumptions for the basic securities we have not allowed dividend payments. While important in practice, this is not central for our theory and would only unnecessarily complicate notation.

Bonds Corporations can also meet their financing needs by issuing bonds. These are financial contracts whereby a firm borrows an amount of money, called the principal, which is repaid according to different modalities along with a pre-agreed interest on the principal. If we assume that the firm will never fail to meet its obligations, i.e., if we assume that bond issuers never default, then the payments received by the bondholder are known in advance and independent of the state of the economy prevailing when the payment is due, i.e., they are risk free. For this reason, bonds are often referred to as fixed-income securities. Bonds are senior to shares, i.e., before distributing profits to shareholders, the firm must first repay bondholders. Since profits are risky, it is possible that they do not suffice to pay bondholders, leading to the firm defaulting on its obligations. Hence, the assumption that bond issuers never default is a strong idealization. In our abstract modelling environment this can be taken into account by making a difference between the nominal payoff and the actual payoff, which takes into account the states in which the issuer defaults. In a zero-coupon bond, the principal and any interest on it are repaid at maturity of the bond. A coupon-paying bond also repays the principal at maturity, but has interest payments, called the coupons, at intermediate dates. Of course, this distinction only makes sense in a multi-period model. In our modelling assumptions for the basic securities we have not allowed coupon payments. Once again, this is not a major restriction for our purposes.

Governments may also issue bonds to meet their financing needs. Because of the ability of governments to print money, government bonds are typically viewed as risk free. Of course, this is only risk free in “nominal” terms, since the printing of money may lead to a loss of purchasing power that translates into government bonds not being risk free in “real” terms.

Financial Derivatives

In practice, many financial contracts have payoffs that depend on the payoff of one or several securities. These contracts are typically called derivative contracts or just derivatives. Their payoff can be expressed in the general form

X = f(S^1_1, . . . , S^N_1),

where f : R^N → R is a function (that may only depend on some, perhaps only one, of its N entries). The basic securities are referred to as the underlying. We provide a few examples.

Example 5.1.3 (Call/Put Option) Fix an index i ∈ {1, . . . , N} and a number p ∈ [0, ∞). Consider the function f : R^N → R given by f(x) = max{x^i − p, 0}. The financial contract with payoff

X = f(S^1_1, . . . , S^N_1) = max{S^i_1 − p, 0}

is called the call option on the ith basic security with strike price p. This derivative pays nothing if S^i_1 is below or equal to p, and the excess of S^i_1 over p otherwise. As such, it gives the right to “purchase” the ith basic security for the price p at the terminal date should this be advantageous for the option holder.

Similarly, we may consider the function f : R^N → R given by f(x) = max{p − x^i, 0}. The financial contract with payoff

X = f(S^1_1, . . . , S^N_1) = max{p − S^i_1, 0}

is called the put option on the ith basic security with strike price p. This derivative pays nothing if S^i_1 is above or equal to p, and pays the difference between p and S^i_1 otherwise. As such, it gives the right to “sell” the ith basic security for the price p at the terminal date should this be advantageous for the option holder. □


Example 5.1.4 (Exchange Option) Fix two distinct indices i, j ∈ {1, . . . , N}. Consider the function f : R^N → R given by f(x) = max{x^j − x^i, 0}. The financial contract with payoff

X = f(S^1_1, . . . , S^N_1) = max{S^j_1 − S^i_1, 0}

is called the exchange option of the ith for the jth basic security. This derivative pays the spread of the jth basic security over the ith basic security if the spread is positive, and zero otherwise. In this sense, it allows one to “exchange” the ith basic security for the jth one should the former be outperformed by the latter. □
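On a finite state space the payoffs in Examples 5.1.3 and 5.1.4 are simple componentwise operations. A short Python sketch with invented security payoffs:

    import numpy as np

    # invented terminal payoffs of two basic securities over three states
    S1_a = np.array([8.0, 2.0, 4.0])    # payoff of the first security
    S1_b = np.array([5.0, 5.0, 1.0])    # payoff of the second security
    p = 3.0                             # strike price

    call = np.maximum(S1_a - p, 0.0)            # call on the first security
    put = np.maximum(p - S1_a, 0.0)             # put on the first security
    exchange = np.maximum(S1_b - S1_a, 0.0)     # exchange the first for the second

    print(call, put, exchange)   # [5. 0. 1.] [0. 1. 0.] [0. 3. 0.]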

The One-Period Model in a Nutshell

Standing Assumption Throughout the chapter we consider a one-period economy with dates t ∈ {0, 1} where the states of the economy at date 1 are represented by the elements of a finite sample space Ω = {ω1, . . . , ωK}. We assume that Ω is equipped with a probability measure P satisfying supp(P) = Ω. The payoff space is denoted by X. We fix N basic securities represented by pairs

S^i = (S^i_0, S^i_1) ∈ (0, ∞) × X₊ \ {0}, i ∈ {1, . . . , N}.

We assume that for each ω ∈ Ω we have S^i_1(ω) > 0 for some i ∈ {1, . . . , N}.

5.2 Portfolios of Securities

By buying and selling in the financial market at the initial date agents can set up portfolios of basic securities.

Definition 5.2.1 (Portfolio) An N-dimensional vector λ = (λ1, . . . , λN) ∈ R^N is called a portfolio (of basic securities). □


For a portfolio λ ∈ R^N the quantity λi represents the number of units of the ith basic security contained in the portfolio. We say that the portfolio is long λi units of the ith security whenever λi > 0, and short −λi units of the ith security whenever λi < 0. In the latter case, this means that the agent holding the portfolio has sold short −λi units of the ith security. There are two natural questions about any portfolio of basic securities: What is the cost of setting it up? And, what payoff does it generate?

Definition 5.2.2 (Portfolio Price/Payoff) Consider a portfolio λ ∈ R^N. The price of λ is the number V0[λ] ∈ R defined by

V0[λ] := Σ_{i=1}^N λi S^i_0.

The payoff of λ is the random variable V1[λ] ∈ X defined by

V1[λ] := Σ_{i=1}^N λi S^i_1. □

For a portfolio λ ∈ R^N the quantity V0[λ] represents the amount an agent would have to pay to set up the portfolio λ at the initial date. In particular, the agent effects the payments associated with those basic securities in which he or she has a long position and receives the payments generated by those basic securities in which he or she has a short position. The random variable V1[λ] represents the state-contingent payment due to the owner of the portfolio at date 1. Here, the agent receives the payments generated by those basic securities in which he or she has a long position and effects the payments associated with those basic securities in which he or she has a short position. It is clear, as stated next, that pricing a portfolio at the initial date and assigning a payoff at the terminal date are linear operations; see Definition B.4.1. The explicit verification is left as an exercise.

Proposition 5.2.3 The following statements hold:

(i) The functional V0 : R^N → R is linear.
(ii) The map V1 : R^N → X is linear.

In many situations it is convenient to have at hand a standard portfolio with strictly-positive price and strictly-positive terminal payoff. Our choice is the equal-weight market portfolio introduced next.


Example 5.2.4 (Equal-Weight Market Portfolio) The equal-weight market portfolio is defined by

η := (1, . . . , 1) ∈ R^N.

The portfolio η corresponds to holding one unit of each basic security. Since all basic securities are assumed to have a strictly-positive price, it is clear that η has a strictly-positive price as well, i.e.,

V0[η] = Σ_{i=1}^N S^i_0 > 0.

Moreover, by our initial assumption on the payoffs of the basic securities, it follows that η delivers a strictly-positive payoff in every state of the economy, i.e.,

V1[η] = Σ_{i=1}^N S^i_1 > 0.

Throughout the book we use these facts without explicit reference. □
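Definition 5.2.2 amounts to a dot product and a matrix-vector product. A Python sketch using invented prices and payoffs, including the equal-weight market portfolio η of Example 5.2.4:

    import numpy as np

    # invented market with N = 2 securities and K = 3 states
    S0 = np.array([5.0, 4.0])              # initial prices S_0^i
    S1 = np.array([[8.0, 2.0, 4.0],        # payoff of security 1 across the states
                   [5.0, 5.0, 1.0]])       # payoff of security 2 across the states

    V0 = lambda lam: float(lam @ S0)       # price of the portfolio
    V1 = lambda lam: lam @ S1              # statewise payoff of the portfolio

    lam = np.array([2.0, -1.0])            # long 2 units of security 1, short 1 unit of security 2
    print(V0(lam), V1(lam))                # 6.0 and [11. -1.  7.]

    eta = np.ones(2)                       # equal-weight market portfolio
    print(V0(eta), V1(eta))                # strictly positive: 9.0 and [13.  7.  5.]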

Remark 5.2.5 (Market Portfolios) There are many methods to construct “market portfolios”, i.e., portfolios that contain all basic securities. Since we are only interested in a portfolio that has strictly-positive initial price and strictly-positive terminal payoff, assigning equal weights to all securities will do. In other contexts, however, it may sometimes be more opportune not to choose equal weights. For instance, one could choose weights based on the price of the securities, with securities having a higher price receiving a higher weight. Another method that is often used for constructing stock market indices in real life, where securities are in limited supply, is to choose weights based on the relative size of a company within the total market as measured by its market capitalization, i.e., by the number of outstanding shares times their price. □

5.3 Replicable Payoffs

By setting up a portfolio of traded securities agents can “generate” or “produce” a variety of terminal payoffs. Payoffs that can be obtained in this way are said to be replicable.

Definition 5.3.1 (Replicable Payoff) A payoff X ∈ X is said to be replicable, or attainable, or marketed, if there exists a portfolio λ ∈ R^N such that X = V1[λ]. In this case, λ is called a replicating portfolio for X. The set of replicable payoffs is denoted by M, i.e., we set

M := {X ∈ X ; X = V1[λ] for some λ ∈ R^N}.

The set M is also referred to as the marketed space. □

It follows immediately from the linearity of the payoff map V1 that the marketed space M is a linear subspace of the space of all payoffs X.

Proposition 5.3.2 The following statements hold for all portfolios λ, μ ∈ R^N, all replicable payoffs X, Y ∈ M, and every a ∈ R:

(i) If λ is a replicating portfolio for X and μ is a replicating portfolio for Y, then λ + μ is a replicating portfolio for X + Y.
(ii) If λ is a replicating portfolio for X, then aλ is a replicating portfolio for aX.

Proof Both assertions are immediate consequences of the linearity of V1. The explicit verification is left as an exercise. □

Corollary 5.3.3 The set M is a linear subspace of X containing a strictly-positive payoff.

Proof That M is a linear subspace of X follows immediately from Proposition 5.3.2. We conclude by noting that M contains the strictly-positive payoff V1[η]. □

It is clear from the definition of a replicable payoff that the payoffs of the basic securities span the marketed space.

Proposition 5.3.4 The marketed space is spanned by the payoffs of the basic securities, i.e., M = span(S^1_1, . . . , S^N_1). In particular, dim(M) ≤ min{N, K}.

Proof The first assertion follows immediately since M is, by definition, the set of all linear combinations of the payoffs S^1_1, . . . , S^N_1. As dim(X) = K, we must have dim(M) ≤ K. Moreover, the dimension of M cannot be larger than the number of elements that span it, i.e., dim(M) ≤ N. □

A basic security is said to be redundant if its payoff is a linear combination of the payoffs of the remaining securities.

Definition 5.3.5 (Redundant Security) Let i ∈ {1, . . . , N} be fixed. We say that the ith basic security is redundant if S^i_1 can be expressed as a linear combination of the payoffs S^1_1, . . . , S^{i−1}_1, S^{i+1}_1, . . . , S^N_1. □


By definition, the absence of redundant securities is equivalent to the linear independence of the set {S^1_1, . . . , S^N_1}. Since dim(X) = K, any collection of payoffs in X admits at most K linearly independent elements. Hence, assuming that none of the basic securities is redundant always implies that K ≥ N. This condition is often satisfied, since the number of states of the economy in most realistic models is much higher than the number of basic securities traded in the market.

In principle, a marketed payoff may be replicated by different portfolios. In fact, the replicating portfolio is unique if and only if there are no redundant basic securities.

Proposition 5.3.6 The following statements are equivalent:

(a) None of the basic securities is redundant.
(b) {S^1_1, . . . , S^N_1} is a basis for M.
(c) For every X ∈ M there exists a unique λ ∈ R^N such that X = V1[λ].

If there are redundant securities, then every payoff in M admits infinitely many replicating portfolios.

Proof If none of the basic securities is redundant, then {S^1_1, . . . , S^N_1} is a linearly independent set that spans M and is, thus, a basis for M. Hence, (a) implies (b). Assume now that {S^1_1, . . . , S^N_1} is a basis for M. Since S^1_1, . . . , S^N_1 are linearly independent, for every X ∈ M there are unique scalars λ1, . . . , λN ∈ R such that

Σ_{i=1}^N λi S^i_1 = X.

This follows from Proposition B.3.4. In other words, λ = (λ1, . . . , λN) ∈ R^N is the unique replicating portfolio for X. This shows that (b) implies (c). Next, assume that every marketed payoff has a unique replicating portfolio. Then, since the zero payoff can always be replicated by the zero portfolio 0 ∈ R^N, for each portfolio λ ∈ R^N such that

Σ_{i=1}^N λi S^i_1 = V1[λ] = 0

we must have λ = 0. This implies that S^1_1, . . . , S^N_1 are linearly independent or, equivalently, that none of the basic securities is redundant. Hence, (c) implies (a) and this concludes the proof of the equivalence. Finally, if we have redundant basic securities, there exists a nonzero portfolio λ ∈ R^N replicating the zero payoff, i.e., such that V1[λ] = 0. Take any X ∈ M and let μ ∈ R^N be a replicating portfolio for X. Then, for each a ∈ R, the portfolio μ + aλ is easily seen to replicate X, showing that X admits infinitely many replicating portfolios. □
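Numerically, replicability is a linear system in λ and redundancy is a rank question (Proposition 5.3.6). A Python sketch reusing the invented market above:

    import numpy as np

    S1 = np.array([[8.0, 2.0, 4.0],        # invented payoffs of the basic securities (rows)
                   [5.0, 5.0, 1.0]])
    N, K = S1.shape

    # no redundant securities iff the payoff vectors are linearly independent
    print("redundant securities:", np.linalg.matrix_rank(S1) < N)

    # attempt to replicate X: solve V1[lam] = lam @ S1 = X in the least-squares sense
    X = np.array([11.0, -1.0, 7.0])
    lam = np.linalg.lstsq(S1.T, X, rcond=None)[0]
    print("replicable:", np.allclose(S1.T @ lam, X), "portfolio:", lam)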

5.4 Complete Markets

Markets in which every conceivable payoff is replicable deserve special attention.

Definition 5.4.1 (Complete Market) We say that the market is complete if every payoff is replicable, i.e., if M = X. □

We start by showing that the market can only be complete if there are sufficiently many nonredundant securities.

Proposition 5.4.2 The following statements are equivalent:

(a) The market is complete.
(b) There exist K nonredundant basic securities.

In particular, if the market is complete, then N ≥ K.

Proof Recall that M is spanned by the payoffs S^1_1, . . . , S^N_1. Setting m = dim(M), it follows from Proposition B.3.5 that there exists a subset of {S^1_1, . . . , S^N_1} consisting of m linearly independent payoffs forming a basis for M. Recalling that we have dim(X) = K and noting that M = X if and only if m = K, we conclude that the market is complete if and only if there exist K nonredundant basic securities. □

An immediate consequence of the above proposition is that, if there are no redundant securities, then the market is complete if and only if there are as many basic securities as there are states of the economy.

Corollary 5.4.3 Assume that none of the basic securities is redundant. Then, the market is complete if and only if N = K.

A particularly simple type of claim is one that pays one unit of the reference unit of account in one pre-specified state of the economy and nothing in all other states.

Definition 5.4.4 (Arrow-Debreu Payoff) For every ω ∈ Ω the claim 1_ω ∈ X is said to be an Arrow-Debreu payoff. □

Since Arrow-Debreu payoffs form the canonical basis for X, market completeness is equivalent to each of such payoffs being replicable.

116

5 Single-Period Financial Markets

Proposition 5.4.5 The following statements are equivalent: (a) The market is complete. (b) Every Arrow-Debreu payoff is replicable. Proof If the market is complete, then every payoff is replicable. In particular, every Arrow-Debreu payoff is replicable. Conversely, if every Arrow-Debreu payoff is replicable, then 1ω ∈ M for every ω ∈ . Since {1ω1 , . . . , 1ωN } is a basis for X , we conclude that M = X , i.e., the market is complete.  Example 5.4.6 (Arrow-Debreu Market) We speak of an Arrow-Debreu market whenever we have K basic securities of the form S 1 = (S01 , 1ω1 ), . . . , S K = (S0K , 1ωK ) with S01 , . . . , S0K > 0. This market satisfies the assumptions stipulated in Sect. 5.1 and is clearly complete by Proposition 5.4.5. 

5.5

Changing the Unit of Account

When describing our economy, we have chosen a fixed but arbitrary unit of account or numéraire in which payments are expressed. There are two points to be made in this respect: The irrelevance of the choice of the unit of account and the irrelevance of the unit of account in which financial contracts are denominated. First, choosing the unit of account is very much the same as choosing the yardstick used to express distances in a map. Whether distances are measured in miles or kilometers certainly changes the actual figures, but it does not change relative distances: If point A is twice as far from point B when measured in miles, the same is true when distances are measured in kilometers. Hence, for everything where distances matter, it is irrelevant which measure of distance is chosen as long as all distances are measured consistently. Moreover, we can always convert distance from one measure of distance to another without losing information. The same is true in mathematical finance. For this reason, as we develop our subject—introducing new concepts and proving new results—it is important to convince ourselves that they do not depend on the particular choice of the accounting unit: If they hold under one unit of account, they should automatically hold under all conceivable units of account. We highlight this independence where appropriate. Second, choosing one unit of account does not mean that we can only look at financial contracts that are denominated in that particular unit of account. If we have two maps, one of which shows distances in miles and the other in kilometers, we can still compare the information by converting all distances into a common measure, for instance kilometers. We just need to know how to convert miles into kilometers. Similarly, even though different

5.5 Changing the Unit of Account

117

payments may be denominated in different units of account, e.g., currencies, we can express all payments with respect to a fixed currency and proceed just as if all payments were denominated in that fixed currency. Hence, we only need to know the “exchange rates” to convert from any currency into the fixed currency we have chosen as our common denominator. We now describe some typical examples of alternative units of account to the one we fixed at the beginning of this chapter. In doing so we develop a simple general framework that allows us to convert payments from one numéraire into another. For definiteness, we assume that the original unit of account is a numéraire currency. Example 5.5.1 (Numéraire Currency) Assume we wish to express all payments in a new currency; let R0 be the exchange rate from the original currency to the new one at date 0 and, similarly, let R1 be the exchange rate from the original currency to the new one at date 1. In other words, R0 ∈ R represents the number of units of the new currency we obtain in exchange for one unit of the original numéraire currency at date 0. Similarly, R1 ∈ L represents the state-contingent number of units of the new currency we obtain in exchange for one unit of the original numéraire currency at date 1. Clearly, it is meaningful to assume that R0 > 0 and R1 > 0. When converted into the new currency, a payment p ∈ R at date 0 and a state-contingent payment X ∈ X at date 1 expressed in the original  currency become R0 p and R1 X, respectively. Example 5.5.2 (Numéraire Security) Assume that one of the basic securities, say the first, has strictly-positive terminal payoff, i.e., S11 > 0. In this case, it is possible to use this basic security as a numéraire and express initial payments and terminal payoffs in units of this security. To this effect, we set R0 =

1 1 , R1 = 1 . S01 S1

Then, when converted into the new numéraire, a payment p ∈ R at date 0 and a statecontingent payment X ∈ X at date 1 expressed in the original currency become R0 p and R1 X, respectively. In particular, note that R0 S01 = R1 S11 = 1, which reflects the obvious fact that the value of a security relative to itself is always 1. When we use a basic security as the unit of account we call it the numéraire security.  Example 5.5.3 (Numéraire Portfolio) Generalizing the previous example, we can also choose to express initial payments and terminal payoffs in terms of any portfolio θ ∈ RN as long as it satisfies V0 [θ ] > 0 and V1 [θ ] > 0 (for instance, we could take θ to be the

118

5 Single-Period Financial Markets

equal-weight market portfolio). To this end, we set R0 =

1 1 , R1 = . V0 [θ ] V1 [θ ]

Then, when converted into the new numéraire, a payment p ∈ R at date 0 and a state-contingent payment X ∈ X at date 1 expressed in the original currency become R0 p and R1 X, respectively. Also here we have R0 V0 [θ ] = R1 V1 [θ ] = 1. When we use a portfolio as the unit of account, we refer to it as the numéraire portfolio.  In the above examples, we converted a payment expressed in the original numéraire currency into the new numéraire by multiplying a payment at date 0 by a strictly-positive scalar factor R0 ∈ R and a state-contingent payment at date 1 by a strictly-positive random factor R1 ∈ L. We give such pairs (R0 , R1 ) a special name. Definition 5.5.4 (Rescaling Process) Every pair R = (R0 , R1 ) ∈ R × L with strictlypositive components is called a rescaling, or exchange-rate, process. The set of rescaling processes is denoted by R.  As should be clear, we can use rescaling processes to model the conversion of payments from an arbitrary numéraire into another. Indeed, let R ∈ R be the rescaling process converting payments expressed in our original numéraire currency into payments expressed in a new numéraire, which for convenience we call A. Clearly, 1 := R

1 1 , R0 R1



defines a rescaling process that converts payments expressed in numéraire A back into the original numéraire currency. Moreover, if R  ∈ R is the rescaling process converting payments expressed in our original numéraire currency into payments expressed in yet another new numéraire, say B, then R := R

R0 R1 , R0 R1



is a rescaling process that converts payments expressed in numéraire A into payments expressed in numéraire B and is, thus, the “exchange rate” from numéraire A to numéraire B. Remark 5.5.5 (Interpretation of Rescaling) It is worth noting that not every rescaling process represents the exchange rate between actual numéraires. (Recall that we only allowed for a numéraire to be a currency, security, or portfolio.) Nevertheless, we can

5.5 Changing the Unit of Account

119

and will interpret rescaled payments as being expressed in a new unit of account that may possibly result from an “artificial” construct rather than an actual “exchange rate” between existing numéraires.  The Rescaled Market Let R ∈ R be a rescaling process. Consider a financial contract with initial price p ∈ R and terminal payoff X ∈ X expressed in the numéraire currency fixed at the beginning. When expressed in the new unit of account, the initial price and terminal payoff become  = R1 X. p  = R0 p, X We refer to them as the rescaled price and rescaled payoff of the contract (with respect to , i.e. R). The space of rescaled payoffs is denoted by X  := {rescaled payoffs of all financial contracts}. X  both “coincide” with the space L, but they Note that, in pure mathematical terms, X and X have a completely different financial interpretation: The elements in X represent payoffs  payoffs expressed in the new one. expressed in the original unit of account and those in X    = R1 X, then they represent the It is important to note that if X ∈ X and X ∈ X satisfy X “payoff” of the same financial contract: X is the payoff expressed in the original unit of  the payoff expressed in the new unit of account. account and X For each i ∈ {1, . . . , N}, the ith basic security can be represented under the new unit of account by the pair  i i i  S0 ,  S1 = (R0 S0i , R1 S1i ). S =  Since the “exchange rates” R0 and R1 are strictly positive, it is immediate to see that all the assumptions on the basic securities stipulated for the original unit of account continue to hold after conversion into the new unit of account, i.e., (1) (2) (3)

 S0i > 0 for every i ∈ {1, . . . , N}.  S1i  0 for every i ∈ {1, . . . , N}. For every ω ∈  there exists i ∈ {1, . . . , N} such that  S1i (ω) > 0.

As the theory developed so far is exclusively based on these initial assumptions on the basic securities, it follows that all the results established in this section remain valid in the context of every rescaled market. This is comforting because, being arbitrary, the choice of the original accounting unit should not have played any special role.

120

5 Single-Period Financial Markets

Remark 5.5.6 As stated at the beginning of this section, it is important to highlight which concepts do not depend on the particular choice of the accounting unit. The main structural notion introduced in this chapter is that of market completeness. It is not difficult to prove, see Exercise 5.6.9, that whether or not the market is complete does not depend on the unit of account.  Remark 5.5.7 (Notation for Rescaled Payments) Throughout this book we use the above “tilde” notation to distinguish payments after “rescaling” from payments expressed in the original unit of account; see Exercise 5.6.9. Note that this notation is imprecise because it does not make explicit reference to the dependence on the chosen rescaling process. However, this omission should cause no confusion because we only deal with one rescaling process at a time. Note that we do not need especial notation for portfolios because their entries represent units of the respective basic securities (and not of their payoffs).  “Changing the Numéraire” vs “Discounting” In the mathematical finance literature the process of expressing payments in a particular numéraire is often called discounting and the corresponding rescaled payments are referred to as discounted payments. The economic rationale is typically given when a risk-free security

: 1

1+r .. . 1+r

with r ∈ (−1, ∞) is chosen as numéraire security. The argument goes as follows: A unit of currency at date 0 cannot be compared to a unit of currency at date 1, but they can be made “comparable” by considering discounted prices with respect to the risk-free security. This explanation is, however, not entirely convincing. Discounting with respect to the risk-free security only allows us to compare a given state-independent payoff at date 1 (whose discounted payoff is, in fact, the corresponding price) with an amount of currency at date 0. For any other payoff at date 1 the associated discounted payoff continues to be a random variable and continues to be as incomparable with an amount of currency at date 0 as before. The only way to make payoffs at date 1 “comparable” with amounts of the currency at date 0 is by considering their prices at date 0. To avoid confusion, we refrain from using the terminology of “discounting”.

5.6 Exercises

5.6

121

Exercises

In all exercises below we consider the one-period economy described in Sect. 5.1 and adhere to the market specifications introduced there. Exercise 5.6.1 Prove that both the pricing functional V0 : RN → R and the payoff map V1 : RN → X are linear. Exercise 5.6.2 Show that for all portfolios λ, μ ∈ RN , all replicable payoffs X, Y ∈ M, and every scalar a ∈ R the following statements hold: (i) If λ is a replicating portfolio for X and μ is a replicating portfolio for Y , then λ + μ is a replicating portfolio for X + Y . (ii) If λ is a replicating portfolio for X, then aλ is a replicating portfolio for aX. Exercise 5.6.3 Show that the following statements are equivalent: (a) The map V1 : RN → X is injective, i.e., for all portfolios λ, μ ∈ RN we have V1 [λ] = V1 [μ] ⇒ λ = μ. (b) None of the basic securities is redundant. Exercise 5.6.4 (Put-Call Parity) Let i ∈ {1, . . . , N} and p ∈ (0, ∞). Show that the Put-Call Parity (with respect to the ith basic security) holds, i.e., max{S1i − p, 0} + p = max{p − S1i , 0} + S1i . This establishes a link between the payoff of a call and a put option with the same underlying and strike price. Exercise 5.6.5 Let  = {ω1 , ω2 } and consider the basic securities defined by 1

: 1

0 2

2

: 5

(i) Show that the market is complete. (ii) Find the replicating portfolios for 1ω1 . (iii) Show that every payoff has a unique replicating portfolio.

6 4

122

5 Single-Period Financial Markets

Exercise 5.6.6 Let  = {ω1 , ω2 } and consider the basic securities defined by 1

(i) (ii) (iii) (iv)

10

: 1

5

2

2

: 2

1

3

1

: 5

7

Show that the market is complete. Find the replicating portfolios for 1ω1 . Show that the market has redundant securities. Show that every payoff has infinitely many replicating portfolios.

Exercise 5.6.7 Let  = {ω1 , ω2 , ω3 } and consider the basic securities defined by

1

(i) (ii) (iii) (iv)

1 1 1

: 1

2

: 2

5 2 1

Show that the market is incomplete. Show that 31ω1 − 1ω3 is replicable. Show that 1ω1 is not replicable. Find a new security S 3 such that the extended market is complete.

Exercise 5.6.8 Let  = {ω1 , . . . , ωK } for K ≥ 2 and consider two basic securities defined by

1

: 1

1+r .. .

1+r

2

: s

(1 + r1 )s .. .

(1 + rK )s

where r, r1 , . . . , rK ∈ (−1, ∞) satisfy r1 ≥ · · · ≥ rK and s ∈ (0, ∞). (i) Show that the market is complete if and only if K = 2 and r1 > rK . (ii) If the market is incomplete, determine what is the minimal number of securities to add to the market in such a way that the extended market is complete. Exercise 5.6.9 Let R ∈ R be a rescaling process. Prove the following statements: 0 [λ] = R0 V0 [λ] for every λ ∈ RN . (i) V 1 [λ] = R1 V1 [λ] for every λ ∈ RN . (ii) V

5.6 Exercises

123

Deduce that the rescaled marketed space satisfies    X    ∈M . M= X∈X; R1 = X  if and only if M = X . This shows that whether or not the In particular, we have M market is complete does not depend on the chosen unit of account.

6

Market-Consistent Prices for Replicable Payoffs

One of the key tenets pervading much of mathematical finance is that it should not be possible to make a riskless profit, i.e., that a potential gain should always be balanced by a potential loss. There are two ways to ensure this. The first way is by prescribing the Law of One Price, which requires that all portfolios that replicate the same payoff have the same price. Indeed, agents could otherwise make an instantaneous riskless profit by short-selling the more expensive portfolio and buying the cheaper one. In a market where the Law of One Price holds it is possible to assign to every replicable payoff a market-consistent price in an unambiguous way. The resulting pricing rule is a linear functional defined on the marketed space. The second way of precluding riskless profits is by assuming the absence of arbitrage opportunities, i.e., the absence of portfolios with a nonzero positive payoff that have a nonpositive price. The absence of arbitrage opportunities is a stronger requirement than the Law of One Price and is equivalent to the strict positivity of the pricing rule.

Standing Assumption Throughout the entire chapter we consider the one-period economy described in Chap. 5 and adhere to the market specifications introduced there.

6.1

Price and Payoff Arbitrage

In this section we introduce the two basic types of arbitrage opportunities, i.e., of ways of setting up portfolios of basic securities that allow agents to make riskless profits. The first way of making a riskless profit is by exploiting price differences for portfolios replicating the same payoff. To see this, assume that two portfolios μ, ξ ∈ RN generate the same payoff, but have different prices. For example, assume that μ is cheaper than ξ , © Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_6

125

126

6 Market-Consistent Prices for Replicable Payoffs

so that V0 [μ] < V0 [ξ ] and V1 [μ] = V1 [ξ ]. In this case, an agent could sell the more expensive portfolio ξ and use the proceeds to buy the cheaper portfolio μ, ending up with the combined portfolio λ = μ − ξ . Clearly, since the payoff of μ exactly offsets the payoff of −ξ , the payoff generated by this portfolio at date 1 is V1 [λ] = V1 [μ] − V1 [ξ ] = 0. At the same time, at date 0 the agent pays the price V0 [λ] = V0 [μ] − V0 [ξ ] < 0, which, being a strictly-negative price, actually represents an instantaneous strictly-positive profit. In this way the agent can make a riskless profit at date 0. Such a strategy is called a price arbitrage. Definition 6.1.1 (Price Arbitrage) A portfolio λ ∈ RN is called a price arbitrage (opportunity) if it satisfies V0 [λ] < 0 and V1 [λ] = 0.  The second way of making a riskless profit is when different but comparable payoffs have “inconsistent” prices. To make this precise, assume that two portfolios μ, ξ ∈ RN satisfy V0 [μ] ≤ V0 [ξ ] and V1 [μ]  V1 [ξ ]. In this case, an agent could sell the less attractive portfolio ξ and buy the more attractive one μ. In balance, the agent would hold the portfolio λ = μ − ξ , which has a negative price V0 [λ] = V0 [μ] − V0 [ξ ] ≤ 0, but a nonzero positive payoff V1 [λ] = V1 [μ] − V1 [ξ ]  0. By doing so, the agent would effectively set up a portfolio λ with nonpositive (possibly strictly-negative) price with a nonzero positive payoff, i.e., with a payoff with the possibility of a strictly-positive gain, but no risk of a loss. Such a strategy is called a payoff arbitrage.

6.1 Price and Payoff Arbitrage

127

Definition 6.1.2 (Payoff Arbitrage) A portfolio λ ∈ RN is called a payoff arbitrage  (opportunity) if it satisfies V0 [λ] ≤ 0 and V1 [λ]  0. It is easy to show that the absence of payoff arbitrage opportunities automatically implies the absence of price arbitrage opportunities. Proposition 6.1.3 If payoff arbitrage opportunities do not exist, then price arbitrage opportunities do not exist either. Proof Assume that λ ∈ RN is a price arbitrage opportunity so that we have V0 [λ] < 0 and V1 [λ] = 0. Define the portfolio μ ∈ RN by setting μ=λ−

V0 [λ] η. V0 [η]

Then, by linearity, we immediately see that V0 [μ] = 0 and V1 [μ] = −

V0 [λ] V1 [η]. V0 [η]

Since η has a strictly-positive price as well as a strictly-positive payoff, it follows that  V1 [μ]  0 (actually V1 [μ] > 0). This proves that μ is a payoff arbitrage. As illustrated by the following example, the converse statement does not generally hold so that a market may allow for payoff arbitrage opportunities even though price arbitrage opportunities do not exist. Example 6.1.4 (Payoff Arbitrage Without Price Arbitrage) Let  = {ω1 , ω2 } and consider a market with two basic securities specified by 1

: 1

1 1

2

: 1

1 2

Since the payoffs S11 and S12 are linearly independent, every payoff admits a unique replicating portfolio by Proposition 5.3.6, so that no price arbitrage opportunities can exist. However, the portfolio λ = (−1, 1) ∈ R2 is an obvious payoff arbitrage opportunity, since V0 [λ] = 0 and V1 [λ] = 1ω2 .  In the sequel we focus on markets where no payoff arbitrage opportunity, hence no price arbitrage opportunity, can exist.

128

6 Market-Consistent Prices for Replicable Payoffs

Definition 6.1.5 (Arbitrage-Free Market) We say that the market is arbitrage free if there exists no payoff arbitrage opportunity (hence, a fortiori, no price arbitrage opportunity).  Remark 6.1.6 (Relevance of Arbitrage-Free Markets) One of the reasons to study arbitrage-free markets is that markets with arbitrage opportunities are not compatible with equilibrium prices. These are prices that the basic traded securities are expected to have when they are traded by a number of agents according to their individual preferences. Each potential price gives rise to a particular supply and demand for the various securities. We speak of equilibrium prices when, at those prices, supply for each of the basic securities matches demand. If the market is not arbitrage free, then no equilibrium prices can exist because, due to their attractiveness, there is always infinite demand for arbitrage opportunities.  Remark 6.1.7 (Changing the Unit of Account) It is immediate to see that a portfolio is a (price) arbitrage opportunity with respect to the fixed underlying numéraire if and only if it is a (price) arbitrage opportunity with respect to any other numéraire. In particular, whether a market is arbitrage free does not depend on the chosen unit of account. This fact will be freely used in the sequel. 

6.2

The Law of One Price

The absence of price arbitrage opportunities is intimately linked to the so-called Law of One Price. Definition 6.2.1 (Law of One Price) We say that the Law of One Price holds if, for all portfolios λ, μ ∈ RN , we have V1 [λ] = V1 [μ] ⇒ V0 [λ] = V0 [μ].



The Law of One Price requires that two portfolios with identical payoffs have the same price. The next result shows that the Law of One Price holds if and only if no price arbitrage opportunities exist. Proposition 6.2.2 The following statements are equivalent: (a) The Law of One Price holds. (b) The market admits no price arbitrage. (c) For every portfolio λ ∈ RN we have V1 [λ] = 0 ⇒ V0 [λ] = 0.

6.3 Market-Consistent Prices

129

Proof First, assume the Law of One Price holds and let λ ∈ RN be a portfolio with V1 [λ] = 0. Since the zero portfolio clearly has zero price and replicates the zero payoff, the Law of One Price implies that we must have V0 [λ] = 0. In particular, no price arbitrage opportunities can exist. This shows that (a) implies (b). Assume now that no price arbitrage opportunity is possible and take a portfolio λ ∈ RN satisfying V1 [λ] = 0. Note that V0 [λ] < 0 is not possible, since otherwise λ would be a price arbitrage opportunity. But V0 [λ] > 0 is also not possible, since otherwise −λ would be a price arbitrage opportunity. Hence, we must have V0 [λ] = 0, and so (b) implies (c). Finally, assume that (c) holds and take two portfolios λ, μ ∈ RN with V1 [λ] = V1 [μ]. Since we have V1 [λ − μ] = 0 by linearity, our assumption implies that V0 [λ − μ] = 0, and thus V0 [λ] = V0 [μ] again by linearity. This establishes that (c) implies (a) and concludes the proof of the proposition.  In an arbitrage-free market the Law of One Price holds automatically. This is an immediate consequence of the preceding proposition and of the fact that, by Proposition 6.1.3, the absence of payoff arbitrage opportunities implies the absence of price arbitrage opportunities. Corollary 6.2.3 If the market is arbitrage free, then the Law of One Price holds. We had seen in Proposition 5.3.6 that, if the market contains no redundant securities, every marketed payoff is replicated by a unique portfolio. Hence, we immediately obtain the following additional sufficient condition for the Law of One Price to hold. Corollary 6.2.4 If none of the basic securities is redundant, then the Law of One Price holds. Remark 6.2.5 (Changing the Unit of Account) It is immediate to verify that the validity of the Law of One Price does not depend on the chosen unit of account. This also follows from Remark 6.1.7 and Proposition 6.2.2. This fact will be freely used in the sequel. 

6.3

Market-Consistent Prices

Recall that, by definition, each marketed payoff X ∈ M can be replicated by a portfolio of basic securities, i.e., there exists a portfolio λ ∈ RN such that X = V1 [λ]. The amount V0 [λ] can be then naturally interpreted as the “cost of producing” or “manufacturing” X by trading in the market of the basic securities. The next proposition, however, shows that the notion of “replication cost” is not well defined unless the Law of One Price holds.

130

6 Market-Consistent Prices for Replicable Payoffs

Proposition 6.3.1 Assume the Law of One Price does not hold. Then, for every replicable payoff X ∈ M and every a ∈ R there exists a portfolio λ ∈ RN such that V0 [λ] = a and V1 [λ] = X. Proof Let μ ∈ RN be a replicating portfolio for X. Since the Law of One Price fails to hold, Proposition 6.2.2 implies that one can find a portfolio ξ ∈ RN replicating the zero payoff and such that V0 [ξ ] = 0. Then, the portfolio λ ∈ RN given by λ=μ+

a − V0 [μ] ξ V0 [ξ ]

is easily seen to satisfy V0 [λ] = a and V1 [λ] = X by linearity.



The above proposition says that, when the Law of One Price fails, every replicable payoff can be replicated at an arbitrary cost. By contrast, when the Law of One Price holds, every portfolio replicating a given payoff has the same price. This makes the following definition work. Definition 6.3.2 (Market-Consistent Price) every replicable payoff X ∈ M we set

Assume the Law of One Price holds. For

π(X) := V0 [λ], where λ ∈ RN is an arbitrary replicating portfolio for X. The number π(X) is called the market-consistent price of X. The functional π : M → R is called the pricing functional.  In our financial market only the basic securities are traded. However, every payoff different from the payoff of a basic security can be transacted privately between agents outside the market. We have called the price of a replicable payoff market consistent because it is the only price at which a replicable payoff should be transacted between rational agents that have access to the market. Indeed, consider a marketed payoff X ∈ M with replicating portfolio λ ∈ RN and assume a seller and a buyer are considering transacting it at the price p ∈ R: • If p > π(X), then no rational buyer would engage in the transaction, since he or she could directly access the market and “buy” the portfolio λ. In this case, the buyer would pay the lower price π(X) for a portfolio that entitles to the same payoff X at date 1. • If p < π(X), then no rational seller would want to engage in the transaction, since he or she could directly access the market and “sell” the portfolio λ, i.e., “buy” the portfolio −λ. By doing so the seller would obtain the higher amount π(X) while facing the same payment X at date 1.

6.4 The Pricing Functional

131

Hence, π(X) is the only price for X that is “consistent” with the prices in the market for the basic securities. Note that we have only defined market-consistent prices for replicable payoffs. Later on, in Chap. 8, we shall extend the notion of a market-consistent price to nonreplicable payoffs as well.

6.4

The Pricing Functional

In the preceding section we have shown that, whenever the Law of One Price holds, there exists a well-defined pricing functional π : M → R. In this section we show that the pricing functional is linear and that the absence of arbitrage opportunities can be characterized by its strict positivity.

Standing Assumption Throughout the entire section we assume that the Law of One Price holds.

The linearity of the pricing functional is an immediate consequence of the linearity of the portfolio valuation functional and of the portfolio payoff map. Theorem 6.4.1 The pricing functional π : M → R is linear. Proof Assume that X = V1 [λ] and Y = V1 [μ] for suitable portfolios λ, μ ∈ RN . For every a ∈ R, the portfolio aλ + μ is easily seen to replicate the marketed payoff aX + Y . Therefore, π(aX + Y ) = V0 [aλ + μ] = aV0[λ] + V0 [μ] = aπ(X) + π(Y ). The linearity of π follows by observing that π(0) = V0 [0] = 0.



Proposition 6.1.3 states that the absence of (payoff) arbitrage opportunities implies the absence of price arbitrage opportunities. In particular, in an arbitrage-free market, the pricing functional π is well defined. The following simple but fundamental result characterizes arbitrage-free markets as markets satisfying the Law of One Price and in which the pricing functional is strictly positive. In other words, arbitrage-free markets are precisely those markets in which every nonzero replicable claim has a strictly-positive market-consistent price.

132

6 Market-Consistent Prices for Replicable Payoffs

Theorem 6.4.2 The following statements are equivalent: (a) The market is arbitrage free. (b) π is strictly positive. Proof To prove that (a) implies (b), assume the market is arbitrage free and take a nonzero claim X ∈ M. Then, for every replicating portfolio λ ∈ RN for X we must have V0 [λ] > 0 by assumption, so that π(X) = V0 [λ] > 0. Hence, the functional π is strictly positive. To prove that (b) implies (a), assume that π is strictly positive. Then, for every portfolio λ ∈ RN with V1 [λ]  0 we have V0 [λ] = π(V1 [λ]) > 0 by the strict positivity of π. This shows that the market is arbitrage free, concluding the proof of the equivalence.  Theorem 6.4.2 justifies our detailed study of strictly-positive linear functionals in Chap. 4. In the next chapter we apply the extension and representation results obtained there to the pricing functional.

6.5

Exercises

In all exercises below we consider the one-period economy described in Chap. 5 and adhere to the market specifications introduced there. Exercise 6.5.1 Show that the following statements are equivalent: (a) The market is arbitrage free. (b) There exists no arbitrage opportunity λ ∈ RN with V0 [λ] = 0. Exercise 6.5.2 Assume that the Law of One Price holds. Show that the following statements are equivalent: (a) The market is arbitrage free. (b) ker(π) ∩ X+ = {0}. Exercise 6.5.3 Let  = {ω1 , ω2 } and consider the basic securities defined by 1

: 1

2 1

2

: 1

(i) Show that the market is complete. (ii) Show that the Law of One Price fails. (iii) Identify all the price arbitrage opportunities.

0 1

3

: 3

4 2

6.5 Exercises

133

(iv) Modify one security in such a way that the new market remains complete, but the Law of One Price holds. Exercise 6.5.4 Let  = {ω1 , ω2 } and consider the basic securities defined by 1

(i) (ii) (iii) (iv)

: 1

4 2

2

: 2

6 3

Show that the market is incomplete. Show that the Law of One Price fails. Identify all the price arbitrage opportunities. Modify one security in such a way that the new market remains incomplete, but the Law of One Price holds.

Exercise 6.5.5 Let  = {ω1 , ω2 } and consider the basic securities defined by 1

(i) (ii) (iii) (iv) (v) (vi)

: 1

0 2

2

: 4

12 9

Show that the market is complete. Show that the Law of One Price holds. Determine the market-consistent price of 41ω1 + 1ω2 . Show that the market is not arbitrage free. Identify all the arbitrage opportunities. Modify one security in such a way that the new market remains complete but becomes arbitrage free.

Exercise 6.5.6 Let  = {ω1 , ω2 , ω3 } and consider the basic securities defined by

1

(i) (ii) (iii) (iv) (v) (vi) (vii)

: 1

1 1 1

2

: 2

2 1 1

Show that the market is incomplete. Show that the Law of One Price holds. Show that 31ω1 + 1{ω2 ,ω3 } is replicable. Determine the market-consistent price of 31ω1 + 1{ω2 ,ω3 } . Show that the market is not arbitrage free. Identify all the arbitrage opportunities. Modify one security in such a way that the new market remains incomplete, but becomes arbitrage free.

134

6 Market-Consistent Prices for Replicable Payoffs

Exercise 6.5.7 Let  = {ω1 , . . . , ωK } for K ≥ 2 and consider two basic securities defined by

1

: 1

1+r .. .

2

: s

1+r

(1 + r1 )s .. .

(1 + rK )s

where r, r1 , . . . , rK ∈ (−1, ∞) satisfy r1 ≥ · · · ≥ rK and s ∈ (0, ∞). (i) Show that the Law of One Price holds if and only if • either r1 > rK , • or r1 = rK = r. (ii) If the Law of One Price fails, identify all the price arbitrage opportunities. (iii) Show that the market is arbitrage free if and only if • either r1 > r > rK , • or r1 = rK = r. (iv) If the market is not arbitrage free, identify all the arbitrage opportunities. Exercise 6.5.8 Let R ∈ R be a rescaling process and assume that the Law of One Price  prove that ∈M holds. For every replicable rescaled payoff X    = R0 π  π X

 X . R1

7

Fundamental Theorem of Asset Pricing

The Fundamental Theorem of Asset Pricing refers to a collection of results characterizing arbitrage-free markets in terms of either extensions or representations of the pricing functional. The first version of the Fundamental Theorem of Asset Pricing states that the absence of arbitrage opportunities is equivalent to the existence of a strictly-positive linear extension of the pricing functional from the marketed space to the entire payoff space. Each of these extensions can be viewed as a hypothetical pricing rule in a complete arbitragefree market under which the original basic securities preserve their prices. The other versions of the Fundamental Theorem of Asset Pricing are based on this extension result and provide useful representations of the pricing functional in terms of Riesz densities. The mathematical prerequisites for this chapter have been developed in detail in Chap. 4.

Standing Assumption Throughout the entire chapter we consider the one-period economy described in Chap. 5 and adhere to the market specifications introduced there. In addition, we work under the Law of One Price.

7.1

Pricing Extensions

By Theorem 6.4.2, the pricing functional of a complete arbitrage-free market is a strictlypositive linear functional defined on the entire payoff space X . We start by highlighting that, conversely, every strictly-positive linear functional defined on X can be viewed as the pricing functional of some complete arbitrage-free market.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_7

135

136

7 Fundamental Theorem of Asset Pricing

Proposition 7.1.1 For every strictly-positive linear functional ψ : X → R the ArrowDebreu market with basic securities given by S 1 = (ψ(1ω1 ), 1ω1 ), . . . , S K = (ψ(1ωK ), 1ωK ) is complete and arbitrage free and its pricing functional coincides with ψ. Proof Since ψ is strictly positive, it is immediate that the above basic securities satisfy the assumptions at the beginning of Chap. 5 and that the corresponding market is complete; see also Example 5.4.6. Note that, for every portfolio λ ∈ RK such that V1 [λ]  0 we have V0 [λ] =

K  i=1

λ ψ(1ωi ) = ψ i

 K

 λ 1ωi i

= ψ(V1 [λ]) > 0

i=1

by the strict positivity of ψ. This shows that the market is arbitrage free. In particular, the Law of One Price holds and we can define a pricing functional π. Since, by definition, the linear functionals π and ψ coincide on the elements of the canonical basis of X , we deduce from Remark 1.4.5 that they must coincide on the entire payoff space X .  Remark 7.1.2 (Characterizing Complete Arbitrage-Free Markets) Note that different sets of basic securities may give rise to a complete arbitrage-free market with the same pricing functional. While these markets certainly differ from a practical perspective, they are not genuinely different from a mathematical finance perspective because, in each of them, payoffs have precisely the same price. If we identify all complete arbitrage-free markets having the same pricing functional, then we may say that there is a one-toone correspondence between complete arbitrage-free markets and strictly-positive linear functionals defined on the payoff space X .  Under the Law of One Price, the absence of arbitrage opportunities was shown to be equivalent to the strict positivity of the pricing functional. This allows us to exploit the extension result for strictly-positive linear functionals obtained in Chap. 4 to show that the pricing functional of an arbitrage-free market can be always extended to the entire payoff space preserving linearity and, more importantly, strict positivity. By Proposition 7.1.1, this extension can be interpreted as the pricing functional of a hypothetical complete arbitrage-free market which “extends” the original market in the sense that, under the arbitrage-free extension, all replicable payoffs retain their original prices. In other words, in an arbitrage-free market, the prices of replicable payoffs are consistent with the prices of payoffs in a hypothetical complete arbitrage-free market. This is the content of our first version of the Fundamental Theorem of Asset Pricing. Before stating it, we introduce a more financially suggestive terminology for a strictly-positive linear extension of the pricing functional.

7.1 Pricing Extensions

137

Definition 7.1.3 (Arbitrage-Free Extension) A linear functional ψ : X → R is called an arbitrage-free extension of π if the following properties hold: (1) ψ(X) = π(X) for every replicable payoff X ∈ M. (2) ψ is strictly positive. The set of arbitrage-free extensions of π is denoted by E(π).



Remark 7.1.4 (On the Interpretation of Arbitrage-Free Extensions) One should not overestimate the economic significance of the market “extension” we have just described. Indeed, if a market is completed by allowing new securities to be traded, market forces will, in general, cause the prices of the original basic securities to change as well, as mentioned in Remark 6.1.6. In this case, the new pricing functional will not be an extension of the old pricing functional. As will be discussed later in this chapter, the power of arbitrage-free extensions lies in enabling us to represent the pricing functional in several useful ways. Moreover, as we will see in Chap. 8, arbitrage-free extensions do contain information about the range of reasonable prices at which nonreplicable payoffs may be transacted outside the market by two rational agents.  We state our first version of the Fundamental Theorem of Asset Pricing, an expression coined in Dybvig and Ross [7]. Theorem 7.1.5 (Fundamental Theorem of Asset Pricing, Version I) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists an arbitrage-free extension of π. In this case, if the market is incomplete, then π admits infinitely many arbitrage-free extensions. Proof If the market is complete, then π is defined on the whole of X . Hence, the desired assertions follow from Theorem 6.4.2. Hence, assume that the market is incomplete. To prove that (a) implies (b), assume that the market is arbitrage free so that, by Theorem 6.4.2, the pricing functional π is strictly positive. Then, it follows from Theorem 4.2.6 that π admits infinitely many strictly-positive linear extensions to the space X . To prove that (b) implies (a), it is enough to note that the existence of an arbitrage-free extension forces π to be strictly positive. It follows that the market is arbitrage free by Theorem 6.4.2. 

138

7.2

7 Fundamental Theorem of Asset Pricing

Pricing Densities

Dybvig and Ross [7] reserved the term Fundamental Theorem of Asset Pricing for the extension result stated in Theorem 7.1.5. In the next two sections we deal with two additional versions of the Fundamental Theorem of Asset Pricing that they were careful to call Representation Theorems because they characterize arbitrage-free markets in terms of the existence of particular representations of the pricing functional. Unfortunately, in the current literature that terminological distinction has virtually disappeared and these Representation Theorems are also referred to as the Fundamental Theorem of Asset Pricing. We will not attempt to undo this development and also refer to them as Fundamental Theorems of Asset Pricing. The first reformulation of the Fundamental Theorem of Asset Pricing is cast in the language of Riesz densities. The key concept is that of a pricing density. Definition 7.2.1 (Pricing Density) A random variable D ∈ L is called a pricing density (for π) if it satisfies the following properties: (1) π(X) = EP [DX] for every replicable payoff X ∈ M. (2) D is strictly positive. The set of pricing densities for π is denoted by D(π).



A pricing density provides a useful representation of the pricing functional that allows to tackle the pricing problem of a marketed payoff in a more efficient way: Instead of first finding a replicating portfolio and then computing its price, we can determine its marketconsistent price directly by computing a suitable expectation applied to the payoff itself. Remark 7.2.2 Recall that for every replicable payoff X ∈ M the market-consistent price π(X) is expressed in units of the numéraire at date 0 and the payoff X is expressed in units of the numéraire at date 1. Since a pricing density D ∈ L satisfies π(X) =



P(ω)D(ω)X(ω),

ω∈

the dimension of D is units of numéraire at date 0 per units of numéraire at date 1. The pricing density D thus provides a means of directly translating a future payoff into a price today. For this reason, a pricing density is also known in the literature under the name of (stochastic) discount factor or price deflator.  Note that a pricing density is just a strictly-positive Riesz density of the pricing functional. The following proposition highlights the strong link between pricing densities and arbitrage-free extensions of the pricing functional. More precisely, it shows that pricing densities correspond exactly to the Riesz densities of arbitrage-free extensions.

7.2 Pricing Densities

139

Proposition 7.2.3 (i) For every arbitrage-free extension ψ ∈ E(π) the Riesz density of ψ is a pricing density for π. (ii) For every pricing density D ∈ D(π) there exists an arbitrage-free extension of π with Riesz density equal to D. Proof To prove (i), take an arbitrage-free extension ψ ∈ E(π). If D ∈ L is the Riesz density of ψ, then for every replicable payoff X ∈ M we clearly have π(X) = ψ(X) = EP [DX]. Moreover, since ψ is strictly positive, it follows from Theorem 4.3.5 that D is strictly positive. Hence, D is a pricing density for π. To establish (ii), take an arbitrary pricing density D ∈ D(π) and define ψ : X → R by ψ(X) = EP [DX]. Clearly, the linear functional ψ coincides with π on the space M and its Riesz density is equal to D. Moreover, since D is strictly positive, ψ is strictly positive by Theorem 4.3.5. This shows that ψ is an arbitrage-free extension of π.  The above proposition enables us to immediately translate the Fundamental Theorem of Asset Pricing into the language of pricing densities. Theorem 7.2.4 (Fundamental Theorem of Asset Pricing, Version II) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists a pricing density for π. In this case, if the market is complete, then π admits a unique pricing density. Otherwise, it admits infinitely many pricing densities. Remark 7.2.5 Recall that, by Theorem 6.4.2, the market is free of arbitrage if and only if the pricing functional π is strictly positive. Consequently, Theorem 7.2.4 can also be derived from the general result on strictly-positive functionals recorded in Theorem 4.3.5. 

140

7.3

7 Fundamental Theorem of Asset Pricing

Pricing Measures

In this section we provide a third version of the Fundamental Theorem of Asset Pricing that is based on the notion of a pricing measure. To introduce this notion, suppose the market is arbitrage free and assume that • 1 is a replicable payoff such that π(1) = 1. This is equivalent to assuming that • there exists a portfolio θ ∈ RN such that V0 [θ ] = V1 [θ ] = 1. In this case, every pricing density D ∈ L satisfies EP [D] = EP [D1] = π(1) = 1 and can be therefore expressed, by Theorem 2.6.5, as a Radon-Nikodym density dQ dP for a suitable probability measure Q that is equivalent to P. The measure Q allows us to represent prices of marketed payoffs as simple expectations. More precisely, by the definition of a Radon-Nikodym density, we have  π(X) = EP [DX] = EP

dQ X = EQ [X] dP

for every replicable payoff X ∈ M. The key observation here is that, as remarked in Example 5.5.3, the above “special” assumptions on the payoff 1 can always be achieved by changing the unit of account and expressing payments in units of a numéraire portfolio θ ∈ RN . In this case, the preceding argument yields  π(X) X = EQ V0 [θ ] V1 [θ ] for every replicable payoff X ∈ M. This can be equivalently written, using the “tilde” notation for rescaled payments, see Remark 5.5.7 and Exercise 6.5.8, as ! "     = EQ X  π X   ∈ M. for every replicable payoff X

7.3 Pricing Measures

141

Standing Assumption Throughout the remainder of this section we fix a numéraire portfolio θ ∈ RN and use the “tilde” notation to denote payments in units of θ .

The probability measure described above is called a pricing measure. Definition 7.3.1 (Pricing Measure) A probability measure Q is called a θ -pricing measure (for π) if the following conditions are satisfied:   ! "   = EQ X  for every replicable payoff X  ∈ M. (1)  π X (2) Q is equivalent to P. The set of θ -pricing measures for π is denoted by Qθ (π).



As suggested by the discussion preceding the definition of a pricing measure, there is a strong link between pricing measures and pricing densities. Proposition 7.3.2 (i) For every pricing measure Q ∈ Qθ (π) we have that dQ π. dP is a pricing density for  (ii) For every pricing density D ∈ D( π ) there exists a pricing measure Q ∈ Qθ (π) such . that D = dQ dP Proof To establish (i), take an arbitrary Q ∈ Qθ (π). Then, the Radon-Nikodym density dQ dP is strictly positive and satisfies    ! " dQ    X  π X = EQ X = EP dP  by Proposition 2.6.4. This shows that dQ is a pricing density for the ∈ M for every X dP  and  rescaled pricing functional  π . To prove (ii), it suffices to observe that 1 ∈ M π (1) = 1 and rely on the argument at the beginning of this section.  The preceding result allows us to derive the following version of the Fundamental Theorem of Asset Pricing. This is a direct consequence of Theorem 7.2.4 once we recall that the absence of arbitrage opportunities does not depend on the choice of the underlying unit of account.

142

7 Fundamental Theorem of Asset Pricing

Theorem 7.3.3 (Fundamental Theorem of Asset Pricing, Version III) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists a θ -pricing measure for π. In this case, if the market is complete, then π admits a unique θ -pricing measure. Otherwise, it admits infinitely many θ -pricing measures. We conclude this section by describing the standard example of a pricing measure discussed in most textbooks on mathematical finance, namely the pricing measure when a risk-free security serves as the numéraire. Example 7.3.4 (Risk-Neutral Measure) Assume the first security is a risk-free security specified by

1

1+r .. .

: 1

1+r

where r ∈ (−1, ∞) is a given interest rate. In this case, the portfolio θ = (1, 0, . . . , 0) ∈ RN satisfies V0 [θ ] = 1 and V1 [θ ] = 1 + r and thus qualifies as a numéraire portfolio. A θ pricing measure is called a risk-neutral measure. To justify this terminology note that, for every risk-neutral measure Q, we have  π(X) = EQ

X 1+r

for every replicable payoff X ∈ M. In particular,  EQ

S1 − S1 X − π(X) =r= 1 1 0 π(X) S0

holds for every X ∈ M with nonzero price. The random variable in the above expectation is called the (rate of ) return on X and represents the increment in the value of X expressed in units of its initial value. Hence, under the “artificial” probability Q, the price of every marketed payoff depends only on its expected payoff and its expected return coincides with the rate of return r of the risk-free security. If Q were to be the real-world probability, this would mean that agents are risk neutral, i.e., they do not require an additional expected

7.4 Exercises

143

return from a risky payoff. In other words, under Q, marketed payoffs are priced as if agents were risk neutral. 

7.4

Exercises

In all exercises below we consider the one-period economy described in Chap. 5 and adhere to the market specifications introduced there. Exercise 7.4.1 Prove that the set of arbitrage-free extensions E(π), the set of pricing densities D(π), and the set of θ -pricing measures Qθ (π), where θ ∈ RN is an arbitrary numéraire portfolio, are in a one-to-one correspondence. More concretely, prove the following statements: (i) The map F : D(π) → E(π) defined by setting for every X ∈ X F (D)(X) = EP [DX] is a (well-defined) one-to-one correspondence. (ii) The map F : Qθ (π) → E(π) defined by setting for every X ∈ X  F (Q)(X) = EQ

V0 [θ ]X V1 [θ ]

is a (well-defined) one-to-one correspondence. In both cases, determine the corresponding inverse maps. In addition, find an explicit oneto-one correspondence between D(π) and Qθ (π) and determine the corresponding inverse map. Exercise 7.4.2 Assume the market is arbitrage free and consider a strictly-positive random variable D ∈ L. Show that the following statements are equivalent: (a) D is a pricing density for π. (b) For every i ∈ {1, . . . , N} we have EP [DS1i ] = S0i . (c) For every i ∈ {1, . . . , N} we have K  k=1

D(ωk )P(ωk )S1i (ωk ) = S0i .

144

7 Fundamental Theorem of Asset Pricing

This shows that a pricing density is uniquely determined by its action on the payoffs of the basic securities. Exercise 7.4.3 (Fundamental Theorem of Asset Pricing) Prove that the following statements are equivalent: (a) The market is arbitrage free. (b) There exist d1 , . . . , dK ∈ (0, ∞) such that for every i ∈ {1, . . . , N} we have K 

dk P(ωk )S1i (ωk ) = S0i .

k=1

In this case, show that the random variable D ∈ L given by D=

K 

dk 1ωk

k=1

is a pricing density for π. This gives a simple operational condition, expressed in terms of the basic securities, to check whether the market is arbitrage free and construct a pricing density. Exercise 7.4.4 Let θ ∈ RN be a candidate numéraire portfolio. Assume the market is arbitrage free and let Q be a probability measure equivalent to P. Show that the following statements are equivalent: (a) Q is a θ -pricing measure for π. (b) For every i ∈ {1, . . . , N} we have  EQ

S1i V1 [θ ]

 =

S0i . V0 [θ ]

(c) For every i ∈ {1, . . . , N} we have K  k=1

Q(ωk )

S0i S1i (ωk ) = . V1 [θ ](ωk ) V0 [θ ]

This shows that a pricing measure is uniquely determined by its action on the payoffs of the basic securities.

7.4 Exercises

145

Exercise 7.4.5 (Fundamental Theorem of Asset Pricing) Let θ ∈ RN be a candidate numéraire portfolio. Prove that the following statements are equivalent: (i) The market is arbitrage free. (ii) There exist q1 , . . . , qK ∈ (0, 1) such that for every i ∈ {1, . . . , N} K  k=1

qk

S0i S1i (ωk ) = . V1 [θ ](ωk ) V0 [θ ]

In this case, show that the coefficients q1 , . . . , qK add up to 1 and the probability measure Q given by Q(ω1 ) = q1 , . . . , Q(ωK ) = qK is a θ -pricing measure for π. This gives a simple operational condition, expressed in terms of the basic securities, to check whether the market is arbitrage free and construct a pricing measure. Exercise 7.4.6 Let  = {ω1 , ω2 } and assume that P(ω1 ) = P(ω2 ) = basic securities defined by 1

: 1

0 2

2

: 4

1 2.

Consider the

12 9

(i) Use Exercise 7.4.3 to show that the market is not arbitrage free. (ii) Confirm point (i) by using Exercise 7.4.5. Exercise 7.4.7 Let  = {ω1 , ω2 , ω3 } and let P(ω1 ) = P(ω2 ) = P(ω3 ) = 13 . Consider the basic securities defined by

1

: 1

1 1 1

2

: 2

5 2 1

Moreover, consider the portfolio θ = (1, 0) ∈ R2 . (i) (ii) (iii) (iv) (v)

Use Exercise 7.4.3 to show that the market is arbitrage free. Determine all the pricing densities for π. Confirm point (i) by using Exercise 7.4.5. Determine all the θ -pricing measures for π. Deduce that the market is incomplete.

146

7 Fundamental Theorem of Asset Pricing

Exercise 7.4.8 Let  = {ω1 , . . . , ωK } for K ≥ 2 and let P(ω1 ) = · · · = P(ωK ) = Consider the basic securities defined by

1

: 1

1+r .. .

1+r

2

: s

1 K.

(1 + r1 )s .. .

(1 + rK )s

where r, r1 , . . . , rK ∈ (−1, ∞) satisfy r1 ≥ · · · ≥ rK and s ∈ (0, ∞). Moreover, consider the portfolio θ = (1, 0) ∈ R2 . (i) Use Exercise 7.4.3 to show that the market is arbitrage free if and only if one of the following two conditions hold: • r1 > r > rK . • r1 = rK = r. (ii) Determine all the pricing densities for π. (iii) Confirm point (i) by using Exercise 7.4.5. (iv) Determine all the θ -pricing measures for π. (v) Deduce that, under no arbitrage, the market is complete if and only if the following two conditions hold: • K = 2. • r1 > rK . Exercise 7.4.9 Let R ∈ R be a rescaling process. Prove the following statements:  → R is an arbitrage-free extension of  (i) A functional ψ : X π if and only if the 1 X) functional ψ  : X → R given by ψ  (X) = ψ(R is an arbitrage-free extension R0 of π. (ii) A random variable D ∈ L is a pricing density for  π if and only if the random variable D  ∈ L given by D  = RR10D is a pricing density for π.

8

Market-Consistent Prices for General Payoffs

In this chapter we extend the concept of a market-consistent price, i.e., a price that is consistent with the prices of the basic securities traded in the market, from replicable payoffs to general, not necessarily replicable, payoffs. Because this extension relies on the strict positivity of the pricing functional, we focus exclusively on arbitrage-free markets. We show that, given the market environment, the set of prices at which rational buyers and sellers will contemplate transacting a nonreplicable payoff is a bounded open interval. We provide a variety of descriptions of this interval. The upper bound is the superreplication price and corresponds to the threshold above which no buyer should be willing to transact. The lower bound is the subreplication price and corresponds to the threshold below which no seller should be willing to transact.

Standing Assumption Throughout the entire chapter we work in the one-period economy described in Chap. 5 and adhere to the market specifications introduced there. In addition, we assume that the market is incomplete and arbitrage free.

8.1

Marketed Bounds

Consider two agents who have access to the market, but are interested in transacting a nonreplicable payoff X ∈ X . One agent is interested in selling and the other in buying X. Since the payoff is not marketed, both agents could, at first sight, transact X at any price they manage to agree on. At second sight, however, there is a range of prices outside of which it would not be “rational” or “attractive”, either for the seller or for the buyer, to transact.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_8

147

148

8 Market-Consistent Prices for General Payoffs

We first take the perspective of a seller who wants to deliver a contract with payoff X ∈ X . The prices of those replicable payoffs that are “better” or “more preferable” than X, i.e., that deliver a lower payoff in every future state of the economy, should provide a natural lower bound for the price a seller should be willing to receive for X. The reasoning from a buyer’s perspective is similar. In this case, a replicable payoff will be deemed “better” or “more preferable” than X if the corresponding payoff is higher than X in every future state of the economy. These benchmark marketed payoffs will be called marketed bounds. Definition 8.1.1 (Marketed Bound) Let X ∈ X . A marketed payoff Z ∈ M is said to be a marketed lower bound for X whenever Z ≤ X. The set of marketed lower bounds for X is denoted by M− (X). A marketed payoff Z ∈ M is said to be a marketed upper bound for X whenever  Z ≥ X. The set of marketed upper bounds for X is denoted by M+ (X). Since our initial assumptions on the basic securities ensure the existence of a strictlypositive replicable payoff, it is easy to see that every payoff admits marketed bounds. Proposition 8.1.2 For every payoff X ∈ X the sets M− (X) and M+ (X) are nonempty. Proof Take an arbitrary X ∈ X and a strictly-positive payoff U ∈ M. Then, for sufficiently large a ∈ (0, ∞) we have −aU ≤ X ≤ aU , showing that M− (X) and  M+ (X) are both nonempty.

8.2

Market-Consistent Prices

As said above, two agents willing to transact a payoff X ∈ X for the price p ∈ R should use the prices of the marketed bounds of X as the natural benchmark against which to establish whether p is too low to sell or too high to buy. From a seller’s perspective, the price p should satisfy p > π(Z) for every marketed lower bound Z ∈ M with Z  X. This is because, if there existed a marketed payoff Z ∈ M such that • Z(ω) ≤ X(ω) for every ω ∈  • Z(ω) < X(ω) for some ω ∈  • π(Z) ≥ p then it would be more advantageous for the agent to access the market and sell Z: He or she would then be receiving at least p and committing to paying Z, which for any seller is clearly more preferable than committing to paying X.


Conversely, when assessing whether to buy X for the price p, the agent should require that p satisfies p < π(Z) for every marketed upper bound Z ∈ M with Z ⪈ X. Indeed, if there existed a marketed payoff Z ∈ M such that

• Z(ω) ≥ X(ω) for every ω ∈ Ω,
• Z(ω) > X(ω) for some ω ∈ Ω,
• π(Z) ≤ p,

then the agent would be better off buying Z in the market: He or she would then pay at most p for the payoff Z, which for any buyer is clearly preferable to the payoff X.

The preceding discussion leads us to the notion of a market-consistent price.

Definition 8.2.1 (Market-Consistent Price) Consider a payoff X ∈ X and a scalar p ∈ R. We say that p is a market-consistent seller price for X if for every marketed lower bound Z ∈ M−(X) we have

Z ⪇ X ⇒ p > π(Z).

The set of market-consistent seller prices for X is denoted by Πs(X). Similarly, we say that p is a market-consistent buyer price for X if for every marketed upper bound Z ∈ M+(X) we have

Z ⪈ X ⇒ p < π(Z).

The set of market-consistent buyer prices for X is denoted by Πb(X). Finally, we say that p is a market-consistent price for X if p is both a market-consistent seller and buyer price for X. The set of market-consistent prices for X is denoted by Π(X).

Remark 8.2.2 In Chap. 6 we introduced the notion of a market-consistent price for a replicable payoff, which was defined as the price of one (and, hence, any) of its replicating portfolios. It is not difficult to show that, for a replicable payoff, the notion based on replicating portfolios coincides with the general notion we have introduced in this chapter; see also Theorem 8.3.1.

The preceding discussion implies that no agent should ever consider selling a contract at a price that is not a market-consistent seller price. Similarly, an agent should never contemplate buying a contract at a price that is not a market-consistent buyer price. As a result, agents who have access to the market should never transact at prices that are not market consistent.


Remark 8.2.3 (Prices and Individual Preferences) A market-consistent price for a given payoff is a price at which it would not be foolish to transact that payoff. But at which of the market-consistent prices a transaction takes place, if at all, is a question that is driven by the individual preferences of the transacting parties. Every seller and every buyer will have an individual range of prices within which they are willing to transact. A transaction will only take place if the seller's and the buyer's individual ranges have nonempty intersection, and the exact price at which it takes place will depend on the negotiating skills of the two parties.

We rely on the arbitrage-free extensions of the pricing functional to prove that every payoff admits a market-consistent price. In fact, Theorem 8.3.1 will show that every market-consistent price can be expressed in terms of an arbitrage-free extension. Recall that we have denoted by E(π) the set of arbitrage-free pricing extensions.

Proposition 8.2.4 For every payoff X ∈ X and every arbitrage-free extension ψ ∈ E(π) we have that ψ(X) is a market-consistent price for X.

Proof Let W ∈ M−(X) and Z ∈ M+(X) satisfy W ⪇ X ⪇ Z. Then, by the strict monotonicity of ψ,

π(W) = ψ(W) < ψ(X) < ψ(Z) = π(Z).

This shows that ψ(X) is a market-consistent price for X and concludes the proof. □

It is useful to know that market-consistent seller and buyer prices can be expressed in terms of market-consistent prices. This will allow us to convert all the results on market-consistent prices into their seller's and buyer's counterparts.

Proposition 8.2.5 For every payoff X ∈ X the following statements hold:

(i) Πs(X) = {p ∈ R ; p ≥ q for some q ∈ Π(X)}.
(ii) Πb(X) = {p ∈ R ; p ≤ q for some q ∈ Π(X)}.

Proof We start by proving (i). The inclusion "⊃" is clear. To show the inclusion "⊂", take any market-consistent seller price p ∈ Πs(X) and any market-consistent price r ∈ Π(X) and set q = min{p, r}. It is easy to see that q is a market-consistent price for X. This shows the desired inclusion. Next, we focus on (ii). The inclusion "⊃" is clear. To show the inclusion "⊂", take any market-consistent buyer price p ∈ Πb(X) and any market-consistent price r ∈ Π(X) and set q = max{p, r}. It is easy to see that q is a market-consistent price for X. This shows the desired inclusion. □

8.3 Characterizing Market-Consistent Prices

In this section we study a variety of representations of the set of market-consistent prices. Our main interest is in market-consistent prices, rather than their seller's and buyer's versions, since these are the prices at which a potential transaction can occur. The corresponding results for buyer and seller prices can be derived from Proposition 8.2.5.

Market-Consistent Prices and Pricing Extensions

In Chap. 4 we have already encountered the concept of a market-consistent price in disguise. Indeed, given a payoff X ∈ X, the market-consistent prices of X correspond precisely to those numbers satisfying the π-compatibility condition at X recorded in Definition 4.2.7. The following fundamental result is a simple translation of Theorem 4.2.8 to our present setting and shows that market-consistent prices can be expressed in terms of arbitrage-free extensions of the pricing functional. This justifies the centrality, reflected by the name itself, of the Fundamental Theorem of Asset Pricing: Any market-consistent price can be interpreted as the price that the corresponding payoff possesses in a complete arbitrage-free "market extension".

Theorem 8.3.1 For every payoff X ∈ X we have

Π(X) = {ψ(X) ; ψ ∈ E(π)}.

Remark 8.3.2 (Arbitrage-Free Prices) In Remark 7.1.2 we have interpreted an arbitrage-free extension of the pricing functional as the pricing functional of an extended market that is arbitrage free and complete and preserves the prices of the original market. The above result tells us that a market-consistent price for a nonreplicable payoff coincides with a possible price that said payoff might have in an extended complete arbitrage-free market; see also Exercise 8.5.4. For this reason, most authors speak of "arbitrage-free prices" instead of market-consistent prices. Although this interpretation is clearly appealing and suggestive, it should be taken with a grain of salt, as explained in Remark 7.1.4.

Market-Consistent Prices and Pricing Densities

By Proposition 7.2.3, pricing densities correspond precisely to the Riesz densities of arbitrage-free extensions. As a result, we can immediately reformulate Theorem 8.3.1 in terms of pricing densities. Recall that D(π) is the set of all pricing densities.


Theorem 8.3.3 For every payoff X ∈ X we have

Π(X) = {EP[DX] ; D ∈ D(π)}.

Market-Consistent Prices and Pricing Measures

The link between pricing densities and pricing measures established in Proposition 7.3.2 allows us to characterize market-consistent prices in terms of pricing measures as a direct consequence of Theorem 8.3.3. To do this, we have to express payments in units of a given numéraire portfolio. We use the "tilde" notation to denote the corresponding rescaled payments; see Remark 5.5.7. Recall that we have denoted by Qθ(π) the set of pricing measures with respect to a numéraire portfolio θ ∈ RN.

Theorem 8.3.4 Let θ ∈ RN be the numéraire portfolio. For every payoff X̃ ∈ X̃ we have

Π̃(X̃) = {EQ[X̃] ; Q ∈ Qθ(π)}.

Remark 8.3.5 The preceding result can be equivalently formulated in the original unit of account as follows (see Exercise 8.5.10): For every payoff X ∈ X we have

Π(X) = {V0[θ] EQ[X/V1[θ]] ; Q ∈ Qθ(π)}.
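To see Theorem 8.3.3 at work, the following minimal Python sketch traces out the set of market-consistent prices in a hypothetical three-state market with two basic securities. The market data, the parametrization density(t) of the strictly positive pricing densities, and the payoff X are illustrative assumptions, not objects defined in the text.

import numpy as np

# Hypothetical market on Omega = {w1, w2, w3} with the uniform measure P.
# Two basic securities: prices (1, 1.5), payoffs (1,1,1) and (3,2,1).
P = np.array([1/3, 1/3, 1/3])
payoffs = np.array([[1.0, 1.0, 1.0],
                    [3.0, 2.0, 1.0]])    # rows: securities, columns: states
prices = np.array([1.0, 1.5])

# A pricing density D is strictly positive and satisfies E_P[D * S_i] = pi(S_i).
# The two pricing equations leave one free parameter t = D(w1); solving them
# gives D = (t, 1.5 - 2t, 1.5 + t), strictly positive exactly for 0 < t < 0.75.
def density(t):
    return np.array([t, 1.5 - 2.0 * t, 1.5 + t])

for t in (0.1, 0.4, 0.7):
    assert np.allclose(payoffs @ (P * density(t)), prices)

# By Theorem 8.3.3, the market-consistent prices of the nonreplicable payoff
# X = 1_{w1} are the numbers E_P[D X] as D ranges over all pricing densities.
X = np.array([1.0, 0.0, 0.0])
ts = np.linspace(1e-6, 0.75 - 1e-6, 5)
print([float((P * density(t)) @ X) for t in ts])   # sweeps out (0, 0.25)

As t approaches the boundary of (0, 0.75) the candidate density loses strict positivity, which is why the endpoints 0 and 0.25 are not themselves market-consistent prices; this is exactly the openness of Π(X) asserted in Theorem 8.4.10 below.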

8.4 Sub- and Superreplication Prices

In this section we provide a detailed study of the bounds of the set of market-consistent prices. In particular, we show when such bounds are themselves market-consistent prices. As a preliminary result, we point out that the set of market-consistent prices of any given payoff is a bounded interval.

Proposition 8.4.1 For every X ∈ X the set Π(X) is a bounded interval.

Proof The statement follows by combining Theorems 4.2.6 and 8.3.1. To see this directly, take any p, q ∈ Π(X). Then, it is immediate to verify that ap + (1 − a)q ∈ Π(X) for every a ∈ [0, 1]. This shows that Π(X) is an interval. To show that Π(X) is bounded, take marketed bounds W ∈ M−(X) and Z ∈ M+(X) such that W ⪇ X ⪇ Z. Every market-consistent price p ∈ Π(X) must satisfy π(W) < p < π(Z). This establishes that Π(X) is bounded. □

The lower and upper bounds of the set of market-consistent prices are called the sub- and superreplication price, respectively.


Definition 8.4.2 (Sub/Superreplication Price) Consider a payoff X ∈ X. The subreplication price of X is defined as

π−(X) := inf Π(X).

The superreplication price of X is defined as

π+(X) := sup Π(X).

The next result, which is a simple reformulation of Proposition 4.2.10 in the language of marketed bounds, justifies the chosen terminology.

Proposition 8.4.3 For every payoff X ∈ X the following statements hold:

(i) π−(X) = sup{π(Z) ; Z ∈ M−(X)}.
(ii) π+(X) = inf{π(Z) ; Z ∈ M+(X)}.

Moreover, the supremum in (i) and the infimum in (ii) are attained.

As a direct consequence of the representation recorded in Proposition 8.4.6, we derive the following result highlighting the salient properties of sub- and superreplication prices. Alternatively, the result follows from Proposition 4.2.9.

Proposition 8.4.4 The following statements hold:

(i) π+(X) = −π−(−X) for every X ∈ X.
(ii) π− : X → R is a superlinear increasing extension of π such that:
(a) π−(X) < 0 for every X ∈ X such that X ⪇ 0;
(b) π−(X + Z) = π−(X) + π(Z) for all X ∈ X and Z ∈ M.
(iii) π+ : X → R is a sublinear increasing extension of π such that:
(a) π+(X) > 0 for every X ∈ X such that X ⪈ 0;
(b) π+(X + Z) = π+(X) + π(Z) for all X ∈ X and Z ∈ M.

Remark 8.4.5 Since the functionals π− and π+ are superlinear and sublinear, respectively, they are automatically Lipschitz continuous with respect to the maximum norm (or, equivalently, with respect to every p-norm for p ∈ [1, ∞)), i.e., there exists a constant c ∈ (0, ∞) such that

|π−(X) − π−(Y)| ≤ c‖X − Y‖∞ and |π+(X) − π+(Y)| ≤ c‖X − Y‖∞

for all payoffs X, Y ∈ X. This follows from Corollary 3.4.5.

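Proposition 8.4.3 turns sub- and superreplication prices into two small linear programs over portfolios: maximize the price of a marketed lower bound, or minimize the price of a marketed upper bound. Here is a minimal sketch using scipy.optimize.linprog on the same hypothetical market as in the previous sketch; the payoff matrix V, price vector pi, and target payoff X are illustrative assumptions. By Proposition 8.4.11 below, equality of the two optimal values would certify that X is replicable.

import numpy as np
from scipy.optimize import linprog

V = np.array([[1.0, 1.0, 1.0],
              [3.0, 2.0, 1.0]])    # payoffs of the securities (rows) per state
pi = np.array([1.0, 1.5])          # prices of the securities
X = np.array([1.0, 0.0, 0.0])      # target payoff 1_{w1}

def superreplication_price(V, pi, X):
    # minimize pi . lam subject to V^T lam >= X (the portfolio dominates X)
    res = linprog(c=pi, A_ub=-V.T, b_ub=-X, bounds=[(None, None)] * len(pi))
    return res.fun

def subreplication_price(V, pi, X):
    # maximize pi . lam subject to V^T lam <= X (the portfolio is dominated by X)
    res = linprog(c=-pi, A_ub=V.T, b_ub=X, bounds=[(None, None)] * len(pi))
    return -res.fun

print(subreplication_price(V, pi, X))    # 0.0
print(superreplication_price(V, pi, X))  # 0.25

The two values differ, so X is not replicable and, by Theorem 8.4.10 below, its market-consistent prices form the open interval (0, 0.25), matching the density sweep above.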


Representing Sub- and Superreplication Prices

As a direct application of Theorem 8.3.1 we derive the following representation of sub- and superreplication prices.

Proposition 8.4.6 For every payoff X ∈ X the following statements hold:

(i) π−(X) = inf{ψ(X) ; ψ ∈ E(π)}.
(ii) π+(X) = sup{ψ(X) ; ψ ∈ E(π)}.

The representation of sub- and superreplication prices in terms of pricing densities is a direct consequence of Theorem 8.3.3.

Proposition 8.4.7 For every payoff X ∈ X the following statements hold:

(i) π−(X) = inf{EP[DX] ; D ∈ D(π)}.
(ii) π+(X) = sup{EP[DX] ; D ∈ D(π)}.

Similarly, we derive from Theorem 8.3.4 a representation of sub- and superreplication prices in terms of pricing measures.

Proposition 8.4.8 Let θ ∈ RN be the numéraire portfolio. For every payoff X̃ ∈ X̃ the following statements hold:

(i) π̃−(X̃) = inf{EQ[X̃] ; Q ∈ Qθ(π)}.
(ii) π̃+(X̃) = sup{EQ[X̃] ; Q ∈ Qθ(π)}.

Remark 8.4.9 The preceding result can be equivalently formulated in the original unit of account as follows (see Exercise 8.5.10): For every payoff X ∈ X we have:

(i) π−(X) = V0[θ] inf{EQ[X/V1[θ]] ; Q ∈ Qθ(π)}.
(ii) π+(X) = V0[θ] sup{EQ[X/V1[θ]] ; Q ∈ Qθ(π)}.


Market-Consistent Prices and Replicability

The following additional characterization of the set of market-consistent prices is a direct translation of Proposition 4.2.9 to our present setting.

Theorem 8.4.10 For every payoff X ∈ X the following statements hold:

(i) If X is replicable, then π−(X) = π+(X) and Π(X) = {π(X)}.
(ii) If X is not replicable, then π−(X) < π+(X) and Π(X) = (π−(X), π+(X)).

As an immediate corollary we derive the following equivalent conditions for a payoff to be replicable. What makes this result appealing is the fact that it allows us to establish whether a given payoff is replicable or not by simply checking if its sub- and superreplication prices coincide. In view of the representations of sub- and superreplication prices discussed above, this task is typically easier than looking for a replicating portfolio.

Proposition 8.4.11 For every payoff X ∈ X the following statements are equivalent:

(a) X is replicable.
(b) X has a unique market-consistent price.
(c) π+(X) is a market-consistent price for X.
(d) π−(X) is a market-consistent price for X.
(e) π−(X) = π+(X).

8.5 Exercises

In all exercises below we consider the one-period economy described in Chap. 5 and adhere to the market specifications introduced there. In addition, we assume that the market is incomplete and arbitrage free.

Exercise 8.5.1 Show that for every payoff X ∈ X the following statements hold:

(i) M+(X) = −M−(−X).
(ii) Πb(X) = −Πs(−X).
(iii) Π(X) = −Π(−X).

Exercise 8.5.2 Show that for every payoff X ∈ X the following statements hold:

(i) Πs(X) is an interval that is bounded to the left and unbounded to the right.
(ii) Πb(X) is an interval that is bounded to the right and unbounded to the left.

Exercise 8.5.3 Show that for a payoff X ∈ X and for a scalar p ∈ R the following statements may hold:

(i) p > π(Z) for every Z ∈ M such that Z < X but p is not a market-consistent seller price for X.


(ii) p < π(Z) for every Z ∈ M such that Z > X but p is not a market-consistent buyer price for X.

Exercise 8.5.4 Let p ∈ R and X ∈ X and extend the market for the basic securities by adding a new basic security S N+1 = (p, X). Assume that S 1, . . . , S N maintain their prices and payoffs. Show that the extended market is arbitrage free if and only if p is a market-consistent price for X.

Exercise 8.5.5 Consider a marketed payoff U ∈ M satisfying π(U) = 1.

(i) Show that X+ − ker(π) consists of those payoffs that can be "subreplicated" at zero cost, i.e.,

X+ − ker(π) = {X ∈ X ; Z ≤ X for some Z ∈ ker(π)}.

Prove that for every payoff X ∈ X we have

π−(X) = sup{a ∈ R ; X − aU ∈ X+ − ker(π)}.

(ii) Show that ker(π) − X+ consists of those payoffs that can be "superreplicated" at zero cost, i.e.,

ker(π) − X+ = {X ∈ X ; Z ≥ X for some Z ∈ ker(π)}.

Prove that for every payoff X ∈ X we have

π+(X) = inf{a ∈ R ; X − aU ∈ ker(π) − X+}.

The above equivalent formulations are sometimes used to define sub- and superreplication prices in the literature.

Exercise 8.5.6 Prove the following statements:

(i) π− is neither linear nor strictly increasing, and for some X ∈ X we may have X ⪈ 0 and π−(X) ≤ 0.
(ii) π+ is neither linear nor strictly increasing, and for some X ∈ X we may have X ⪇ 0 and π+(X) ≥ 0.


Exercise 8.5.7 Show that there exists a constant c ∈ (0, ∞) such that

|X − Y| ≤ c‖X − Y‖∞ V1[η]/V0[η]

for all X, Y ∈ X. Use Proposition 8.4.4 to infer that

|π−(X) − π−(Y)| ≤ c‖X − Y‖∞ and |π+(X) − π+(Y)| ≤ c‖X − Y‖∞

for all X, Y ∈ X. This provides a direct way to establish the Lipschitz continuity of π− and π+; see Remark 8.4.5.

Exercise 8.5.8 Show that neither the supremum nor the infimum in Proposition 8.4.6 is attained in the case of a nonreplicable payoff.

Exercise 8.5.9 Let Ω = {ω1, ω2, ω3} and let P(ω1) = P(ω2) = P(ω3) = 1/3. Consider the basic securities defined by

S 1 = (1, (1, 1, 1)) and S 2 = (2, (5, 2, 1)).

The above market is incomplete and arbitrage free by Exercise 7.4.7.

(i) Determine the set of market-consistent prices of 1ω1.
(ii) Determine the sub- and superreplication price of 1ω1.
(iii) Deduce whether 1ω1 is replicable or not.

Exercise 8.5.10 Let R ∈ R be a rescaling process. Prove that for every rescaled payoff X̃ ∈ X̃ we have

Π̃(X̃) = R0 Π(X̃/R1),  π̃−(X̃) = R0 π−(X̃/R1),  π̃+(X̃) = R0 π+(X̃/R1).

9 Random Variables: Information and Measurability

With this chapter we start collecting the necessary mathematical tools to model multi-period financial markets. The objective is to introduce partitions of the sample space as a way of modeling the granularity of the information we may obtain about the outcome of a random experiment. The highest granularity corresponds to full information and the lowest to no information. Intermediate granularity levels correspond to partial information. A key concept related to a partition is that of a random variable that is measurable with respect to it, i.e., a random variable whose value can be determined after receiving the information in the granularity of the partition. Familiarity with these notions is indispensable for understanding the modeling of how information about the terminal state of the economy increases as time passes.

Standing Assumption Throughout the chapter we fix a finite sample space Ω = {ω1, . . . , ωK}.

9.1 Partitions and Atoms

Ideally, after a random experiment has been performed, we would know exactly which of the possible outcomes materialized. However, we may sometimes gain only partial knowledge about the exact outcome. The following simple example illustrates this point.

Example 9.1.1 Consider the sample space Ω = {1, 2, 3, 4, 5, 6} corresponding to rolling a die. Instead of learning the exact outcome, we may only learn that the outcome was an


odd or an even number. For instance, learning that the outcome was odd, we will know that it must have been 1, 3, or 5, but not which of these three possible outcomes it was. In this case, the "granularity" of information can be represented by the two sets

A = {1, 3, 5}, Ac = {2, 4, 6}.

Consider now the sample space Ω = {1, 2, 3, 4, 5, 6}² corresponding to rolling the die twice. Naturally, after the first, but before the second trial we will only know the outcome of the first trial. In other words, we will know to which of the six sets

A1 = {(1, a) ∈ Ω ; a ∈ {1, . . . , 6}}, . . . , A6 = {(6, a) ∈ Ω ; a ∈ {1, . . . , 6}}

the outcome belongs. For instance, we may observe that the first trial yields 3, which leaves room for the final outcome to be any element of A3. In this case the "granularity" of information can be represented by the above six sets.

To generalize the idea contained in the preceding example we introduce the notion of a partition.

Definition 9.1.2 (Partition, Atom) A family of events P = {A1, . . . , Am} ⊂ E is called a partition of Ω if every member of P is nonempty and the following properties hold:

(1) Ai ∩ Aj = ∅ for all i, j ∈ {1, . . . , m} such that i ≠ j.
(2) Ω = A1 ∪ · · · ∪ Am.

The elements of P are called atoms.


As suggested by the above examples, we use partitions to represent the granularity in which information about the outcomes of the random experiment is made available to us. After uncertainty resolves, rather than learning the exact outcome of the experiment, we only learn to which of the atoms of the partition the outcome belongs. Hence, unless this atom coincides with an elementary event {ω} for some ω ∈ Ω, there remains some uncertainty about the precise outcome.

Example 9.1.3 Consider the sample space Ω = {1, 2, 3, 4, 5, 6} corresponding to rolling a die. The granularity of information corresponding to being told that the outcome was an odd or an even number is captured by the partition

P = {{1, 3, 5}, {2, 4, 6}}.


Similarly, if what we learn is whether the outcome is a number larger than or equal to 5, the corresponding partition would be

P = {{1, 2, 3, 4}, {5, 6}}.

Consider now the sample space Ω = {1, 2, 3, 4, 5, 6}² corresponding to rolling a die twice. The granularity of information that is available in the situation in which we are told the outcome of the first trial is captured by the partition P = {A1, . . . , A6} where

Ak = {(k, a) ∈ Ω ; a ∈ {1, . . . , 6}}

for every k ∈ {1, . . . , 6}. We may consider an alternative situation in which, after the second trial, we are told whether the sum of the outcomes of the first and second trial is smaller than or equal to 3. In this case the granularity of information can be represented by P = {A, Ac} where

A = {(1, 1), (1, 2), (2, 1)}.


Example 9.1.4 (Trivial/Discrete Partition) The trivial partition P = {Ω} corresponds to the worst possible level of granularity of information: We will just know that something occurred. On the other hand, the discrete partition P = {{ω1}, . . . , {ωK}} corresponds to the finest level of granularity we could wish for: The exact outcome will be revealed to us.

9.2 Observable Events

Let P be a partition of Ω modeling the granularity in which information about the outcome of the random experiment is made available to us. After uncertainty resolves, we will know to which of the atoms of P the true outcome belongs or, put differently, for every atom of P we will be able to "observe" whether it has occurred or not. Are there any other events whose occurrence or nonoccurrence can be "observed" once we have learned which of the atoms of P actually occurred?


Consider an event E ∈ E and assume that E is a union of atoms of P, i.e.,

E = Ai1 ∪ · · · ∪ Aim

for some Ai1, . . . , Aim ∈ P. In this case, since P is a partition of Ω, the complement Ec will be the union of the remaining atoms. Hence, when we are told which atom has occurred, we will always be able to say whether it is contained in E or in Ec. In this sense, the event E is also "observable" whenever the granularity of information is given by P. On the other hand, if E is not a union of atoms, then there must exist some atom A ∈ P such that A ∩ E ≠ ∅ and A ∩ Ec ≠ ∅ both hold. Hence, if we are told that A has occurred, we will not be able to say whether E or Ec has actually occurred. In other words, E is not "observable" given the granularity of information implied by P.

Example 9.2.1 Consider the sample space Ω = {1, 2, 3, 4, 5, 6}² corresponding to rolling a die twice. The granularity of information that is available in the situation in which we are told the outcome of the first trial is captured by the partition P = {A1, . . . , A6}, where

Ak = {(k, a) ∈ Ω ; a ∈ {1, . . . , 6}}

for every k ∈ {1, . . . , 6}. The event described as "the outcome of the first trial is even" is the set E = A2 ∪ A4 ∪ A6. It is not an atom, but as soon as we learn the outcome of the first trial we will be able to say whether E occurred or not. On the other hand, assume that E is the event described as "throwing a double six", i.e., E = {(6, 6)}. We will certainly be able to say whether the first trial was a 6, but in that case we will be in the dark about the outcome of the second trial. Hence, E is not "observable" based on P.

The preceding discussion shows that, if P represents the granularity of information as above, the only events in Ω whose occurrence can be ascertained are either the unions of atoms of P, or the empty set. This leads to the notion of observable events.

Definition 9.2.2 (Observable Event) Let P be a partition of Ω. We say that an event E ∈ E is P-observable if it belongs to the collection

F(P) := {E ∈ E ; E is a union of elements of P} ∪ {∅}.

The set F(P) is called the field generated by P.



Example 9.2.3 (Trivial/Discrete Partition) The partition for which we have the fewest observable events is the trivial partition P = {Ω}. Indeed, in this case, we clearly have F(P) = {∅, Ω}. On the opposite extreme, the discrete partition P = {{ω1}, . . . , {ωK}} is the partition admitting the most observable events. In fact, we have F(P) = E.


To study the properties of the field generated by a partition it is useful to introduce the abstract concept of a field.

Definition 9.2.4 (Field) A collection of events F ⊂ E is called a field on Ω if the following properties are satisfied:

(1) Ω ∈ F.
(2) Ec ∈ F for all E ∈ F.
(3) E ∪ F ∈ F for all E, F ∈ F.

A field is sometimes also called an algebra.

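Since the sample space is finite, the field generated by a partition can be enumerated directly as the set of all unions of atoms. The following minimal Python sketch (the helper generated_field and the die example are our own illustrations) builds F(P) and checks the closure properties required by Definition 9.2.4.

from itertools import combinations

def generated_field(partition):
    # F(P) = {empty set} plus all unions of atoms (Definition 9.2.2)
    atoms = [frozenset(A) for A in partition]
    field = {frozenset()}
    for r in range(1, len(atoms) + 1):
        for combo in combinations(atoms, r):
            field.add(frozenset().union(*combo))
    return field

parity = [{1, 3, 5}, {2, 4, 6}]           # "odd or even" on one die roll
F = generated_field(parity)
print(len(F))                             # 4 = 2^2 observable events

Omega = frozenset(range(1, 7))
assert Omega in F
assert all(Omega - E in F for E in F)                 # closed under complements
assert all(E | G in F for E in F for G in F)          # closed under unions

In general a partition with m atoms generates a field with 2^m events, one for each subset of atoms.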

The following properties of a field follow easily from the above definition.

Proposition 9.2.5 For every field F ⊂ E the following statements hold:

(i) ∅ ∈ F.
(ii) E ∩ F ∈ F for all E, F ∈ F.
(iii) E \ F ∈ F for all E, F ∈ F.
(iv) E1 ∪ · · · ∪ Em ∈ F for all E1, . . . , Em ∈ F.
(v) E1 ∩ · · · ∩ Em ∈ F for all E1, . . . , Em ∈ F.

Proof To show (i), it is enough to note that Ω ∈ F and ∅ = Ωc. To establish (ii) and (iii), take arbitrary E, F ∈ F. Since Ec and Fc also belong to F and E ∩ F = (Ec ∪ Fc)c and E \ F = E ∩ Fc, we conclude that E ∩ F ∈ F and E \ F ∈ F, respectively. Now, we focus on (iv). We establish the assertion by induction on m ∈ N.

Base Step The assertion is clearly true for m = 1.


Induction Step Assume the assertion holds for m ∈ N and consider m + 1 events E1, . . . , Em+1 ∈ F. Set E = E1 ∪ · · · ∪ Em and note that E ∈ F by our induction hypothesis. Since E ∪ Em+1 ∈ F by the definition of a field, we conclude that

E1 ∪ · · · ∪ Em+1 = E ∪ Em+1 ∈ F.

This concludes the induction argument. Finally, to show (v), take arbitrary E1, . . . , Em ∈ F and note that

E1 ∩ · · · ∩ Em = (E1c ∪ · · · ∪ Emc)c.

Hence, being the complement of an event which, by item (iv), belongs to F, the above intersection also belongs to F by the definition of a field. □

As suggested by the chosen terminology, the field generated by a partition P is indeed a field. In fact, it is the smallest field that contains P.

Proposition 9.2.6 Let P be a partition of Ω. Then, the field F(P) is the smallest field containing P.

Proof Since every atom of P is an element of F(P) and Ω is the union of all atoms of P, we have that Ω ∈ F(P). Now, take a nonempty event E ∈ F(P), so that E is a union of atoms of P. Then Ec is clearly the union of all remaining atoms, showing that Ec ∈ F(P). Moreover, if E and F are events in F(P), then E ∪ F can be expressed as the union of all atoms contained in E or in F, hence E ∪ F ∈ F(P). This proves that F(P) is a field. Now, assume that F is another field containing P. Then, F must contain all unions of atoms of P by Proposition 9.2.5. Hence, F must also contain F(P), showing that F(P) is the smallest field that contains P. □

9.3 Refining Partitions

Assume we learn about the outcome of the random experiment in the granularity given by a partition P. If Q is another partition, it is natural to ask whether Q is more “informative” than P or not. This is the same as asking whether there are more Q-observable events than P-observable events.


Definition 9.3.1 (Finer/Coarser Partition) Let P and Q be partitions of Ω. We say that Q is finer than P, or equivalently that P is coarser than Q, whenever F(P) ⊂ F(Q).

Example 9.3.2 (Trivial/Discrete Partition) The trivial partition is coarser than any other partition of Ω. On the opposite extreme, the discrete partition is finer than any other partition of Ω.

The following proposition provides an easy characterization of when a partition is finer than another.

Proposition 9.3.3 Let P and Q be partitions of Ω. The following statements are equivalent:

(a) Q is finer than P.
(b) Each atom of P is a union of atoms of Q.
(c) Each atom of Q is contained in some atom of P.

Proof Since both P and Q are partitions, the equivalence of (b) and (c) is immediate. Now, assume that (a) holds and take some atom A ∈ P. Since F(P) ⊂ F(Q), we have A ∈ F(Q), so that A is a union of atoms of Q, proving (b). Finally, assume (b) holds so that P ⊂ F(Q). Since, by Proposition 9.2.6, the collection F(P) is the smallest field containing P, we conclude that F(P) ⊂ F(Q), completing the proof. □
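Proposition 9.3.3(c) yields an immediately checkable refinement test: every atom of the finer partition must sit inside some atom of the coarser one. A minimal sketch (the helper is_finer and the example partitions are our own illustrations):

def is_finer(Q, P):
    # Q is finer than P iff each atom of Q is contained in some atom of P
    return all(any(set(B) <= set(A) for A in P) for B in Q)

trivial = [{1, 2, 3, 4, 5, 6}]
parity = [{1, 3, 5}, {2, 4, 6}]
discrete = [{k} for k in range(1, 7)]

assert is_finer(parity, trivial)     # every partition refines the trivial one
assert is_finer(discrete, parity)    # the discrete partition refines them all
assert not is_finer(parity, discrete)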

9.4 Measurable Random Variables

A random variable X ∈ L can always be evaluated as soon as we are told the exact outcome of the random experiment. Hence, if information is revealed in the granularity of the discrete partition, then we can always say which value X has attained. But what if we learn about the outcome of the random experiment only in the granularity of a partition P that is coarser than the discrete partition? Can X still be evaluated even though we may not know the exact outcome?

Example 9.4.1 Consider the sample space Ω = {1, 2, 3, 4, 5, 6} associated with rolling a die and let the granularity of information be given by the partition

P = {{1, 2, 3, 4}, {5, 6}}.

Consider the random variables X = 1{5,6} and Y = 1{1,2}. Assume we learn that the event {1, 2, 3, 4} has occurred. Then, we can say that X takes the value 0, but we cannot say anything about the value of Y. It could be 1, if 1 or 2 occurred, but it could also be 0, if 3 or 4 occurred. Hence, Y cannot be evaluated knowing only that {1, 2, 3, 4} occurred. If, on


the other hand, the event {5, 6} has occurred, then X must take the value 1 whereas Y must take the value 0. In summary, while X can be fully evaluated based on the information conveyed by P, the random variable Y cannot.

The above example suggests that, if information is revealed in the granularity of a partition P, a random variable can be fully evaluated only if it happens to be constant on the atoms of P. This leads to the following definition.

Definition 9.4.2 (Measurable Random Variable) Let P be a partition of Ω. A random variable X ∈ L is said to be P-measurable if it is constant on the atoms of P. In this case, for every atom A ∈ P, we denote by X(A) the value taken by X on A. The set of P-measurable random variables is denoted by L(P).

The following example clarifies the notion of measurability in the two extreme cases, namely that of the trivial partition and that of the discrete partition.

Example 9.4.3 (Trivial/Discrete Partition) If P is the trivial partition, then L(P) consists of those random variables that are constant on Ω. Indeed, the constant random variables are the only random variables that we can evaluate without any information about the outcome of the random experiment. On the other extreme, if P is discrete, then all random variables are P-measurable, i.e., L(P) = L. Indeed, in this case, we are told the exact outcome and this suffices to evaluate any random variable.

We start our study of measurability by providing a simple but useful equivalent measurability condition.

Proposition 9.4.4 Let P be a partition of Ω. For every random variable X ∈ L the following statements are equivalent:

(a) X is P-measurable.
(b) {X = a} is P-observable for every a ∈ R.

Proof To prove that (a) implies (b), assume that X is constant on the atoms of P and take an arbitrary a ∈ R. If X never attains the value a, then {X = a} = ∅ ∈ F(P). Assume now that X attains the value a. Then, for every atom A ∈ P, either X(A) = a or X(A) ≠ a. Consequently,

{X = a} = ⋃ {A ∈ P ; X(A) = a} ∈ F(P).


To prove that (b) implies (a), assume that {X = a} ∈ F(P) for every a ∈ R and consider an arbitrary atom A ∈ P. Moreover, take ω ∈ A. Since {X = X(ω)} is a union of atoms of P, we must have A ⊂ {X = X(ω)} and therefore we see that X = X(ω) on A. This shows that X ∈ L(P). □

The following characterization of the measurability of an indicator function is a simple consequence of the above result.

Corollary 9.4.5 Let P be a partition of Ω. For every event E ∈ E the following statements are equivalent:

(a) 1E is P-measurable.
(b) E is P-observable.

Proof To show that (a) implies (b), assume that 1E ∈ L(P). Since E = {1E = 1}, it follows from Proposition 9.4.4 that E ∈ F(P). To prove the converse implication, assume that E ∈ F(P). Then, it is clear that 1E takes the value 1 on each atom of P contained in E and the value 0 on each atom of P contained in the complement Ec. Being constant on the atoms of P, the random variable 1E is P-measurable. This shows that (b) implies (a) and concludes the proof. □

We now focus on the measurability of random variables obtained through the composition of given measurable random variables with a real-valued function. In particular, this will show that measurability is preserved by the vector space operations.

Proposition 9.4.6 Let P be a partition of Ω. Then, for all P-measurable random variables X1, . . . , Xm ∈ L and for every function f : Rm → R, the random variable f(X1, . . . , Xm) is P-measurable.

Proof If X1, . . . , Xm are measurable with respect to P, they must be constant on the atoms of P, and then f(X1, . . . , Xm) is also constant on the atoms of P. □

Corollary 9.4.7 Let P be a partition of Ω. For all P-measurable random variables X, Y ∈ L and for every a ∈ R, the random variables

X + Y, aX, XY, max{X, Y}, min{X, Y}

are P-measurable. Similarly, for every family of P-measurable random variables C ⊂ L, the random variables

sup_{X∈C} X and inf_{X∈C} X

are P-measurable (provided the supremum and infimum are finite at every ω ∈ Ω).


Proof The first assertion is a direct consequence of Proposition 9.4.6. The last two assertions follow by noting that, provided they are finite on Ω, both the supremum and the infimum are constant on the atoms of P if every element in C is. □

The preceding result tells us that measurability is preserved under taking sums and products with scalars. This means that the set of random variables that are measurable with respect to a given partition is a vector space in its own right.

Proposition 9.4.8 Let P = {A1, . . . , Am} be a partition of Ω. Then, L(P) is a linear subspace of L. Moreover, the set B = {1_{A1}, . . . , 1_{Am}} is a basis for L(P). In particular, we have dim(L(P)) = card(P).

Proof We have already seen that L(P) is a linear subspace of L. It is easy to see that B spans the entire L(P). Indeed, every X ∈ L(P) is constant on the atoms of P and can thus be expressed as X = ∑_{i=1}^m X(Ai) 1_{Ai}. It remains to show that B is linearly independent. Assume that

∑_{i=1}^m a_i 1_{A_i} = 0

for some a1, . . . , am ∈ R. This implies that, for every j ∈ {1, . . . , m},

a_j = ∑_{i=1}^m a_i 1_{A_i}(Aj) = 0,

proving linear independence and concluding the proof.

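Measurability and the basis of Proposition 9.4.8 are easy to verify numerically. In the following minimal sketch the outcomes are indexed 0, . . . , 5, random variables are vectors, and the helper is_measurable is our own illustration of Definition 9.4.2.

import numpy as np

def is_measurable(X, partition):
    # P-measurable means constant on every atom
    return all(len({X[w] for w in A}) == 1 for A in partition)

parity = [{0, 2, 4}, {1, 3, 5}]
X = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])   # indicator of even-indexed outcomes
Y = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

print(is_measurable(X, parity))   # True: constant on each atom
print(is_measurable(Y, parity))   # False

# Proposition 9.4.8: a P-measurable X decomposes as the sum of X(A) * 1_A.
decomp = sum(X[min(A)] * np.array([1.0 if w in A else 0.0 for w in range(6)])
             for A in parity)
assert np.allclose(decomp, X)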

We conclude this section by showing that a random variable that is measurable with respect to a partition P is also measurable with respect to any finer partition Q. This is in line with our intuition: If we are able to evaluate the random variable when information is provided in the granularity of P, then all the more so if information is provided in the finer granularity of Q.

Proposition 9.4.9 Let P and Q be partitions of Ω. If Q is finer than P, then we have L(P) ⊂ L(Q).

Proof Take X ∈ L(P). Since X is constant on the atoms of P and every atom of Q is contained in some atom of P by Proposition 9.3.3, it follows that X is constant on the atoms of Q as well. □


Partitions Generated by a Random Variable

Sometimes we only gather information about the outcome of the random experiment indirectly, through the observation of a particular random variable, i.e., by observing which value this random variable has attained. We show how this fits into the framework of partitions developed in the previous sections. Let X ∈ L be the observed random variable. Assume further that a1, . . . , am ∈ R are the distinct values that X can attain. Then,

X = ∑_{i=1}^m a_i 1_{X = a_i}.

After the random experiment has been performed, we are able to observe the value that X has attained or, equivalently, we are able to tell to which of the sets {X = a1}, . . . , {X = am} the outcome belongs. Hence, observing the outcome through X is the same as obtaining the information in the granularity of the partition {{X = a1}, . . . , {X = am}}. This motivates the following definition.

Definition 9.4.10 (Generated Partition) Let X ∈ L and assume that X takes the pairwise distinct values a1, . . . , am ∈ R. The partition

P(X) := {{X = a1}, . . . , {X = am}}

is called the partition generated by X and the field F(X) := F(P(X)) is called the field generated by X.

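A minimal sketch of Definition 9.4.10: the partition generated by X simply groups outcomes by the value X takes (the helper generated_partition is our own illustration).

def generated_partition(X):
    atoms = {}
    for w, x in enumerate(X):
        atoms.setdefault(x, set()).add(w)
    return list(atoms.values())

X = [5.0, 1.0, 5.0, 2.0, 1.0, 5.0]    # a random variable on six outcomes
print(generated_partition(X))          # [{0, 2, 5}, {1, 4}, {3}]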

Example 9.4.11 (Trivial/Discrete Partition) Let X ∈ L. Then, P(X) is the trivial partition of Ω if and only if X is a constant random variable. In this case, learning the value of X provides no information about the actual outcome, since we observe the same value irrespective of the outcome. On the other hand, P(X) is the discrete partition if and only if X takes K distinct values.

The partition generated by a random variable encodes the minimal granularity of information we need to be able to evaluate that random variable. This is made precise by the following proposition.


Proposition 9.4.12 Let P be a partition of Ω. For every random variable X ∈ L the following statements are equivalent:

(a) X is P-measurable.
(b) P(X) is coarser than P.

In particular, P(X) is the coarsest partition with respect to which X is measurable.

Proof To prove that (a) implies (b), assume that X is measurable with respect to P. Then, for every atom A ∈ P we must have A ⊂ {X = X(A)} because X is constant on A. Since {X = X(A)} ∈ P(X), it follows from Proposition 9.3.3 that P(X) is coarser than P. To prove that (b) implies (a) note that, by the definition of P(X), the random variable X is clearly P(X)-measurable. If A is an arbitrary atom of P, Proposition 9.3.3 shows that A is contained in an atom of P(X). Since X is constant on the atoms of P(X), it follows that X must be constant on A. Since A was arbitrary, this proves that X is P-measurable, concluding the proof. □

Consider a random variable X ∈ L. A random variable Y ∈ L that is P(X)-measurable can be fully evaluated upon observation of X. The following result establishes that such random variables are, in fact, just functions of X.

Proposition 9.4.13 For all X, Y ∈ L the following statements are equivalent:

(a) Y is P(X)-measurable.
(b) Y = f(X) for some function f : R → R.

Proof It is clear that (b) implies (a). To prove the converse implication, assume X takes the values a1, . . . , am ∈ R and set Ai = {X = ai} for i ∈ {1, . . . , m}. Now, if Y is P(X)-measurable, then it must take a constant value on each of the atoms A1, . . . , Am. Take any function f : R → R such that f(ai) = Y(Ai) for every i ∈ {1, . . . , m}. Clearly, for every ω ∈ Ω we find i ∈ {1, . . . , m} such that ω ∈ Ai, or equivalently X(ω) = ai. But then

f(X)(ω) = f(X(ω)) = f(ai) = Y(Ai) = Y(ω),

proving that Y = f(X). This shows that (a) implies (b). □



Partitions Generated by Multiple Random Variables

The above construction can be generalized in an obvious way to the situation where we learn about the outcome of the random experiment through the observation of different random variables X1, . . . , Xm ∈ L. This means that, if the outcome of the random experiment is ω ∈ Ω, then what we are able to observe is the vector (X1(ω), . . . , Xm(ω)) ∈ Rm. To model this situation assume that, for every i ∈ {1, . . . , m}, the random variable Xi takes the distinct values a^i_1, . . . , a^i_{ri} ∈ R, so that

Xi = ∑_{j=1}^{ri} a^i_j 1_{Xi = a^i_j}.

After the experiment has been performed we are able to say whether the revealed outcome belongs to the intersection

{X1 = a^1_{j1}} ∩ · · · ∩ {Xm = a^m_{jm}}

or not for every multi-index j = (j1, . . . , jm) ∈ {1, . . . , r1} × · · · × {1, . . . , rm}. This leads to the following natural extension of Definition 9.4.10.

Definition 9.4.14 (Generated Partition) Let X1, . . . , Xm ∈ L and let a^i_1, . . . , a^i_{ri} ∈ R be the distinct values taken by Xi for each i ∈ {1, . . . , m}. Then, for every multi-index j = (j1, . . . , jm) ∈ {1, . . . , r1} × · · · × {1, . . . , rm} we set

Ej = {X1 = a^1_{j1}} ∩ · · · ∩ {Xm = a^m_{jm}}.

The partition of Ω given by

P(X1, . . . , Xm) := {Ej ; j ∈ {1, . . . , r1} × · · · × {1, . . . , rm} : Ej ≠ ∅}

is called the partition generated by X1, . . . , Xm, and the field F(X1, . . . , Xm) := F(P(X1, . . . , Xm)) is called the field generated by X1, . . . , Xm.

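Definition 9.4.14 admits the same kind of sketch: outcomes are grouped by the whole vector of observed values (the helper below is our own illustration).

def generated_partition_multi(*variables):
    atoms = {}
    for w in range(len(variables[0])):
        key = tuple(X[w] for X in variables)
        atoms.setdefault(key, set()).add(w)
    return list(atoms.values())

X1 = [0, 0, 1, 1, 0, 1]
X2 = [1, 0, 1, 1, 1, 0]
print(generated_partition_multi(X1, X2))   # [{0, 4}, {1}, {2, 3}, {5}]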


Learning about the outcome of the random experiment through the values that the random variables X1, . . . , Xm ∈ L take is equivalent to receiving the information in the granularity of the generated partition P(X1, . . . , Xm). Similarly to the case of a single random variable, the partition P(X1, . . . , Xm) corresponds to the minimal granularity of information that allows us to evaluate all the random variables X1, . . . , Xm simultaneously. The proof of the multi-dimensional counterpart of Proposition 9.4.12 is left as an exercise.

Proposition 9.4.15 Let P be a partition of Ω. For all X1, . . . , Xm ∈ L the following statements are equivalent:

(a) X1, . . . , Xm are P-measurable.
(b) P(X1, . . . , Xm) is coarser than P.

In particular, P(X1, . . . , Xm) is the coarsest partition of Ω with respect to which X1, . . . , Xm are all measurable.

We conclude this chapter with the multi-dimensional counterpart to Proposition 9.4.13, which is also left as an exercise.

Proposition 9.4.16 For all X1, . . . , Xm, Y ∈ L the following statements are equivalent:

(a) Y is P(X1, . . . , Xm)-measurable.
(b) Y = f(X1, . . . , Xm) for some function f : Rm → R.

9.5 Exercises

In all exercises below we assume that Ω is a finite sample space with K elements.

Exercise 9.5.1 Let F be a field of events on Ω. Show that there exists a partition P of Ω such that F = F(P).

Exercise 9.5.2 Let P1, . . . , Pm be partitions of Ω. Prove that the following statements are equivalent:

(a) P1, . . . , Pm are independent under P.
(b) F(P1), . . . , F(Pm) are independent under P.

Exercise 9.5.3 For every random variable X ∈ L prove the following statements:

(i) P(X) = {Ω} if and only if X is constant.
(ii) P(X) = {{ω1}, . . . , {ωK}} if and only if X takes K different values.


Exercise 9.5.4 Let P = {A1, . . . , Am} be a partition of Ω and take X ∈ L. Show that P(X) = P if and only if there exist pairwise distinct numbers a1, . . . , am ∈ R such that

X = ∑_{i=1}^m a_i 1_{A_i}.

Deduce that for every partition P of Ω there exists a random variable X ∈ L such that P(X) = P.

Exercise 9.5.5 Let X1, . . . , Xm ∈ L and let a^i_1, . . . , a^i_{ri} ∈ R be the distinct values taken by Xi for every i ∈ {1, . . . , m}. Define

Ej = {X1 = a^1_{j1}} ∩ · · · ∩ {Xm = a^m_{jm}}

for every multi-index j = (j1, . . . , jm) ∈ {1, . . . , r1} × · · · × {1, . . . , rm}. Prove that

{Ej ; j ∈ {1, . . . , r1} × · · · × {1, . . . , rm} : Ej ≠ ∅}

is a partition of Ω.

Exercise 9.5.6 Let P be a partition of Ω. Prove that for all random variables X1, . . . , Xm ∈ L the following statements are equivalent:

(a) X1, . . . , Xm ∈ L(P).
(b) P(X1, . . . , Xm) is coarser than P.

In particular, P(X1, . . . , Xm) is the coarsest partition of Ω with respect to which X1, . . . , Xm are all measurable.

Exercise 9.5.7 Prove that for all random variables X1, . . . , Xm, Y ∈ L the following statements are equivalent:

(a) Y ∈ L(P(X1, . . . , Xm)).
(b) Y = f(X1, . . . , Xm) for some function f : Rm → R.

10 Conditional Probabilities and Expectations

In the previous chapter we discussed how sometimes we obtain only partial information about the outcome of a random experiment. The granularity in which we receive information comes in the form of a partition of the underlying sample space and only allows us to evaluate random variables that are measurable with respect to that partition. In this chapter we investigate how the assessment of probabilities and, hence, expectations is affected by the degree of granularity in which information is provided. This leads us to the concepts of conditional probabilities and conditional expectations.

Standing Assumption We fix a finite probability space (Ω, P) with Ω = {ω1, . . . , ωK}.

10.1 Conditional Probabilities

Consider an event A ∈ E such that P(A) > 0 and assume we only know that the outcome of the random experiment belongs to A. Can we say something about the probability that the outcome is a particular ω ∈ A? Or, slightly more generally, can we say something about the probability that the outcome belongs to a given event E ∈ E if all we know is that it belongs to the event A? Assuming we can, let us denote this probability by P(E|A). Clearly, this new probability should satisfy P(E|A) = 0 whenever E ∩ A = ∅ because every event that is incompatible with A is, by necessity, impossible. It is also clear that we should have P(A|A) = 1 because we are sure that A contains the outcome of the random experiment.


Moreover, the relative probabilities of the elementary events in A should not be altered, i.e., we should have

P({ω}|A) = P(ω)/P(A)

for every ω ∈ A. If P(· |A) is a probability measure, then additivity would imply that

P(E|A) = ∑_{ω ∈ E} P({ω}|A) = ∑_{ω ∈ A ∩ E} P(ω)/P(A) = P(E ∩ A)/P(A).

In particular, if A = Ω, we would have P(E|Ω) = P(E). This is intuitively clear: If we have no information about the terminal outcome of the random experiment, then the original probabilities should not change. The following result establishes the existence of a unique probability measure satisfying the above properties.

Proposition 10.1.1 For every event A ∈ E with P(A) > 0 there exists a unique probability measure P(· |A) satisfying the following properties:

(i) P(E|A) = 0 for every E ∈ E such that E ∩ A = ∅.
(ii) P(E|A) = P(E)/P(A) for every E ∈ E such that E ⊂ A.
(iii) P(A|A) = 1.
(iv) supp(P(· |A)) = A ∩ supp(P).

Moreover, for every E ∈ E we have

P(E|A) = P(E ∩ A)/P(A).    (10.1)

In particular, P(· |Ω) = P.

Proof First let P(· |A) be a probability measure satisfying properties (i) to (iv). Then, using additivity we see that

P(E|A) = P(E ∩ A|A) + P(E ∩ Ac|A) = P(E ∩ A|A) = P(E ∩ A)/P(A)

for every event E ∈ E. This proves uniqueness and tells us that P(· |A) must satisfy (10.1). Define now P(· |A) as in (10.1). To conclude we just need to prove that we obtain a probability measure with the desired properties. Clearly, P(· |A) takes values in the interval


[0, 1] and

P(Ω|A) = P(Ω ∩ A)/P(A) = P(A)/P(A) = 1.

Moreover, if E, F ∈ E are disjoint events, then

P(E ∪ F|A) = P((E ∪ F) ∩ A)/P(A) = P((E ∩ A) ∪ (F ∩ A))/P(A) = P(E ∩ A)/P(A) + P(F ∩ A)/P(A) = P(E|A) + P(F|A),

where we have used the additivity of P in the third equality. Hence, P(· |A) is indeed a probability measure. Properties (i) to (iv) are obvious from the definition of P(· |A). □


The Law of Total Probability, which we prove next, shows how to recover the probability measure from the conditional probabilities associated with the atoms of a given partition of . Proposition 10.1.5 (Law of Total Probability) Let P be a partition of  such that P(A) > 0 for every A ∈ P. Then, for every event E ∈ E we have P(E) =



P(E|A)P(A).

A∈P

Proof Take an arbitrary E ∈ E. Since P is a partition, we can write E = A∈P E ∩ A. Then, using the additivity of P and the definition of a conditional probability we easily obtain P(E) =



P(E ∩ A) =

A∈P

10.2



P(E|A)P(A).

A∈P



Expectations Conditional on Events

Assume we only know that the event A ∈ E occurred, but have no information about the exact outcome and consider a random variable X ∈ L. Unless X is constant on A, this knowledge will not be sufficient to evaluate X. However, as we have seen in the preceding section, we do have an assessment of the probability of every event in . This assessment can be used to come up with an expectation about the value of X, given our knowledge that A occurred: We just need to take the expected value of X with respect to the conditional probability measure given the event A. Definition 10.2.1 (Conditional Expectation) Let A ∈ E be such that P(A) > 0 and let X ∈ L. The conditional expectation of X with respect to A is the number EP [X|A] := EP(· |A) [X] =



P(ω|A)X(ω).



ω∈

The following simple reformulation of conditional expectations is often easier to work with.

10.2 Expectations Conditional on Events

179

Proposition 10.2.2 Let A ∈ E be such that P(A) > 0. Then, for every X ∈ L we have EP [X|A] =

EP [1A X] . P(A)

Proof It follows from the additivity of expectations that EP [X|A] =

 P({ω} ∩ A) 1  EP [1A X] X(ω) = . P(ω)X(ω) = P(A) P(A) P(A)

ω∈



ω∈A

Since conditional expectations are simply expectations with respect to conditional probabilities, they enjoy all the general properties we have established for expected values. Our next result highlights a few of them. The proof is left as an exercise. Proposition 10.2.3 Let A ∈ E be such that P(A) > 0. The following statements hold for all X, Y ∈ L, every a ∈ R, and every event E ∈ E: (i) (ii) (iii) (iv) (v)

EP [X + Y |A] = EP [X|A] + EP [Y |A]. EP [aX|A] = aEP [X|A]. EP [1E |A] = P(E|A). EP [X|A] ≥ EP [Y |A] whenever X ≥ Y on A ∩ supp(P). EP [X|A] > EP [Y |A] whenever X  Y on A ∩ supp(P).

The next result should be viewed as the counterpart to the Law of Total Probability, established in Proposition 10.1.5, and is sometimes called the Law of Total Expectation. Proposition 10.2.4 (Law of Total Expectation) Let P be a partition of  such that P(A) > 0 for every A ∈ P. Then, for every X ∈ L we have EP [X] =



P(A)EP [X|A].

A∈P

Proof Since P is a partition of , we can always express X as X=



1A X.

A∈P

Therefore, it immediately follows from Proposition 10.2.2 that EP [X] =

 A∈P

EP [1A X] =

 A∈P

P(A)EP [X|A].



10.3 Expectations Conditional on Partitions

Assume that we learn about the outcome of the random experiment in the granularity given by a partition P of the sample space Ω and consider a random variable X ∈ L. If X is P-measurable, we will be able to evaluate it as soon as we know to which of the atoms of P the outcome of the experiment belongs. On the other hand, if X is not P-measurable, we will not be able to evaluate it. We will, however, be able to determine the conditional expectation EP[X|A], where A ∈ P is the atom we know to contain the actual outcome. Hence, we will also be able to evaluate the random variable introduced in the next definition, which is called the conditional expectation with respect to P.

Definition 10.3.1 (Conditional Expectation) Let P be a partition of Ω and assume that P(A) > 0 for every A ∈ P. Moreover, let X ∈ L. The conditional expectation of X with respect to P is the random variable in L(P) defined by

EP[X|P] := ∑_{A ∈ P} EP[X|A] 1A.

The following representation, which is typically found in the literature as an equivalent way to define conditional expectations, follows directly from Proposition 10.2.2.

Proposition 10.3.2 Let P be a partition of Ω such that P(A) > 0 for every A ∈ P. Then, for every X ∈ L we have

EP[X|P] = ∑_{A ∈ P} (EP[1A X]/P(A)) 1A.
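Proposition 10.3.2 is effectively an algorithm: on every atom A, average X under P(· |A) and hold the result constant on A. The following minimal sketch (the helper cond_exp, the die example, and the intermediate partition Q are our own illustrations) also checks the tower property of Proposition 10.3.9 below and the Law of Total Expectation.

import numpy as np

def cond_exp(P, X, partition):
    # E_P[X|P] = sum over atoms A of (E_P[1_A X] / P(A)) 1_A
    out = np.empty_like(X)
    for A in partition:
        idx = sorted(A)
        out[idx] = (P[idx] * X[idx]).sum() / P[idx].sum()
    return out

P = np.full(6, 1/6)                       # a fair die, outcomes indexed 0..5
X = np.arange(1, 7, dtype=float)          # face value
parity = [{0, 2, 4}, {1, 3, 5}]           # faces {1,3,5} and {2,4,6}

Y = cond_exp(P, X, parity)
print(Y)                                  # [3. 4. 3. 4. 3. 4.]

Q = [{0, 2}, {4}, {1, 3, 5}]              # a partition finer than parity
assert np.allclose(cond_exp(P, cond_exp(P, X, Q), parity), Y)   # tower property
assert np.isclose((P * Y).sum(), (P * X).sum())                 # E[E[X|P]] = E[X]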

The next result, known as Kolmogorov's Theorem, provides a useful characterization of conditional expectations with respect to a partition.

Theorem 10.3.3 (Kolmogorov Theorem) Let P be a partition of Ω and assume that P(A) > 0 for every A ∈ P. For all X ∈ L and Y ∈ L(P) the following statements are equivalent:

(a) Y = EP[X|P].
(b) EP[1A Y] = EP[1A X] for every A ∈ P.
(c) EP[1E Y] = EP[1E X] for every E ∈ F(P).

Proof We first prove that (a) implies (b). Assume Y = EP[X|P]. Then, by Proposition 10.3.2,

1A Y = 1A ∑_{B ∈ P} (EP[1B X]/P(B)) 1B = (EP[1A X]/P(A)) 1A


for every A ∈ P. Taking expectations on both sides we obtain

EP[1A Y] = EP[(EP[1A X]/P(A)) 1A] = (EP[1A X]/P(A)) EP[1A] = EP[1A X].

Next, assume that (b) holds and take E ∈ F(P). By the definition of F(P), there exist A1, . . . , Ar ∈ P such that E = A1 ∪ · · · ∪ Ar. Note that 1E = 1_{A1} + · · · + 1_{Ar} since A1, . . . , Ar are pairwise disjoint. Hence,

EP[1E Y] = ∑_{i=1}^r EP[1_{Ai} Y] = ∑_{i=1}^r EP[1_{Ai} X] = EP[1E X].

This establishes (c). We conclude by showing that (c) implies (a). For convenience, set Z = EP[X|P]. Being P-measurable, Y and Z are constant on the atoms of P. Hence, for every A ∈ P we obtain

Z(A) = EP[1A Z]/P(A) = EP[1A X]/P(A) = EP[1A Y]/P(A) = Y(A),

where the second equality is due to Proposition 10.3.2. This shows that Z = Y and concludes the proof of the equivalence. □

As a direct consequence of the preceding theorem, we obtain the following criterion for two random variables to have the same conditional expectation with respect to a partition P. In particular, this says that, for a random variable X ∈ L, to determine the behaviour of EP[X|P] on any observable event, it suffices to know the "local behaviour" of X on that event.

Corollary 10.3.4 Let P be a partition of Ω such that P(A) > 0 for every A ∈ P. Then, for all X, Y ∈ L the following statements are equivalent:

(a) EP[X|P] = EP[Y|P].
(b) EP[1A X] = EP[1A Y] for every A ∈ P.
(c) EP[1E X] = EP[1E Y] for every E ∈ F(P).

The next result generalizes Proposition 3.6.6 by showing that, given a partition P of the sample space Ω and a random variable X ∈ L, the conditional expectation EP[X|P] is the P-measurable random variable that is closest to X in the sense of the 2-distance. In other words, one can view EP[X|P] as the best "approximation" of X amongst all the random variables that we can fully evaluate upon revelation of P. Here, we denote by PL(P) the orthogonal projection onto the space L(P).


Proposition 10.3.5 Let P be a partition of Ω such that P(A) > 0 for every A ∈ P. Then, for every X ∈ L we have EP[X|P] = PL(P)(X). In particular,

‖X − EP[X|P]‖2 = inf{‖X − Y‖2 ; Y ∈ L(P)}.

Proof Take X ∈ L and note that PL(P)(X) ∈ L(P). Since X − PL(P)(X) is orthogonal to every random variable in L(P), it must be orthogonal to 1A for every A ∈ P. In other words, for each A ∈ P we must have EP[1A(X − PL(P)(X))] = 0, or equivalently EP[1A X] = EP[1A PL(P)(X)]. The first assertion is now easily seen to follow from Theorem 10.3.3. The last assertion is a direct consequence of Proposition 3.6.5. □

Example 10.3.6 (Trivial/Discrete Partition) Consider a random variable X ∈ L. If P is the trivial partition, then

EP[X|P] = EP[X|Ω]1Ω = EP[X].

In this case, the conditional expectation is constant and coincides with the usual expected value. In line with Proposition 3.6.6, this is the best approximation of X in the absence of any information. On the other extreme, if P is the discrete partition and supp(P) = Ω,

EP[X|P] = ∑_{ω ∈ Ω} EP[X|ω]1ω = ∑_{ω ∈ Ω} X(ω)1ω = X.

In this case, since the exact outcome will be revealed to us, the best approximation of X coincides with X itself.

The next result collects some basic properties of conditional expectations with respect to a partition P of Ω. The first assertion shows that the expectation of a conditional expectation coincides with the expectation of the underlying random variable. In other words, from the point of view of their expected values, a random variable and its conditional expectation are indistinguishable. The second assertion shows that, not surprisingly, the conditional expectation of a P-measurable random variable coincides with the random variable itself: On arrival of the information P, we will already know the exact value of the random variable. The remaining statements show how conditional expectations behave with respect to the operations and inequalities between random variables.

10.3 Expectations Conditional on Partitions

183

Proposition 10.3.7 Let P be a partition of  such that P(A) > 0 for every A ∈ P. Then, the following statements hold for all X, Y ∈ L and a ∈ R: (i) (ii) (iii) (iv) (v) (vi) (vii)

EP [EP [X|P]] = EP [X]. EP [X|P] = X whenever X ∈ L(P). EP [X + Y |P] = EP [X|P] + EP [Y |P]. EP [aX|P] = aEP [X|P]. EP [X|P] ≥ EP [Y |P] whenever X ≥ Y on supp(P). EP [X|P]  EP [Y |P] whenever X  Y on supp(P). EP [X|P] > EP [Y |P] whenever X  Y on A ∩ supp(P) for every A ∈ P.

Proof To establish (i), note that, since P is a partition, we can write X=



1A X.

A∈P

Then, using the linearity of expectations, we obtain EP [EP [X|P]] =

 EP [1A X]  EP [1A ] = EP [1A X] = EP [X]. P(A)

A∈P

A∈P

Assertion (ii) follows immediately from Theorem 10.3.3. The remaining assertions are direct consequences of Proposition 10.2.3.  The following property, which is sometimes referred to as “taking out what is known”, will prove to be extremely useful in the sequel. Proposition 10.3.8 (Taking Out What Is Known) Let P be a partition of  such that P(A) > 0 for every A ∈ P. Then, for all X ∈ L and Z ∈ L(P) we have EP [ZX|P] = ZEP [X|P]. Proof Set Y = EP [X|P] and note that, being a product of P-measurable random variables, ZY is also P-measurable. Moreover, for every atom A ∈ P we have EP [1A ZY ] = Z(A)EP [1A Y ] = Z(A)EP [1A X] = EP [1A ZX], where we have used Theorem 10.3.3 in the second equality. We can now apply Theo rem 10.3.3 again to conclude that ZY = EP [ZX|P]. Another useful and fundamental property of conditional expectations has to do with iterated conditional expectations and is referred to as the “tower property”. To introduce

184

10 Conditional Probabilities and Expectations

it, let P be a partition of  and consider a random variable X ∈ L. As we have said, the conditional expectation EP [X|P] can be interpreted as the best P-measurable “approximation” of X with respect to the 2-distance. Consider now another partition of , denoted by Q, which is assumed to be finer than P. In line with the above interpretation, the two conditional expectations EP [X|P] and EP [EP [X|Q]|P] provide the best Pmeasurable “approximations” of X and EP [X|Q], respectively. The tower property tells us that these two best “proxies” coincide. In other words, if on our way to determine the best “approximation” with respect to P we first determine the best “approximation” with respect to the finer partition Q, this prior “approximation” step will have no impact on the final result. Proposition 10.3.9 (Tower Property) Let P and Q be partitions of  and assume that Q is finer than P. Moreover, assume that P(A) > 0 for every A ∈ Q. Then, for every X ∈ L we have EP [EP [X|Q]|P] = EP [X|P]. Proof Set Y = EP [X|Q] and Z = EP [Y |P]. First, note that Z is P-measurable. Further, take A ∈ P and recall that, by Proposition 9.3.3 A can be expressed as a union of atoms of Q. Then, EP [1A X] =

 B∈Q B⊂A

EP [1B X] =



EP [1B Y ] = EP [1A Y ] = EP [1A Z],

B∈Q B⊂A

where we used Theorem 10.3.3 in the second and fourth equalities. Now Theorem 10.3.3  shows that Z = EP [X|P] and this concludes the proof. If the partition generated by a random variable X ∈ L and another partition P are independent, then the conditional expectation of X given P should coincide with the standard expected value of X. Indeed, in this case, the arrival of information in the granularity of P is irrelevant when assessing the expectation of X. Proposition 10.3.10 Let P be a partition of  such that P(A) > 0 for every A ∈ P and take X ∈ L. If P and P(X) are independent under P, then EP [X|P] = EP [X]. Proof By assumption, the random variables X and 1A are independent under P for every choice of A ∈ P. Hence, Proposition 2.4.5 implies that EP [1A X] = EP [1A ]EP [X] = P(A)EP [X]

10.4 Changing the Probability Measure

185

for every A ∈ P. From this, we immediately obtain EP [X|P] =

 EP [1A X]  1A = EP [X]1A = EP [X], P(A)

A∈P

A∈P

where we used Proposition 10.3.2 in the first equality.

10.4



Changing the Probability Measure

We conclude this chapter by explaining how conditional expectations are affected by a change of the underlying probability measure. Theorem 10.4.1 Let P be a partition of  and consider a probability measure Q such that Q(A) > 0 for every A ∈ P. Assume that Q is dominated by P and let D ∈ L be a Radon-Nikodym density for Q with respect to P. Then, EP [D|P] is strictly positive and for every X ∈ L we have EQ [X|P] =

EP [DX|P] . EP [D|P]

(10.2)

Proof First of all, note that P(A) > 0 for every A ∈ P, because Q is dominated by P and satisfies Q(A) > 0 for every A ∈ P by assumption. In addition, observe that EP [1A D] = EQ [1A ] = Q(A) > 0 for every A ∈ P. Applying Proposition 10.3.2, we get EP [D|P] =

 Q(A) 1A > 0. P(A)

(10.3)

A∈P

To conclude the proof, take an arbitrary X ∈ L and note that EQ [X|P] =

 EQ [1A X]  EP [1A DX] P(A) 1A = 1A Q(A) Q(A) P(A)

A∈P

A∈P

 EP [DX|A] EP [DX|P] = 1A = EP [D|A] EP [D|P] A∈P

by Propositions 10.2.2, 10.3.2, and by (10.3).



186

10 Conditional Probabilities and Expectations

D Remark 10.4.2 If in the statement of the preceding theorem we set DP = E [D| P] , Q then we can write (10.2) as EQ [X|P] = EP [DP X|P]. Hence, DP can be viewed as a “P-conditional” Radon-Nikodym density for Q with respect to P. We explore this interpretation in Exercise 10.5.4. 

10.5

Exercises

In all exercises below we assume that (, P) is a finite probability space. Exercise 10.5.1 (Bayes Formula) Prove that for all events A, B ∈ E such that P(A) > 0 and P(B) > 0 we have P(B|A) =

P(A|B)P(B) . P(A)

Exercise 10.5.2 Show that for all events A, B ∈ E such that P(A) > 0 and P(B) > 0 the following statements are equivalent: (a) A and B are independent under P. (b) P(A|B) = P(A). (c) P(B|A) = P(B). Exercise 10.5.3 Let P be a partition of  such that P(A) > 0 for every A ∈ P. Moreover, assume that D ∈ L is positive on supp(P) and satisfies EP [D|P] = 1. Prove that any function Q : E → R of the form Q(E) =



EP [1E D|A]qA ,

A∈P

where the coefficients qA belong to [0, 1] and add up to 1, is a probability measure satisfying the following properties: (i) Q(A) = qA for every A ∈ P. (ii) Q(E|A) = EP [1E D|A] for all A ∈ P and E ∈ E. (iii) Q is dominated by P. Moreover, Q is equivalent to P if and only if D is strictly positive on supp(P) and qA > 0 for every A ∈ P. Exercise 10.5.4 (Conditional Radon-Nikodym Density) Consider a probability measure Q and assume that Q is dominated by P. Let P be a partition of  and assume

10.5 Exercises

187

that Q(A) > 0 for every A ∈ P. We say that D ∈ L is a P-conditional Radon-Nikodym density for Q with respect to P whenever EQ [X|P] = EP [DX|P] for every random variable X ∈ L. Show that for every random variable D ∈ L the following statements are equivalent: (a) D is a P-conditional Radon-Nikodym density for Q with respect to P.  & ) '−1 dQ ) on supp(P). (b) D = dQ dP EP dP )P In particular, if supp(P) = , then the random variable in (b) is the unique P-conditional Radon-Nikodym density for Q with respect to P. Exercise 10.5.5 (Conditional Radon-Nikodym Theorem) Let P be a partition of  such that P(A) > 0 for every A ∈ P. Prove that for every random variable D ∈ L the following statements are equivalent: (a) D is positive on supp(P) and EP [D|P] = 1. (b) There exists a probability measure Q such that Q is dominated by P and satisfies Q(A) > 0 for every A ∈ P and D=

) −1

 dQ dQ )) on supp(P). EP P dP dP )

In this case, D is strictly positive on supp(P) if and only if Q is equivalent to P.

11

Conditional Linear Functionals

In this chapter we introduce a special class of maps, called conditional functionals, that generalize conditional expectations with respect to a partition. A special focus is given to conditional functionals that are linear. From a mathematical point of view, this and the following chapter on extensions of conditional linear functionals are more demanding than the previous ones, but a careful reading will pay dividends when we study pricing functionals in multi-period models.

StandingAssumption We fix a finite probability space (, P) with  = {ω1 , . . . , ωK }. We also fix a partition P of  such that P(A) > 0 for every A ∈ P and a linear subspace M ⊂ L satisfying A ∈ P, X ∈ M ⇒ 1A X ∈ M.

11.1

Conditional Functionals

For a random variable X ∈ L and an event E ∈ E, the random variable 1E X can be viewed as the “localization” of X to E. We start by highlighting a number of equivalent conditions for M to be “stable” with respect to localizations. The simple proof is left as an exercise.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_11

189

190

11 Conditional Linear Functionals

Proposition 11.1.1 The following statements are equivalent: (a) 1A X ∈ M for all A ∈ P and X ∈ M. (b) 1E X ∈ M for all E ∈ F (P) and X ∈ M. (c) ZX ∈ M for all Z ∈ L(P) and X ∈ M. In this chapter we focus on special maps π : M → L(P). Before we introduce them, we highlight that the class of maps taking values into the vector space L(P) carries a very natural vector space structure. Definition 11.1.2 (Operations) For all maps π : M → L(P) and ρ : M → L(P) we define the sum of π and ρ as the map π + ρ : M → L(P) given by (π + ρ)(X) := π(X) + ρ(X). Similarly, for all π : M → L(P) and a ∈ R we define the product of π by a as the map aπ : M → L(P) given by (aπ)(X) := aπ(X). Moreover, for all π : M → L(P) and Z ∈ L(P) we define the product of π by Z as the map Zπ : M → L(P) given by (Zπ)(X) := Zπ(X).



Let π : M → L(P). After localizing X ∈ L to an atom A ∈ P, the P-measurable random variable π(1A X) is well defined. Of course, one can first determine π(X) and then localize to A to obtain 1A π(X). In general these two procedures deliver two different results (even in the case where π is linear; see Exercise 11.4.3). If π “commutes” with P-localizations, then we speak of a P-conditional functional. Definition 11.1.3 (Conditional Functional) π(0) = 0 and the P-localization property

Any map π : M → L(P) satisfying

π(1A X) = 1A π(X) for all A ∈ P and X ∈ M is called a P-conditional functional.



Example 11.1.4 (Conditional Expectations) The conditional expectation EP [·|P] is the prototype of a P-conditional functional defined on L. This follows immediately from Theorem 10.3.3; see also Proposition 10.3.8. 

11.1 Conditional Functionals

191

The following proposition shows that a conditional functional also “commutes” with localizations to the underlying observable events. This will be freely used in the sequel. Proposition 11.1.5 For a map π : M → L(P) the following statements are equivalent: (a) π is a P-conditional functional. (b) π(1E X) = 1E π(X) for all E ∈ F (P) and X ∈ L. Proof It is clear that (b) implies (a). In particular, if (b) holds, then π(0) = π(1∅ 0) = 1∅ π(0) = 0. To show that (a) implies (b), assume that π is a P-conditional functional and take an event E ∈ F (P) and X ∈ L. For every A ∈ P such that A ⊂ E we have 1A π(1E X) = π(1A 1E X) = π(1A X) = 1A π(X). Similarly, for every A ∈ P such that A ⊂ E c we have 1A π(1E X) = π(1A 1E X) = π(0) = 0. Therefore, π(1E X) =



1A π(1E X) =

A∈P



1A π(X) = 1E π(X).

A∈P A⊂E

This concludes the proof of the equivalence.



Let π : M → L(P) and let X ∈ M. Since π(X) is P-measurable, as soon as we know that A ∈ P has occurred, we also know that π(X) has taken the value π(X)(A). In general, π(X)(A) depends on the values that X takes on the whole of . However, if π is a P-conditional functional, then π(X)(A) depends only on the values that X takes on the atom A. This is the distinguishing feature of a conditional functional: Conditional on knowing A, the behaviour of π(X) is fully determined by the local behaviour of X on A. Proposition 11.1.6 Let π : M → L(P) be a P-conditional functional. Then, for every event E ∈ F (P) and for all X, Y ∈ M we have X = Y on E ⇒ π(X) = π(Y ) on E.

192

11 Conditional Linear Functionals

Proof Assume that X = Y on E so that 1E X = 1E Y . Then, we clearly have 1E π(X) = π(1E X) = π(1E Y ) = 1E π(Y ). This means that π(X) = π(Y ) on E.



Using the preceding proposition we can prove that a conditional functional is “linear” on pairwise disjoint observable events. Proposition 11.1.7 Let π : M → L(P) be a P-conditional functional. Then, for all X1 , . . . , Xm ∈ M and all pairwise disjoint E1 , . . . , Em ∈ F (P) we have π

 m i=1

 1Ei Xi

=

m 

1Ei π(Xi ).

i=1

Proof Set X = 1E1 X1 + · · · + 1Em Xm and note that, since the events E1 , . . . , Em are pairwise disjoint, X = Xi on Ei for every i ∈ {1, . . . , m}. Then, it follows from Proposition 11.2.4 that π(X) = π(Xi ) on Ei for every i ∈ {1, . . . , m}, which immediately yields the desired equality.  Remark 11.1.8 It is clear that the set of P-conditional functionals is a real vector space when equipped with the standard sum and product by a scalar. 

11.2

Localizations

To understand the nature of a P-conditional functional, it is useful to introduce the concept of its localization to an atom of P. Definition 11.2.1 (Localization) Let π : M → L(P). For every A ∈ P the functional π(·|A) : M → R defined by π(X|A) := π(X)(A) is called the localization of π to A.



Example 11.2.2 (Conditional Expectations) The conditional expectations with respect to the atoms of P are precisely the localizations of the conditional expectation with respect to P. In other words, setting π = EP [·|P], we have π(X|A) = EP [X|A] for all X ∈ L and A ∈ P. This explains our choice of notation. 

11.2 Localizations

193

We start by highlighting that a conditional functional is uniquely determined by its localizations. This is immediate to see because the localizations correspond precisely to the coordinates with respect to the canonical basis of L(P). In addition, we describe a useful way to construct conditional functionals having prescribed localizations. Proposition 11.2.3 For a map π : M → L(P) the following statements are equivalent: (a) For every A ∈ P there exists a functional πA : M → R such that πA (0) = 0 and for every X ∈ M π(X) =



πA (1A X)1A .

(11.1)

A∈P

(b) π is a P-conditional functional. In this case, we have π(X|A) = πA (1A X) for all X ∈ M and A ∈ P. Proof To prove that (a) implies (b), assume that π can be written as in (11.1). Clearly, we have π(0) = 0. Moreover, for all A ∈ P and X ∈ M we have π(1A X) =



πB (1A 1B X)1B = πA (1A X)1A = 1A π(X).

A∈P

This shows that π is a P-conditional functional satisfying π(X|A) = πA (1A X) for all X ∈ M and A ∈ P. To establish that (b) implies (a), assume that π is a P-conditional functional and take X ∈ M. Since π(X) is P-measurable, we easily see that π(X) =



π(X)(A)1A =

A∈P



π(X|A)1A .

A∈P

We conclude by noting that π(0|A) = π(0)(A) = 0 for every A ∈ P.



We state two simple yet useful properties of localizations that will be repeatedly used in the sequel without explicit mention. Proposition 11.2.4 For every P-conditional functional π : M → L(P) the following statements hold: (i) For all X ∈ M and A ∈ P we have X = Y on A ⇒ π(X|A) = π(Y |A).

194

11 Conditional Linear Functionals

(ii) For all X ∈ M and A, B ∈ P we have ⎧ ⎨π(X|A), if B = A, π(1B X|A) = ⎩0, if B = A. Proof Assertion (i) follows immediately from Proposition 11.1.6. Since 1A X = X on A, it follows from item (i) that π(1A X|A) = π(X|A). Now, assume that B = A. Since 1B X = 0 on the event A and since π(0) = 0 holds, we infer again from item (i) that  π(1B X|A) = π(0|A) = 0. This establishes (ii) and concludes the proof.

11.3

Linearity, Convexity, Monotonicity

We review here the concepts of linearity, convexity, sublinearity, and monotonicity for a map π : M → L(P). We show that, if π is a P-conditional functional, then these properties can be characterized through the corresponding properties of its P-localizations. Moreover, stronger forms of linearity, convexity, and sublinearity hold when π is a Pconditional functional. Linearity We start by highlighting that the linearity of a conditional functional is equivalent to the linearity of all of its localizations. The easy proof is omitted. Definition 11.3.1 (Linear Map) A map π : M → L(P) is said to be linear if for all X, Y ∈ M and a ∈ R we have: (1) π(X + Y ) = π(X) + π(Y ) (additivity). (2) π(aX) = aπ(X) (homogeneity).



Proposition 11.3.2 For every P-conditional functional π : M → L(P) the following statements are equivalent: (a) π is linear. (b) π(·|A) is linear for every A ∈ P. Linear conditional functionals enjoy a stronger form of homogeneity, which we call conditional homogeneity. Conditional homogeneity is just a more general version of

11.3 Linearity, Convexity, Monotonicity

195

the “taking out what is known” property of conditional expectations proved in Proposition 10.3.8. Proposition 11.3.3 Every P-conditional linear functional π : M → L(P) is Pconditionally homogeneous, i.e., we have π(ZX) = Zπ(X) for all Z ∈ L(P) and X ∈ M. Proof Take Z ∈ L(P) and X ∈ M and note that Z=



Z(A)1A .

A∈P

It follows from linearity and conditionality that π(ZX) =

 A∈P

Z(A)π(1A X) =



Z(A)1A π(X) = Zπ(X).

A∈P



Convexity and Sublinearity The notions of convexity and sublinearity for a map π : M → L(P) are straightforward generalizations of the corresponding notions for scalar-valued maps. Definition 11.3.4 (Convex/Sublinear Map) Let π : M → L(P). We say that π is convex if for all X, Y ∈ M and all a ∈ [0, 1] we have π(aX + (1 − a)Y ) ≤ aπ(X) + (1 − a)π(Y ). We say that π is concave if −π is convex. We say that π is sublinear if the following properties are satisfied for all X, Y ∈ M and a ∈ [0, ∞): (1) π(X + Y ) ≤ π(X) + π(Y ) (subadditivity). (2) π(aX) = aπ(X) (positive homogeneity). We say that π is superadditive, respectively superlinear, whenever −π is subadditive, respectively sublinear.  Like linearity, both convexity and sublinearity of a conditional functional can be characterized by the corresponding properties of its localizations. The easy verification is left as an exercise. This is important because it immediately implies that all the results for convex and sublinear functionals established in Chap. 1 carry over to conditional functionals.

196

11 Conditional Linear Functionals

Proposition 11.3.5 For every P-conditional functional π : M → L(P) the following statements hold: (i) π is convex if and only if π(·|A) is convex for every A ∈ P. (ii) π is sublinear if and only if π(·|A) is sublinear for every A ∈ P. Similar to what we established in Proposition 11.3.3 for linear conditional functionals, convex and positively-homogeneous conditional functionals exhibit a stronger form of convexity and sublinearity, respectively. Proposition 11.3.6 For every P-conditional functional π : M → L(P) the following statements hold: (i) If π is convex, then it is P-conditionally convex, i.e., for all X, Y ∈ M and Z ∈ L(P) such that 0 ≤ Z ≤ 1 we have π(ZX + (1 − Z)Y ) ≤ Zπ(X) + (1 − Z)π(Y ). (ii) If π is positively homogeneous, then it is P-conditionally positively homogeneous, i.e., for all X ∈ M and Z ∈ L(P) such that Z ≥ 0 we have π(ZX) = Zπ(X). Proof To prove (i), take X, Y ∈ M and Z ∈ L(P) such that 0 ≤ Z ≤ 1 and observe that 

ZX + (1 − Z)Y =

1A (Z(A)X + (1 − Z(A))Y )).

A∈P

Then, the convexity of π implies that π(ZX + (1 − Z)Y ) =



1A π(Z(A)X + (1 − Z(A))Y )

A∈P





1A (Z(A)π(X) + (1 − Z(A))π(Y ))

A∈P

= Zπ(X) + (1 − Z)π(Y ), where the first equality follows from Proposition 11.1.7. To establish (ii), take X ∈ M and Z ∈ L(P) such that Z ≥ 0 and observe that ZX =

 A∈P

1A Z(A)X.

11.3 Linearity, Convexity, Monotonicity

197

Then, the positive homogeneity of π implies that π(ZX) =



1A π(Z(A)X) =

A∈P



1A Z(A)π(X) = Zπ(X),

A∈P

where the first equality follows from Proposition 11.1.7.



Monotonicity We now turn to order preserving maps π : M → L(P). The following monotonicity properties are natural generalizations of the corresponding notions for scalar-valued functionals introduced in Chap. 4. Definition 11.3.7 ((Strictly) Increasing Map) increasing whenever

A map π : M → L(P) is said to be

X, Y ∈ M, X ≥ Y ⇒ π(X) ≥ π(Y ) and strictly increasing whenever X, Y ∈ M, X  Y ⇒ π(X)  π(Y ). We say that π is (strictly) decreasing if −π is (strictly) increasing.



We provide a characterization of when a conditional functional is (strictly) increasing in terms of its localizations. The simple verification is left as an exercise. Proposition 11.3.8 For every P-conditional functional π : M → L(P) the following statements hold: (i) π is increasing if and only if π(·|A) is increasing for every A ∈ P. (ii) π is strictly increasing if and only if for all X, Y ∈ M such that X  Y there exists A ∈ P such that π(X|A) > π(Y |A). In Proposition 1.7.2 we have shown that a linear functional is increasing if and only if it assigns positive values to positive random variables. The following result is the conditional counterpart. Its simple proof is left as an exercise. Proposition 11.3.9 For every linear map π : M → L(P) the following statements hold: (i) π is increasing if and only if π(X) ≥ 0 for every X ∈ M with X ≥ 0. (ii) π is strictly increasing if and only if π(X)  0 for every X ∈ M with X  0.

198

11 Conditional Linear Functionals

The above result justifies the following more common terminology for linear functionals. Definition 11.3.10 ((Strictly) Positive Map) A linear map π : M → L(P) is said to be positive if it is increasing and strictly positive if it is strictly increasing.  Example 11.3.11 (Conditional Expectations) It follows from Proposition 10.3.7 that EP [·|P] is strictly positive whenever supp(P) = .  Continuity We conclude by proving that every conditional functional that is convex is automatically continuous. Definition 11.3.12 (Continuous Map) A map π : M → L(P) is said to be continuous if for every sequence (Xn ) ⊂ M and every X ∈ M we have Xn → X ⇒ π(Xn ) → π(X).



The announced continuity result is a direct generalization of Proposition 3.4.4. Proposition 11.3.13 Let π : M → L(P) be a convex P-conditional functional. Then, for every p ∈ [1, ∞] and every r ∈ (0, ∞) there exists a constant cr,p ∈ (0, ∞) such that π(X) − π(Y )p ≤ cr,p X − Y p

(11.2)

for all X, Y ∈ Mr , where Mr = {X ∈ M ; Xp ≤ r}. In particular, π is continuous. Proof Let p ∈ [1, ∞] and r ∈ (0, ∞) be fixed. By applying Proposition 3.4.4 to the localizations π(·|A)’s, which are convex by Proposition 11.3.5, we find cr,p ∈ (0, ∞) such that |π(X|A) − π(Y |A)| ≤ cr,p X − Y p for every A ∈ P and for all X, Y ∈ Mr . Consequently, π(X) − π(Y )∞ ≤ cr,p X − Y p for all X, Y ∈ Mr . We conclude by noting that we can replace the maximum norm with the p-norm, up to adjusting the constant, by Proposition 3.1.4.  Similarly, we can extend to conditional functionals the continuity result for sublinear functionals recorded in Corollary 3.4.5.

11.4 Exercises

199

Corollary 11.3.14 Let π : M → L(P) be a sublinear P-conditional functional. Then, for every p ∈ [1, ∞] there exists a constant cp ∈ (0, ∞) such that π(X) − π(Y )p ≤ cp X − Y p .

(11.3)

for all X, Y ∈ M. Proof We proceed as in the proof of Proposition 11.3.13. Let p ∈ [1, ∞] be fixed. By applying Corollary 3.4.5 to the localizations π(·|A)’s, which are sublinear by Proposition 11.3.5, we find a constant cp ∈ (0, ∞) such that |π(X|A) − π(Y |A)| ≤ cp X − Y p for every A ∈ P and for all X, Y ∈ M. As a result, we get π(X) − π(Y )∞ ≤ cp X − Y p for all X, Y ∈ M. We conclude by noting that we can replace the maximum norm with the p-norm, up to adjusting the constant, by Proposition 3.1.4.  Remark 11.3.15 The continuity property established in (11.2) is called local Lipschitz continuity and, similarly, property (11.3) is called (global) Lipschitz continuity. It follows from the above results that, being linear, the conditional functional EP [·|P] is Lipschitz continuous. 

11.4

Exercises

In all exercises below we assume that (, P) is a finite probability space. Moreover, we assume that P is a partition of  such that P(A) > 0 for every A ∈ P and that M is a linear subspace of L satisfying A ∈ P, X ∈ M ⇒ 1A X ∈ M. Exercise 11.4.1 Show that the following statements are equivalent: (a) 1A X ∈ M for all A ∈ P and X ∈ M. (b) 1E X ∈ M for all E ∈ F (P) and X ∈ M. (c) ZX ∈ M for all Z ∈ L(P) and X ∈ M.

200

11 Conditional Linear Functionals

Exercise 11.4.2 For every A ∈ P consider the map PA : M → L defined by PA (X) = 1A X, and set MA = {PA (X) ; X ∈ M}. Prove that MA is a linear subspace of M and that PA (X) is the orthogonal projection of X onto MA for every X ∈ M. Exercise 11.4.3 Assume that the partition P is not trivial and take A ∈ P. Show that the map π : M → L(P) defined by ψ(X) = EP [X]1A is linear but not P-conditional. Exercise 11.4.4 Show that for every P-conditional functional π : M → L(P) the following statements hold: (i) (ii) (iii) (iv) (v)

π is linear if and only if π(·|A) is linear for every A ∈ P. π is convex if and only if π(·|A) is convex for every A ∈ P. π is sublinear if and only if π(·|A) is sublinear for every A ∈ P. π is increasing if and only if π(·|A) is increasing for every A ∈ P. π is strictly increasing if and only if for all X, Y ∈ M such that X  Y there exists A ∈ P such that π(X|A) > π(Y |A).

Exercise 11.4.5 Consider a map π : M → L(P). Prove that π is a P-conditional functional if and only if for every A ∈ P there exists a functional πA : M → R such that πA (0) = 0 and π(X) =



πA (1A X)1A

A∈P

for every X ∈ M. In this case, deduce the following statements: (i) (ii) (iii) (iv) (v) (vi)

For all X ∈ M and A ∈ P we have π(X|A) = πA (1A X). If πA is linear for every A ∈ P, then π is linear. If πA is convex for every A ∈ P, then π is convex. If πA is sublinear for every A ∈ P, then π is sublinear. If πA is increasing for every A ∈ P, then π is increasing. If πA is strictly increasing for every A ∈ P, then π is strictly increasing.

Exercise 11.4.6 Consider the following properties of a map π : M → L(P): (i) π is convex and π(0) = 0. (ii) π is subadditive and π(0) = 0. (iii) π is positively homogeneous. Show that any two of the above properties are equivalent under the other one.

11.4 Exercises

201

Exercise 11.4.7 Show that for every linear map π : M → L(P) the following statements hold: (i) π is increasing if and only if π(X) ≥ 0 for every X ∈ M with X ≥ 0. (ii) π is strictly increasing if and only if π(X)  0 for every X ∈ M with X  0.

12

Extensions of Conditional Linear Functionals

In this chapter we extend to conditional linear functionals the extension and representation results established for scalar functionals in Chap. 4. In particular, we obtain a conditional version of the Riesz Representation Theorem in which conditional expectations play the role of expectations. We pay special attention to conditional linear functionals that are strictly positive because the corresponding extension and representation results play a fundamental role in the study of multi-period financial markets.

Standing Assumption We fix a finite probability space (, P) with  = {ω1 , . . . , ωK }. We assume that supp(P) = . In addition, we fix a partition P of  and a linear subspace M ⊂ L satisfying A ∈ P, X ∈ M ⇒ 1A X ∈ M.

12.1

Scalarizations

In the next sections we generalize to conditional functionals the extension and representation results obtained in Chap. 4. This could be achieved by using the extension and representation results for the corresponding localizations. However, it is more convenient to combine localizations and work with another standard functional associated to a conditional functional, called its scalarization.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_12

203

204

12 Extensions of Conditional Linear Functionals

Definition 12.1.1 (Scalarization) Let π : M → L(P) be a P-conditional functional. The functional απ : M → R defined by απ (X) :=



π(X|A)

A∈P

is called the scalarization of π.



The following result highlights the link between localizations and scalarizations and shows that a conditional functional is completely determined by its scalarization. This follows immediately from Proposition 11.2.4. Proposition 12.1.2 Let π : M → L(P) be a P-conditional functional. Then, for all X ∈ M and A ∈ P we have π(X|A) = απ (1A X). The next result shows that all the properties of conditional functionals we have discussed in the preceding section, namely linearity, convexity, sublinearity, and monotonicity, can be characterized by the corresponding properties of their scalarizations. This is a direct consequence of Proposition 12.1.2 and the results on localizations established in Sect. 11.3. The explicit verification is left as an exercise. Proposition 12.1.3 For every P-conditional functional π : M → L(P) the following statements hold: (i) (ii) (iii) (iv) (v)

π π π π π

is linear if and only if απ is linear. is convex if and only if απ is convex. is sublinear if and only if απ is sublinear. is increasing if and only if απ is increasing. is strictly increasing if and only if απ is strictly increasing.

Remark 12.1.4 (Scalarizations vs Localizations) In this chapter we are mainly interested in extension results for strictly-positive conditional functionals. The preceding result tells us that we can characterize the strict positivity of a conditional functional in terms of the strict positivity of its scalarization. As seen in Proposition 11.3.8, this is not true if we use localizations instead of scalarizations. This explains our focus on scalarizations in this part. An alternative way to scalarize a conditional functional is presented in Exercise 12.4.2. 

12.2 Extension Results

12.2

205

Extension Results

Standing Assumption Throughout the whole section we assume that M is a proper linear subspace of L.

In this section we are concerned with generalizing to conditional linear functionals the extension results established for standard linear functionals in Chap. 4. Definition 12.2.1 (Extension) Let π : M → L(P) and let N be a linear subspace of L such that M ⊂ N . A map ψ : N → L(P) satisfying ψ(X) = π(X) for every X ∈ M is said to be an extension of π (to N ).  Extension of Conditional Linear Functionals We start by focusing on conditional linear extensions. As a preliminary result, we show that the link between conditional linear functionals and their scalarizations can be exploited to reduce the problem of the existence and multiplicity of conditional extensions to an extension problem for standard linear functionals. Here, for a P-conditional linear functional π : M → L(P) we set Eπ (L) := {P-conditional linear extensions of π to L}. Moreover, we define Eαπ (L) := {linear extensions of απ to L}. Proposition 12.2.2 Let π : M → L(P) be a P-conditional linear functional. Then, for every X ∈ L we have {ψ(X) ; ψ ∈ Eπ (L)} =



 ψA (1A X)1A ; ψA ∈ Eαπ (L) for every A ∈ P .

A∈P

Proof To prove the inclusion “⊂”, take an extension ψ ∈ Eπ (L). Then, for every X ∈ M we have αψ (X) =

 A∈P

ψ(X|A) =

 A∈P

π(X|A) = απ (X),

206

12 Extensions of Conditional Linear Functionals

showing that αψ is an extension of απ . Further, Proposition 12.1.3 shows that αψ is linear, so that αψ ∈ Eαπ (L). Since ψ(X) =



ψ(X|A)1A =

A∈P



αψ (1A X)1A

A∈P

by Proposition 12.1.2, the desired inclusion holds. To show the inclusion “⊃”, let ψA belong to Eαπ (L) for every A ∈ P and define a map ψ : L → L(P) by ψ(X) =



ψA (1A X)1A .

A∈P

It suffices to show that ψ ∈ Eπ (L). By Proposition 11.2.3, ψ is a P-conditional linear functional. Moreover, for every X ∈ M we have ψ(X) =

 A∈P

απ (1A X)1A =



π(X|A)1A = π(X)

A∈P

by Proposition 12.1.2, showing that ψ is an extension of π.



In Theorem 4.2.3 we saw that a linear functional defined on M admits infinitely many linear extensions to L. Hence, the preceding result implies that a conditional linear functional defined on M also admits infinitely many conditional linear extensions to L. The following theorem records this result and complements it by a description of the range of possible values attained by conditional linear extensions. Theorem 12.2.3 For every P-conditional linear functional π : M → L(P) the following statements hold: (i) π admits a P-conditional linear extension to L. (ii) For every X ∈ L we have {ψ(X) ; ψ ∈ Eπ (L)} = {p ∈ L(P) ; p(A) ∈ (X|A) for every A ∈ P}, where ⎧ ⎨{π(1 X|A)}, if 1 X ∈ M, A A

(X|A) = ⎩R, if 1A X ∈ / M. In particular, π admits infinitely many P-conditional linear extensions to L.

12.2 Extension Results

207

Proof It follows from Theorem 4.2.3 and Proposition 12.2.2 that π has infinitely many Pconditional linear extensions to L. By Proposition 12.2.2, to conclude the proof it suffices to show, that ⎧ ⎨{π(1 X|A)}, if 1 X ∈ M, A A {α(1A X) ; α ∈ Eαπ (L)} = ⎩R, if 1A X ∈ / M, for every X ∈ L and every A ∈ P. This is a direct consequence of Theorem 4.2.3 once we  observe that απ (1A X) = π(1A X|A) whenever 1A X ∈ M, by Proposition 12.1.2. Remark 12.2.4 (Conditional Linear Extensions to Subspaces) Let π : M → L(P) be a P-conditional linear functional and N be a linear subspace of L containing M. It is clear that, by restricting to N any P-conditional linear extension of π to the full space L, we obtain a P-conditional linear extension of π to the subspace N . Hence, all the results of this section remain valid if we replace L with N .  Extensions of Strictly-Positive Conditional Linear Functionals

Standing Assumption Throughout the remainder of this section we assume that M contains a strictly-positive random variable.

The possibility to extend a strictly-positive conditional linear functional preserving strict positivity will be key to extending the arbitrage theory to the setting of a multiperiod financial market. Fortunately, we can once again rely on scalarizations and infer the existence of such extensions from the extension results for scalar functionals obtained in Chap. 4. The proof of the following proposition follows the same lines as that of Proposition 12.2.2 and is left as an exercise. Here, for a strictly-positive P-conditional linear functional π : M → L(P) we set Eπ+ (L) := {strictly-positive P-conditional linear extensions of π to L}. Moreover, we define Eα+π (L) := {strictly-positive linear extensions of απ to L}. Proposition 12.2.5 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Then, for every X ∈ L we have {ψ(X) ; ψ ∈ Eπ+ (L)} =

 A∈P

 ψA (1A X)1A ; ψA ∈ Eα+π (L) for every A ∈ P .

208

12 Extensions of Conditional Linear Functionals

Thanks to this proposition, the existence of strictly-positive conditional linear extensions can be immediately derived from our results on standard functionals. Theorem 12.2.6 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Then the following statements hold: (i) π admits a strictly-positive P-conditional linear extension to L. (ii) For every X ∈ L and every A ∈ P with 1A X ∈ / M the set {ψ(X|A) ; ψ ∈ Eπ+ (L)} is a (nonempty) bounded open interval. In particular, π admits infinitely many strictly-positive P-conditional linear extensions to the space L. Proof It follows from Theorem 4.2.6 and Proposition 12.2.5 that (i) holds. Now, take X ∈ L and A ∈ P satisfying 1A X ∈ / M. Note that {ψ(X|A) ; ψ ∈ Eπ+ (L)} = {ψA (1A X) ; ψA ∈ Eα+π (L)} by Proposition 12.2.5. That the above set is a (nonempty) bounded open interval follows immediately from Theorem 4.2.6. This establishes (ii) and concludes the proof.  We complement the above existence result by a description of the range of values attained by strictly-positive conditional linear extensions, which is based on the corresponding result for standard functionals. To this end, we introduce the conditional counterpart to the compatibility notion recorded in Definition 4.2.7. Definition 12.2.7 Let π : M → L(P) be a strictly-positive P-conditional linear functional. For X ∈ L and p ∈ L(P) we say that p satisfies the π-compatibility condition at X if the following conditions hold for every A ∈ P: (1) p(A) > π(Z|A) for every Z ∈ M such that Z ≤ X and Z  X on A. (2) p(A) < π(Z|A) for every Z ∈ M such that Z ≥ X and Z  X on A. The set of all such p’s is denoted by (X). Moreover, we set π − (X) := inf (X) and π + (X) := sup (X).



We record a simple characterization of the above compatibility condition that will be used in the proof of the following results without reference.

12.2 Extension Results

209

Lemma 12.2.8 Let π : M → L(P) be a strictly-positive P-conditional linear functional. For X ∈ L and p ∈ L(P) we have p ∈ (X) if and only if the following conditions hold for every A ∈ P: (1) p(A) > π(Z|A) for every Z ∈ M such that Z  X on A. (2) p(A) < π(Z|A) for every Z ∈ M such that Z  X on A. Proof Clearly, we only have to establish the “only if” implication. To this end, assume that p ∈ (X) and take A ∈ P. Moreover, let W, Z ∈ M satisfy W  X  Z on A. For any U, V ∈ M such that U ≤ X ≤ V we clearly have 1A W +1Ac U ≤ X ≤ 1A Z +1Ac V as well as 1A W + 1Ac U  X  1A Z + 1Ac V on A. As p satisfies the π-compatibility condition at X, we get π(W |A) = π(1A W + 1Ac U |A) < p(A) < π(1A Z + 1Ac V |A) = π(Z|A) by Proposition 11.2.4. This shows the desired implication.



The next result shows the connection between strictly-positive conditional linear extensions and the above compatibility condition. Theorem 12.2.9 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Then, for every X ∈ L we have

(X) = {ψ(X) ; ψ ∈ Eπ+ (L)}. In particular, the following statements hold: (i) π − (X) = inf{ψ(X) ; ψ ∈ Eπ+ (L)}. (ii) π + (X) = sup{ψ(X) ; ψ ∈ Eπ+ (L)}. Proof For every X ∈ L we denote by απ (X) the set of all numbers in R satisfying the απ -compatibility condition at X as introduced in Definition 4.2.7. Now, fix X ∈ L. Then, it follows from Theorem 4.2.8 and Proposition 12.2.5 that {ψ(X) ; ψ ∈ Eπ+ (L)} = {p ∈ L(P) ; p(A) ∈ απ (1A X) for every A ∈ P}. To conclude the proof we have to show that

(X) = {p ∈ L(P) ; p(A) ∈ απ (1A X) for every A ∈ P}.

(12.1)

To show the inclusion “⊂” in (12.1), take an arbitrary p ∈ (X) and fix A ∈ P. Let W, Z ∈ M satisfy W  1A X  Z. Note that W  X  Z on A. Further, observe that

210

12 Extensions of Conditional Linear Functionals

W ≤ 0 ≤ Z on Ac , so that W ≤ 1A W and 1A Z ≤ Z. Since p satisfies the π-compatibility condition at X, it follows that απ (W ) ≤ απ (1A W ) = π(W |A) < p(A) < π(Z|A) = απ (1A Z) ≤ απ (Z), where we used the monotonicity of απ recorded in Propositions 12.1.3 and 12.1.2. This shows that p(A) satisfies the απ -compatibility condition at 1A X. To establish the inclusion “⊃” in (12.1), take p ∈ L(P) and assume that p(A) satisfies the απ -compatibility condition at 1A X for every A ∈ P. Now take W, Z ∈ M such that W  X  Z on A for some A ∈ P. Since 1A W  1A X  1A Z, we must have π(W |A) = απ (1A W ) < p(A) < απ (1A Z) = π(Z|A) by Proposition 12.1.2. This shows that p satisfies the π-compatibility condition at X and concludes the proof.  The following result collecting a variety of properties of the bounds introduced in Definition 12.2.7 is a direct consequence of the preceding results. Proposition 12.2.10 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Then, the following statements hold: (i) For every X ∈ L we have π + (X) = −π − (−X). (ii) π − : L → L(P) is a superlinear increasing P-conditional extension of π such that: (a) π − (X|A) < 0 for all X ∈ L and A ∈ P with X  0 on A. (b) π − (X + Z) = π − (X) + π(Z) for all X ∈ L and Z ∈ M. (iii) π + : L → L(P) is a sublinear increasing P-conditional extension of π such that: (a) π + (X|A) > 0 for all X ∈ L and A ∈ P with X  0 on A. (b) π + (X + Z) = π + (X) + π(Z) for all X ∈ L and Z ∈ M. (iv) For all X ∈ L and A ∈ P with 1A X ∈ M we have π − (X|A) = π + (X|A) = π(1A X|A) and {p(A) ; p ∈ (X)} = {π(1A X|A)}. (v) For all X ∈ L and A ∈ P with 1A X ∈ / M we have π − (X|A) < π + (X|A) and {p(A) ; p ∈ (X)} = (π − (X|A), π + (X|A)).

12.2 Extension Results

211

Proof To prove (i), it suffices to note that, by Theorem 12.2.9, for every X ∈ L we have π − (−X) =

inf

ψ∈Eπ+ (L)

ψ(−X) =

inf

{−ψ(X)} = −

ψ∈Eπ+ (L)

sup

ψ∈Eπ+ (L)

ψ(X) = −π + (X).

Next, we focus on (ii). It follows from Theorem 12.2.6 that π − (X) is a well-defined random variable for every X ∈ L. Since π − (X) is the infimum of P-measurable random variables, Corollary 9.4.7 implies that π − (X) is P-measurable for every X ∈ L. By Theorem 12.2.9, we have π − (X) = infψ∈Eπ+ (L) ψ(X) for every X ∈ L. This immediately shows that π − is an extension of π satisfying π − (X + Y ) = ≥

inf

{ψ(X) + ψ(Y )}

inf

ψ(X) +

ψ∈Eπ+ (L) ψ∈Eπ+ (L)

inf

ψ∈Eπ+ (L)

ψ(Y )

= π − (X) + π − (Y ) for all X, Y ∈ L, as well as π − (aX) =

{aψ(X)} = a

inf

ψ∈Eπ+ (L)

inf

ψ∈Eπ+ (L)

ψ(X) = aπ − (X)

for all X ∈ L and a ∈ [0, ∞). Hence, π − is superlinear. Moreover, π − (1A X) =

inf

{1A ψ(X)} = 1A

ψ∈Eπ+ (L)

inf

ψ∈Eπ+ (L)

ψ(X) = 1A π − (X)

for all X ∈ L and A ∈ P. Since π − (0) = π(0) = 0, we infer that π − is a P-conditional functional. To establish monotonicity, take X, Y ∈ L such that X ≥ Y and note that ψ(X) ≥ ψ(Y ) for every ψ ∈ Eπ+ (L). Consequently, π − (X) =

inf

ψ∈Eπ+ (L)

ψ(X) ≥

inf

ψ∈Eπ+ (L)

ψ(Y ) = π − (Y ),

showing that π − is increasing. To prove that (a) holds, take X ∈ L such that X  0 and note that ψ(X)  0 for every ψ ∈ Eπ+ (L). Thus, we have π − (X) =

inf

ψ∈Eπ+ (L)

ψ(X)  0.

212

12 Extensions of Conditional Linear Functionals

Finally, to prove that (b) holds, take X ∈ L and Z ∈ M and observe that π − (X + Z) =

inf

{ψ(X) + π(Z)} =

ψ∈Eπ+ (L)

inf

ψ∈Eπ+ (L)

ψ(X) + π(Z) = π − (X) + π(Z).

The proof of assertion (iii) can be obtained by following the lines of the proof of item (ii). Alternatively, it follows by combining (i) and (ii). Assertion (iv) is a direct consequence of Theorem 12.2.9 and assertion (v) follows by combining Theorem 12.2.6 and Theorem 12.2.9.  Proposition 12.2.11 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Then, for every X ∈ L the following statements hold: (i) π − (X) = sup{π(Z) ; Z ∈ M, Z ≤ X}. (ii) π + (X) = inf{π(Z) ; Z ∈ M, Z ≥ X}. Moreover, the supremum in (i) and the infimum in (ii) are attained. Proof It follows immediately from Theorem 12.2.9 that π − (X) ≥ sup{π(Z) ; Z ∈ M, Z ≤ X},

(12.2)

π + (X) ≤ inf{π(Z) ; Z ∈ M, Z ≥ X}.

(12.3)

Now, fix A ∈ Pt and note that, by Proposition 12.2.5 and Theorem 12.2.9, we have π − (X|A) = π + (X|A) =

inf

ψA (1A X),

sup

ψA (1A X).

ψA ∈Eα+π (L)

ψA ∈Eα+π (L)

By Theorem 4.2.8 and Proposition 4.2.10, there exist WA , ZA ∈ M satisfying the inequalities WA ≤ 1A X ≤ ZA and such that π − (X|A) = (απ )− (1A X) = απ (WA ), π + (X|A) = (απ )+ (1A X) = απ (ZA ). Observe that WA ≤ 0 ≤ ZA on A, so that WA ≤ 1A WA and ZA ≥ 1A ZA . Set W =

 A∈P

1A WA ∈ M and Z =

 A∈P

1A ZA ∈ M.

12.3 Representation Results

213

Note that W ≤ X ≤ Z. Moreover, for every A ∈ P we have π − (X|A) = απ (WA ) ≤ απ (1A WA ) = απ (1A W ) = π(W |A),

(12.4)

π + (X|A) = απ (ZA ) ≥ απ (1A ZA ) = απ (1A Z) = π(Z|A),

(12.5)

where we used the monotonicity of απ recorded in Propositions 12.1.3 and 12.1.2. Assertion (iv) follows by combining (12.2) and (12.4). Assertion (v) follows by combining (12.3) and (12.5).  Remark 12.2.12 (Strictly-Positive Conditional Linear Extensions to Subspaces) Let π : M → L(P) be a strictly-positive P-conditional linear functional and N a linear subspace of L containing M. It is clear that, by restricting to N any strictly-positive P-conditional linear extension of π to the full space L, we obtain a strictly-positive Pconditional linear extension of π to N . Hence, all the results of this section remain valid if we replace L by its subspace N . 

12.3

Representation Results

We had already mentioned that conditional expectations are the prototype examples of conditional linear functionals. In this section we show that a conditional linear functional can always be represented in terms of a conditional expectation. The key to this result, which generalizes the classical Riesz Representation to the conditional setting, is the fact that conditional linear functionals are fully determined by their scalarizations. This allows us to exploit the representation results for standard linear functionals and translate them to the conditional case. We start by generalizing the notion of a Riesz density to our conditional setting. Definition 12.3.1 (Riesz Density) Let π : M → L(P) be a P-conditional linear functional. We say that a random variable D ∈ L is a Riesz density for π whenever π(X) = EP [DX|P] for every random variable X ∈ M.



As announced above, there exists an intimate link between Riesz densities for conditional linear functionals and Riesz densities for their scalarizations. Proposition 12.3.2 For every P-conditional linear functional π : M → L(P) and every D ∈ L the following statements hold:

214

12 Extensions of Conditional Linear Functionals

 1 (i) Assume that D is a Riesz density for π. Then, the random variable A∈P P(A) 1A D is a Riesz density for απ .  (ii) Assume that D is a Riesz density for απ . Then, the random variable A∈P P(A)1A D is a Riesz density for π. Proof To prove (i), assume that D is a Riesz density for π and set D  = Then, for every X ∈ L, απ (X) =



π(X|A) =

A∈P





1 A∈P P(A) 1A D.

EP [DX|A] = EP [D  X]

A∈P

by Proposition 10.2.2, showing that D  is a Riesz density for απ . To prove (ii), assume that  D is a Riesz density for απ and set D  = A∈P P(A)1A D. Then, for every X ∈ L, π(X) =



απ (1A X)1A =

A∈P



EP [D1A X]1A = EP [D  X|P]

A∈P

by Propositions 10.2.2 and 12.1.2, showing that D  is a Riesz density for π.



We can now extend the classical Riesz representation to conditional linear functionals. We start by focusing on the simple case where M is the entire space L, in which case every conditional linear functional admits a unique Riesz density. Theorem 12.3.3 (Riesz Representation) Every P-conditional π : L → L(P) admits a unique Riesz density D ∈ L, given by D=

linear

functional

  π(1ω |A) 1ω . P(ω|A)

A∈P ω∈A

Proof It follows from Theorem 4.3.2 that απ admits a unique Riesz density D ∈ L, which has the form D=

 απ (1ω ) 1ω . P(ω)

ω∈

Proposition 12.3.2 implies that π has a unique Riesz density D  ∈ L given by D =

 A∈P

P(A)1A D =

  P(A)   π(1ω )(A) απ (1ω )1ω = 1ω . P(ω) P(ω|A)

A∈P ω∈A

A∈P ω∈A



Next, we show that every conditional linear functional defined on a proper subspace of L admits an infinity of Riesz densities.

12.3 Representation Results

215

Theorem 12.3.4 Assume that M = L and consider a P-conditional linear functional π : M → L(P). For a random variable D ∈ L the following statements are equivalent: (a) D is a Riesz density for π. (b) D is the Riesz density of a P-conditional linear extension of π to L. In particular, π admits infinitely many Riesz densities. Proof It is clear that (b) implies (a). To prove the converse implication, assume that D is a Riesz density for π and let D  ∈ L be given by D =

 A∈P

1 1A D. P(A)

Since D  is a Riesz density for απ by Proposition 12.3.2, we infer from Theorem 4.3.3 that D  is the Riesz density of a linear extension α ∈ Eαπ (L). The map ψ : L → L(P) given by ψ(X) =



α(1A X)1A

A∈P

belongs to Eπ (L) by Proposition 12.2.2 and satisfies ψ(X) =

 A∈P

EP [D  1A X]1A =

 EP [1A DX] 1A = EP [DX|P] P(A)

A∈P

for every X ∈ L by Proposition 10.3.2, showing that D is a Riesz density for ψ. Since π admits infinitely many P-conditional linear extensions to L by Theorem 12.2.6, we conclude that π admits infinitely many Riesz densities.  We complement the above statement by showing that a conditional linear functional defined on M always admits a unique Riesz density belonging to the same space M. Here, we denote by PM the orthogonal projection onto M. Theorem 12.3.5 Assume that M = L and consider a P-conditional linear functional π : M → L(P). Then, for every Riesz density D ∈ L for π the following statements hold: (i) PM (D) is a Riesz density for π belonging to M. (ii) PM (D) = PM (D  ) for every Riesz density D  ∈ L for π. In particular, π admits a unique Riesz density belonging to M.

216

12 Extensions of Conditional Linear Functionals

Proof As a preliminary step we establish that PM is a P-conditional (linear) functional. To this effect, take X ∈ L and A ∈ P and note first that, for every Y ∈ M, we have EP [(1A X − 1A PM (X))Y ] = EP [(X − PM (X))1A Y ] = 0, where we used that X − PM (X) ∈ M⊥ . Hence, the difference 1A X − 1A PM (X) belongs to M⊥ and therefore PM (1A X) = 1A PM (X) by Proposition 3.6.3. Let D ∈ L be a Riesz density for π and define D  ∈ L by D =

 A∈P

1 1A D. P(A)

We know from Proposition 12.3.2 that D  is a Riesz density for απ . Then, it follows from Theorem 4.3.4 and from the P-conditionality of PM that PM (D  ) =

 A∈P

1 1A PM (D) P(A)

is the unique Riesz density for απ belonging to M. Using again Proposition 12.3.2 we infer that PM (D) is the unique Riesz density for π belonging to M. This concludes the proof of the theorem.  Representations and Strict Positivity We now specialize our representation result to conditional linear functionals that are strictly positive.

Standing Assumption Throughout the remainder of this section we assume that M contains a strictly-positive random variable.

In line with the results established in the context of standard functionals, the strict positivity of a conditional linear functional can be characterized at the level of Riesz densities. Theorem 12.3.6 For every P-conditional linear functional π : M → L(P) the following statements are equivalent: (a) π is strictly positive. (b) π admits a strictly-positive Riesz density.

12.4 Exercises

217

In this case, π admits a unique strictly-positive Riesz density whenever M = L, and infinitely many strictly-positive Riesz densities whenever M = L. Proof That (b) implies (a) follows immediately from Proposition 10.3.7. To show the converse implication, assume that π is strictly positive. Then, απ is also strictly positive by Proposition 12.1.3 and so it admits a strictly-positive Riesz density by Theorem 4.3.5. As a direct consequence of Proposition 12.3.2, we infer that π admits a strictly-positive Riesz density as well. If M = L, then π admits a unique Riesz density by Theorem 12.3.3. Otherwise, απ admits an infinity of strictly-positive Riesz densities by Theorem 4.3.5, each of which yields a different strictly-positive Riesz density for π, again by Proposition 12.3.2.  Remark 12.3.7 (On Strictly-Positive Riesz Densities) The observations made in Remark 4.3.6 apply also to conditional linear functionals: If π : M → L(P) is a strictly-positive P-conditional linear functional and M = L, then π always admits Riesz densities that are not strictly positive and the unique Riesz density belonging to M need not be strictly positive (in fact, it may even fail to be positive). In particular, the strict positivity of π cannot be characterized in terms of the strict positivity of its unique Riesz density in M. 

12.4

Exercises

In all exercises below we assume that (, P) is a finite probability space such that supp(P) = . Moreover, we assume that P is a partition of  such that P(A) > 0 for every A ∈ P and that M is a linear subspace of L satisfying A ∈ P, X ∈ M ⇒ 1A X ∈ M. Exercise 12.4.1 Show that for every P-conditional functional π : M → L(P) the following statements hold: (i) (ii) (iii) (iv) (v)

π π π π π

is linear if and only if απ is linear. is convex if and only if απ is convex. is sublinear if and only if απ is sublinear. is increasing if and only if απ is increasing. is strictly increasing if and only if απ is strictly increasing.

218

12 Extensions of Conditional Linear Functionals

Exercise 12.4.2 For every P-conditional functional π : M → L(P) we define a functional βπ : M → R by setting βπ (X) = EP [π(X)]. Show that the following statements hold: (i) (ii) (iii) (iv) (v) (vi) (vii)

(1A X) for all X ∈ M and A ∈ P. π(X|A) = βπP(A) π is linear if and only if βπ is linear. π is convex if and only if βπ is convex. π is sublinear if and only if βπ is sublinear. π is increasing if and only if βπ is increasing. π is strictly increasing if and only if βπ is strictly increasing. Every Riesz density for π is a Riesz density for βπ and viceversa.

This shows a different way to scalarize a conditional linear functional. Exercise 12.4.3 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Show that: (i) π − is neither linear nor strictly decreasing and for some X ∈ L we may have X  0 ⇒ π − (X) > 0. (ii) π + is neither linear nor strictly increasing and for some X ∈ L we may have X  0 ⇒ π + (X) < 0. Exercise 12.4.4 Let π : M → L(P) be a strictly-positive P-conditional linear functional. Show that the following statements hold: (i) απ − = (απ )− . (ii) απ + = (απ )+ . Exercise 12.4.5 Assume P and Q are partitions of  and Q is finer than P. For every P-conditional linear functional π : L(Q) → L(P) prove the following statements: (i) For every Riesz density D ∈ L of π the random variable EP [D|Q] is a Q-measurable Riesz density for π. (ii) For all Riesz densities D1 , D2 ∈ L of π we have EP [D1 |Q] = EP [D2 |Q]. (iii) There exists a unique Q-measurable Riesz density for π.

12.4 Exercises

219

(iv) Let D ∈ L(Q) be a Riesz density for π. Then, the following assertions are equivalent: (a) π is strictly positive. (b) D is strictly positive. Exercise 12.4.6 Let M be a linear subspace of L containing the constant random variable 1. Prove that for every P-conditional linear functional π : M → L(P) the following statements are equivalent: (a) There exists a probability measure Q that is equivalent to P and such that ) −1

 dQ )) dQ P EP dP dP ) is a Riesz density for π. (b) There exists a probability measure Q that is equivalent to P and such that for every X ∈ M we have π(X) = EQ [X|P]. (c) π is strictly positive and π(1) = 1. Show that, in this case, there exists a unique probability as above if P is trivial and M = L, and infinitely many otherwise.

13

Information and Stochastic Processes

In this short chapter we describe how to model the flow of information through time. As time progresses, we will typically know more and more about the terminal outcome of a random experiment. In line with the framework developed in the previous chapters, the flow of information can be represented by a sequence of refining partitions, which we call an information structure. A time sequence of random variables is usually called a stochastic process. If each component of a process is measurable with respect to the corresponding component of the information structure, then we speak of an adapted process.

Standing Assumption We fix a finite sample space  such that  = {ω1 , . . . , ωK }.

13.1

Information Structures

In this section we discuss how to model the process of updating information about the outcome of our random experiment through time.

Standing Assumption Throughout the chapter we consider a finite number of time points or dates t = 0, 1, . . . , T , where T ∈ N is a fixed terminal date.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_13

221

222

13 Information and Stochastic Processes

As times passes, the level of information about the terminal outcome will generally tend to increase, because some of the possible outcomes may turn out to be impossible once additional information is revealed. In the preceding chapters we have discussed how to use the notion of a partition in order to describe the arrival of new information. We can then naturally model the sequence of information by means of a sequence of partitions. Definition 13.1.1 (Information Structure) A sequence P = (P0 , . . . , PT ) of partitions of  is called an information structure on  if Pt +1 is finer than Pt for every date t ∈ {0, . . . , T − 1}.  If information is revealed through a sequence of partitions, then, as time progresses, more and more information is made available to us. We illustrate by a simple example how to visualize an information structure. Example 13.1.2 Consider the sample space  = {ω1 , ω2 , ω3 , ω4 }. We assume that T = 2 and consider the information structure given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}. The information structure can be visualized by using the following tree: {ω1 , ω2 } { } {ω3 , ω4 }

{ω1 } {ω2 } {ω3 } {ω4 }

We imagine that no information about the terminal outcome is revealed to us at time 0. Thus, we only know that the final outcome will belong to . At time 1, however, the first piece of real information arrives. The space  is divided into the two atoms {ω1 , ω2 } and {ω3 , ω4 } and we learn to which of these two sets the terminal outcome belongs. Assume we learn that it belongs to {ω1 , ω2 }. Finally, at time 2, the event {ω1 , ω2 } will split into the elementary events {ω1 } and {ω2 } and we will learn which of ω1 and ω2 is indeed the realized outcome. It follows that we will gradually find out the path to the terminal outcome by following the branch of the tree to which it belongs.  The notion of a filtration is the counterpart to the notion of an information structure in terms of fields. Definition 13.1.3 (Filtration) A sequence F = (F0 , . . . , FT ) of fields on  is called a  filtration on  if Ft ⊂ Ft +1 for every t ∈ {0, . . . , T − 1}.


It is immediate from the definition of a finer partition that the collection of the fields associated to the components of a given information structure is a filtration. The easy verification is left as an exercise.

Proposition 13.1.4 For every information structure P on Ω the sequence of fields (F(P0), . . . , F(PT)) is a filtration on Ω.

13.2

Stochastic Processes

A sequence of random variables indexed by our time parameter is called a stochastic process.

Definition 13.2.1 (Stochastic Process) A sequence of random variables in L, X := (X0, . . . , XT) ∈ L^{T+1}, is called a (stochastic) process. The process with zero components is denoted by 0. The set of processes is denoted by L.

By definition, the set of processes coincides with the Cartesian product L^{T+1}. This implies that the structure of the space L can be easily transferred to the set of processes in a componentwise fashion. In what follows we are mainly interested in introducing notation related to the vector space and the order structure of the space of processes.

Definition 13.2.2 (Operations) Consider two stochastic processes X, Y ∈ L. The sum of X and Y is the process X + Y ∈ L defined by X + Y := (X0 + Y0, . . . , XT + YT). For every a ∈ R, the product of X by a is the process aX ∈ L defined by aX := (aX0, . . . , aXT). For every Z ∈ L the product of X by Z is the process ZX ∈ L defined by ZX := (ZX0, . . . , ZXT).


The product of X and Y is the process XY ∈ L defined by XY := (X0 Y0 , . . . , XT YT ).


The proof of the following result is left as a straightforward exercise.

Theorem 13.2.3 For every ω ∈ Ω and every t ∈ {0, . . . , T} consider the process X_{ω,t} ∈ L defined by X_{ω,t} = (0, . . . , 0, 1_ω, 0, . . . , 0), where 1_ω occupies the (t + 1)th position in the sequence. Then,

B = {X_{ω1,0}, . . . , X_{ωK,0}, . . . , X_{ω1,T}, . . . , X_{ωK,T}}

is a vector space basis for L. In particular, we have dim(L) = K(T + 1).

In the same vein, we can easily transfer the order relations for random variables to the set of stochastic processes.

Definition 13.2.4 (Equalities and Inequalities) For every event E ∈ E and all processes X, Y ∈ L we write:

(i) X = Y on E whenever Xt = Yt on E for every t ∈ {0, . . . , T}.
(ii) X ≠ Y on E whenever Xt ≠ Yt on E for some t ∈ {0, . . . , T}.
(iii) X ≥ Y on E whenever Xt ≥ Yt on E for every t ∈ {0, . . . , T}.
(iv) X ⪈ Y on E whenever X ≥ Y on E and X ≠ Y on E.
(v) X > Y on E whenever Xt > Yt on E for every t ∈ {0, . . . , T}.

The converse inequalities X ≤ Y, X ⪇ Y, and X < Y are defined similarly. If E = Ω, then we typically omit any reference to E.

Definition 13.2.5 (Positive Process) Let E ∈ E be fixed and consider a stochastic process X ∈ L. We say that:

(i) X is positive on E whenever X ≥ 0 on E.
(ii) X is nonzero positive on E whenever X ⪈ 0 on E.
(iii) X is strictly positive on E whenever X > 0 on E.

We say that X is (nonzero/strictly) negative on E if −X is (nonzero/strictly) positive on E. We omit any reference to E in the case that E = Ω.


13.3


Adapted Processes

Consider a given sequence of partitions representing the time evolution of information. If a stochastic process is taken to model the time evolution of a certain quantity whose value can be observed at each point in time, then the process should consist of random variables that are measurable with respect to the corresponding partitions.

Definition 13.3.1 (Adapted Process) Let P be an information structure on Ω. A stochastic process X ∈ L is said to be P-adapted whenever Xt is Pt-measurable for every t ∈ {0, . . . , T}. The set of P-adapted processes is denoted by L(P).

Since the vector space operations for random variables preserve measurability by Corollary 9.4.7, it follows that adaptedness is also preserved by the vector space operations for processes.

Proposition 13.3.2 Let P be an information structure on Ω. Then, the set L(P) is a linear subspace of L.

As in the case of random variables, we can specify the "minimal" information structure to which a given stochastic process is adapted.

Definition 13.3.3 (Generated Information Structure) For every process X ∈ L and every t ∈ {0, . . . , T} define P_t^X := P(X0, . . . , Xt). The sequence P^X = (P_0^X, . . . , P_T^X) is called the information structure generated by X.

The information structure generated by a given process is the "minimal" information structure to which the process is adapted, as made precise by the following proposition. This is a direct consequence of the measurability result established in Proposition 9.4.15.

Proposition 13.3.4 Let P be an information structure on Ω. For every process X ∈ L the following statements are equivalent:

(a) X is P-adapted.
(b) P_t^X is coarser than Pt for every t ∈ {0, . . . , T}.

In particular, X is P^X-adapted.
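As an illustration of Definition 13.3.3 and Proposition 13.3.4 (again a hypothetical sketch with our own encoding of random variables as dictionaries from outcomes to values), the generated partition P(X0, . . . , Xt) collects the outcomes on which the path of values (X0, . . . , Xt) coincides, and adaptedness reduces to a coarseness check:

    def generated_partition(X, t, omega):
        """Atoms of P(X_0, ..., X_t): outcomes sharing the same path of values."""
        groups = {}
        for w in omega:
            path = tuple(X[s][w] for s in range(t + 1))
            groups.setdefault(path, set()).add(w)
        return [frozenset(g) for g in groups.values()]

    def is_coarser(coarse, fine):
        """True if every atom of `fine` is contained in an atom of `coarse`."""
        return all(any(b <= a for a in coarse) for b in fine)

    def is_adapted(X, P, omega):
        """X is P-adapted iff P_t^X is coarser than P_t for every t (Prop. 13.3.4)."""
        return all(
            is_coarser(generated_partition(X, t, omega), P[t]) for t in range(len(X))
        )

    omega = {"w1", "w2", "w3", "w4"}
    # the information structure of Example 13.1.2
    P = [
        [frozenset(omega)],
        [frozenset({"w1", "w2"}), frozenset({"w3", "w4"})],
        [frozenset({w}) for w in omega],
    ]
    X = [
        {w: 5 for w in omega},                    # X0 is constant
        {"w1": 8, "w2": 8, "w3": 2, "w4": 2},     # X1 is constant on the atoms of P1
        {"w1": 6, "w2": 9, "w3": 3, "w4": 7},     # X2
    ]
    print(is_adapted(X, P, omega))  # True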


The next result extends to stochastic processes the characterization of measurability with respect to a partition generated by random variables recorded in Proposition 9.4.16. The easy verification is left as an exercise.

Proposition 13.3.5 For all stochastic processes X, Y ∈ L the following statements are equivalent:

(a) Y is P^X-adapted.
(b) For every t ∈ {0, . . . , T} there exists a function f_t : R^{t+1} → R such that we have Yt = f_t(X0, . . . , Xt).

13.4

Martingale Processes

In this section we introduce the notion of a martingale process and provide a simple characterization of the martingale property. The theory of martingales occupies a central role in stochastic analysis and its applications to mathematical finance. As mentioned in the introduction, our treatment is not based on stochastic analysis and, hence, martingales will only play a marginal role in the sequel. It should be emphasized that this choice is made possible because of the nature of our underlying model. More sophisticated models, e.g., those allowing for trading in continuous time, make the use of stochastic analysis almost unavoidable.

Standing Assumption Throughout this section we equip Ω with a probability measure P.

Definition 13.4.1 (Martingale) Let P be an information structure on Ω. A P-adapted process X ∈ L is a P-supermartingale under P if for all 0 ≤ s < t ≤ T we have

Xs ≥ EP[Xt | Ps]

and a P-submartingale under P if for all 0 ≤ s < t ≤ T we have

Xs ≤ EP[Xt | Ps].

We say that X is a P-martingale under P if it is both a P-supermartingale and a P-submartingale under P.

Let X ∈ L be a P-martingale and take 0 ≤ s < t ≤ T. By Proposition 10.3.5, the conditional expectation EP[Xt | Ps] coincides with the orthogonal projection of the random


variable Xt onto the subspace L(Ps). In other words, the above conditional expectation is the Ps-measurable random variable minimizing the distance to Xt and, in this sense, provides the best prediction of Xt based on the information available at time s. The martingale property therefore says that such a best prediction is given precisely by Xs.

Remark 13.4.2 (Martingales and Fair Games) The notion of a martingale is typically associated with the concept of a "fair game". Indeed, consider a P-martingale X ∈ L for some information structure P and view the process X as being a sequence of payoffs. The martingale property implies that

EP[Xt − Xs | Ps] = EP[Xt | Ps] − Xs = 0

for every 0 ≤ s < t ≤ T, where we have used the "taking out what is known" property of conditional expectations. In other words, the expected increase in wealth by playing the "game" X is equal to zero. In this sense we can say that a martingale represents a fair game. Similarly, super- and submartingales represent biased games where the expected increase in wealth is always negative, respectively positive. As discussed in Mansuy [18], the word "martingale" itself seems to stem from the world of horse races and bets.

The next proposition records an important characterization of the martingale property that highlights its "recursive" nature.

Proposition 13.4.3 Let P be an information structure on Ω. Then, for every P-adapted process X ∈ L the following statements are equivalent:

(a) X is a P-martingale under P.
(b) EP[Xt | Pt−1] = Xt−1 for every t ∈ {1, . . . , T}.

Proof It follows from the definition of a martingale that (a) implies (b). To prove the converse implication, assume that (b) holds. Now, fix t ∈ {1, . . . , T}. We claim that

EP[Xt | Ps] = Xs for every s ∈ {0, . . . , t − 1}.    (13.1)

We show this by backward induction on s.

Base Step It follows directly from (b) that (13.1) holds for s = t − 1.

Induction Step Assume that (13.1) holds for some s ∈ {1, . . . , t − 1}. Then,

EP[Xt | Ps−1] = EP[EP[Xt | Ps] | Ps−1] = EP[Xs | Ps−1] = Xs−1,


where we have used the Tower Property from Proposition 10.3.9 in the first equality, identity (13.1) in the second equality, and property (b) in the third equality. This concludes the induction argument and shows that (a) holds. 
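The recursive criterion of Proposition 13.4.3 lends itself to a direct numerical check. In the sketch below (illustrative only; cond_exp and is_martingale are our own names) conditional expectations with respect to a partition are computed atom by atom as probability-weighted averages, exactly as in Chap. 10, and the martingale property is verified one period at a time:

    def cond_exp(X, partition, prob):
        """E_P[X | partition] as a dict outcome -> value, constant on each atom."""
        out = {}
        for A in partition:
            pA = sum(prob[w] for w in A)
            value = sum(prob[w] * X[w] for w in A) / pA
            for w in A:
                out[w] = value
        return out

    def is_martingale(X, partitions, prob, tol=1e-12):
        """Check E_P[X_t | P_{t-1}] = X_{t-1} for t = 1, ..., T (Proposition 13.4.3)."""
        for t in range(1, len(X)):
            ce = cond_exp(X[t], partitions[t - 1], prob)
            if any(abs(ce[w] - X[t - 1][w]) > tol for w in prob):
                return False
        return True

    omega = ["w1", "w2", "w3", "w4"]
    prob = {w: 0.25 for w in omega}
    partitions = [
        [frozenset(omega)],
        [frozenset({"w1", "w2"}), frozenset({"w3", "w4"})],
        [frozenset({w}) for w in omega],
    ]
    X = [
        {w: 5.0 for w in omega},
        {"w1": 7.0, "w2": 7.0, "w3": 3.0, "w4": 3.0},
        {"w1": 9.0, "w2": 5.0, "w3": 4.0, "w4": 2.0},
    ]
    print(is_martingale(X, partitions, prob))  # True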

13.5

Conditional Functionals

We highlight that the class of maps defined on any subset of L and taking values in the vector space L(P) carries a natural vector space structure.

Definition 13.5.1 (Operations) Let M be a subset of L and let P be a partition of Ω. For maps π : M → L(P) and ρ : M → L(P) we define the sum of π and ρ as the map π + ρ : M → L(P) given by (π + ρ)(X) := π(X) + ρ(X). Similarly, for π : M → L(P) and a ∈ R we define the product of π by a as the map aπ : M → L(P) given by (aπ)(X) := aπ(X). Moreover, for π : M → L(P) and Z ∈ L(P) we define the product of π by Z as the map Zπ : M → L(P) given by (Zπ)(X) := Zπ(X).


All the notions that we introduced in Chap. 11 for conditional functionals can be easily extended to maps defined on a linear subspace of L and taking values in a space of random variables. In particular, one can still use the following notation for localizations.

Definition 13.5.2 (Localization) Let M be a linear subspace of L and let P be a partition of Ω. Consider a map π : M → L(P). For every A ∈ P the functional π(·|A) : M → R defined by π(X|A) := π(X)(A) is called the localization of π to A.


In the remainder of this short section we show what the appropriate notions of conditionality and monotonicity are when dealing with a special class of maps defined on a space of adapted processes. As a preliminary step, it is useful to introduce the following notation.


Definition 13.5.3 (Forward-Looking Truncation) Let t ∈ {0, . . . , T}. For every process X ∈ L the process

X_{t:T} := (0, . . . , 0, Xt, . . . , XT)

is called the (forward-looking) truncation of X at time t.


We can now formally introduce the notion of a conditional functional defined on a space of adapted processes satisfying the following structural condition.

Standing Assumption Throughout the remainder of this section we fix an information structure P on Ω and denote by M a fixed linear subspace of L(P). Moreover, we assume that for every t ∈ {0, . . . , T − 1}

A ∈ Pt, X ∈ M ⇒ 1_A X_{t:T} ∈ M.

Definition 13.5.4 (Forward-Looking Conditional Functional) Let t ∈ {0, . . . , T − 1}. A map π : M → L(Pt) satisfying π(0) = 0 and

π(1_A X_{t:T}) = 1_A π(X)

for all A ∈ Pt and X ∈ M is called a forward-looking Pt-conditional functional.


The following proposition justifies the "forward-looking" part in the preceding definition.

Proposition 13.5.5 Let t ∈ {0, . . . , T − 1}. For every forward-looking Pt-conditional functional π : M → L(Pt) and every process X ∈ M we have π(X) = π(X_{t:T}).

Proof Take a process X ∈ M and note that the forward-looking truncation X_{t:T} belongs to M by our standing assumption. Since for every atom A ∈ Pt we have

1_A π(X) = π(1_A X_{t:T}) = 1_A π(X_{t:T})

by conditionality, we conclude that π(X) = π(X_{t:T}).


We end with the definition of increasing and strictly-increasing forward-looking conditional functionals, which is adapted to the forward-looking nature of such maps.

230

13 Information and Stochastic Processes

Definition 13.5.6 ((Strictly) Increasing Forward-Looking Conditional Functional) Let t ∈ {0, . . . , T − 1}. A forward-looking Pt-conditional functional π : M → L(Pt) is said to be increasing whenever

X, Y ∈ M, X_{t:T} ≥ Y_{t:T} ⇒ π(X) ≥ π(Y)

and strictly increasing whenever

X, Y ∈ M, X_{t:T} ⪈ Y_{t:T} ⇒ π(X) ⪈ π(Y).

We say that π is (strictly) decreasing if −π is (strictly) increasing.

13.6


Exercises

In all exercises below we assume that Ω is a finite sample space consisting of K elements. Moreover, we consider a finite number of dates t = 0, 1, . . . , T, where T ∈ N is a fixed terminal date.

Exercise 13.6.1 Show that for every information structure P on Ω the sequence of fields (F(P0), . . . , F(PT)) is a filtration on Ω.

Exercise 13.6.2 Show that L is a (real) vector space and prove that the set

B = {X_{ω1,0}, . . . , X_{ωK,0}, . . . , X_{ω1,T}, . . . , X_{ωK,T}},

where X_{ω,t} ∈ L is defined for every ω ∈ Ω and t ∈ {0, . . . , T} by X_{ω,t} = (0, . . . , 0, 1_ω, 0, . . . , 0), where 1_ω occupies the (t + 1)th position in the sequence, is a basis for L. In particular, dim(L) = K(T + 1).

Exercise 13.6.3 Let P be an information structure on Ω. Prove that for every process X ∈ L the following statements are equivalent:

(a) X is P-adapted.
(b) P_t^X is coarser than Pt for every t ∈ {0, . . . , T}.

In particular, X is P^X-adapted.


Exercise 13.6.4 Show that for all stochastic processes X, Y ∈ L the following statements are equivalent:

(a) Y is P^X-adapted.
(b) For every t ∈ {0, . . . , T} there exists a function f_t : R^{t+1} → R such that we have Yt = f_t(X0, . . . , Xt).

Exercise 13.6.5 Let P be an information structure on Ω and consider a probability measure P. Show that every P-martingale X ∈ L under P satisfies EP[Xt] = X0 for every t ∈ {1, . . . , T}.

Exercise 13.6.6 Let P be an information structure on Ω and let P be a probability measure. For every random variable X ∈ L show that the process X = (EP[X], EP[X|P1], . . . , EP[X|PT]) is a P-martingale under P.

Exercise 13.6.7 Let P be an information structure on Ω and let P be a probability measure. Moreover, let X, Y ∈ L be two P-martingales under P. Show that the process W ∈ L given by Wt = min{Xt, Yt} is a P-supermartingale under P. Similarly, show that the process Z ∈ L given by Zt = max{Xt, Yt} is a P-submartingale under P.

Exercise 13.6.8 (Snell Envelope) Let P be an information structure on Ω and consider a probability measure P. Moreover, consider a P-adapted process X ∈ L. The P-Snell envelope of X under P is the process U^X ∈ L defined recursively by U_T^X = XT and

U_t^X = max{Xt, EP[U_{t+1}^X | Pt]}.

Show that U^X is the smallest P-supermartingale under P dominating X. More precisely, show that U^X is a P-supermartingale under P such that U_t^X ≥ Xt for every t ∈ {0, . . . , T} and that for every other P-supermartingale Y ∈ L under P such that Yt ≥ Xt for every t ∈ {0, . . . , T} we have Yt ≥ U_t^X for every t ∈ {0, . . . , T}. A numerical sketch of this recursion follows after Exercise 13.6.9.

Exercise 13.6.9 Let M be a linear subspace of L and consider the following properties of a map π : M → L:

(i) π is convex and π(0) = 0.
(ii) π is subadditive and π(0) = 0.
(iii) π is positively homogeneous.

Show that, under any of the above properties, the remaining two properties are equivalent.
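The recursion defining the Snell envelope in Exercise 13.6.8 can be sketched by backward induction as follows; this is an illustrative fragment under our own encoding, reusing the atom-by-atom conditional expectation from the martingale sketch above:

    def cond_exp(X, partition, prob):
        """E_P[X | partition], constant on each atom of the partition."""
        out = {}
        for A in partition:
            pA = sum(prob[w] for w in A)
            value = sum(prob[w] * X[w] for w in A) / pA
            out.update({w: value for w in A})
        return out

    def snell_envelope(X, partitions, prob):
        """U_T = X_T and U_t = max(X_t, E_P[U_{t+1} | P_t]) by backward induction."""
        U = [None] * len(X)
        U[-1] = dict(X[-1])
        for t in range(len(X) - 2, -1, -1):
            ce = cond_exp(U[t + 1], partitions[t], prob)
            U[t] = {w: max(X[t][w], ce[w]) for w in X[t]}
        return U

    omega = ["w1", "w2", "w3", "w4"]
    prob = {w: 0.25 for w in omega}
    partitions = [
        [frozenset(omega)],
        [frozenset({"w1", "w2"}), frozenset({"w3", "w4"})],
        [frozenset({w}) for w in omega],
    ]
    X = [
        {w: 4.0 for w in omega},
        {"w1": 6.0, "w2": 6.0, "w3": 2.0, "w4": 2.0},
        {"w1": 9.0, "w2": 1.0, "w3": 5.0, "w4": 1.0},
    ]
    print(snell_envelope(X, partitions, prob)[0])  # date-0 value: 4.5 in every state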


Exercise 13.6.10 Let P be an information structure on Ω and consider a probability measure P. For every t ∈ {0, . . . , T − 1}, every A ∈ Pt, and every B ∈ Pt+1 such that B ⊂ A set

p_{A,B} = P(B|A) = P(B)/P(A).

Moreover, for every ω ∈ Ω and every t ∈ {0, . . . , T} denote by A_t(ω) the unique atom in Pt containing ω and set

p_ω = ∏_{t=0}^{T−1} p_{A_t(ω),A_{t+1}(ω)}.

(i) Show that for every t ∈ {0, . . . , T − 1} and every A ∈ Pt we have

∑_{B∈P_{t+1}, B⊂A} p_{A,B} = 1.

(ii) Show that for every ω ∈ Ω we have p_ω = P(ω).

In particular, this exercise shows how to recover P from the corresponding conditional probabilities over a given information structure.

Exercise 13.6.11 Let P be an information structure on Ω. Assume that for every t ∈ {0, . . . , T − 1} and every A ∈ Pt there exist coefficients p_{A,B} ∈ [0, 1], for B ∈ Pt+1 with B ⊂ A, satisfying

∑_{B∈P_{t+1}, B⊂A} p_{A,B} = 1.


For every ω ∈ Ω and every t ∈ {0, . . . , T} denote by A_t(ω) the unique atom in Pt containing ω and set

p_ω = ∏_{t=0}^{T−1} p_{A_t(ω),A_{t+1}(ω)}.

Moreover, for all 0 ≤ s < t ≤ T and for every B ∈ Pt denote by A_s(B) the unique atom in Ps containing B.

(i) Prove by backward induction on t ∈ {1, . . . , T} that

∑_{B∈P_t} ∏_{s=0}^{t−1} p_{A_s(B),A_{s+1}(B)} = ∑_{ω∈Ω} p_ω.

Deduce that ∑_{ω∈Ω} p_ω = 1.

(ii) Show that the probability measure P given by P(ω1) = p_{ω1}, . . . , P(ωK) = p_{ωK} satisfies P(B|A) = p_{A,B} for every t ∈ {0, . . . , T − 1}, every A ∈ Pt, and every B ∈ Pt+1 with B ⊂ A.

This exercise can be seen as the converse of Exercise 13.6.10 and provides a way to construct a probability measure with pre-specified conditional probabilities over a given information structure.
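The construction in Exercise 13.6.11 amounts to multiplying transition probabilities along the branches of the tree. A small illustrative sketch (our own encoding; the names are hypothetical) builds P from prescribed coefficients p_{A,B}:

    def atom_containing(partition, w):
        """The unique atom of the partition containing the outcome w."""
        return next(A for A in partition if w in A)

    def measure_from_transitions(partitions, trans):
        """p_omega = product over t of p_{A_t(omega), A_{t+1}(omega)}."""
        omega = set().union(*partitions[0])
        P = {}
        for w in omega:
            p = 1.0
            for t in range(len(partitions) - 1):
                A = atom_containing(partitions[t], w)
                B = atom_containing(partitions[t + 1], w)
                p *= trans[(A, B)]
            P[w] = p
        return P

    O = frozenset({"w1", "w2", "w3", "w4"})
    A12, A34 = frozenset({"w1", "w2"}), frozenset({"w3", "w4"})
    partitions = [[O], [A12, A34], [frozenset({w}) for w in O]]
    trans = {
        (O, A12): 0.6, (O, A34): 0.4,
        (A12, frozenset({"w1"})): 0.5, (A12, frozenset({"w2"})): 0.5,
        (A34, frozenset({"w3"})): 0.25, (A34, frozenset({"w4"})): 0.75,
    }
    P = measure_from_transitions(partitions, trans)
    print(P, sum(P.values()))  # the probabilities add up to 1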

14

Multi-Period Financial Markets

In this chapter we begin our study of multi-period financial markets. We consider a multi-period economy with a finite time horizon in which uncertainty is modelled by a finite sample space. At each date prior to the terminal date, agents can buy or sell a finite number of basic securities at their prevailing price. Each of these securities entitles them to a certain payoff at every successive date. Through their trading activity agents can implement dynamic trading strategies by building portfolios at the starting date and successively rebalancing them until liquidation. Of particular interest are self-financing trading strategies. For these strategies rebalancing is costless, i.e., at each date the set-up of the new portfolio can be financed by the proceeds from the liquidation of the previous portfolio. A payoff that can be generated by way of a self-financing strategy is said to be replicable. The market is said to be complete if all conceivable payoffs can be generated in this way.

14.1

The Elements of the Market

In this section we describe how to model a multi-period financial market. The assumptions and notation introduced here will be adhered to during the next sections and chapters.

The Multi-Period Economy

We consider a multi-period economy with T + 1 dates, which are referred to as:

• the initial date (t = 0),
• the intermediate dates (t = 1, . . . , T − 1),
• the terminal date (t = T).



The case T = 1 corresponds to the one-period economy studied in Chaps. 5, 6, 7, and 8. We can thus assume that we have at least three dates so that T ≥ 2. We model uncertainty in the economy by the elements of a sample space Ω = {ω1, . . . , ωK}. Each element of Ω represents a possible path the economy might follow from the initial date to the terminal date. We refer to the elements of Ω as the (terminal) states of the economy. The gradual revelation of the terminal state of the economy is modelled by an information structure P = (P0, . . . , PT). Before making assumptions on P let us consider some possible examples.

Example 14.1.1 (Levels of Information) Assume T = 2. In each of the two periods, the economy can either grow (u for "upturn") or stagnate (d for "downturn"). We can model the paths in such an economy by the elements of the sample space

Ω = {(u, u), (u, d), (d, u), (d, d)},

where, for instance, (u, d) means that the economy experienced an upturn in the first period and a downturn in the second. We consider three types of agents with access to information of different granularity:

(i) The well-informed agent. The well-informed agent learns of the prevailing state of the economy at the end of the period, as soon as an upturn or a downturn is realized. The information structure of such an agent is given by the partitions

P0 = {Ω},
P1 = {{(u, d), (u, u)}, {(d, u), (d, d)}},
P2 = {{(u, d)}, {(u, u)}, {(d, u)}, {(d, d)}}.

A well-informed agent can always act based on the maximum of information available at each date.

(ii) The unobserving agent. The unobserving agent lacks focus and obtains the information only one period after the fact. The information structure of such an agent is

P0 = P1 = {Ω},
P2 = {{(u, d), (u, u)}, {(d, u), (d, d)}}.

Such an agent can only act on old information and can be taken advantage of by well-informed agents.


(iii) The prophetic agent. The prophetic agent is in fact a clairvoyant. He or she knows already at the start of a period whether there will be an economic upturn or downturn. The corresponding information structure is given by

P0 = {{(u, d), (u, u)}, {(d, u), (d, d)}},
P1 = P2 = {{(u, d)}, {(u, u)}, {(d, u)}, {(d, d)}}.

The prophetic agent is always ahead of the game and can take advantage of the well-informed and the unobserving agent. In real life, an agent who has "insider" information has the same advantages as a prophetic agent.

We assume that all agents have access to the same information and that all are well-informed. At date 0 there is no information about the path the economy will follow, which is tantamount to assuming that P0 is the trivial partition, i.e., P0 = {Ω}. Since P is an information structure, as time passes, more and more information about the true path followed by the economy becomes available. Finally, at date T, the path is fully known, which means that we assume that PT is the discrete partition, i.e., PT = {{ω1}, . . . , {ωK}}. For every t ∈ {0, . . . , T} we refer to the atoms of Pt as the states of the economy at date t. The field of events generated by Pt is simply denoted by Ft, i.e., Ft := F(Pt). Similarly, the space of Pt-measurable random variables is denoted by Lt, i.e., Lt := L(Pt). We also fix a reference probability measure P that is assumed to satisfy supp(P) = Ω. In other words, we assume that P(ω) > 0 for every scenario ω ∈ Ω. This probability measure is interpreted as the "objective" probability measure in the sense of Sect. 5.1, to which we refer for further details on the interpretation of P.


The Unit of Account

As in the one-period economy in Chap. 5, we assume that, at every date, payments are expressed in a common fixed unit of account or numéraire. This means that at every date we can compare and add up payments. Section 14.5 is devoted to exploring what happens when we change the accounting unit. Also here, it is convenient to think of the unit of account as a fixed currency which we call the numéraire currency.

Payments at the Initial Date

As in our model of a one-period economy, a payment at the initial date is represented by a real number indicating the number of units of the (fixed) unit of account. Recall that if an agent makes a payment p ∈ R, then he or she pays the amount p whenever p is positive and receives the amount −p whenever p is negative. Note that, since P0 is the trivial partition, we can identify real numbers with P0-measurable random variables, so that we can equivalently model payments at the initial date by elements of L0.

Payments After the Initial Date

Let M ∈ {1, . . . , T} be a date after the initial date. A payment due at date M depends, in general, on the state of the economy at that point in time. For this reason, we sometimes say that the payment is state contingent. A state-contingent payment due at date M is represented by a random variable X ∈ LM. Note that, since PM represents the information available at date M, the requirement that X is PM-measurable is necessary for the amount to be paid to be well defined at the date when the payment is due. Indeed, we need to be able to know the value of X as soon as we learn which of the states A ∈ PM the economy is in at date M. In this case, the number X(A) represents the payment due at date M if the economy turns out to be in state A. Also here we follow the same rule for interpreting positive and negative payments, i.e., an agent pays the amount X(A) whenever X(A) is positive and receives the amount −X(A) whenever X(A) is negative.

Financial Contracts: Prices and Payoffs

A financial contract is any agreement between two agents that entitles one party, referred to as the buyer or the owner, to receive from the other party, referred to as the seller or the issuer, a state-contingent payment at a certain future date, referred to as the contract's maturity, in exchange for a payment at some prior date. To acquire a financial contract with maturity M ∈ {1, . . . , T} at some prior date t ∈ {0, . . . , M − 1}, the buyer pays a price p ∈ Lt. For each state A ∈ Pt, the number p(A) represents the payment, expressed in the fixed unit of account, to be made at date t to acquire the contract. The payoff of the contract is represented by a random variable X ∈ LM. For each state A ∈ PM, the number X(A) represents the payment, expressed in the fixed unit of account, to be received at date M. A positive payoff, i.e., a payoff that is positive in every future state of the economy, is also called a claim.


Every random variable can be viewed as the payoff of a given financial contract. The set of payoffs maturing at date M ∈ {1, . . . , T} is denoted by XM, i.e.,

XM := {payoffs of all financial contracts maturing at M}.

In mathematical terms, XM coincides with the set of PM-measurable random variables LM, but in financial terms the elements of XM carry a special attribute: They correspond to payoffs expressed in our fixed unit of account. This justifies using the notation XM instead of just LM. The set of claims maturing at date M is denoted by (XM)+.

The Financial Market

At every date prior to the terminal date T, agents are able to buy or sell a finite number of financial contracts maturing at date T. These contracts are called the basic securities. The following fundamental assumptions about the functioning of the market are essentially the same as for the one-period model:

There Are No Trading Constraints Basic securities are infinitely divisible, i.e., can be traded in arbitrarily small or large quantities. Moreover, "short-selling" is allowed, i.e., agents may sell securities even though they do not own them. In this case, the seller will need to "deliver" the security to the buyer at the terminal date by paying an amount corresponding to the security's payoff.

The Market Is Frictionless The price at which a security can be bought or sold is the same. Moreover, the price per unit of a given security does not depend on the traded volume. This means that, at every trading date, there is a unique market price at which each basic security can be bought or sold and, for every λ ∈ R, the cost of buying λ units of a basic security equals λ times the unitary price.

All Agents Are Price Takers We assume that each investor can buy or sell as many securities as desired without changing their market price. In other words, agents cannot influence the price of traded securities through their trading activity.

The Basic Securities

We fix N ∈ N basic securities. As in the single-period model, the multi-period market is fully determined as soon as we specify, for each of the basic traded securities, its price at each date t ∈ {0, . . . , T − 1} and its terminal payoff at date T. For each i ∈ {1, . . . , N} the ith basic security is thus represented by a P-adapted process

S^i = (S_0^i, . . . , S_T^i) ∈ X0 × · · · × XT

where S_0^i, . . . , S_{T−1}^i are the prices of the ith security before the terminal date and S_T^i is its payoff at date T. We always assume that each security has a strictly-positive initial price,


a nonzero positive intermediate price, and a nonzero positive payoff, and that for every terminal state of the economy and for every date there exists a basic security with nonzero payoff in that state:

(1) S_0^i > 0 for every i ∈ {1, . . . , N}.
(2) S_t^i ⪈ 0 for all i ∈ {1, . . . , N} and t ∈ {1, . . . , T}.
(3) For all ω ∈ Ω and t ∈ {1, . . . , T} we have S_t^i(ω) > 0 for some i ∈ {1, . . . , N}.

In view of condition (2), we can equivalently reformulate condition (3) as:

(3′) For all ω ∈ Ω and t ∈ {1, . . . , T} we have ∑_{i=1}^N S_t^i(ω) > 0.

Example 14.1.2 (Risk-Free Security) In the literature it is standard to assume that one of the basic securities, say S^1, is risk free in the sense that S_t^1 is a strictly-positive constant for every t ∈ {1, . . . , T}. In this case, one can always write

S_t^1 = ∏_{s=1}^t (1 + r_s) S_0^1

for r_1, . . . , r_t ∈ (−1, ∞), which are interpreted as the interest rates paid by the security at the corresponding dates. Note that condition (3) above is automatically satisfied when one of the basic securities is risk free.

Visualizing Basic Securities

In the context of our examples we often adopt a tree-like diagram of the following type to introduce a basic security:

S: 5
├── 8 on {ω1, ω2}
│   ├── 6 on ω1
│   └── 9 on ω2
├── 2 on ω3
│   └── 3 on ω3
└── 4 on ω4
    └── 7 on ω4

This means that the sample space has four states, i.e., Ω = {ω1, ω2, ω3, ω4}, and that we consider a security S with value process specified by

S0 = 5, S1 = 8·1_{ω1,ω2} + 2·1_{ω3} + 4·1_{ω4}, S2 = 6·1_{ω1} + 9·1_{ω2} + 3·1_{ω3} + 7·1_{ω4}.

The above diagram also makes it easy to check whether the basic security is adapted to the underlying information structure.
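The standing assumptions (1)-(3) on the basic securities are easy to verify mechanically once a diagram has been translated into a value process. The following Python sketch is illustrative only; the encoding of a security as a list whose first entry is the initial price and whose later entries map states to values is our own:

    def satisfies_market_assumptions(securities):
        """Check conditions (1)-(3) on a list of basic security processes."""
        T = len(securities[0]) - 1
        omega = securities[0][1].keys()
        # (1) strictly-positive initial prices
        if not all(S[0] > 0 for S in securities):
            return False
        # (2) nonzero positive prices and payoffs at every later date
        for S in securities:
            for t in range(1, T + 1):
                vals = list(S[t].values())
                if any(v < 0 for v in vals) or all(v == 0 for v in vals):
                    return False
        # (3) at every date and in every state some security is strictly positive
        return all(
            any(S[t][w] > 0 for S in securities)
            for t in range(1, T + 1) for w in omega
        )

    # the security from the diagram above, together with a risk-free security
    S1 = [1, {"w1": 1, "w2": 1, "w3": 1, "w4": 1}, {"w1": 1, "w2": 1, "w3": 1, "w4": 1}]
    S2 = [5, {"w1": 8, "w2": 8, "w3": 2, "w4": 4}, {"w1": 6, "w2": 9, "w3": 3, "w4": 7}]
    print(satisfies_market_assumptions([S1, S2]))  # True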


The Multi-Period Model in a Nutshell

Standing Assumption Throughout the chapter we consider a multi-period model with dates t ∈ {0, . . . , T} where the terminal states of the economy are represented by the elements of a finite sample space Ω = {ω1, . . . , ωK}. We assume that Ω is equipped with a probability measure P with supp(P) = Ω. We fix an information structure P such that P0 is trivial and PT is discrete. The space of payoffs maturing at date M ∈ {1, . . . , T} is denoted by XM. There are N basic securities represented by the P-adapted processes

S^i ∈ ((X0)+ \ {0}) × · · · × ((XT)+ \ {0}), i ∈ {1, . . . , N}.

We assume that for all ω ∈ Ω and t ∈ {1, . . . , T} we have S_t^i(ω) > 0 for some i ∈ {1, . . . , N}.

14.2

Trading Strategies

By buying and selling in the financial market agents can set up portfolios of basic securities.

Definition 14.2.1 (Portfolio) Every N-dimensional vector λ = (λ^1, . . . , λ^N) ∈ R^N is called a portfolio (of basic securities).

As in Chap. 5, a portfolio λ ∈ R^N is said to be long λ^i units of the ith security when λ^i > 0 and short −λ^i units of the ith security when λ^i < 0. In contrast to the one-period economy, agents now have the possibility to rebalance their positions at each date, i.e., they may liquidate existing portfolios and acquire new ones at any intermediate date. The essence of this rebalancing process is captured by the notion of a trading strategy.

Definition 14.2.2 (Trading Strategy) Let t ∈ {0, . . . , T − 1}. A (trading) strategy starting at date t is a P-adapted N-dimensional stochastic process λ = (λt , . . . , λT −1 ).


In other words, for every u ∈ {t, . . . , T − 1} we have

λ_u = (λ_u^1, . . . , λ_u^N) ∈ L_u^N.

We say that λ is static if λ_t = · · · = λ_{T−1}. The set of all trading strategies starting at date t is denoted by St.

Let t ∈ {0, . . . , T − 1} and consider a strategy λ ∈ St. For each u ∈ {t, . . . , T − 1} the random vector λ_u is interpreted as the state-contingent portfolio to be set up at date u. Once the state of the economy at date u becomes known, say it is A ∈ Pu, we can evaluate λ_u to yield

λ_u(A) = (λ_u^1(A), . . . , λ_u^N(A)) ∈ R^N,

which is an ordinary portfolio that can be set up by transacting in the basic securities at the prevailing prices in state A. This allows us to associate with the strategy λ the following operational interpretation:

• If at date t the state of the economy is A_t ∈ Pt, we start by setting up the portfolio λ_t(A_t).
• If at date t + 1 the economy is in the state A_{t+1} ∈ Pt+1, where A_{t+1} ⊂ A_t, we liquidate the portfolio λ_t(A_t) and acquire the portfolio λ_{t+1}(A_{t+1}).
• We continue in this way until we reach the final date T, at which the last portfolio λ_{T−1}(A_{T−1}) delivers the corresponding payoff.

For every t ∈ {0, . . . , T − 1} the set of strategies St can be identified with the Cartesian product L_t^N × · · · × L_{T−1}^N. As such, St inherits a vector space structure where operations are defined component by component. We record this in the next result. The explicit verification is left as an exercise.

Proposition 14.2.3 For every t ∈ {0, . . . , T − 1} the set St is a vector space.

There are two very natural concepts of value associated with a strategy.

Definition 14.2.4 (Acquisition/Liquidation Value) Let t ∈ {0, . . . , T − 1} be fixed and consider a strategy λ ∈ St. The acquisition value of λ at date u ∈ {t, . . . , T − 1} is the random variable V_u^Acq[λ] ∈ Xu defined by

V_u^Acq[λ] := ∑_{i=1}^N λ_u^i S_u^i.

The liquidation value of λ at date u ∈ {t + 1, . . . , T} is the random variable V_u^Liq[λ] ∈ Xu defined by

V_u^Liq[λ] := ∑_{i=1}^N λ_{u−1}^i S_u^i.
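Definition 14.2.4 translates directly into code. In the illustrative sketch below (our own encoding: a strategy is a list of state-contingent portfolios and a security a list of state-contingent prices, both as dictionaries over outcomes), the two value maps differ only in which portfolio is priced at the date-u prices:

    def acquisition_value(strategy, securities, u, omega):
        """V_u^Acq[lambda](w) = sum over i of lambda_u^i(w) * S_u^i(w)."""
        return {
            w: sum(strategy[u][i][w] * S[u][w] for i, S in enumerate(securities))
            for w in omega
        }

    def liquidation_value(strategy, securities, u, omega):
        """V_u^Liq[lambda](w) = sum over i of lambda_{u-1}^i(w) * S_u^i(w)."""
        return {
            w: sum(strategy[u - 1][i][w] * S[u][w] for i, S in enumerate(securities))
            for w in omega
        }

    omega = ["w1", "w2"]
    const = lambda c: {w: c for w in omega}
    S1 = [const(1), const(1), const(1)]
    S2 = [const(1), const(1), {"w1": 2, "w2": 1}]
    securities = [S1, S2]
    # a static strategy holding one unit of each security over both periods
    strategy = [[const(1), const(1)], [const(1), const(1)]]
    print(acquisition_value(strategy, securities, 0, omega))  # {'w1': 2, 'w2': 2}
    print(liquidation_value(strategy, securities, 2, omega))  # {'w1': 3, 'w2': 2}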

For a strategy λ ∈ St, its acquisition value at date u ∈ {t, . . . , T − 1} represents the state-contingent cost of acquiring the state-contingent portfolio λ_u at date u. Similarly, the liquidation value of λ at date u ∈ {t + 1, . . . , T} represents the state-contingent payoff obtained when liquidating the state-contingent portfolio λ_{u−1} at date u. It is easy to see that computing acquisition and liquidation values of a strategy is a linear operation. The verification is left as an exercise.

Proposition 14.2.5 For every t ∈ {0, . . . , T − 1} the following statements hold:

(i) The map V_u^Acq : St → Xu is linear for every u ∈ {t, . . . , T − 1}.
(ii) The map V_u^Liq : St → Xu is linear for every u ∈ {t + 1, . . . , T}.

Just as in the one-period model, it is convenient to introduce the multi-period equivalent of the equal-weight market portfolio.

Example 14.2.6 (Equal-Weight Market Strategy) The equal-weight market strategy is the static strategy η ∈ S0 defined by η_0 = · · · = η_{T−1} = η := (1, . . . , 1) ∈ R^N, which entails acquiring the portfolio η at date 0 and holding on to it until liquidation at date T. By our initial assumptions on the basic securities we can easily see that both the acquisition value and the liquidation value of the market portfolio are strictly positive, i.e.,

V_t^Acq[η] = ∑_{i=1}^N S_t^i > 0

for every t ∈ {0, . . . , T − 1}, as well as

V_t^Liq[η] = ∑_{i=1}^N S_t^i > 0

for every t ∈ {1, . . . , T}.



When implementing a strategy, at any rebalancing date, the difference between the funds obtained when liquidating the existing portfolio and the funds needed for setting up the subsequent portfolio may be strictly positive (implying the withdrawal of funds) or strictly negative (requiring an injection of funds). Self-financing strategies are strategies for which this never occurs, i.e., for which acquisition and liquidation values coincide at each date and in each state of the economy. This means that the cost of implementing the strategy is just the cost of setting up the initial portfolio, since the acquisition of each new portfolio is financed exclusively by the liquidation of the preceding portfolio. Our definition captures a slightly more general situation in which we allow a strategy to be terminated at a pre-determined date.

Definition 14.2.7 (Self-Financing Strategy) Let 0 ≤ t < M ≤ T. A strategy λ ∈ St is said to be self-financing with termination date M if the following conditions are satisfied:

(1) V_u^Liq[λ] = V_u^Acq[λ] for every u ∈ {t + 1, . . . , M − 1}.
(2) λ_M = · · · = λ_{T−1} = 0.

For convenience we also speak of a (t, M)-self-financing strategy. The set of all (t, M)-self-financing strategies is denoted by St,M.

Example 14.2.8 Every static strategy starting at date t ∈ {0, . . . , T − 1} is easily seen to be a (t, T)-self-financing strategy. In particular, the equal-weight market strategy introduced in Example 14.2.6 is a (0, T)-self-financing strategy.

The collection of all self-financing trading strategies with given starting date and termination date is a linear subspace of the corresponding space of strategies.

Proposition 14.2.9 Let 0 ≤ t < M ≤ T be fixed. Then, St,M is a linear subspace of St such that

{λ ∈ St ; λ_t = · · · = λ_{M−1}, λ_M = · · · = λ_{T−1} = 0} ⊂ St,M.

Proof Take λ, μ ∈ St,M and a ∈ R and consider the strategy ξ = aλ + μ ∈ St. It follows from the linearity of the acquisition and liquidation maps that

V_u^Liq[ξ] = aV_u^Liq[λ] + V_u^Liq[μ] = aV_u^Acq[λ] + V_u^Acq[μ] = V_u^Acq[ξ]

for every u ∈ {t + 1, . . . , M − 1}. Moreover, it is clear that ξ_u = aλ_u + μ_u = 0 for every date u ∈ {M, . . . , T − 1}. This shows that St,M is a linear subspace of St. The last assertion is obvious.

Since the acquisition and the liquidation values of a self-financing strategy coincide at each date between the starting date and the termination date, the value of a self-financing strategy can be unambiguously defined at every date as follows.


Definition 14.2.10 (Value of a Self-Financing Strategy) Let 0 ≤ t < M ≤ T and consider a self-financing strategy λ ∈ St,M. For every u ∈ {t, . . . , T} the random variable Vu[λ] ∈ Xu defined by

Vu[λ] := V_t^Acq[λ] if u = t,
Vu[λ] := V_u^Acq[λ] = V_u^Liq[λ] if u ∈ {t + 1, . . . , M − 1},
Vu[λ] := V_M^Liq[λ] if u = M,
Vu[λ] := 0 if u ∈ {M + 1, . . . , T},

is called the value of λ at date u. We say that λ has a strictly-positive value process if Vu[λ] is strictly positive for every u ∈ {t, . . . , M}.
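Continuing in the same illustrative encoding as before, the self-financing condition of Definition 14.2.7 is a pointwise equality of liquidation and acquisition values at every rebalancing date, which a few lines of Python can verify:

    def value(portfolio, prices, omega):
        """Pointwise value of a state-contingent portfolio at given prices."""
        return {w: sum(q[w] * p[w] for q, p in zip(portfolio, prices)) for w in omega}

    def is_self_financing(strategy, securities, omega, tol=1e-12):
        """Check V_u^Liq[lambda] = V_u^Acq[lambda] at every rebalancing date u."""
        T = len(securities[0]) - 1
        for u in range(1, T):
            prices = [S[u] for S in securities]
            liq = value(strategy[u - 1], prices, omega)
            acq = value(strategy[u], prices, omega)
            if any(abs(liq[w] - acq[w]) > tol for w in omega):
                return False
        return True

    omega = ["w1", "w2"]
    const = lambda c: {w: c for w in omega}
    S1 = [const(1), const(1), const(1)]                    # a risk-free security
    S2 = [const(1), {"w1": 2, "w2": 1}, {"w1": 3, "w2": 1}]
    securities = [S1, S2]
    # rebalance at date 1: liquidate one unit of S2 and move the proceeds into S1
    lam0 = [const(0), const(1)]
    lam1 = [{"w1": 2, "w2": 1}, const(0)]
    print(is_self_financing([lam0, lam1], securities, omega))  # True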



The following result makes the above argument rigourous. Proposition 14.2.13 Let 0 ≤ t < M ≤ T . For every self-financing strategy λ ∈ S t,M and for every Z ∈ Lt we have Zλ ∈ S t,M and Vu [Zλ] = ZVu [λ] for every u ∈ {t, . . . , M}.


14 Multi-Period Financial Markets

Proof Since St,M is a vector space and Vu is linear for every u ∈ {t, . . . , M}, it suffices to prove the statement for Z = 1_A for an arbitrary A ∈ Pt. Using that λ is self-financing, we easily see that

V_u^Liq[1_A λ] = ∑_{i=1}^N 1_A λ_{u−1}^i S_u^i = 1_A Vu[λ] = ∑_{i=1}^N 1_A λ_u^i S_u^i = V_u^Acq[1_A λ]

for every u ∈ {t + 1, . . . , M − 1}. Moreover, it is clear that 1_A λ_u = 0 for every date u ∈ {M, . . . , T − 1}. This shows that 1_A λ is a self-financing strategy with termination date M. To conclude the proof it is enough to note that

V_t[1_A λ] = ∑_{i=1}^N 1_A λ_t^i S_t^i = 1_A V_t[λ],

V_M[1_A λ] = ∑_{i=1}^N 1_A λ_{M−1}^i S_M^i = 1_A V_M[λ].

14.3

Replicable Payoffs

The notion of a replicable payoff played a fundamental role in our study of one-period financial markets. In this section we extend the notion of replicability to our multi-period setting. Since in a multi-period model it is possible to set up portfolios at any point in time prior to the terminal date, it is critical to specify the date from which we are attempting to replicate a given payoff.

Definition 14.3.1 (Replicable Payoff) Let 0 ≤ t < M ≤ T. A payoff X ∈ XM is said to be replicable, or attainable, or marketed, at date t if there exists a self-financing strategy λ ∈ St,M such that X = VM[λ]. In this case, λ is called a (t, M)-replicating strategy for X. The set of payoffs that mature at date M and are replicable at date t is denoted by Mt,M, i.e., we set

Mt,M := {X ∈ XM ; X = VM[λ] for some λ ∈ St,M}.

The set Mt,M is also referred to as the (t, M)-marketed space.


We start by showing that a payoff that is replicable at a certain date is automatically replicable at all successive dates. This fact will be extensively used without explicit reference.


Proposition 14.3.2 Let 0 ≤ s < t < M ≤ T. Every payoff with maturity M that is replicable at date s is also replicable at date t, i.e., Ms,M ⊂ Mt,M.

Proof Take an arbitrary X ∈ Ms,M and assume that X = VM[λ] for a suitable self-financing strategy λ ∈ Ss,M. It is clear that the strategy μ ∈ St defined by μ_u = λ_u, u ∈ {t, . . . , T − 1}, is self-financing with termination date M and satisfies VM[μ] = X. This shows that we have X ∈ Mt,M and concludes the proof.

The following example helps gain a better understanding of how critically the concept of replication depends on the date from which we attempt to replicate. It shows that a payoff which is not replicable at the initial date may turn out to be replicable at a later date.

Example 14.3.3 (Gaining Replicability Through Time) Let Ω = {ω1, ω2, ω3, ω4} and consider a two-period market with information structure P given by

P0 = {Ω}, P1 = {{ω1, ω2}, {ω3}, {ω4}}, P2 = {{ω1}, {ω2}, {ω3}, {ω4}}.

We consider two basic securities specified by

S^1: 1
├── 1 on {ω1, ω2}
│   ├── 1 on ω1
│   └── 1 on ω2
├── 1 on ω3
│   └── 1 on ω3
└── 1 on ω4
    └── 1 on ω4

S^2: 10
├── 30 on {ω1, ω2}
│   ├── 40 on ω1
│   └── 10 on ω2
├── 20 on ω3
│   └── 20 on ω3
└── 5 on ω4
    └── 5 on ω4

Moreover, consider the payoff X ∈ X2 given by X = 30·1_{ω1}. It is not difficult to verify that X is replicable at date 1. Indeed, a (self-financing) strategy (λ1) ∈ S1,2 is a replicating strategy for X if and only if

λ_1^1 = −10·1_{ω1,ω2} − 20a·1_{ω3} − 5b·1_{ω4},
λ_1^2 = 1_{ω1,ω2} + a·1_{ω3} + b·1_{ω4}


for some a, b ∈ R. However, we cannot find any portfolio λ0 ∈ R^2 such that λ = (λ0, λ1) is (0, 2)-self-financing for some replicating strategy (λ1) ∈ S1,2 for X. To see this, note that the equality V_1^Liq[λ] = V_1^Acq[λ] is equivalent to the system

λ_0^1 + 30λ_0^2 = 20,
λ_0^1 + 20λ_0^2 = 0,
λ_0^1 + 5λ_0^2 = 0,

which is easily seen to admit no solution. This shows that the payoff X is not replicable at date 0.

The next corollary shows that the marketed space is a linear subspace of the corresponding payoff space. In addition, marketed spaces are easily seen to satisfy the localization property stated at the beginning of Chap. 11 and, hence, qualify as the domain for conditional functionals studied there. Moreover, they always contain a strictly-positive payoff.

Proposition 14.3.4 Let 0 ≤ t < M ≤ T. The following statements hold for all self-financing strategies λ, μ ∈ St,M, all replicable payoffs X, Y ∈ Mt,M, every a ∈ R, and every Z ∈ Xt:

(i) If λ is a replicating strategy for X and μ is a replicating strategy for Y, then λ + μ is a replicating strategy for X + Y.
(ii) If λ is a replicating strategy for X, then aλ is a replicating strategy for aX.
(iii) If λ is a replicating strategy for X, then Zλ is a replicating strategy for ZX.

Proof The assertions follow from the linearity of VM and Proposition 14.2.13. The explicit verification is left as an exercise.

Corollary 14.3.5 For all 0 ≤ t < M ≤ T the set Mt,M is a linear subspace of XM satisfying ZX ∈ Mt,M for all Z ∈ Xt and X ∈ Mt,M. Moreover, Mt,M contains a strictly-positive payoff.

Proof To show that Mt,M contains a strictly-positive payoff, it suffices to note that VM[η] belongs to Mt,M and recall that it is a strictly-positive payoff. The other assertions follow directly from Proposition 14.3.4.
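The replication computations of Example 14.3.3 are plain systems of linear equations, so they can be checked mechanically. A brief illustrative sketch (using numpy, with our own variable names) recovers the date-1 replicating portfolio on the atom {ω1, ω2}:

    import numpy as np

    # date-2 prices of the two securities in the states reachable from {w1, w2}
    A = np.array([[1.0, 40.0],    # (S_2^1, S_2^2) in state w1
                  [1.0, 10.0]])   # (S_2^1, S_2^2) in state w2
    X = np.array([30.0, 0.0])     # the target payoff 30 * 1_{w1}
    lam = np.linalg.solve(A, X)
    print(lam)                    # [-10.  1.], matching lambda_1 on {w1, w2}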


14.4


Complete Markets

Recall that a single-period financial market is called complete if it is possible to replicate every terminal payoff. For a multi-period market, completeness requires being able to replicate payoffs of arbitrary maturity at every date prior to maturity.

Definition 14.4.1 (Complete Market) Let 0 ≤ t < M ≤ T. The market is (t, M)-complete if every payoff maturing at date M is replicable at date t, i.e., if Mt,M = XM. The market is complete if it is (t, M)-complete for all 0 ≤ t < M ≤ T.

For a market to be complete it is necessary and sufficient that every one-period "submarket" is complete, i.e., that payoffs can be replicated from the last date before their maturity. This is, in turn, equivalent to every payoff being replicable at the initial date. Sometimes, this last condition is used to define market completeness.

Proposition 14.4.2 The following statements are equivalent:

(a) The market is complete.
(b) The market is (0, M)-complete for every M ∈ {1, . . . , T}.
(c) The market is (t, t + 1)-complete for every t ∈ {0, . . . , T − 1}.

Proof It is clear that (a) implies (b). Next, assume that (b) holds and fix an arbitrary date t ∈ {0, . . . , T − 1}. Then, we easily see that Xt+1 = M0,t+1 ⊂ Mt,t+1 ⊂ Xt+1. This shows that (b) implies (c). To conclude the proof, assume that (c) holds and fix a date M ∈ {1, . . . , T}. To establish (a), we prove by backward induction that Mt,M = XM for every t ∈ {0, . . . , M − 1}.

Base Step If t = M − 1, then MM−1,M = XM by point (c).

Induction Step Assume that Mt,M = XM for some t ∈ {1, . . . , M − 1} and take an arbitrary X ∈ XM. Since X is replicable at date t by assumption, there exists a self-financing strategy λ ∈ St,M such that VM[λ] = X. Moreover, it follows from (c) that Vt[λ] is replicable at date t − 1, so that there exists a self-financing strategy μ ∈ St−1,t such that Vt[μ] = Vt[λ]. As a result, the strategy ξ ∈ St−1 defined by

ξ_u := μ_{t−1} if u = t − 1,
ξ_u := λ_u if u ∈ {t, . . . , M − 1},
ξ_u := 0 if u ∈ {M, . . . , T − 1},


is clearly self-financing with termination date M and satisfies VM[ξ] = VM[λ] = X. Hence, X is replicable at date t − 1. Since X was an arbitrary payoff in XM, we infer that Mt−1,M = XM. This concludes the induction argument.

By the preceding result, if the market is incomplete, then at least one of the one-period "submarkets" must be incomplete. The next example shows that some of them may nevertheless be complete. It also provides a counterexample to the conjecture that to check market completeness it suffices to establish that all payoffs maturing at the terminal date can be replicated from all dates prior to it, i.e., that Mt,T = XT for every t ∈ {0, . . . , T − 1} or, equivalently, that all payoffs maturing at the terminal date are replicable at the initial date, i.e., that M0,T = XT. However, Proposition 15.2.6 will show that the conjecture is true if we assume the Law of One Price, which we still need to formulate for multi-period markets.

Example 14.4.3 (Incomplete Market With M0,T = XT) Let Ω = {ω1, ω2} and consider a two-period market with information structure P given by

P0 = {Ω}, P1 = P2 = {{ω1}, {ω2}}.

We consider two basic securities specified by

S^1: 1
├── 1 on ω1
│   └── 1 on ω1
└── 1 on ω2
    └── 1 on ω2

S^2: 1
├── 1 on ω1
│   └── 2 on ω1
└── 1 on ω2
    └── 1 on ω2

It is not difficult to show that this market is incomplete. For instance, the payoff X ∈ X1 given by X = 1_{ω1} is not replicable at date 0, so that M0,1 ≠ X1. At the same time, one can easily verify that M0,2 = X2 holds (so that, a fortiori, M1,2 = X2 also holds). To see this, note that for every a, b ∈ R the payoff Y ∈ X2 given by Y = a·1_{ω1} + b·1_{ω2} can be replicated by the self-financing strategy λ ∈ S0,2 specified by

λ0 = (b, 0), λ_1^1 = (2b − a)·1_{ω1} + b·1_{ω2}, λ_1^2 = (a − b)·1_{ω1}.



For each maturity, it is useful to consider claims that pay one unit of the numéraire in a pre-specified state of the economy and nothing in all other states. The following definition is the natural generalization of Arrow-Debreu payoffs from the one-period to the multi-period setting.

Definition 14.4.4 (Arrow-Debreu Payoff) Let M ∈ {1, . . . , T} and A ∈ PM. The payoff 1_A ∈ XM is called an Arrow-Debreu payoff (with maturity M).

Also in the multi-period setting, Arrow-Debreu payoffs can be seen as elementary building blocks for all other payoffs. As a consequence, completeness is easily seen to be equivalent to all Arrow-Debreu payoffs being replicable at date 0.

Proposition 14.4.5 The following statements are equivalent:

(a) The market is complete.
(b) Every Arrow-Debreu payoff is replicable at date 0.

Proof It is clear that (a) implies (b). To prove the converse implication, fix an arbitrary M ∈ {1, . . . , T} and assume that 1_A ∈ M0,M for every A ∈ PM. Since the set of indicator functions of the atoms of PM is a basis for XM by Proposition 9.4.8, it follows that M0,M = XM. Hence, the market is complete by Proposition 14.4.2.

Example 14.4.6 (Arrow-Debreu Market) We speak of an Arrow-Debreu market whenever we have K basic securities of the form

S^1 = (S_0^1, . . . , S_{T−1}^1, 1_{ω1}), . . . , S^K = (S_0^K, . . . , S_{T−1}^K, 1_{ωK})

with S_0^1, . . . , S_0^K > 0 and S_t^1, . . . , S_t^K ⪈ 0 for every t ∈ {1, . . . , T − 1}, and such that for all ω ∈ Ω and t ∈ {1, . . . , T − 1} we have S_t^i(ω) > 0 for some i ∈ {1, . . . , K}. This market satisfies the assumptions stipulated at the end of Sect. 14.1 and is, clearly, complete by Proposition 14.4.5.

14.5

Changing the Unit of Account

This section is devoted to discussing how to formalize a change of the unit of account in the multi-period model. Since the basic ideas are the same as for the one-period model in Sect. 5.5, we can take a slightly more direct approach and start by defining rescaling processes. Definition 14.5.1 (Rescaling Process) Any strictly-positive P-adapted stochastic process R ∈ L is said to be a rescaling process. The set of rescaling processes is denoted by R. 


Rescaling processes arise as "exchange-rate" processes when switching from one unit of account to another. All examples described in Chap. 5 have their multi-period counterparts. As in the one-period case, for definiteness we assume that the original unit of account is a numéraire currency.

Example 14.5.2 (Numéraire Currency) If we wish to express everything in a new currency, then we need to multiply payments at each date by the appropriate exchange rate from the original numéraire currency to the new one. In this case, the corresponding rescaling process R ∈ R is such that R_t is the exchange rate from the original currency to the new one at date t for every t ∈ {0, . . . , T}.

Example 14.5.3 (Numéraire Security) Assume that all the intermediate prices as well as the terminal payoff of, say, the first basic security are strictly positive. Then, at every date, it is possible to express payments in units of this security, i.e., it is possible to use this basic security as a new unit of account. The corresponding rescaling process R ∈ R is given by

R_t = 1/S_t^1

for every t ∈ {0, . . . , T}. In particular, note that R_t S_t^1 = 1 for every t ∈ {0, . . . , T}. When we use a basic security as the unit of account we say that it is the numéraire security.

Example 14.5.4 (Numéraire Strategy) The previous example can be generalized in the following way. Assume θ ∈ S0,T is a self-financing strategy with strictly-positive value process (for instance, we could take θ to be the equal-weight market strategy). In this case, at every date, we can express payments in units of the value of this strategy. The corresponding rescaling process R ∈ R is given by

R_t = 1/V_t[θ]

for every t ∈ {0, . . . , T}. In particular, note that R_t V_t[θ] = 1 for every t ∈ {0, . . . , T}. When we use a strategy satisfying the above properties as the unit of account we say that it is the numéraire strategy.

The Rescaled Market

Let R ∈ R be a rescaling process. Let M ∈ {0, . . . , T} and consider a (possibly state-contingent) payment X ∈ XM at date M expressed in the numéraire currency fixed at the beginning. When expressed in the new unit of account, the above payment becomes

X̃ = R_M X.


We refer to X̃ as the rescaled payment (with respect to R). We denote by X̃M the space of rescaled payoffs with maturity M, i.e., we set

X̃M = {R_M X ; X ∈ XM}.

From a mathematical point of view, the spaces XM and X̃M both "coincide" with the space L(PM). In financial terms, however, each of them represents payoffs in a different unit of account. It is also important to keep in mind that if X ∈ XM and X̃ ∈ X̃M satisfy X̃ = R_M X, then they represent the "payoff" of the same financial contract: X in the original unit of account and X̃ in the new unit of account. For each i ∈ {1, . . . , N}, the ith basic security is represented in the new unit of account by the process

S̃^i = (S̃_0^i, . . . , S̃_T^i) = (R_0 S_0^i, . . . , R_T S_T^i).

As in the one-period case, the strict positivity of the "exchange rates" of R implies that all the standing assumptions imposed on the basic securities in the original unit of account continue to hold after conversion into the new unit of account:

(1) S̃_0^i > 0 for every i ∈ {1, . . . , N}.
(2) S̃_t^i ⪈ 0 for all i ∈ {1, . . . , N} and t ∈ {1, . . . , T}.
(3) For all ω ∈ Ω and t ∈ {1, . . . , T} we have S̃_t^i(ω) > 0 for some i ∈ {1, . . . , N}.

As the theory developed so far is exclusively based on these initial assumptions on the basic securities, it follows that all the results established in this chapter remain valid in the context of every rescaled market.

Remark 14.5.5 As done in the single-period part, when developing our theory it is important to highlight which concepts do not depend on the particular choice of the accounting unit. The main structural notion introduced in this chapter was that of market completeness. It is not difficult to prove that whether or not the market is complete does not depend on the unit of account; see Exercise 14.6.12.

Remark 14.5.6 (Notation for Rescaled Payments) As in the one-period case, we use a "tilde" to distinguish the notation after rescaling from the notation in the original unit of account; see Exercise 14.6.12. The same caveat with respect to this notation holds here: It is imprecise because it does not make explicit reference to the dependence on the chosen rescaling process. However, this omission should cause no confusion because we only deal with one rescaling process at a time. Note that we do not need special notation for (self-financing) trading strategies because they are given in terms of portfolios, whose entries represent units of the respective basic securities (and not of their payoffs).
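Operationally, rescaling is nothing but a componentwise multiplication by the exchange-rate process. A short illustrative sketch (our own encoding, using the first security as numéraire as in Example 14.5.3):

    def rescale(securities, R):
        """S-tilde_t^i = R_t * S_t^i for every security i and every date t."""
        return [
            [{w: R[t][w] * S[t][w] for w in S[t]} for t in range(len(S))]
            for S in securities
        ]

    omega = ["w1", "w2"]
    const = lambda c: {w: float(c) for w in omega}
    S1 = [const(1), const(1.1), const(1.21)]               # a risk-free security
    S2 = [const(5), {"w1": 8.0, "w2": 2.0}, {"w1": 9.0, "w2": 3.0}]
    R = [{w: 1.0 / S1[t][w] for w in omega} for t in range(3)]
    print(rescale([S1, S2], R)[0])  # the rescaled numeraire is identically 1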


14.6


Exercises

In all exercises below we consider the multi-period economy described in Sect. 14.1 and adhere to the market specifications introduced there.

Exercise 14.6.1 Show that for every t ∈ {0, . . . , T − 1} the set St is a vector space and prove the following statements for all 0 ≤ t < M ≤ T:

(i) The map V_u^Acq : St → Xu is linear for every u ∈ {t, . . . , T − 1}.
(ii) The map V_u^Liq : St → Xu is linear for every u ∈ {t + 1, . . . , T}.

Exercise 14.6.2 Let 0 ≤ t < M ≤ T. Show that St,M is a linear subspace of St satisfying

{λ ∈ St ; λ_t = · · · = λ_{M−1}, λ_M = · · · = λ_{T−1} = 0} ⊂ St,M.

Moreover, show that the map Vu : St,M → Xu is linear for every u ∈ {t, . . . , T}.

Exercise 14.6.3 Let 0 ≤ t < M ≤ T. Prove that the following statements hold for all self-financing strategies λ, μ ∈ St,M, all replicable payoffs X, Y ∈ Mt,M, every a ∈ R, and every Z ∈ Xt:

(i) If λ is a replicating strategy for X and μ is a replicating strategy for Y, then λ + μ is a replicating strategy for X + Y.
(ii) If λ is a replicating strategy for X, then aλ is a replicating strategy for aX.
(iii) If λ is a replicating strategy for X, then Zλ is a replicating strategy for ZX.

Deduce that Mt,M is a linear subspace of XM satisfying

Z ∈ Xt, X ∈ Mt,M ⇒ ZX ∈ Mt,M.

Exercise 14.6.4 Let 0 ≤ t < M ≤ T and take a strategy θ ∈ SM,T with strictly-positive value process. Prove that for every payoff X ∈ XM the following statements are equivalent:

(a) X is replicable at date t.
(b) (X/VM[θ]) VT[θ] is replicable at date t.

Exercise 14.6.5 Show that for every t ∈ {0, . . . , T − 1} the following statements are equivalent:

(a) The market is (t, t + 1)-complete.


(b) For all A ∈ Pt and X ∈ Xt+1 there exists a strategy λ ∈ St,t+1 such that Vt+1[λ] = X on A.
(c) For all A ∈ Pt and X ∈ Xt+1 there exists a portfolio λ ∈ R^N such that

∑_{i=1}^N λ^i S_{t+1}^i = X on A.

Exercise 14.6.6 Deduce from Exercise 14.6.5 that the following statements are equivalent:

(a) The market is complete.
(b) For all t ∈ {0, . . . , T − 1} and A ∈ Pt and for every payoff X ∈ Xt+1 there exists a portfolio λ ∈ R^N such that

∑_{i=1}^N λ^i S_{t+1}^i = X on A.

This result provides a simple operational condition to check whether the market is complete working at a “localized” level in each one-period submarket. Exercise 14.6.7 (Put-Call Parity) Let i ∈ {1, . . . , N} and p ∈ (0, ∞). Show that the Put-Call Parity (with respect to the ith basic security) holds, i.e., for every M ∈ {1, . . . , T } we have i i i max{SM − p, 0} + p = max{p − SM , 0} + SM .

This establishes a link between the payoff of a call and a put option with the same underlying and strike price. Exercise 14.6.8 Let  = {ω1 , . . . , ω4 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}.

256

14 Multi-Period Financial Markets

We consider two basic securities specified by

1: 2

1 3

1 1 4 1

2: 5

6 4

8 2 8 0

(i) Show that the market is complete. (ii) Find the replicating strategies for 1ω1 at date 0. (iii) Show that every payoff maturing at date 2 has a unique replicating strategy at date 0. Exercise 14.6.9 Let  = {ω1 , ω2 , ω3 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 }}, P2 = {{ω1 }, {ω2 }, {ω3 }}. We consider two basic securities specified by 1 1: 2

1 3

6

1 4

2: 5

8 2

4

4

(i) Show that the market is complete. (ii) Find the replicating strategies for 1ω1 at date 0. (iii) Show that every payoff maturing at date 2 has infinitely many replicating strategies at date 0. Exercise 14.6.10 Let  = {ω1 , . . . , ω6 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 , ω3 }, {ω4 , ω5 , ω6 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }, {ω5 }, {ω6 }}.

14.6 Exercises

257

We consider two basic securities specified by 1 1 1 1 1 1

1 1: 1

1 1

(i) (ii) (iii) (iv)

6 2: 3

4 2

8 4 6 2 3 1

Show that the market is incomplete. Show that 51ω1 + 31{ω2 ,ω3 } + 1ω4 is replicable at date 0. Show that 1{ω1 ,ω4 } is not replicable at date 0. Find a new security S 3 such that the extended market is complete.

Exercise 14.6.11 Let (0 , P0 ) be the probability space given by 0 = {1, . . . , H } with H ≥ 2 and P0 (1) = p1 , . . . , P0 (H ) = pH for fixed p1 , . . . , pH ∈ (0, 1) adding up to 1. This setup corresponds to a simple experiment with H possible outcomes. We consider a multi-period economy with T dates, where T ≥ 2, and state space  = T0 . The outcomes 1, . . . , H are interpreted as a certain “development” or “movement” of the economy. More precisely, the outcome 1 corresponds to the best development and the outcome H to the worst development. Every element ω ∈  can therefore be interpreted as a time sequence of developments ω = (ω1 , . . . , ωT ). For every t ∈ {1, . . . , T } the random variable Zt ∈ L() defined by Zt (ω) = ωt detects the development of the economy during the tth period. Similarly, for every index j j ∈ {1, . . . , H }, the random variable Nt ∈ L() defined by j

Nt (ω) = card({s ∈ {1, . . . , t} ; ωs = j }) counts the number of periods with a development of type j up to date t. We equip  with the probability measure P : E → [0, 1] such that P(ω) =

T  t =1

P0 (ωt ) =

H  j =1

j

N (ω)

pj T

258

14 Multi-Period Financial Markets

for every ω ∈ . It follows from Exercise 2.8.7 that P is the unique probability measure making the simple experiments independent. Further, we consider the information structure P = (P0 , . . . , PT ) given by P0 = {}, P1 = P(Z1 ), . . . , PT = P(Z1 , . . . , ZT ). The market consists of two basic securities. The first security is denoted by S 1 and satisfies for every t ∈ {0, . . . , T } St1 = (1 + r)t , where r ∈ (−1, ∞). We interpret S 1 as a risk-free bond, or bank account, or money market account. The quantity r represents the interest rate paid by the security, which is assumed to be constant through time. The second security is denoted by S 2 and satisfies for every t ∈ {1, . . . , T } St2 = S02

H 

j

(1 + rj )Nt ,

j =1

where S02 ∈ (0, ∞) and r1 , . . . , rH ∈ (−1, ∞) are such that r1 > · · · > rH . We typically interpret S 2 as modelling a risky asset such as a stock. For each j ∈ {1, . . . , H } the quantity rj represents the rate of return of the stock, which is assumed to be constant through time, if the development of the economy has been of type j . The above market is called the multinomial market corresponding to the parameters (T , H, p1 , . . . , pH , r, r1 , . . . , rH ). When H = 2, we generally speak of the binomial or Cox-Ross-Rubinstein market. Prove that, in the general multinomial market, the following statements hold: (i) (ii) (iii) (iv) (v)

 contains H T elements. PT is the discrete partition of . j Nt is Pt -measurable for every j ∈ {1, . . . , H } and every t ∈ {1, . . . , T }. S 1 and S 2 satisfy the assumptions listed in Sect. 14.1. For every t ∈ {1, . . . , T } we have St2 = St2−1

H 

(1 + rj 1{Zt =j } ).

j =1

(vi) The market is complete if and only if H = 2.

14.6 Exercises

259

Exercise 14.6.12 Let R ∈ R be a rescaling process and fix 0 ≤ t < M ≤ T . Prove the following statements: (i) (ii) (iii)

uAcq [λ] = Ru VuAcq [λ] for all λ ∈ S t and u ∈ {t, . . . , T − 1}. V uLiq[λ] = Ru VuLiq [λ] for all λ ∈ S t and u ∈ {t + 1, . . . , T }. V u [λ] = Ru Vu [λ] for all λ ∈ S t,M and u ∈ {t, . . . , T − 1}. V

Deduce that    t,M = X M ; X ∈ Mt,M . ∈X M RM M if and only if Mt,M = XM . This shows that whether t,M = X In particular, we have M or not the market is complete does not depend on the chosen unit of account.

15

Market-Consistent Prices for Replicable Payoffs

This chapter is devoted to extending to the multi-period setting the theory of marketconsistent prices for replicable payoffs that was laid out in Chap. 6 for a single-period economy. Because of the dynamic nature of the setting, financial contracts can now mature at any date after the initial date and their prices can be determined at any date prior to their maturity. The key requirement enabling a rich valuation theory is that it should not be possible to make a riskless profit by trading in the basic securities, i.e., price and payoff arbitrage opportunities should not exist. Similar to the single-period case, the non-existence of price arbitrage opportunities is equivalent to the validity of the multiperiod version of the Law of One Price. In this case, a conditional linear pricing functional exists for every date and every maturity. Moreover, markets in which no payoff arbitrage opportunities exist can be characterized by the strict positivity of the pricing functionals.

Standing Assumption Throughout the entire chapter we work in the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there.

15.1

Price and Payoff Arbitrage

We start by extending the notions of price and payoff arbitrage from the single-period to the multi-period setting. The notion of price arbitrage captures the possibility of making a riskless profit by exploiting price differences for the same payoff. In analogy to the singleperiod case, let 0 ≤ t < M ≤ T be given dates and assume that two self-financing strategies μ, ξ ∈ S t,M have the same liquidation value at time M, but different acquisition

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_15

261

262

15 Market-Consistent Prices for Replicable Payoffs

values at time t. For example, let μ be cheaper than ξ in some state A ∈ Pt so that Vt [μ](A) < Vt [ξ ](A) and VM [μ] = VM [ξ ]. An agent could now sell the more expensive strategy ξ and use the proceeds to buy the cheaper strategy μ, ending up with the combined strategy λ = μ − ξ . At time t the price of this combined strategy is Vt [λ](A) = Vt [μ](A) − Vt [ξ ](A) < 0, which, being a strictly-negative price, represents a strictly-positive profit. Being selffinancing, implementing the strategy λ does not require any additional funding after time t. Finally, since the terminal payoff of μ exactly offsets that of −ξ , we have VM [λ] = VM [μ] − VM [ξ ] = 0, so that no commitments result at maturity. Such a strategy is called a price arbitrage. Definition 15.1.1 (Price Arbitrage) Let 0 ≤ t < M ≤ T . We say that a self-financing strategy λ ∈ S t,M is a (t, M)-price arbitrage if it satisfies Vt [λ](A) < 0 for some A ∈ Pt and VM [λ] = 0.  Just as in the single-period model, the second source of a riskless profit is price inconsistency for comparable payoffs. This occurs whenever, for 0 ≤ t < M ≤ T , there exist two self-financing strategies μ, ξ ∈ S t,M satisfying Vt [μ] ≤ Vt [ξ ] and VM [μ]  VM [ξ ]. In this case, an agent can set up the strategy λ = μ−ξ , which has a nonpositive acquisition value at time t Vt [λ] = Vt [μ] − Vt [ξ ] ≤ 0, but a nonzero positive liquidation value at time M VM [λ] = VM [μ] − VM [ξ ]  0. Moreover, the strategy being self-financing, there are no rebalancing costs at intermediate dates. The cost of setting up this strategy is zero or even strictly-negative, but its payoff is never negative and sometimes strictly positive.

15.1 Price and Payoff Arbitrage

263

Definition 15.1.2 (Payoff Arbitrage) Let 0 ≤ t < M ≤ T . We say that a self-financing strategy λ ∈ S t,M is a (t, M)-payoff arbitrage if it satisfies Vt [λ] ≤ 0 and VM [λ]  0.  As in the case of a single-period economy, the absence of payoff arbitrage opportunities automatically implies the absence of price arbitrage opportunities (but not viceversa). Proposition 15.1.3 Let 0 ≤ t < M ≤ T . If (t, M)-payoff arbitrage opportunities do not exist, then (t, M)-price arbitrage opportunities do not exist either. Proof Assume that λ ∈ S t,M is a (t, M)-price arbitrage opportunity, so that Vt [λ](A) < 0 for some A ∈ Pt and VM [λ] = 0. Define the self-financing strategy μ ∈ S t,M by setting μu =

⎧  ⎨1A λu − ⎩0,

Vt [λ](A) Vt [η](A) ηu

 ,

if u ∈ {t, . . . , M − 1}, if u ∈ {M, . . . , T − 1}.

Then, we immediately see that Vt [μ] = 0 and VM [μ] = −

Vt [λ](A) 1A VM [η]. Vt [η](A)

Since η has a strictly-positive value at each date, it follows that VM [μ]  0, proving that μ is a (t, M)-payoff arbitrage opportunity. This establishes our desired claim.  As in the single-period case, the key assumption to develop a rich pricing theory is the absence of payoff arbitrage opportunities. We refer to Remark 6.1.6 for a brief justification of the relevance of arbitrage-free markets in the economic literature. Definition 15.1.4 (Arbitrage-Free Market) Let 0 ≤ t < M ≤ T . We say that the market is (t, M)-arbitrage free if there exists no (t, M)-payoff arbitrage opportunity. Moreover, we say that the market is arbitrage free if the market is (t, M)-arbitrage free for all 0 ≤ t < M ≤ T .  The next important result provides a variety of equivalent conditions for a market to be arbitrage free. Each condition requires the absence of arbitrage opportunities in suitable single-period or multi-period “submarkets”. Theorem 15.1.5 The following statements are equivalent: (a) The market is arbitrage free. (b) The market is (0, M)-arbitrage free for every M ∈ {1, . . . , T }. (c) The market is (t, t + 1)-arbitrage free for every t ∈ {0, . . . , T − 1}.

264

15 Market-Consistent Prices for Replicable Payoffs

(d) The market is (t, T )-arbitrage free for every t ∈ {0, . . . , T − 1}. (e) The market is (0, T )-arbitrage free. Proof Clearly, (a) implies (b). To prove that (b) implies (c), assume the strategy λ ∈ S t,t +1 is a (t, t + 1)-payoff arbitrage for some t ∈ {0, . . . , T − 1} and consider the self-financing strategy μ ∈ S 0,t +1 defined by ⎧ ⎪ ⎪ ⎨0, μu = λu − ⎪ ⎪ ⎩ 0,

if u ∈ {0, . . . , t − 1}, Vt [λ] Vt [η] ηu ,

if u = t, if u ∈ {t + 1, . . . , T − 1}.

It is easy to see that V0 [μ] = 0 and Vt +1 [μ] = Vt +1 [λ] −

Vt [λ] Vt +1 [η] ≥ Vt +1 [λ]  0, Vt [η]

where we used that, being a payoff arbitrage, the strategy λ satisfies Vt [λ] ≤ 0 and Vt +1 [λ]  0. Hence, μ is a (0, t + 1)-payoff arbitrage. Hence, that a (0, t + 1)-payoff arbitrage opportunity always exists if the market is not (t, t + 1)-arbitrage free, so, (b) implies (c). To prove that (c) implies (d), assume that the market is (t, t + 1)-arbitrage free for every t ∈ {0, . . . , T −1} and consider a self-financing strategy λ ∈ S t,T satisfying Vt [λ] ≤ 0 for some t ∈ {0, . . . , T − 1}. We claim that we cannot have Vu [λ]  0 for any u ∈ {t, . . . , T }. In particular, this implies that λ cannot be a (t, T )-payoff arbitrage, establishing the desired implication. We prove our claim by induction. Base Step Since Vt [λ] ≤ 0 by assumption, the claim trivially holds if u = t. Induction Step Assume the claim holds for some u ∈ {t, . . . , T − 1}, but we have Vu+1 [λ]  0. By assumption, we must either have Vu [λ] = 0 or Vu [λ](A) < 0 for a some A ∈ Pu . In the former case, the self-financing strategy μ ∈ S u,u+1 given by ⎧ ⎨λ , u μs = ⎩0,

if s = u, if s ∈ {u + 1, . . . , T − 1}

is easily seen to be a (u, u + 1)-payoff arbitrage opportunity. In the latter case, the selffinancing strategy ξ ∈ S u,u+1 given by ⎧  ⎨1A λu − ξs = ⎩0,

Vu [λ](A) Vu [η](A) ηu

 ,

if s = u, if s ∈ {u + 1, . . . , T − 1}

15.2 The Law of One Price

265

is a (u, u + 1)-payoff arbitrage opportunity. Indeed, we have Vu [ξ ] = 0 and 

Vu [λ](A) Vu [λ](A) Vu+1 [η] ≥ − 1A Vu+1 [η]  0. Vu+1 [ξ ] = 1A Vu+1 [λ] − Vu [η](A) Vu [η](A) Consequently, the assumption Vu+1 [λ]  0 leads in any case to a (u, u + 1)-payoff arbitrage. Since this is not possible by (c), it follows that Vu+1 [λ]  0 cannot hold. This concludes the induction argument. It is immediate to see that (d) implies (e). Finally, to prove that (e) implies (a), assume that λ ∈ S t,M is a (t, M)-payoff arbitrage opportunity for some 0 ≤ t < M ≤ T and define a trading strategy μ ∈ S 0 by setting ⎧ ⎪ 0, ⎪ ⎨ μu = λu − 1{Vt [λ]0} VM [η]

Vt [λ] {Vt [λ] 0} is nonempty because λ is a (t, M)-payoff arbitrage opportunity. As a result, μ is a (0, T )-payoff arbitrage. Therefore a (0, T )-arbitrage always exists if the market is not arbitrage free. Hence, (e) implies (a).  Remark 15.1.6 (Changing the Unit of Account) Let 0 ≤ t < M ≤ T . It is immediate to see that a self-financing strategy is a (t, M)-(price) arbitrage opportunity with respect to the fixed underlying numéraire if and only if it is a (t, M)-(price) arbitrage opportunity with respect to any other numéraire. In particular, whether a market is arbitrage free does not depend on the chosen unit of account. This fact will be freely used in the sequel. 

15.2

The Law of One Price

The multi-period version of the Law of One Price requires that self-financing strategies whose values coincide at a given date must have equal values at all prior dates too.

266

15 Market-Consistent Prices for Replicable Payoffs

Definition 15.2.1 (Law of One Price) Let 0 ≤ t < M ≤ T . We say that the (t, M)-Law of One Price holds if for all strategies λ, μ ∈ S t,M we have VM [λ] = VM [μ] ⇒ Vt [λ] = Vt [μ]. We say that the Law of One Price holds if the (t, M)-Law of One Price holds for all 0 ≤ t < M ≤ T.  The linearity of the valuation maps implies that the Law of One Price is equivalent to the absence of price arbitrage opportunities. Proposition 15.2.2 Let 0 ≤ t < M ≤ T . The following statements are equivalent: (a) The (t, M)-Law of One Price holds. (b) There exists no (t, M)-price arbitrage opportunity. (c) For every strategy λ ∈ S t,M we have VM [λ] = 0 ⇒ Vt [λ] = 0. Proof To prove that (a) implies (b), assume that the (t, M)-Law of One Price holds and consider a self-financing strategy λ ∈ S t,M satisfying VM [λ] = 0. Then, since the zero strategy is self-financing and has zero value at each date, we must have Vt [λ] = 0. This shows that no (t, M)-price arbitrage can exist. To show that (b) implies (c), assume that no (t, M)-price arbitrage exists and take a strategy λ ∈ S t,M satisfying VM [λ] = 0. Note that, for each state A ∈ Pt , we cannot have Vt [λ](A) < 0, for otherwise λ would be a (t, M)-price arbitrage. But Vt [λ](A) > 0 is not possible either since, otherwise, −λ would be a (t, M)-price arbitrage. Hence, we must have Vt [λ] = 0. Finally, assume that (c) holds and consider two strategies λ, μ ∈ S t,M satisfying VM [λ] = VM [μ]. Since VM [λ − μ] = 0 by linearity, our assumption implies that Vt [λ − μ] = 0 and thus, again by linearity, Vt [λ] = Vt [μ]. This establishes (a) and concludes the proof.  Since, by Proposition 15.1.3, no price arbitrage is possible in a market without payoff arbitrage opportunities, the preceding proposition immediately implies that arbitrage-free markets satisfy the Law of One Price. Corollary 15.2.3 Let 0 ≤ t < M ≤ T . If the market is (t, M)-arbitrage free, then the (t, M)-Law of One Price holds. In particular, if the market is arbitrage free, then the Law of One Price holds.

15.2 The Law of One Price

267

The next result shows that the validity of the Law of One Price in suitable “submarkets” implies its global validity. Proposition 15.2.4 The following statements are equivalent: (a) The Law of One Price holds. (b) The (t, t + 1)-Law of One Price holds for every t ∈ {0, . . . , T − 1}. (c) The (t, T )-Law of One Price holds for every t ∈ {0, . . . , T − 1}. Proof It is clear that (a) implies (b). Now, assume that (b) holds and take an arbitrary date t ∈ {0, . . . , T − 1} and a self-financing strategy λ ∈ S t,T such that VT [λ] = 0. We show by backward induction that VM [λ] = 0 for every M ∈ {t, . . . , T }. Base Step The assertion is clearly true if M = T . Induction Step Assume that VM [λ] = 0 for some date M ∈ {t + 1, . . . , T }. Since the (M − 1, M)-Law of One Price holds by assumption, it follows that VM−1 [λ] = 0. This concludes the recursion argument. As a result, we see that Vt [λ] = 0 and we can apply Proposition 15.2.2 to conclude that the (t, T )-Law of One Price holds, which establishes (c). Finally, assume that (c) holds and take arbitrary 0 ≤ t < M ≤ T . Consider a selffinancing strategy λ ∈ S t,M such that VM [λ] = 0. It is immediate to see that λ belongs to S t,T and satisfies VT [λ] = 0. Since the (t, T )-Law of One Price holds by assumption, we infer that Vt [λ] = 0. A direct application of Proposition 15.2.2 implies that the (t, M)-Law of One Price holds, yielding (a).  As illustrated by the next example, for the Law of One Price to hold globally it does not suffice that the (0, M)-Law of One Price holds for every date M ∈ {1, . . . , T }. In particular, the validity of the (0, T )-Law of One Price does not imply that of the global Law of One Price. Example 15.2.5 Let  = {ω1 , ω2 , ω3 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 }}, P2 = {{ω1 }, {ω2 }, {ω3 }}. We consider two basic securities specified by 1 1

: 1 1

1 1 1

1 2

: 1 0

2 0 1

268

15 Market-Consistent Prices for Replicable Payoffs

It is easy to show that both the (0, 1)- and the (0, 2)-Law of One Price hold. Indeed, set A = {ω1 , ω2 }. On the one hand, for every λ ∈ S 0,1 such that V1 [λ] = 0 we clearly have V0 [λ] = V1 [λ](A) = 0. On the other hand, every μ ∈ S 0,2 such that V2 [μ] = 0 satisfies V0 [μ] = V1 [μ](A) =

1 (V2 [μ](ω1 ) + V2 [μ](ω2 )) = 0. 2

The above claim follows directly from Proposition 15.2.2. However, the (1, 2)-Law of One Price fails because the (self-financing) strategy ξ ∈ S 1,2 such that ξ11 = 1Ac , ξ12 = −1Ac is easily seen to satisfy V1 [ξ ] = 1Ac and V2 [ξ ] = 0.



In Sect. 14.4 we suggested that, whenever the Law of One Price holds, it is possible to characterize market completeness in terms of the replicability at date 0 of each payoff maturing at date T . This is the content of the following proposition. Proposition 15.2.6 Assume the Law of One Price holds. Then, the following statements are equivalent: (a) The market is complete. (b) Mt,T = XT for every t ∈ {0, . . . , T − 1}. (c) M0,T = XT . Proof It is clear that (a) implies (b), which in turn entails (c). To prove that (c) implies (a), assume that M0,T = XT holds and take M ∈ {1, . . . , T −1}. We claim that M0,M = XM . To this effect, take X ∈ XM and define a payoff Y ∈ XT by Y =

X VT [η]. VM [η]

Since M0,T = XT , there is a self-financing strategy λ ∈ S 0,T such that Y = VT [λ]. Now, consider the self-financing strategy μ ∈ S M,T defined by μu =

X ηu , u ∈ {M, . . . , T − 1}. VM [η]

15.3 Market-Consistent Prices

269

Since VT [μ] =

X VT [η] = Y = VT [λ], VM [η]

the Law of One Price implies that VM [λ] = VM [μ] =

X VM [η] = X. VM [η]

This yields X ∈ M0,M . We conclude that M0,M = XM , and since M was arbitrary, the market is complete by virtue of Proposition 14.4.2.  Remark 15.2.7 (Changing the Unit of Account) Let 0 ≤ t < M ≤ T . It is immediate to verify that the validity of the (t, M)-Law of One Price does not depend on the chosen unit of account. This also follows from Remark 15.1.6 and Proposition 15.2.2. This fact will be freely used in the sequel. 

15.3

Market-Consistent Prices

We are now ready to extend the notion of market-consistent prices for replicable payoffs to a multi-period market where the Law of One Price holds. Due to the multi-period structure, we cannot deal with a unique market-consistent price as in the single-period setting, but need to consider a whole family of market-consistent prices indexed by the valuation date and the maturity of the payoff. We start by proving the multi-period counterpart of Proposition 6.3.1 stating essentially that, when the Law of One Price does not hold, every replicable payoff can be replicated at arbitrary cost. Proposition 15.3.1 Let 0 ≤ t < M ≤ T and assume the (t, M)-Law of One Price does not hold. Then, there exists A ∈ Pt such that for every replicable payoff X ∈ Mt,M and every a ∈ R there exists a self-financing strategy λ ∈ S t,M satisfying Vt [λ](A) = a and VM [λ] = X. Proof Since the (t, M)-Law of One Price does not hold, there exists a self-financing strategy ξ ∈ S t,M such that Vt [ξ ](A) = 0 for some A ∈ Pt and VM [ξ ] = 0. Take now any X ∈ Mt,M and assume that μ ∈ S t,M is a replicating strategy for X. Moreover, take any a ∈ R. Then, the self-financing strategy λ ∈ S t,M given by λu = μu +

a − Vt [μ](A) ξu , u ∈ {t, . . . , T − 1}, Vt [ξ ](A)

is easily seen to satisfy Vt [λ](A) = a and VM [λ] = X.



270

15 Market-Consistent Prices for Replicable Payoffs

Let 0 ≤ t < M ≤ T and assume the (t, M)-Law of One Price holds. In this case, every self-financing strategy replicating from date t a given marketed payoff maturing at date M has the same acquisition value, which can then be unambiguously interpreted as the price of that payoff at date t. We refer to Chap. 6 for a detailed discussion on the interpretation of market-consistent prices. Definition 15.3.2 (Market-Consistent Price) Let 0 ≤ t < M ≤ T and assume the (t, M)-Law of One Price holds. For every replicable payoff X ∈ Mt,M we set πt,M (X) := Vt [λ], where λ ∈ S t,M is any replicating strategy for X. We say that πt,M (X) is the marketconsistent price of X at date t. The map πt,M : Mt,M → Xt is called the (t, M)-pricing functional. The corresponding family is denoted by π := {πt,M ; 0 ≤ t < M ≤ T }.



As in the single-period case, the notion of market-consistent prices can be extended to nonreplicable payoffs also in a multi-period market. This is done in Chap. 17.

15.4

Pricing Functionals

In a market in which the Law of One Price holds, we have defined a family π = {πt,M ; 0 ≤ t < M ≤ T } of pricing functionals. We now show that each member of this family is a conditional linear functional. From this point on, we start using the notions and results from Chaps 11 and 12 intensively. Theorem 15.4.1 Let 0 ≤ t < M ≤ T and assume the (t, M)-Law of One Price holds. Then, πt,M is a Pt -conditional linear functional. Proof To prove linearity, take X, Y ∈ Mt,M and assume that X = VM [λ] and Y = VM [μ] for suitable strategies λ, μ ∈ S t,M . For every a ∈ R the strategy aλ + μ ∈ S t,M is easily seen to replicate the marketed payoff aX + Y . Therefore, πt,M (aX + Y ) = Vt [aλ + μ] = aVt [λ] + Vt [μ] = aπt,M (X) + πt,M (Y ). Since πt,M (0) = 0 clearly holds, this establishes the linearity of πt,M .

15.4 Pricing Functionals

271

To show conditionality, take X ∈ Mt,M and assume that X = VM [λ] for some strategy λ ∈ S t,M . Fix A ∈ Pt . By Proposition 14.3.4, the payoff 1A X can be replicated by the strategy 1A λ ∈ S t,M . Hence, using the localization result in Proposition 14.2.13, we obtain πt,M (1A X) = VM [1A λ] = 1A VM [λ] = 1A πt,M (X). This implies that πt,M is a conditional functional with respect to Pt .



The next important result establishes a certain consistency of market-consistent prices over time: given 0 ≤ s < t < M ≤ T , it does not matter whether we price a replicable payoff in X ∈ Ms,M directly at time s, or we first determine its price at some intermediate date t to obtain πt,M (X) and then price πt,M (X) at date s. This recursion property will prove very useful in the sequel. Theorem 15.4.2 (Time Consistency) Let 0 ≤ s < t < M ≤ T and assume the (s, M)Law of One Price holds. Then: (i) For every X ∈ Mt,M with πt,M (X) ∈ Ms,t we have X ∈ Ms,M and πs,M (X) = πs,t (πt,M (X)). (ii) For every X ∈ Ms,M we have πt,M (X) ∈ Ms,t and πs,M (X) = πs,t (πt,M (X)). Proof To prove (i), take a payoff X ∈ Mt,M such that πt,M (X) ∈ Ms,t . To show that X ∈ Ms,M , assume that X = VM [μ] for some μ ∈ S t,M and πt,M (X) = Vt [λ] for some λ ∈ S s,t . The strategy ξ ∈ S s defined by ⎧ ⎨λ , u ξu = ⎩ μu ,

if u ∈ {s, . . . , t − 1}, if u ∈ {t, . . . , T − 1},

is easily seen to be self-financing with termination date M. Since X = VM [ξ ], we conclude that X ∈ Ms,M . To establish time consistency, it suffices to note that πs,t (πt,M (X)) = Vs [λ] = Vs [ξ ] = πs,M (X). We now focus on (ii). As X ∈ Ms,M , we have X = VM [λ] for some self-financing strategy λ ∈ S s,M . Since πt,M (X) = Vt [λ], we see that πt,M (X) ∈ Ms,t . The claim now follows immediately from item (i). 

272

15 Market-Consistent Prices for Replicable Payoffs

One of the strongest consequences of time consistency is that to price replicable payoffs with any maturity it suffices to know how to price replicable payoffs with maturity T . Proposition 15.4.3 Let 0 ≤ t < M ≤ T and assume the (t, T )-Law of One Price holds. Let θ ∈ S t,T be a strategy with strictly-positive value process. Then, for every X ∈ Mt,M we have VMX[θ] VT [θ ] ∈ Mt,T and

πt,M (X) = πt,T

 X VT [θ] . VM [θ]

Proof Note that VT [θ] ∈ MM,T . Since VMX[θ] belongs to XM , it follows from Corollary 14.3.5 that Y = VMX[θ] VT [θ] belongs to MM,T as well. Moreover, the PM conditionality of πM,T implies πM,T (Y ) =

X πM,T (VT [θ]) = X. VM [θ ]

Since X ∈ Mt,M , Theorem 15.4.2 shows that Y belongs to Mt,T and πt,T (Y ) = πt,M (πM,T (Y )) = πt,M (X), concluding the proof of the corollary.



We conclude the chapter by showing that the absence of arbitrage opportunities is equivalent to the strict positivity of the pricing functionals. This fundamental characterization of arbitrage-free markets constitutes the key result on our way to a multi-period version of the Fundamental Theorem of Asset Pricing. Lemma 15.4.4 Let 0 ≤ t < M ≤ T and assume the (t, M)-Law of One Price holds. Then, the following statements are equivalent: (a) The market is (t, M)-arbitrage free. (b) πt,M is strictly positive. Proof To prove that (a) implies (b), assume that no (t, M)-payoff arbitrage exists and take a nonzero positive X ∈ Mt,M . Let λ ∈ S t,M be a replicating strategy for X. In particular,

15.4 Pricing Functionals

273

note that VM [λ]  0. If Vt [λ](A) < 0 for some A ∈ Pt , then the self-financing strategy μ ∈ S t,M given by μu =

⎧  ⎨1A λu − ⎩0,

Vt [λ](A) Vt [η](A) ηu



, if u ∈ {t, . . . , M − 1}, if u ∈ {M, . . . , T − 1},

would be a (t, M)-payoff arbitrage. Indeed, we would have Vt [μ] = 0 and

 Vt [λ](A) Vt [λ](A) VM [μ] = 1A VM [λ] − VM [η] ≥ − 1A VM [η]  0. Vt [η](A) Vt [η](A) However, this is not possible because the market is (t, M)-arbitrage free by assumption. For the same reason, we cannot have Vt [λ] = 0 either. As a result, the strategy λ must satisfy Vt [λ]  0. This implies that πt,M (X) = Vt [λ]  0, showing that πt,M is strictly positive. To prove that (b) implies (a), assume that πt,M is strictly positive and take a selffinancing strategy λ ∈ S t,M . If VM [λ]  0, the strict positivity of πt,M immediately implies that Vt [λ] = πt,M (VM [λ])  0. Hence, it follows that the market is (t, M)-arbitrage free.



In view of Theorem 15.1.5, the preceding lemma delivers the following characterization of arbitrage-free markets. Theorem 15.4.5 Assume the Law of One Price holds. Then, the following statements are equivalent: (a) (b) (c) (d) (e) (f)

The market is arbitrage free. πt,M is strictly positive for all 0 ≤ t < M ≤ T . π0,M is strictly positive for every M ∈ {1, . . . , T }. πt,t +1 is strictly positive for every t ∈ {0, . . . , T − 1}. πt,T is strictly positive for every t ∈ {0, . . . , T − 1}. π0,T is strictly positive.

274

15 Market-Consistent Prices for Replicable Payoffs

This theorem justifies our detailed study of strictly-positive conditional linear functionals in Chaps. 11 and 12. In the next chapter we apply the extension and representation results obtained there to our pricing functionals.

15.5

Exercises

In all exercises below we consider the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. Exercise 15.5.1 Let 0 ≤ t < M ≤ T . Prove that the following statements are equivalent: (a) The market is (t, M)-arbitrage free. (b) There exists no (t, M)-arbitrage opportunity λ ∈ S t,M with Vt [λ] = 0. Exercise 15.5.2 (Strong Arbitrage Opportunity) Let 0 ≤ t < M ≤ T . We say that a self-financing strategy λ ∈ S t,M is a (t, M)-strong arbitrage opportunity if it satisfies: (1) Vt [λ] ≤ 0. (2) Vu [λ] ≥ 0 for every u ∈ {t + 1, . . . , M − 1}. (3) VM [λ]  0. This means that λ is a (t, M)-arbitrage opportunity that has the following additional feature: any forced liquidation at an intermediate date between t and M will not result in a loss. This makes strong arbitrage opportunities particularly appealing. Show that, perhaps surprisingly, the absence of strong arbitrage opportunities is equivalent to the market being arbitrage free, i.e., the following statements are equivalent: (a) The market is (t, M)-arbitrage free. (b) There exists no (t, M)-strong arbitrage opportunity. (c) There exists no (t, M)-strong arbitrage opportunity λ ∈ S t,M with Vt [λ] = 0. Exercise 15.5.3 Show that for every t ∈ {0, . . . , T − 1} the following statements are equivalent: (a) The (t, t + 1)-Law of One Price holds. (b) For every A ∈ Pt and every strategy λ ∈ S t,t +1 we have Vt +1[λ] = 0 on A ⇒ Vt [λ](A) = 0.

15.5 Exercises

275

(c) For every A ∈ Pt and every portfolio λ ∈ RN we have N 

λi Sti+1 = 0 on A ⇒

i=1

N 

λi Sti (A) = 0.

i=1

Exercise 15.5.4 Deduce from Exercise 15.5.3 that the following statements are equivalent: (a) The Law of One Price holds. (b) For all t ∈ {0, . . . , T − 1} and A ∈ Pt and for every portfolio λ ∈ RN we have N  i=1

λi Sti+1 = 0 on A ⇒

N 

λi Sti (A) = 0.

i=1

This result provides a simple operational condition to check whether the Law of One Price holds by working at a “localized” level in each one-period submarket. Exercise 15.5.5 Show that for every t ∈ {0, . . . , T − 1} the following statements are equivalent: (a) The market is (t, t + 1)-arbitrage free. (b) For every A ∈ Pt there exists no strategy λ ∈ S t,t +1 such that: (1) Vt [λ](A) ≤ 0. (2) Vt [λ]  0 on A. (c) For every A ∈ Pt there exists no strategy λ ∈ S t,t +1 such that: (1) Vt [λ](A) = 0. (2) Vt [λ]  0 on A. (d) For every A ∈ Pt there exists no portfolio λ ∈ RN such that:  (1) N λi Sti (A) ≤ 0. i=1 N (2) i=1 λi Sti+1  0 on A. (e) For every A ∈ Pt there exists no portfolio λ ∈ RN such that:  (1) N λi Sti (A) = 0. i=1 N (2) i=1 λi Sti+1  0 on A. Exercise 15.5.6 Deduce from Exercise 15.5.5 that the following statements are equivalent: (a) The market is arbitrage free. (b) For all t ∈ {0, . . . , T − 1} and A ∈ Pt there exists no λ ∈ RN such that:  (1) N λi Sti (A) = 0. i=1 N (2) i=1 λi Sti+1  0 on A.

276

15 Market-Consistent Prices for Replicable Payoffs

This result provides a simple operational condition to check whether a market is arbitrage free by working at a “localized” level in each one-period submarket. Exercise 15.5.7 Let  = {ω1 , ω2 , ω3 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 }}, P2 = {{ω1 }, {ω2 }, {ω3 }}. Consider two basic securities specified by

1

(i) (ii) (iii) (iv)

: 1

2

4 3

1

1

2

: 1

1

1 1

2

3

Show that the market is complete. Show that the Law of One Price fails. Identify all the price arbitrage opportunities. Modify one security in such a way that the new market remains complete, but the Law of One Price holds.

Exercise 15.5.8 Let  = {ω1 , . . . , ω4 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}. Consider two basic securities specified by

1

(i) (ii) (iii) (iv)

: 1

5 1

2 3 2 1

2

: 1

4 2

6 9 4 2

Show that the market is incomplete. Show that the Law of One Price fails. Identify all the price arbitrage opportunities. Modify one security in such a way that the new market remains incomplete but the Law of One Price holds.

Exercise 15.5.9 Let  = {ω1 , . . . , ω4 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}.

15.5 Exercises

277

Consider two basic securities specified by

1

(i) (ii) (iii) (iv) (v) (vi)

: 2

1 3

1 1 4 1

2

: 1

2 1

2 3 1 0

Show that the market is complete. Show that the Law of One Price holds. Determine the market-consistent price of 1ω1 + 1ω3 at date 0. Show that the market is not arbitrage free. Identify all the arbitrage opportunities. Modify one security in such a way that the new market remains complete, but becomes arbitrage free.

Exercise 15.5.10 Let  = {ω1 , . . . , ω4 } and consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}. Consider two basic securities specified by

1

(i) (ii) (iii) (iv) (v) (vi) (vii)

: 1

1 0

3 1 0 0

2

: 1

2 2

2 0 3 1

Show that the market is incomplete. Show that the Law of One Price holds. Show that −1{ω1 ,ω4 } − 31ω3 is replicable at date 0. Determine the market-consistent price of −1{ω1 ,ω4 } − 31ω3 at date 0. Show that the market is not arbitrage free. Identify all the arbitrage opportunities. Modify one security in such a way that the new market remains incomplete, but becomes arbitrage free.

Exercise 15.5.11 Show that the multinomial market introduced in Exercise 14.6.11 satisfies the following conditions: (i) The Law of One Price holds. (ii) The market is arbitrage free if and only if r1 > r > rH .

278

15 Market-Consistent Prices for Replicable Payoffs

Exercise 15.5.12 Let R ∈ R be a rescaling process and take 0 ≤ t < M ≤ T . Prove that  ∈ Mt,M we have for every replicable rescaled payoff X    = Rt πt,M  πt,M X

 X RM

 .

16

Fundamental Theorem of Asset Pricing

In this chapter we generalize the Fundamental Theorem of Asset Pricing to a multi-period economy. The main result states that the absence of arbitrage opportunities is equivalent to the existence of a time-consistent family of strictly-positive conditional linear extensions of the pricing functionals. In line with the interpretation provided in the context of a singleperiod model, each of these extensions can be viewed as a hypothetical pricing rule in a complete market under which the original basic securities maintain their prices. The other versions of the Fundamental Theorem of Asset Pricing are based on this extension result and provide useful representations of the pricing functionals in terms of Riesz densities. A critical tool will be the extension results for conditional linear functionals established in Chap. 12.

Standing Assumption Throughout the entire chapter we work in the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the Law of One Price holds.

16.1

Pricing Extensions

The basic version of the Fundamental Theorem of Asset Pricing in a single-period setting stated that the pricing functional of an arbitrage-free market can always be extended to a strictly-positive linear functional defined on the entire payoff space. By Proposition 7.1.1, this extension can be interpreted as the pricing functional of a hypothetical arbitragefree market that is complete and that “extends” the original market in the sense that, under the pricing extension, all replicable payoffs retain their original prices. In other words, the Fundamental Theorem of Asset Pricing shows that arbitrage-free markets are © Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_16

279

280

16 Fundamental Theorem of Asset Pricing

precisely those markets that can be “embedded” in a complete arbitrage-free market. Moreover, whenever the original market is not complete, there are infinitely many such “embeddings”. In this chapter, we establish a multi-period extension of the Fundamental Theorem of Asset Pricing. Since we are now dealing with a family of pricing functionals, the first step towards the desired generalization is to establish the multi-period counterpart of Proposition 7.1.1. To do this, we first need to introduce the notion of a time-consistent family of conditional functionals, which we had already encountered in Theorem 15.4.2. This leads us to work with a family of maps ψ := {ψt,M : XM → Xt ; 0 ≤ t < M ≤ T }. This compact notation for a family of maps will be used throughout the remainder of the book. Definition 16.1.1 (Time Consistency) A family ψ is said to be time consistent if for all  0 ≤ s < t < M ≤ T we have ψs,t ◦ ψt,M = ψs,M . Our first result shows that a time-consistent family of strictly-positive conditional linear functionals can always be viewed as the family of pricing functionals of a suitable complete arbitrage-free market. Proposition 16.1.2 Let ψ be a time-consistent family such that ψt,M is a strictly-positive PM -conditional linear functional for all 0 ≤ t < M ≤ T . Then, the Arrow-Debreu market with basic securities given by S 1 = (ψ0,T (1ω1 ), . . . , ψT −1,T (1ω1 ), 1ω1 ) .. . S K = (ψ0,T (1ωK ), . . . , ψT −1,T (1ωK ), 1ωK ) is complete and arbitrage free with (t, M)-pricing functional equal to ψt,M for all dates 0 ≤ t < M ≤ T. Proof Since ψt,T is strictly positive for every t ∈ {0, . . . , T − 1}, it is immediate that the above basic securities satisfy the assumptions at the beginning of Chap. 14 and that the corresponding market is complete; see also Example 14.4.6. Now, take M ∈ {1, . . . , T } and λ ∈ S M−1,M and note that VM [λ] =

K  i=1

λiM−1 ψM,T (1ωi ).

16.1 Pricing Extensions

281

Since ψM−1,M ◦ ψM,T = ψM−1,T by time consistency, it follows that ψM−1,M (VM [λ]) =

K 

λiM−1 ψM−1,M (ψM,T (1ωi )) = VM−1 [λ].

(16.1)

i=1

A simple induction argument implies that ψt,M (VM [λ]) = Vt [λ]

(16.2)

for every t ∈ {0, . . . , M − 1} and every self-financing strategy λ ∈ S t,M . Base Step That (16.2) holds for t = M − 1 follows directly from (16.1). Induction Step Assume that (16.2) holds for some t ∈ {1, . . . , M − 1}. Then, by time consistency, ψt −1,M (VM [λ]) = ψt −1,t (ψt,M (VM [λ])) = ψt −1,t (Vt [λ]) = Vt −1 [λ], where the last equality follows from (16.1). Now, let 0 ≤ t < M ≤ T and take a self-financing strategy λ ∈ S t,M such that VM [λ] = 0. Since Vt [λ] = ψt,M (0) = 0 by (16.2), it follows from Proposition 15.2.2 that the Law of One Price holds. By (16.2), the corresponding family of pricing functionals coincides with ψ. In particular, the market is arbitrage free by Theorem 15.4.5.  Remark 16.1.3 (Characterizing Complete Arbitrage-Free Markets) In the spirit of Remark 7.1.2, if we identify all complete arbitrage-free markets having the same pricing functionals, then we may say that there is a one-to-one correspondence between complete arbitrage-free markets and time-consistent families of strictly-positive linear functionals.  As already mentioned, we view the Fundamental Theorem of Asset Pricing as an “embedding” result stating that every arbitrage-free market can be “embedded” in a complete arbitrage-free market. This leads us to the notion of a time-consistent family of arbitrage-free extensions of the pricing functionals. Definition 16.1.4 (Arbitrage-Free Extension) Let 0 ≤ t < M ≤ T . We say that a Pt -conditional linear functional ψt,M : XM → Xt is an arbitrage-free extension of πt,M if the following properties hold: (1) ψt,M (X) = πt,M (X) for every replicable payoff X ∈ Mt,M . (2) ψt,M is strictly positive.

282

16 Fundamental Theorem of Asset Pricing

The set of arbitrage-free extensions of πt,M is denoted by E(πt,M ). A family ψ is said to be an arbitrage-free family of pricing extensions ( for π ) if the following properties hold: (1) ψt,M is an arbitrage-free extension of πt,M for all 0 ≤ t < M ≤ T . (2) ψ is time consistent. The set of arbitrage-free families of pricing extensions is denoted by E(π ).



Before proving the multi-period version of the Fundamental Theorem of Asset Pricing, we show that a composition of “adjacent” arbitrage-free extensions is again an arbitragefree extension. The result is formulated for the composition of two pricing functionals, but it is clear how to extend it to the composition of multiple pricing functionals. Lemma 16.1.5 Let 0 ≤ s < t < M ≤ T and assume that ψs,t and ψt,M are arbitragefree extensions of πs,t and πt,M , respectively. Then, ψs,t ◦ ψt,M is an arbitrage-free extension of πs,M . Proof Set ψs,M = ψs,t ◦ ψt,M . It is clear that, as the composition of two strictly-positive Ps - and Pt -conditional linear functionals, ψs,M is a strictly-positive Ps -conditional linear functional. It remains to establish that ψs,M extends πs,M . To this end, take an arbitrary X ∈ Ms,M ⊂ Mt,M and observe that Theorem 15.4.2 implies that πt,M (X) ∈ Ms,t and ψs,M (X) = ψs,t (ψt,M (X)) = ψs,t (πt,M (X)) = πs,t (πt,M (X)) = πt,M (X).



We can now prove the announced Fundamental Theorem of Asset Pricing. Theorem 16.1.6 (Fundamental Theorem of Asset Pricing, Version Ia) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists an arbitrage-free family of pricing extensions for π . In this case, if the market is incomplete, then π admits infinitely many arbitrage-free families of pricing extensions. Proof If the market is complete, then for all 0 ≤ t < M ≤ T the pricing functional πt,M is defined on XM and, hence, every arbitrage-free extension of πt,M must coincide with πt,M itself. In this case, the desired equivalence follows immediately from Theorem 15.4.5. Let us assume that the market is incomplete. To show that (a) implies (b), assume that the market is arbitrage free. Take t ∈ {0, . . . , T − 1}. It follows from Theorem 15.4.5 that πt,t +1 is strictly positive. As a result, Theorem 12.2.6 implies that πt,t +1 admits

16.1 Pricing Extensions

283

an arbitrage-free extension ψt,t +1 . Now, for all 0 ≤ t < M ≤ T , it follows from Lemma 16.1.5 that the map ψt,M : XM → Xt defined by ψt,M = ψt,t +1 ◦ · · · ◦ ψM−1,M is an arbitrage-free extension of πt,M . This yields a family ψ of arbitrage-free extensions of our pricing functionals, which is clearly time consistent by construction. Indeed, for all 0 ≤ s < t < M ≤ T we have ψs,t ◦ ψt,M = (ψs,s+1 ◦ · · · ◦ ψt −1,t ) ◦ (ψt,t +1 ◦ · · · ◦ ψM−1,M ) = ψs,M . To prove that (b) implies (a), it suffices to observe that the existence of an arbitrage-free family of pricing extensions implies that each pricing functional is strictly positive in the first place. Hence, by Theorem 15.4.5, the market is arbitrage free. This establishes the desired equivalence. To prove the multiplicity statement, assume that the market is arbitrage free and incomplete. By Proposition 14.4.2, we find a date t ∈ {0, . . . , T − 1} such that the market is not (t, t + 1)-complete. This means that Mt,t +1 is a proper subspace of Xt +1 . In this case, πt,t +1 admits infinitely many arbitrage-free extensions by Theorem 12.2.6 and the above construction yields an infinity of arbitrage-free families of pricing extensions. This concludes the proof of the theorem.  Arbitrage-Free Extensions of the (0, T )-Pricing Functional In this section we focus on arbitrage-free extensions of the pricing functional π0,T . We start by showing that the Fundamental Theorem of Asset Pricing can be equivalently formulated in terms of these special arbitrage-free extensions only. Theorem 16.1.7 (Fundamental Theorem of Asset Pricing, Version Ib) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists an arbitrage-free extension for π0,T . In this case, if the market is incomplete, then π0,T admits infinitely many arbitrage-free pricing extensions. Proof If the market is complete, then the pricing functional π0,T is defined on XT and, hence, every arbitrage-free extension of π0,T must coincide with π0,T itself. In this case, the desired equivalence follows immediately from Theorem 15.4.5. Assume that the market is incomplete. By Theorem, it is clear that (a) implies (b). To show the converse implication, assume that (b) holds. Having a strictly-positive extension,

284

16 Fundamental Theorem of Asset Pricing

the functional π0,T must be itself strictly positive. Then, Theorem 15.4.5 implies that the market is arbitrage free, so that (a) holds. To establish the multiplicity statement, assume that the market is arbitrage free and incomplete. In this case, Proposition 15.2.6 implies that the market is not (0, T )-complete. Since M0,T is a proper subspace of XT , we infer from Theorem 12.2.6 (or even Theorem 4.2.6) that π0,T admits infinitely many arbitrage-free extensions. This concludes the proof.  The above version of the Fundamental Theorem of Asset shows that the existence of an arbitrage-free extension of the (0, T )-pricing functional is equivalent to the existence of an arbitrage-free family of pricing extensions. This may seem surprising at a first glance. In this section, we show that there is, in fact, a one-to-one correspondence between arbitragefree extensions of π0,T and arbitrage-free families of pricing extensions. As a first step, we show that an arbitrage-free family of pricing extensions is completely determined by its (0, T )-pricing extension. In other words, two arbitrage-free families of pricing extensions with the same (0, T )-pricing extension must coincide. This is a direct consequence of time consistency. Proposition 16.1.8 Let θ ∈ S 0,T be any self-financing strategy with strictly-positive value process. For every arbitrage-free family ψ ∈ E(π) the following statements hold: (i) For all M ∈ {1, . . . , T } and X ∈ XM we have

ψ0,M (X) = ψ0,T

 X VT [θ] . VM [θ]

(ii) For all 0 ≤ t < M ≤ T and X ∈ XM we have   1 X  ψ0,M (1A X)  ψ0,T VMA[θ] VT [θ]   1A . ψt,M (X) = 1A = 1A ψ0,t (1A ) A∈Pt A∈Pt ψ0,T Vt [θ] VT [θ ] Proof To establish (i), let M ∈ {1, . . . , T } and X ∈ XM and set Y = Then,

X VM [θ]

∈ vX M .

ψ0,T (Y VT [θ]) = ψ0,M (ψM,T (Y VT [θ ])) = ψ0,M (Y ψM,T (VT [θ ])) = ψ0,M (X) by time consistency and conditionality. This yields the desired equality. To show (ii), take 0 ≤ t < M ≤ T and X ∈ XM . Using the notation for localizations, see Definition 11.2.1, for every A ∈ Pt we have that ψ0,M (1A X) = ψ0,t (ψt,M (1A X)) = ψ0,t (1A ψt,M (X)) = ψt,M (X|A)ψ0,t (1A )

16.1 Pricing Extensions

285

by time consistency and conditionality. As ψ0,t (1A ) > 0 by strict positivity, we get ψt,M (X|A) =

ψ0,M (1A X) . ψ0,t (1A )

Now, it suffices to apply Proposition 11.2.3 to obtain ψt,M (X) =



ψt,M (X|A)1A =

A∈Pt

 ψ0,M (1A X) 1A . ψ0,t (1A )

A∈Pt

This establishes the desired equality and concludes the proof.



As a second step, we show that any arbitrage-free extension of π0,T can be embedded into an arbitrage-free family of pricing extensions, which by the preceding proposition must be unique. This establishes that there exists a one-to-one correspondence between the set of arbitrage-free extensions E(π0,T ) and the set of arbitrage-free families of pricing extensions E(π ). Proposition 16.1.9 For every arbitrage-free extension ϕ ∈ E(π0,T ) there exists a unique arbitrage-free family ψ ∈ E(π) such that ψ0,T = ϕ. Proof In view of Proposition 16.1.8, it is sufficient to prove existence. To this end, let θ ∈ S 0,T be any self-financing strategy with strictly-positive value process. Proposition 16.1.8 suggests how to define the desired family by setting   1 X  ϕ VMA[θ] VT [θ]   1A ψt,M (X) = 1A A∈Pt ϕ Vt [θ ] VT [θ ] for all 0 ≤ t < M ≤ T and X ∈ XM . As a preliminary observation, note that for every M ∈ {1, . . . , T } and every X ∈ XM we have

 X VT [θ] , ψ0,M (X) = ϕ VM [θ] where we have used that ϕ(VT [θ]) = V0 [θ ] since ϕ is an extension of π0,T . In particular, ψ0,T = ϕ. It remains to show that, for all 0 ≤ t < M ≤ T , the map ψt,M is an arbitragefree extension of πt,M and that the family ψ is time consistent. For convenience, we split the proof into three steps.

286

16 Fundamental Theorem of Asset Pricing

Step 1. We first fix M ∈ {1, . . . , T } and show that ψ0,M is an arbitrage-free extension of π0,M . It is clear that ψ0,M is a strictly-positive linear functional. To show that ψ0,M is an extension of π0,M , take a replicable payoff X ∈ M0,M and note that VMX[θ] VT [θ] ∈ M0,T and

  X X VT [θ ] = π0,T VT [θ] = π0,M (X) ψ0,M (X) = ϕ VM [θ ] VM [θ] by Proposition 15.4.3. Hence, ψ0,M is an arbitrage-free extension of π0,M . Step 2. We now fix 0 < t < M ≤ T and show that ψt,M is an arbitrage-free extension of πt,M . To this end, note first that ψt,M (X) =

 ψ0,M (1A X) 1A ψ0,t (1A )

(16.3)

A∈Pt

for every X ∈ XM . It is easy to see that ψt,M is linear and strictly positive, and Proposition 11.2.3 implies that it is a Pt -conditional functional. To prove that ψt,M is an extension of πt,M , take arbitrary X ∈ Mt,M and A ∈ Pt . Note that the payoff Y ∈ Mt,M given by 

πt,M (X) VM [θ ] Y = 1A X − Vt [θ] satisfies πt,M (Y ) = 1A πt,M (X) − 1A

πt,M (X) πt,M (VM [θ]) = 0 Vt [θ ]

by conditionality. Since 0 ∈ M0,t , we infer from Theorem 15.4.2 that Y belongs to M0,M and satisfies ψ0,M (Y ) = π0,M (Y ) = 0, where we have used that ψ0,M is an extension of π0,M by Step 1. Consequently,     π (X) VM [θ] ψ0,M (1A X) = ψ0,M 1A t,M Vt [θ] VM [θ ] = πt,M (X|A)ψ0,M 1A Vt [θ]   [θ] = πt,M (X|A)ψ0,t (1A ). = πt,M (X|A)ϕ 1A VVTt [θ] This yields ψt,M (X) =

 ψ0,M (1A X)  1A = πt,M (X|A)1A = πt,M (X), ψ0,t (1A )

A∈Pt

A∈Pt

proving that ψt,M is an arbitrage-free extension of πt,M .

16.2 Pricing Densities

287

Step 3. We show that ψ is time consistent. To this end, let 0 ≤ s < t < M ≤ T and take X ∈ XM and A ∈ Ps . It follows from (16.3) that   ψ0,t (1A ψt,M (X)) = ψ0,t (1B ψt,M (X)) = ψt,M (X|B)ψ0,t (1B ) B∈Pt B⊂A

=



B∈Pt B⊂A

ψ0,M (1B X) = ψ0,M (1A X).

B∈Pt B⊂A

Setting ψ0,0 (1) = 1, we can use (16.3) again to get ψs,t (ψt,M (X)) =

 ψ0,t (1A ψt,M (X))  ψ0,M (1A X) 1A = 1A = ψs,M (X). ψ0,s (1A ) ψ0,s (1A )

A∈Ps

A∈Ps

This establishes time consistency and concludes the proof.

16.2



Pricing Densities

In this section we introduce the multi-period counterpart of the notion of a pricing density and derive the corresponding Fundamental Theorem of Asset Pricing. Here, we have to deal with a family of random variables D := {Dt,M ∈ LM ; 0 ≤ t < M ≤ T }, which plays the role of the family of maps introduced in the previous section. Definition 16.2.1 (Time Consistency) A family D is said to be time consistent if for all  0 ≤ s < t < M ≤ T we have Ds,M = Ds,t Dt,M . Definition 16.2.2 (Pricing Density) Let 0 ≤ t < M ≤ T . A random variable Dt,M ∈ LM is called a pricing density (for πt,M ) if: (1) πt,M (X) = EP [Dt,M X|Pt ] for every replicable payoff X ∈ Mt,M . (2) Dt,M is strictly positive. The set of pricing densities for πt,M is denoted by D(πt,M ). A family D is said to be an arbitrage-free family of pricing densities (for π) if: (1) Dt,M is a pricing density for πt,M for all 0 ≤ t < M ≤ T . (2) D is time consistent. The set of arbitrage-free families of pricing densities is denoted by D(π ).



288

16 Fundamental Theorem of Asset Pricing

The interpretation of pricing densities is similar to the one provided in the single-period model in Sect. 7.2. The next proposition, which generalizes Proposition 7.2.3, highlights the link between arbitrage-free families of pricing extensions and arbitrage-free families of pricing densities. Recall that, by Theorem 12.3.5, every Pt -conditional linear functional ψ : XM → Xt has a unique PM -measurable Riesz density for all 0 ≤ t < M ≤ T . Proposition 16.2.3 (i) Let 0 ≤ t < M ≤ T . For every arbitrage-free extension ψt,M ∈ E(πt,M ) the PM measurable Riesz density of ψt,M is a pricing density for πt,M . (ii) Let 0 ≤ t < M ≤ T . For every pricing density Dt,M ∈ D(πt,M ) there exists an arbitrage-free extension of πt,M with PM -measurable Riesz density equal to Dt,M . (iii) For every arbitrage-free family ψ ∈ E(π ) the collection D, where Dt,M is the PM measurable Riesz density of ψt,M for all 0 ≤ t < M ≤ T , is an arbitrage-free family of pricing densities for π . (iv) For every arbitrage-free family D ∈ D(π ) there exists an arbitrage-free family of pricing extensions such that Dt,M is the PM -measurable Riesz density of the (t, M)pricing extension for all 0 ≤ t < M ≤ T . Proof To show (i), fix 0 ≤ t < M ≤ T and assume that ψt,M is an arbitrage-free extension of πt,M . Let Dt,M be the unique Riesz density for ψt,M belonging to LM . To show that Dt,M is a pricing density for πt,M we only need to establish that Dt,M is strictly positive. To see this, recall that, being strictly positive, ψt,M has a strictly-positive Riesz density D ∈ L by Theorem 12.3.6. Since Dt,M = EP [D|PM ] due to Proposition 10.3.5 and Theorem 12.3.5, we infer that Dt,M is strictly positive as desired. To establish (ii), fix 0 ≤ t < M ≤ T and assume that Dt,M is a pricing density for πt,M . Consider the Pt -conditional linear functional ψt,M : XM → Xt defined by ψt,M (X) = EP [Dt,M X|Pt ].

(16.4)

It is clear that, by definition, ψt,M coincides with the pricing functional πt,M on the marketed space Mt,M . Moreover, ψt,M is strictly positive since Dt,M is strictly positive. This shows that ψt,M is an arbitrage-free extension of πt,M . To show (iii), consider an arbitrage-free family ψ ∈ E(π), and for all 0 ≤ t < M ≤ T let Dt,M be the unique Riesz density for ψt,M belonging to LM . In view of item (i), we only have to show that the family D is time consistent. To this end, let 0 ≤ s < t < M ≤ T and note that EP [Ds,t Dt,M X|Ps ] = EP [EP [Ds,t Dt,M X|Pt ]|Ps ] = EP [Ds,t EP [Dt,M X|Pt ]|Ps ]

16.2 Pricing Densities

289

= ψs,t (ψt,M (X)) = ψs,M (X) for every X ∈ XM by the “taking out what is known” property in Proposition 10.3.8, the tower property in Proposition 10.3.9, and by time consistency. This shows that Ds,t Dt,M is a PM -measurable Riesz density for ψs,M . Hence, Ds,t Dt,M must coincide with Ds,M . Finally, to show (iv), take a family D ∈ D(π ) and for all 0 ≤ t < M ≤ T define a map ψt,M : XM → Xt as in (16.4). In view of item (ii), we only need to prove that the family ψ is time consistent. To this end, let 0 ≤ s < t < M ≤ T and take X ∈ XM . Then, we have ψs,t (ψt,M (X)) = EP [Ds,t EP [Dt,M X|Pt ]|Ps ] = EP [EP [Ds,t Dt,M X|Pt ]|Ps ] = EP [Ds,t Dt,M X|Ps ] = EP [Ds,M X|Ps ] = ψs,M (X) by the “taking out what is known” property in Proposition 10.3.8, the tower property in Proposition 10.3.9, and by time consistency.  In view of Proposition 16.2.3 it is straightforward to reformulate the Fundamental Theorem of Asset Pricing recorded in Theorem 16.1.6 in the language of pricing densities. Theorem 16.2.4 (Fundamental Theorem of Asset Pricing, Version IIa) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists an arbitrage-free family of pricing densities for π. In this case, if the market is complete, then π admits a unique arbitrage-free family of pricing densities. Otherwise, π admits infinitely many arbitrage-free families of pricing densities. Pricing Densities of the (0, T )-Pricing Functional In this section we translate the results about arbitrage-free extensions of the pricing functional π0,T into the language of pricing densities. We start with the counterpart of Theorem 16.1.7, which is a direct consequence of Proposition 16.2.3.

290

16 Fundamental Theorem of Asset Pricing

Theorem 16.2.5 (Fundamental Theorem of Asset Pricing, Version IIb) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists a pricing density for π0,T . In this case, if the market is complete, then π0,T admits a unique pricing density. Otherwise, π0,T admits infinitely many pricing densities. The next result is the counterpart of Proposition 16.1.8 and establishes that an arbitragefree family of pricing densities is completely determined by its (0, T )-pricing density. In other words, two arbitrage-free families of pricing densities with the same (0, T )-pricing density must coincide. Proposition 16.2.6 Let θ ∈ S 0,T be any self-financing strategy with strictly-positive value process. For every arbitrage-free family D ∈ D(π ) the following statements hold: (i) For every M ∈ {1, . . . , T }, D0,M =

EP [D0,T VT [θ ]|PM ] . VM [θ]

(ii) For all 0 ≤ t < M ≤ T , Dt,M =

D0,M Vt [θ ] EP [D0,T VT [θ ]|PM ] . = D0,t VM [θ] EP [D0,T VT [θ]|Pt ]

Proof Assertion (ii) is an immediate consequence of time consistency. To show (i), let ψ be the arbitrage-free family of pricing extensions for π associated with D through (16.4). In particular, recall that D0,M is a PM -measurable Riesz density for ψ0,M . Now, for every X ∈ XM we have  EP

)   ) EP [D0,T VT [θ]|PM ] X X = EP EP D0,T VT [θ ]))PM VM [θ] VM [θ ]  X = EP D0,T VT [θ] VM [θ] 

X = ψ0,T VT [θ] VM [θ] = ψ0,M (X)

16.2 Pricing Densities

291

by Propositions 10.3.7, 10.3.8, and 16.1.8. This shows that EP [D0,T VT [θ ]|PM ] VM [θ] is a PM -measurable Riesz density for ψ0,M . The desired claim follows by noting that ψ0,M has a unique PM -measurable Riesz density by Theorem 12.3.5.  We conclude this section by stating the counterpart of Proposition 16.1.9, showing that any pricing density of π0,T can be embedded into an arbitrage-free family of pricing densities. Since this family must be unique by the preceding proposition, this result establishes a one-to-one correspondence between the set of pricing densities D(π0,T ) and the set of arbitrage-free families of pricing densities D(π ). Proposition 16.2.7 For every pricing density D ∈ D(π0,T ) there exists a unique arbitrage-free family D ∈ D(π ) such that D0,T = D. Proof By Proposition 16.2.6, we only have to establish existence. Thus, recall from Proposition 16.2.3 that there exists an arbitrage-free extension ϕ ∈ E(π0,T ) with (PT measurable) Riesz density equal to D. It follows from Proposition 16.1.9 that there is an arbitrage-free family ψ ∈ E(π) such that ψ0,T = ϕ. Applying Proposition 16.2.3 again, we find an arbitrage-free family D ∈ D(π ) such that Dt,M is the PM -measurable Riesz density of ψt,M for all 0 ≤ t < M ≤ T . We conclude by noting that, as (PT -measurable) Riesz densities of the same functional, D and D0,T must coincide.  The Language of Martingale Deflator Processes We conclude our study of pricing densities by showing that, given any arbitrage-free family of pricing densities D, the process K = (1, D0,1 , . . . , D0,T ) enjoys a special martingale property. This explains why the above process is sometimes called a martingale deflator process in the literature. Proposition 16.2.8 (Martingale Deflator Process) For every P-adapted process K ∈ L such that K0 = 1 the following statements are equivalent: (a) K is strictly positive and for every i ∈ {1, . . . , N} the process KS i is a P-martingale under P, i.e., for all 0 ≤ t < M ≤ T we have i |Pt ] = Kt Sti . EP [KM SM

292

16 Fundamental Theorem of Asset Pricing

(b) The family D defined by setting Dt,M =

KM Kt

is an arbitrage-free family of pricing densities for π . In this case, we have D0,M = KM for every M ∈ {1, . . . , T }. Proof Assume that (a) holds and define the family D as above. Note that D is time consistent. Now, fix M ∈ {1, . . . , M}. To establish (b), we need to show that EP [Dt,M VM [λ]|Pt ] = Vt [λ] for every t ∈ {0, . . . , M − 1} and every strategy λ ∈ S t,M . We prove this by backward induction on t. Base Step If t = M − 1, then the assertion follows immediately from the “taking out what is known property” in Proposition 10.3.8 because EP [DM−1,M VM [λ]|PM−1 ] =

=

N 

1 KM−1

i=1 N 

1 KM−1

i λiM−1 EP [KM SM |PM−1 ]

i λiM−1 KM−1 SM−1

i=1

= Vt [λ]. Induction Step Assume the statement holds for some t ∈ {1, . . . , M − 1}. Then, EP [Dt −1,M VM [λ]|Pt −1 ] = EP [EP [Dt −1,M VM [λ]|Pt ]|Pt −1 ] = EP [Dt −1,t EP [Dt,M VM [λ]|Pt ]|Pt −1] = EP [Dt −1,t Vt [λ]|Pt −1 ] = =

=

1 Kt −1

EP [Kt Vt [λ]|Pt −1 ]

1

N 

Kt −1

i=1

1

N 

Kt −1

i=1

= Vt −1 [λ],

λit −1 EP [Kt Sti |Pt −1] λit −1 Kt −1 Sti−1

16.3 Pricing Measures

293

where we used the “tower property” from Proposition 10.3.9 in the first equality, the “taking out what is known” property from Proposition 10.3.8 in the second and fourth equality, and the induction hypothesis in the third equality. This concludes the induction argument. Conversely, assume that (b) holds. Then, for every i ∈ {1, . . . , N}, i i i |Pt ] = Kt EP [Dt,M SM |Pt ] = Kt πt,M (SM ) = Kt Sti EP [KM SM

for all 0 ≤ t < M ≤ T by the “taking out what is known” property from Proposition 10.3.8. This shows that (a) holds. 

16.3

Pricing Measures

In this section we extend to the multi-period setting the concept of a pricing measure and derive the corresponding version of the Fundamental Theorem of Asset Pricing. As a preliminary observation, suppose the market is arbitrage free and assume that • 1 is a replicable payoff in Mt,M such that πt,M (1) = 1 for all 0 ≤ t < M ≤ T . This is equivalent to assuming that • there exists a strategy θ ∈ S 0,T such that V0 [θ] = · · · = VT [θ ] = 1. In this case, any arbitrage-free family of pricing densities D ∈ D(π) satisfies EP [D0,T ] = EP [D0,T 1] = π0,T (1) = 1. Hence, by Theorem 2.6.5, the pricing density D0,T can be expressed as a Radon-Nikodym density dQ dP for a suitable probability measure Q that is equivalent to P. As a consequence of Proposition 16.2.6, for all dates 0 ≤ t < M ≤ T we can write Dt,M =

EP [D0,T |PM ] EP [D0,T |Pt ]

which in turn yields πt,M (X) = EP [Dt,M X|Pt ] =

! dQ ) " ) dP X PM ! ) " = EQ [X|Pt ] EP dQ )Pt

EP

dP

for every replicable payoff X ∈ Mt,M by Theorem 10.4.1. The above identity is nice because, by analogy to what was observed in the single-period setting, it allows

294

16 Fundamental Theorem of Asset Pricing

to represent prices of marketed payoffs as simple conditional expectations. The key observation here is that, as pointed out in Example 14.5.4, the above “special” assumptions on the payoff 1 can always be achieved by changing the unit of account and expressing payments in units of a numéraire strategy θ ∈ S 0,T . In this case, for 0 ≤ t < M ≤ T , the above argument yields )  X )) πt,M (X) Pt = EQ Vt [θ] VM [θ] ) for every replicable payoff X ∈ Mt,M . The preceding equality can be equivalently written, using the “tilde” notation for rescaled payments, see Remark 14.5.6, as   ! ) "  = EQ X )Pt  πt,M X t,M . ∈M for every replicable rescaled payoff X

Standing Assumption Throughout the remainder of this section we fix a numéraire strategy θ ∈ S 0,T and use the “tilde” notation to denote payments in units of θ .

The next definition provides the multi-period counterpart of the notion of a pricing measure. Definition 16.3.1 (Pricing Measure) A probability measure Q is called a θ -pricing measure (for π ) if the following conditions are satisfied:   ! ) " t,M with 0 ≤ t < M ≤ T .  = EQ X )Pt for every payoff X ∈M (1)  πt,M X (2) Q is equivalent to P. The set of all θ-pricing measures for π is denoted by Qθ (π ).



The following result highlights the link between pricing measures and pricing densities of the rescaled pricing functionals. Proposition 16.3.2 (i) For every pricing measure Q ∈ Qθ (π ) there exists an arbitrage-free family D ∈ D( π) . such that D0,T = dQ dP (ii) For every arbitrage-free family D ∈ D( π ) there exists a pricing measure Q ∈ Qθ (π) such that dQ = D . 0,T dP

16.3 Pricing Measures

295

Proof To show (i), take any pricing measure Q ∈ Qθ (π ). Then, the Radon-Nikodym density dQ dP is strictly positive and satisfies    ! "   = EQ X  = EP dQ X  π0,T X dP 0,T by Proposition 2.6.4. This shows that dQ is a pricing density for  ∈ M for every X dP π)  π0,T . We conclude by noting that there always exists an arbitrage-free family D ∈ D( by Proposition 16.2.7. such that D0,T = dQ dP t,M and To prove (ii), it suffices to observe that for all 0 ≤ t < M ≤ T we have 1 ∈ M  πt,M (1) = 1 and to rely on the argument at the beginning of this section.



The preceding proposition leads to our third version of the Fundamental Theorem of Asset Pricing, which follows directly from Theorem 16.2.4 and Proposition 16.2.6 applied to the rescaled market. Theorem 16.3.3 (Fundamental Theorem of Asset Pricing, Version III) The following statements are equivalent: (a) The market is arbitrage free. (b) There exists a θ -pricing measure for π . In this case, if the market is complete, then π admits a unique θ -pricing measure. Otherwise, it admits infinitely many θ -pricing measures. The Language of Equivalent Martingale Measures We conclude this section by showing that, under any pricing measure, the rescaled value process of each of the basic securities is a martingale process. This explains why pricing measures are also called equivalent martingale measures in the literature (equivalent because they are equivalent to the underlying probability measure). Proposition 16.3.4 (Equivalent Martingale Measure) For every probability measure Q the following statements are equivalent: (a) Q is equivalent to P and for every i ∈ {1, . . . , N} the process  S is a P-martingale under Q, i.e., for all 0 ≤ t < M ≤ T we have i

! i ) " Sti EQ  SM )Pt =  (b) Q is a θ -pricing measure for π .

296

16 Fundamental Theorem of Asset Pricing

Proof First, assume that (a) holds. Fix M ∈ {1, . . . , T }. To establish that (b) holds, we have to show that ) " ! M [λ])Pt = V t [λ] EQ V for every t ∈ {0, . . . , M − 1} and every strategy λ ∈ St,M . We establish this by backward induction on t. Base Step For t = M − 1, the statement follows immediately from the “taking out what is known property” recorded in Proposition 10.3.8 because N N ) ! "  ! i ) "  i t [λ]. M [λ])PM−1 = EQ V λiM−1 EQ  λiM−1  =V SM−1 SM )PM−1 = i=1

i=1

Induction Step Assume the statement holds for some t ∈ {1, . . . , M − 1}. Then, ) ) ") ! " ! ! " M [λ])Pt −1 = EQ EQ V M [λ])Pt )Pt −1 EQ V ) ! " t [λ])Pt −1 = EQ V =

N 

! i) " St )Pt −1 λit −1 EQ 

i=1

=

N 

λit −1 Sti−1

i=1

t −1 [λ], =V where we used the “tower property” from Proposition 10.3.9 in the first equality, the induction hypothesis in the second equality, and the “taking out what is known” property from Proposition 10.3.8 in the third equality. This concludes the induction argument. Conversely, assume that (b) holds. Then, for every i ∈ {1, . . . , N} we have ! i ) "  i  SM )Pt =  πt,M  SM =  Sti EQ  for all 0 ≤ t < M ≤ T . This shows that (a) holds.

16.4



Exercises

In all exercises below we consider the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there.

16.4 Exercises

297

Exercise 16.4.1 Prove that the set of arbitrage-free families of pricing extensions E(π ), the set of arbitrage-free families of pricing densities D(π), and the set of θ -pricing measures Qθ (π), where θ ∈ S 0,T is an arbitrary numéraire strategy, are in a one-to-one correspondence. More concretely, prove the following statements: (i) The map F : D(π ) → E(π ) given for 0 ≤ t < M ≤ T and X ∈ XM by the formula F (D)t,M (X) = EP [Dt,M X|Pt ] is a (well-defined) one-to-one correspondence. (ii) The map F : Qθ (π) → E(π) given for 0 ≤ t < M ≤ T and X ∈ XM by the formula  F (Q)t,M (X) = EP

) Vt [θ ]X )) Pt VM [θ ] )

is a (well-defined) one-to-one correspondence. In both cases, determine the corresponding inverse maps. In addition, find an explicit oneto-one correspondence between D(π ) and Qθ (π) and determine the corresponding inverse map. Exercise 16.4.2 Assume the market is arbitrage free. Show that for all 0 ≤ t < M ≤ T and for every arbitrage-free extension ϕt,M ∈ E(πt,M ) there exists an arbitrage-free family of pricing extensions ψ ∈ E(π ) such that ψt,M = ϕt,M . Show that ψ is unique if the market is (0, t)- and (M, T )-complete. Otherwise, there exist infinitely many such families. This extends Proposition 16.1.9. Exercise 16.4.3 Assume the market is arbitrage free and let t ∈ {0, . . . , T − 1}. (i) Show that for every strictly-positive random variable Dt,t +1 ∈ Lt +1 the following statements are equivalent: (a) Dt,t +1 is a pricing density for πt,t +1. (b) For every i ∈ {1, . . . , N} we have EP [Dt,t +1Sti+1 |Pt ] = Sti . (c) For every i ∈ {1, . . . , N} and every A ∈ Pt we have  B∈Pt+1 , B⊂A

Dt,t +1 (B)

P(B) i S (B) = Sti (A). P(A) t +1

This shows that a pricing density for πt,t +1 is uniquely determined by its action on the payoffs of the basic securities at date t + 1.

298

16 Fundamental Theorem of Asset Pricing

(ii) Let M ∈ {t + 2, . . . , T }. Show that a strictly-positive random variable Dt,M ∈ LM i |P ] = S i for every i ∈ {1, . . . , N} without being a pricing may satisfy EP [Dt,M SM t t density for πt,M . This shows that a pricing density for πt,M is not generally determined by its action on the payoffs of the basic securities at date M. Exercise 16.4.4 (Fundamental Theorem of Asset Pricing) Prove that the following statements are equivalent: (a) The market is arbitrage free. (b) For every t ∈ {0, . . . , T − 1} and every B ∈ Pt +1 there exists dB ∈ (0, 1) such that for all A ∈ Pt and i ∈ {1, . . . , N} we have 

dB

B∈Pt+1 , B⊂A

P(B) i S (B) = Sti (A). P(A) t +1

In this case, show that the family D given by Dt,t +1 =



dB 1B , Dt,M =

B∈Pt+1

M−1 

Du,u+1

u=t

is an arbitrage-free family of pricing densities for π. This gives a simple operational condition, expressed in terms of the basic securities, to check whether the market is arbitrage free and construct an arbitrage-free family of pricing densities. Exercise 16.4.5 Let θ ∈ S 0,T be the numéraire strategy and assume the market is arbitrage free. Show that for every probability measure Q that is equivalent to P the following statements are equivalent: (a) Q is a θ-pricing measure for π . (b) For every t ∈ {0, . . . , T − 1} and every i ∈ {1, . . . , N} we have  EQ

)  Sti+1 )) Sti . )P t = Vt +1 [θ] ) Vt [θ ]

(c) For every t ∈ {0, . . . , T − 1}, every A ∈ Pt , and every i ∈ {1, . . . , N} we have  B∈Pt+1 , B⊂A

Q(B|A)

Sti+1 (B)

Vt +1[θ](B)

=

Sti (A) . Vt [θ ](A)

This shows that a pricing measure is uniquely determined by its action on the payoffs of the basic securities.

16.4 Exercises

299

Exercise 16.4.6 (Fundamental Theorem of Asset Pricing) Let θ ∈ S 0,T be the numéraire strategy. Prove that the following statements are equivalent: (a) The market is arbitrage free. (b) For every t ∈ {0, . . . , T − 1}, every A ∈ Pt , and every B ∈ Pt +1 with B ⊂ A there exists a number qA,B ∈ (0, 1) such that for every i ∈ {1, . . . , N} we have 

qA,B

B∈Pt+1 , B⊂A

Sti+1 (B)

Vt +1 [θ ](B)

=

Sti (A) . Vt [θ ](A)

In this case, for every t ∈ {0, . . . , T − 1} and every A ∈ Pt show that 

qA,B = 1.

B∈Pt+1 , B⊂A

For every ω ∈  and every t ∈ {0, . . . , T } denote by At (ω) the unique atom in Pt containing ω and set qω =

T −1

qAt (ω),At+1 (ω) .

t =0

It follows from Exercise 13.6.11 that the coefficients qω add up to 1 and the probability measure Q given by Q(ω1 ) = qω1 , . . . , Q(ωK ) = qωK satisfies Q(B|A) = qA,B for every t ∈ {0, . . . , T − 1}, every A ∈ Pt , and every B ∈ Pt +1 with B ⊂ A. Show that Q is a θ -pricing measure for π. This gives a simple operational condition, expressed in terms of the basic securities, to check whether the market is arbitrage free and construct a pricing measure. Exercise 16.4.7 Let  = {ω1 , . . . , ω4 } and assume that P(ω1 ) = · · · = P(ω4 ) = Consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}.

1 4.

300

16 Fundamental Theorem of Asset Pricing

We consider two basic securities specified by

1

1 1 4 1

1

: 2

3

2

0 0 1 0

0

: 1

1

(i) Use Exercise 16.4.4 to show that the market is not arbitrage free. (ii) Confirm point (i) by using Exercise 16.4.6. Exercise 16.4.8 Let  = {ω1 , . . . , ω6 } and assume that P(ω1 ) = · · · = P(ω6 ) = Consider a two-period market with information structure P given by

1 6.

P0 = {}, P1 = {{ω1 , ω2 , ω3 }, {ω4 , ω5 , ω6 }}, We consider two basic securities specified by 1 1

: 1

1 1

1 1 1 1 1 1

6 2

: 3

4 2

8 4 6 2 3 1

Let θ ∈ S 0,2 be the strategy given by θ0 = θ1 = (1, 0) ∈ R2 . (i) (ii) (iii) (iv) (v)

Use Exercise 16.4.4 to show that the market is arbitrage free. Determine all the arbitrage-free families of pricing densities. Confirm point (i) by using Exercise 16.4.6. Determine all the θ -pricing measures. Deduce that the market is incomplete.

Exercise 16.4.9 Consider the multinomial market introduced in Exercise 14.6.11 and the strategy θ ∈ S 0,T defined by θ0 = · · · = θT −1 = (1, 0) ∈ R2 . (i) Use Exercise 16.4.4 to show that the market is arbitrage free if and only if the condition r1 > r > rH holds. (ii) Determine all the arbitrage-free families of pricing densities. (iii) Confirm item (i) by using Exercise 16.4.6. (iv) Determine all the θ -pricing measures. (v) Deduce that, under no arbitrage, the market is complete if and only if H = 2.

16.4 Exercises

301

Exercise 16.4.10 Let R ∈ R be a rescaling process and take 0 ≤ t < M ≤ T . M → X t is an arbitrage-free extension of  (i) Show that a map ψt,M : X πt,M if and only ψt,M (RM X)   is an arbitrage-free if the map ψt,M : XM → Xt given by ψt,M (X) = Rt extension of πt,M . Moreover, the family ψ is time consistent if and only if the family ψ  is time consistent. (ii) Show that a random variable D ∈ L is a pricing density for  πt,M if and only if the MD is a pricing density for πt,M . Moreover, random variable D  ∈ L given by D  = RR t the family D is time consistent if and only if the family D  is time consistent.

17

Market-Consistent Prices for General Payoffs

In an arbitrage-free market it is possible to price replicable payoffs in a market-consistent manner by way of replication. Similar to what we did in Chap. 8 for the single-period model, in this chapter we extend the concept of a market-consistent price to payoffs that are not replicable. We provide several different descriptions of the set of market-consistent prices and show that the price at which rational sellers and buyers will contemplate transacting a nonreplicable payoff at a certain point in time needs to respect some natural bounds. The upper bound is called the superreplication price and corresponds to the threshold above which no buyer should be willing to buy, while the lower bound is called the subreplication price and corresponds to the threshold below which no seller should be willing to sell.

Standing Assumption Throughout the entire chapter we work in the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the market is incomplete and arbitrage free.

17.1

Marketed Bounds

In an incomplete market there exist payoffs that are not replicable and the concept of a market-consistent price developed so far cannot be applied to them. However, just as in the single-period case, market-consistent prices of marketed payoffs do provide information about a reasonable range of prices at which a nonmarketed payoff should be transacted. More precisely, the prices of lower and upper marketed bounds provide the benchmark against which the potential price of a nonreplicable payoff should be compared.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_17

303

304

17 Market-Consistent Prices for General Payoffs

Definition 17.1.1 (Marketed Bound) Let 0 ≤ t < M ≤ T and take a payoff X ∈ XM . A marketed payoff Z ∈ Mt,M is called a marketed lower bound for X (replicable at date t) if Z ≤ X. The set of marketed lower bounds for X that are replicable at date t is denoted by M− t,M (X). Similarly, a marketed payoff Z ∈ Mt,M is called a marketed upper bound for X (replicable at date t) if Z ≥ X. The set of marketed upper bounds for X that are replicable  at date t is denoted by M+ t,M (X). It is easy to see that every payoff admits marketed bounds. Proposition 17.1.2 Let 0 ≤ t < M ≤ T . For every X ∈ XM the sets M− t,M (X) and + Mt,M (X) are nonempty. Proof Take an arbitrary payoff X ∈ XM and consider a strictly-positive marketed payoff U ∈ Mt,M , which exists by Corollary 14.3.5. Then, for sufficiently large a ∈ (0, ∞) we +  have −aU ≤ X ≤ aU , showing that M− t,M (X) and Mt,M (X) are nonempty.

17.2

Market-Consistent Prices

The notion of a market-consistent price is also a straightforward generalization of the corresponding single-period notion. The key difference is that, in our multi-period context, we need to account for the fact that at every date the economy can be in different states. In particular, a price at date t ∈ {0, . . . , T − 1} needs to be modelled by a random variable p ∈ Xt , where the Pt -measurability requirement results from the fact that, to determine a price at date t, the information available at date t should suffice. Let 0 ≤ t < M ≤ T be given and consider a payoff X ∈ XM . Assume p ∈ Xt is the price that a seller is prepared to receive for X at date t. More precisely, at date t the agent is prepared to receive p(A) in state A ∈ Pt . Then, from the seller’s perspective, it is reasonable to require that p(A) > π(Z|A) for every marketed payoff Z ∈ Mt,M such that Z ≤ X on the entire  and Z  X on the state A. This is because, if there exists a replicable payoff Z ∈ Mt,M such that • Z(ω) ≤ X(ω) for every ω ∈ , • Z(ω) < X(ω) for some ω ∈ A, • π(Z|A) ≥ p(A), then a rational seller would always give away, in that state, the better, from a seller’s perspective, and more expensive payoff Z in the market instead of transacting X outside the market. In other words, for a rational seller, any price that satisfies p(A) ≤ π(Z|A) would be inconsistent with the alternatives the market has to offer. A similar argument

17.2 Market-Consistent Prices

305

holds for a price a buyer may consider. Indeed, from a buyer’s perspective, it is reasonable to require that p(A) < π(Z|A) for every marketed payoff Z ∈ Mt,M such that Z ≥ X on the entire  and Z  X on the state A. Otherwise, we would have • Z(ω) ≥ X(ω) for every ω ∈ , • Z(ω) > X(ω) for some ω ∈ A, • π(Z|A) ≤ p(A), and every rational buyer would prefer to purchase the better, from a buyer’s perspective, and cheaper payoff Z in the market instead of transacting X outside the market. This discussion provides the basis for the following definition. Definition 17.2.1 (Market-Consistent Price) Let 0 ≤ t < M ≤ T and consider a payoff X ∈ XM and a random variable p ∈ Xt . We say that p is a market-consistent seller price for X at date t if for every marketed lower bound Z ∈ M− t,M (X) and for every A ∈ Pt we have Z  X on A ⇒ p(A) > πt,M (Z|A). The set of market-consistent seller prices for X at date t is denoted by st,M (X). Similarly, we say that p is a market-consistent buyer price for X at date t if for every marketed upper bound Z ∈ M+ t,M (X) and for every A ∈ Pt we have Z  X on A ⇒ p(A) < πt,M (Z|A). The set of market-consistent buyer prices for X at date t is denoted by bt,M (X). Finally, we say that p is a market-consistent price for X at date t whenever p is both a market-consistent seller price and a market-consistent buyer price for X at date t. The set  of market-consistent prices for X at date t is denoted by t,M (X). Remark 17.2.2 It can be easily shown that, for a replicable payoff, the notion of market consistency based on replicating strategies as introduced in Chap. 15 coincides with the general notion we have just introduced; see also Theorem 17.3.1.  Remark 17.2.3 (Prices and Individual Preferences) Just as we did in Remark 8.2.3 for the single-period case, it is important to note that market-consistent prices only provide a range outside of which it would be unreasonable to transact a payoff. Whether a transaction takes place and at which particular market-consistent price ultimately depends on the individual preferences of the transacting parties. 

306

17 Market-Consistent Prices for General Payoffs

We use the arbitrage-free extensions of the pricing functionals to prove that every payoff admits a market-consistent price. In fact, Theorem 17.3.1 will show that every marketconsistent price can be expressed in terms of an arbitrage-free extension. Recall that we have denoted by E(πt,M ) the set of arbitrage-free extensions of the pricing functional πt,M . Proposition 17.2.4 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM and every arbitragefree extension ψt,M ∈ E(πt,M ) we have that ψt,M (X) is a market-consistent price for X at date t. + Proof Consider two marketed bounds W ∈ M− t,M (X) and Z ∈ Mt,M (X) and assume that W  X  Z on some A ∈ Pt . Then,

πt,M (W |A) = ψt,M (W |A) < ψt,M (X|A) < ψt,M (Z|A) = πt,M (Z|A) by conditionality and strict monotonicity of ψt,M . This shows that ψt,M (X) is a marketconsistent price for X at date t and concludes the proof.  The next result shows that market-consistent seller and buyer prices can be expressed in terms of market-consistent prices. This allows to convert all the following results on market-consistent prices into their seller’s and buyer’s counterparts. Proposition 17.2.5 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM the following statements hold: (i) st,M (X) = {p ∈ Xt ; p ≥ q for some q ∈ t,M (X)}. (ii) bt,M (X) = {p ∈ Xt ; p ≤ q for some q ∈ t,M (X)}. Proof We start by proving (i). The inclusion “⊃” is clear. To show the inclusion “⊂”, take any market-consistent seller price p ∈ st,M (X) and a market-consistent price r ∈ t,M (X) and set q = min{p, r}. It is clear that q is a market-consistent price for X at date t. This establishes the desired inclusion. Now, we focus on (ii). The inclusion “⊃” is clear. To show the inclusion “⊂”, take any market-consistent buyer price p ∈ bt,M (X) and a market-consistent price r ∈ t,M (X) and set q = max{p, r}. It is easy to see that q is a market-consistent price for X at date t. This establishes the desired inclusion. 

17.3

Characterizing Market-Consistent Prices

In this section we study a variety of representations of the set of market-consistent prices. Our main interest is in market-consistent prices, rather than their buyer’s and seller’s

17.3 Characterizing Market-Consistent Prices

307

versions, since these are the prices at which a potential transaction can occur. The corresponding results for seller and buyer prices can be easily derived from Proposition 17.2.5. Market-Consistent Prices and Pricing Extensions In Chap. 12, we have already encountered the concept of a market-consistent price in disguise. Indeed, for 0 ≤ t < M ≤ T and for a payoff X ∈ XM , the market-consistent prices of X at date t correspond precisely to those Pt -measurable random variables that satisfy the πt,M -compatibility condition at X as recorded in Definition 12.2.7. The following fundamental result is a simple translation of Theorem 12.2.9 to our present setting and shows that market-consistent prices can be expressed in terms of arbitragefree extensions of the pricing functional. This justifies the centrality, reflected by the name itself, of the Fundamental Theorem of Asset Pricing: Any market-consistent price can be interpreted as the price that the corresponding payoff possesses in a complete arbitragefree “market extension”. Recall that we have denoted by E(π ) the set of arbitrage-free families of pricing extensions. Theorem 17.3.1 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM we have

t,M (X) = {ψt,M (X) ; ψt,M ∈ E(πt,M )} = {ψt,M (X) ; ψ ∈ E(π )}. Proof The first equality is a direct consequence of Theorem 12.2.9. The inclusion “⊃” in the second equality is clear. The converse inclusion “⊂” follows from Exercise 16.4.2. To show it directly, take any arbitrage-free extension ϕt,M ∈ E(πt,M ). We have to prove that ϕt,M can be embedded into a suitable arbitrage-free family. To show this, let ϕ0,t be an arbitrage-free extension of π0,t (provided that t > 0) and let ϕM,T be an arbitrage-free extension of πM,T (provided that M < T ). Moreover, set ϕ = ϕ0,t ◦ ϕt,M ◦ ϕM,T . It follows from Lemma 16.1.5 that ϕ is an arbitrage-free extension of π0,T . Then, by Proposition 16.1.9, we find an arbitrage-free family ψ ∈ E(π) such that ψ0,T = ϕ. We claim that ψt,M = ϕt,M . To see this, use Proposition 16.1.8 to obtain  1A X VT [η] VM [η]

 1A X = ϕ0,t ϕt,M ϕM,T (VT [η]) VM [η]

ψ0,M (1A X) = ψ0,T

= ϕ0,t (ϕt,M (1A X)) = ϕ0,t (1A ϕt,M (X)) = ϕt,M (X|A)ϕ0,t (1A ),

308

17 Market-Consistent Prices for General Payoffs

as well as

ψ0,t (1A ) = ψ0,T

  1A 1A VT [η] = ϕ0,t ϕt,M (ϕM,T (VT [η])) = ϕ0,t (1A ) Vt [η] Vt [η]

for every A ∈ Pt and every X ∈ XM . As a result, we immediately infer from Proposition 16.1.8 that ψt,M (X) =

 ϕt,M (X|A)ϕ0,t (1A )  1A = ϕt,M (X|A)1A = ϕt,M (X) ϕ0,t (1A )

A∈Pt

A∈Pt

for every X ∈ XM , as claimed. This shows that ϕt,M can be embedded into the family ψ, delivering the desired inclusion.  Remark 17.3.2 (Arbitrage-Free Prices) The above result tells us that the set of marketconsistent prices for a nonreplicable payoff coincides with the set of possible prices that the payoff might have in an “extended” complete arbitrage-free market; see Proposition 16.1.2 and also Exercise 17.5.6. For this reason, market-consistent prices are sometimes called arbitrage-free prices. Albeit suggestive, when using this terminology one should exercise the same caution we called for in Remark 7.1.4.  Market-Consistent Prices and Pricing Densities In view of the link between arbitrage-free extensions and pricing densities established in Proposition 16.2.3, we can immediately derive from the above result a representation of market-consistent prices in terms of pricing densities. Recall that D(πt,M ) denotes the set of pricing densities of the pricing functional πt,M and D(π ) denotes the set of arbitragefree families of pricing densities. Theorem 17.3.3 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM we have

t,M (X) = {EP [Dt,M X|Pt ] ; Dt,M ∈ D(πt,M )} = {EP [Dt,M X|Pt ] ; D ∈ D(π )}. Market-Consistent Prices and Pricing Measures The link between pricing densities and pricing measures established in Proposition 16.3.2 allows us to establish a third characterization of market-consistent prices based on pricing measures. To this end, we have to express payments in units of a given numéraire strategy. We adopt the “tilde” notation to denote the corresponding rescaled payments; see Remark 14.5.6. Recall that we have denoted by Qθ (π ) the set of θ-pricing measures with respect to a numéraire strategy θ ∈ S 0,T .

17.4 Sub- and Superreplication Prices

309

Theorem 17.3.4 Let θ ∈ S 0,T be the numéraire strategy and let 0 ≤ t < M ≤ T . For M we have ∈X every payoff X $   # ! "  = EQ X|P  t ; Q ∈ Qθ (π ) .  t,M X

Remark 17.3.5 The preceding result can be equivalently formulated in the original unit of account as follows, see Exercise 17.5.11: For 0 ≤ t < M ≤ T and for every payoff X ∈ XM we have  

t,M (X) = Vt [θ ]EQ

17.4

)  X )) P ; Q ∈ Q (π) . t θ VM [θ ] )



Sub- and Superreplication Prices

In the spirit of our treatment of single-period markets, we now provide a detailed study of the bounds of the set of market-consistent prices. In particular, we show when such bounds are themselves market-consistent prices. As a preliminary observation, we point out that the set of market-consistent prices of any given payoff is a bounded interval-like set in the sense specified below. Proposition 17.4.1 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM the set t,M (X) is bounded and satisfies the following “interval-like” property: p, q ∈ t,M (X), r ∈ Xt , p ≤ r ≤ q ⇒ r ∈ t,M (X). In particular, {p(A) ; p ∈ t,M (X)} is a bounded interval for every A ∈ Pt . Proof The result follows by combining Theorems 12.2.6 and 17.3.1. Here, we provide a more direct proof. To show that the set t,M (X) is bounded, take any marketed bounds + W ∈ M− t,M (X) and Z ∈ Mt,M (X) such that W < X < Z. Then, every market-consistent price p ∈ t,M (X) must satisfy πt,M (W |A) < p(A) < πt,M (Z|A) for every A ∈ Pt . This establishes that t,M (X) is bounded. The “interval-like” property follows directly from Proposition 17.2.5.  The lower and the upper bounds of the set of market-consistent prices are called the sub- and superreplication prices.

310

17 Market-Consistent Prices for General Payoffs

Definition 17.4.2 (Sub/Superreplication Price) Let 0 ≤ t < M ≤ T and consider a payoff X ∈ XM . The subreplication price of X at date t is defined as − πt,M (X) := inf t,M (X).

The superreplication price of X at date t is defined as + πt,M (X) := sup t,M (X). − + For convenience, we also set πM,M (X) := πM,M (X) := X.

The next proposition is a simple reformulation of Proposition 12.2.11 and justifies our terminology for sub- and superreplication prices. Proposition 17.4.3 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM the following statements hold: − (i) πt,M (X) = sup{πt,M (Z) ; Z ∈ M− t,M (X)}. + + (ii) πt,M (X) = inf{πt,M (Z) ; Z ∈ Mt,M (X)}.

Moreover, the supremum in (i) and the infimum in (ii) are attained. As a direct consequence of the representation of market-consistent prices in Theorem 17.3.1, we derive the following result that lists a variety of useful properties of suband superreplication prices. Alternatively, the result follows from Proposition 12.2.10. Proposition 17.4.4 Let 0 ≤ t < M ≤ T . The following statements hold: + − (i) πt,M (X) = −πt,M (−X) for every X ∈ XM . − (ii) πt,M : XM → Xt is a superlinear increasing Pt -conditional extension of πt,M such that: − (X|A) < 0 for all X ∈ XM and A ∈ Pt such that X  0 on A. (a) πt,M − − (X) + πt,M (Z) for all X ∈ XM and Z ∈ Mt,M . (b) πt,M (X + Z) = πt,M + (iii) πt,M : XM → Xt is a sublinear increasing Pt -conditional extension of πt,M such that: + (X|A) > 0 for all X ∈ XM and A ∈ Pt such that X  0 on A. (a) πt,M + + (X + Z) = πt,M (X) + πt,M (Z) for all X ∈ XM and Z ∈ Mt,M . (b) πt,M − + Remark 17.4.5 Let 0 ≤ t < M ≤ T . Since the Pt -conditional functionals πt,M and πt,M are superlinear and sublinear, respectively, they are automatically Lipschitz continuous

17.4 Sub- and Superreplication Prices

311

with respect to the maximum norm (or equivalently with respect to every p-norm for p ∈ [1, ∞)), i.e., there exists a constant c ∈ (0, ∞) such that − − πt,M (X) − πt,M (Y )∞ ≤ cX − Y ∞ , + + πt,M (X) − πt,M (Y )∞ ≤ cX − Y ∞ ,

for all payoffs X, Y ∈ XM . This follows from Corollary 11.3.14.



The next result shows that market-consistent bounds inherit the time consistency property of pricing functionals. This implies, in particular, that it is equally “efficient”, in terms of price, to “sub-” or “superreplicate” up to maturity in one go or to “sub-” or “superreplicate” in two steps: first up to an intermediate date and then up to maturity. Proposition 17.4.6 (Time Consistency) Let 0 ≤ s < t < M ≤ T . For every payoff X ∈ XM the following statements hold: − − − (i) πs,t (πt,M (X)) = πs,M (X). + + + (ii) πs,t (πt,M (X)) = πs,M (X).

Proof In view of Proposition 17.4.4, it suffices to establish (i). To prove the inequality − “≥”, take a marketed payoff Z ∈ M− s,M (X) such that πs,M (Z) = πs,M (X), which exists − − by Proposition 17.4.3. We clearly have Z ∈ Mt,M (X), so that πt,M (X) ≥ πt,M (Z) again by Proposition 17.4.3. Moreover, πt,M (Z) ∈ Ms,t by Theorem 15.4.2. Hence, by the time − , consistency recorded in Theorem 15.4.2 and the monotonicity of πs,t − − − − (πt,M (X)) ≥ πs,t (πt,M (Z)) = πs,t (πt,M (Z)) = πs,M (Z) = πs,M (X) πs,t

the assertion follows directly from point (i). To prove the inequality “≤”, note first that, by Proposition 17.4.3, there are replicable − payoffs Z ∈ M− t,M (X) and W ∈ Ms,t (πt,M (Z)) such that − − πt,M (Z) = πt,M (X), πs,t (W ) = πs,t (πt,M (Z)).

If λ ∈ S t,M is a replicating strategy for Z and μ ∈ S s,t one for W , then Vs [μ] = πs,t (W ), Vt [μ] = W, Vt [λ] = πt,M (Z), VM [λ] = Z.

312

17 Market-Consistent Prices for General Payoffs

Define now a trading strategy ξ ∈ S s by setting ⎧ ⎪ ⎪ ⎨ μu , ξu = λu + ⎪ ⎪ ⎩ 0,

if u ∈ {s, . . . , t − 1}, Vt [μ]−Vt [λ] ηu , Vt [η]

if u ∈ {t, . . . , M − 1}, if u ∈ {M, . . . , T − 1}.

It is easy to see that ξ belongs to S s,M and satisfies VM [ξ ] = VM [λ] +

Vt [μ] − Vt [λ] W − πt,M (Z) VM [η] = Z + VM [η] ≤ Z ≤ X. Vt [η] Vt [η]

Hence, by Proposition 17.4.3, − − − πs,t (πt,M (X)) = πs,t (W ) = Vs [μ] = Vs [ξ ] ≤ πs,M (X).

This concludes the proof of the desired statement.



Representing Sub- and Superreplication Prices As a direct application of Theorem 17.3.1 we derive the following representation of suband superreplication prices. Proposition 17.4.7 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM the following statements hold: (i) (ii) (iii) (iv)

− πt,M (X) = inf{ψt,M (X) ; ψt,M ∈ E(πt,M )}. + πt,M (X) = sup{ψt,M (X) ; ψt,M ∈ E(πt,M )}. − πt,M (X) = inf{ψt,M (X) ; ψ ∈ E(π )}. + πt,M (X) = sup{ψt,M (X) ; ψ ∈ E(π )}.

The representation of sub- and superreplication prices in terms of pricing densities is a direct consequence of Theorem 17.3.3. Proposition 17.4.8 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM the following statements hold: (i) (ii) (iii) (iv)

− (X) = inf{EP [Dt,M X|Pt ] ; Dt,M ∈ D(πt,M )}. πt,M + πt,M (X) = sup{EP [Dt,M X|Pt ] ; Dt,M ∈ D(πt,M )}. − πt,M (X) = inf{EP [Dt,M X|Pt ] ; D ∈ D(π )}. + (X) = sup{EP [Dt,M X|Pt ] ; D ∈ D(π )}. πt,M

Similarly, we derive from Theorem 17.3.4 a representation of sub- and superreplication prices in terms of pricing measures.

17.4 Sub- and Superreplication Prices

313

Proposition 17.4.9 Let θ ∈ S 0,T be the numéraire strategy and let 0 ≤ t < M ≤ T . For M the following statements hold: ∈X every payoff X $ # ! " −    t ; Q ∈ Qθ (π) . (i)  πt,M X = inf EQ X|P   # ! " $ +   t ; Q ∈ Qθ (π) . X = sup EQ X|P (ii)  πt,M Remark 17.4.10 The preceding result can be equivalently formulated in the original unit of account as follows, see Exercise 17.5.11: For 0 ≤ t < M ≤ T and for every payoff X ∈ XM the following statements hold: ) ' % & ( ) − (X) = Vt [θ] inf EQ VMX[θ] )Pt ; Q ∈ Qθ (π ) . (i) πt,M ) ' % & ( ) + (X) = Vt [θ] sup EQ VMX[θ ] )Pt ; Q ∈ Qθ (π ) . (ii) πt,M



Market-Consistent Prices and Replicability The following additional characterization of the set of market-consistent prices is a direct translation of Proposition 12.2.10 to our present setting. Theorem 17.4.11 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM and every A ∈ Pt the following statements hold: − + (X|A) = πt,M (X|A) and (i) If 1A X is replicable at date t, then πt,M

{p(A) ; p ∈ t,M (X)} = {πt,M (1A X|A)}. − + (ii) If 1A X is not replicable at date t, then πt,M (X|A) < πt,M (X|A) and − + {p(A) ; p ∈ t,M (X)} = (πt,M (X|A), πt,M (X|A)).

As a direct consequence of Theorem 17.4.11 we can derive the following useful characterization of replicability in terms of sub- and superreplication prices. Proposition 17.4.12 Let 0 ≤ t < M ≤ T . For every payoff X ∈ XM the following statements are equivalent: (a) (b) (c) (d) (e)

X is replicable at date t. X has a unique market-consistent price at date t. + πt,M (X) is a market-consistent price for X at date t. − (X) is a market-consistent price for X at date t. πt,M − + πt,M (X) = πt,M (X).

314

17.5

17 Market-Consistent Prices for General Payoffs

Exercises

In all exercises below we consider the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the market is incomplete and arbitrage free. Exercise 17.5.1 Let 0 ≤ t < M ≤ T . Show that for every payoff X ∈ XM the following statements hold: − (i) M+ t,M (X) = −Mt,M (−X). (ii) bt,M (X) = − st,M (−X). (iii) t,M (−X) = − t,M (X).

Exercise 17.5.2 Let 0 ≤ t < M ≤ T . Show that for every payoff X ∈ XM the set

st,M (X) is bounded from below and satisfies the “interval-like” property p ∈ st,M (X), q ∈ Xt , q ≥ p ⇒ q ∈ st,M (X). Similarly, the set bt,M (X) is bounded from above and satisfies the “interval-like” property p ∈ bt,M (X), q ∈ Xt , q ≤ p ⇒ q ∈ bt,M (X). Exercise 17.5.3 Let 0 ≤ t < M ≤ T . Show that for every payoff X ∈ XM the following statements hold: (i) min{p, q}, max{p, q} ∈ st,M (X) for all p, q ∈ st,M (X). (ii) min{p, q}, max{p, q} ∈ bt,M (X) for all p, q ∈ bt,M (X). (iii) min{p, q}, max{p, q} ∈ t,M (X) for all p, q ∈ t,M (X). Exercise 17.5.4 Let 0 ≤ t < M ≤ T . Show that for a payoff X ∈ XM and a random variable p ∈ Xt the following statements may hold: (i) p(A) > πt,M (Z|A) for every Z ∈ Mt,M and every A ∈ Pt such that Z < X on A but p is not a market-consistent seller price for X at date t. (ii) p(A) < πt,M (Z|A) for every Z ∈ Mt,M and every A ∈ Pt such that Z > X on A but p is not a market-consistent buyer price for X at date t. Exercise 17.5.5 Let 0 ≤ s < t < M ≤ T . Show that for every payoff X ∈ XM and every arbitrage-free extension ψs,t ∈ E(πs,t ) we have p ∈ t,M (X) ⇒ ψs,t (p) ∈ s,M (X).

17.5 Exercises

315

Exercise 17.5.6 Let pt ∈ Xt for every t ∈ {0, . . . , T − 1} and consider a nonzero positive payoff X ∈ XT . Extend the market for the basic securities by adding a new basic security S N+1 = (p0 , . . . , pT −1 , X). Assume that S 1 , . . . , S N maintain their prices and payoffs. Show that the extended market is arbitrage-free if and only if pt is a market-consistent price for X at date t for every t ∈ {0, . . . , T − 1}. Exercise 17.5.7 Prove that the following statements hold for all 0 ≤ t < M ≤ T : − is neither linear, nor strictly decreasing, and for X ∈ XM we may have (i) πt,M − (X)  0. X  0 ⇒ πt,M + is neither linear, nor strictly increasing, and for X ∈ XM we may have (ii) πt,M + (X)  0. X  0 ⇒ πt,M

Exercise 17.5.8 Let 0 ≤ t < M ≤ T . Show that there exists c ∈ (0, ∞) such that |X − Y | ≤

cX − Y ∞ VM [η] Vt [η]

for all X, Y ∈ XM . Use Proposition 17.4.4 to infer that − − πt,M (X) − πt,M (Y )∞ ≤ cX − Y ∞ , + + (X) − πt,M (Y )∞ ≤ cX − Y ∞ , πt,M − for all X, Y ∈ XM . This provides a direct way to establish the Lipschitz continuity of πt,M + and πt,M ; see Remark 17.4.5.

Exercise 17.5.9 Show that the infimum and supremum in Proposition 17.4.7 are not “attained” for nonreplicable payoffs. More precisely, let 0 ≤ t < M ≤ T and prove that for every payoff X ∈ XM and for every state A ∈ Pt such that 1A X ∈ / Mt,M both the infimum and the supremum fail to be attained on A. Exercise 17.5.10 Let  = {ω1 , . . . , ω6 } and let P(ω1 ) = · · · = P(ω6 ) = 16 . Consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 , ω3 }, {ω4 , ω5 , ω6 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }, {ω5 }, {ω6 }}.

316

17 Market-Consistent Prices for General Payoffs

Consider two basic securities specified by 1 1: 1

1 1

1 1 1 1 1 1

6 2: 3

4 2

8 4 6 2 3 1

The market is incomplete and arbitrage free by Exercise 16.4.8. (i) Determine the set of market-consistent prices of 1ω1 at date 0. (ii) Determine the sub- and superreplication prices of 1ω1 at date 0. (iii) Deduce whether 1ω1 is replicable at date 0 or not. Exercise 17.5.11 Let 0 ≤ t < M ≤ T and let R ∈ R be a rescaling process. Prove that M we have ∈X for every rescaled payoff X    = Rt t,M  t,M X

−   X  πt,M

=

− Rt πt,M

  X , RM

 

  X X +   + ,  πt,M X = Rt πt,M . RM RM

18

Market-Consistent Prices for Payoff Streams

So far we have focused on financial contracts with a single maturity. In this chapter we extend the valuation framework to include financial contracts that involve payments at multiple dates. The flow of payoffs associated with such a contract is called a payoff stream. Although fairly straightforward, it is worthwhile to provide explicit results for payoff streams for two reasons. First, there are some features which may not be entirely intuitive, e.g., the fact that for a payoff stream to be replicable it is not necessary that the individual single-payoff components are replicable. Second, a clear understanding of payoff streams brings greater clarity to the study of American options in the next chapter.

Standing Assumption Throughout the entire chapter we work in the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the market is arbitrage free.

18.1

Payoff Streams

The definition of a payoff stream formalizes the notion of a financial contract delivering payments at multiple dates. Note that such contracts abound in practice: Shares pay dividends, bonds pay coupons, swaps specify the periodic exchange of cash flows, etc. Hence, there is also a practical interest in understanding how to replicate and price multiple-payment contracts.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_18

317

318

18 Market-Consistent Prices for Payoff Streams

Definition 18.1.1 (Payoff Stream) A payoff stream is a P-adapted process such that X0 = 0. The set of payoff streams is denoted by X , i.e., X := {0} × X1 × · · · × XT .



For a payoff stream X ∈ X and for each date t ∈ {1, . . . , T } the random variable Xt ∈ Xt represents a payoff, expressed in the fixed unit of account, that is due at date t. Since no payment is involved at date 0, we have set X0 = 0. Example 18.1.2 (Single-Maturity Payoff) It is clear that the single-maturity payoffs we have considered so far can be viewed as special cases of payoff streams. Indeed, a payoff X ∈ XM with M ∈ {1, . . . , T } can be identified with the payoff stream X ∈ X given by X = (0, . . . , 0, X, 0, . . . , 0), where X occupies the (M + 1)th position in the sequence.



Remark 18.1.3 (Notation for Rescaled Payoff Streams) We continue using the “tilde” notation to denote payments after a suitable change of the unit of account. Here, for a  the space of rescaled payoff streams, given rescaling process R ∈ R, we denote by X i.e., we set  = {0} × X 1 × · · · × X T . X  given by ∈X The rescaled version of a stream X ∈ X is the payoff stream X T ) = (R0 X0 , . . . , RT XT ).  = (X 0 , . . . , X X As observed in the previous chapters, this notation is imprecise because it does not make explicit reference to the dependence on the chosen rescaling process. However, this omission should cause no confusion because we only deal with one rescaling process at a time.  Transacting a Payoff Stream at an Intermediate Date Assume a seller and a buyer transact a financial contract that is represented by the payoff stream X ∈ X . At any given date t ∈ {1, . . . , T − 1} the payoffs X0 , . . . , Xt −1 will have already been settled at earlier dates, the payoff Xt is maturing precisely at date t, and the payoffs Xt +1 , . . . , XT will need to be settled at later dates. Clearly, if the contract changes hand at date t, the payoffs that have already been settled are of no relevance when determining at which price the transaction should take place. Just as clearly, the payoffs to be settled at later dates will belong to the new owner and need to be taken into consideration when pricing. What is a matter of convention, however, is what happens with

18.2 Terminal-Payoff Equivalents

319

the payoff Xt that is due at date t. Is Xt due before the transaction so that it belongs to the seller and is irrelevant for pricing? Or is Xt due after the transaction, in which case it belongs to the buyer and it must be accounted for in the pricing? Actual transactions may adopt either convention and which convention is adopted for an individual transaction is of high practical relevance. From a theoretical perspective, however, we only need to develop the theory for one of the conventions and it is irrelevant which one we choose. This is because, at date t, the only possible value for Xt is exactly Xt . Hence, to obtain the value of the transaction when the convention is that Xt belongs to the seller, we just need to subtract Xt from the value of the transaction when the convention is that Xt belongs to the buyer. Because it leads to more elegant formulas, in this book we will adopt the convention that the payoff Xt always belongs to the buyer. Maturing Payoff and Residual Payoff Stream Under our convention, the following notation turns out to be very useful. Definition 18.1.4 (Forward-Looking Truncation) Let X ∈ X and t ∈ {1, . . . , T }. The forward-looking truncation of X at date t is the payoff stream Xt :T ∈ X defined by X t :T := (0, . . . , 0, Xt , . . . , XT ).



Every payoff stream X ∈ X can be therefore decomposed into two components at each date t ∈ {1, . . . , T − 1}: • the maturing payoff Xt , • the residual payoff stream Xt +1:T . Then, under our convention, a buyer of X at the date t receives the maturing payoff together with the residual payoff stream. Remark 18.1.5 Consider a payoff stream X ∈ X and take t ∈ {1, . . . , T − 1} and A ∈ Pt . Then, the stochastic process 1A X is well defined, but fails to be P-adapted. However, the process 1A Xt :T is P-adapted and thus qualifies as a payoff stream. This provides additional motivation for introducing forward-looking truncations. 

18.2

Terminal-Payoff Equivalents

In this brief section we introduce a powerful notion, the terminal-payoff equivalent of a payoff stream, that will allow us to transfer all the results about market-consistent prices for single-maturity payoffs to the setting of payoff streams. In words, the terminalpayoff equivalent transforms a payoff stream into a single-payment payoff maturing at the

320

18 Market-Consistent Prices for Payoff Streams

terminal date by pushing forward all the components of the stream using a pre-specified trading strategy with strictly-positive value process. The formal definition is as follows. Definition 18.2.1 (Terminal-Payoff Equivalent) Let X ∈ X and t ∈ {0, . . . , T }. The payoff Tt (X) ∈ XT defined by T  Xu Tt (X) := VT [η] V [η] u=t u

is called the terminal-payoff equivalent of X.



We record the main properties of terminal-payoff equivalents in the next result, whose simple verification is left as an exercise. Proposition 18.2.2 For every t ∈ {0, . . . , T } the map Tt : X → XT is linear and satisfies the following properties: (i) (ii) (iii) (iv)

Tt (X) = Tt (Xt :T ) for every X ∈ X . Tt (X) ≥ 0 for every X ∈ X such that Xt :T ≥ 0. Tt (X)  0 for every X ∈ X such that Xt :T  0. Tt (1A X t :T ) = 1A Tt (X) for all X ∈ X and A ∈ Pt .

Remark 18.2.3 (Terminal-Payoff Equivalents with General Strategies) The notion of a terminal-payoff equivalent is stated in terms of the equal-weight market strategy η ∈ S 0,T . For notational convenience, we did not highlight this dependence in the above definition. Clearly, one could replace the equal-weight market strategy with any strategy θ ∈ S 0,T with strictly-positive value process and set Ttθ (X) :=

T  Xu VT [θ] V [θ] u=t u

for all X ∈ X and t ∈ {0, . . . , T }. In particular, terminal-payoff equivalents take a simple form if V0 [θ ] = · · · = VT [θ] = 1, in which case Ttθ (X) =

T 

Xu

u=t

for all X ∈ X and t ∈ {0, . . . , T }. This holds whenever θ is used as the numéraire strategy; see Example 14.5.4. 

18.3 Replicable Payoff Streams

18.3

321

Replicable Payoff Streams

At first, it may be unclear why one would need to spend any time on developing the notions of replicability and market-consistent pricing for payoff streams. Indeed, since each component of the payoff stream matures at a fixed maturity, isn’t it sufficient to apply what we already know to each separate component? For instance, if each component of a payoff stream is replicable at date 0, and thus has a unique market-consistent price at date 0, then it is meaningful to define the value of the payoff stream at date 0 to be the sum of those market-consistent prices. While this is certainly true, there are many more payoff streams that can be meaningfully regarded as being “replicable”. To see this, assume that the market fails to be (0, M)-complete for some date M ∈ {1, . . . , T − 1} and take a nonreplicable payoff X ∈ XM \ M0,M . Moreover, take a self-financing strategy λ ∈ S 0,T such that VT [λ] is strictly positive and VM [λ] ≥ X (for instance, we could take a sufficiently large multiple of the equal-weight market strategy). Consider now the payoff stream X ∈ X defined by 

VM [λ] − X VT [λ] , X = 0, . . . , 0, X, 0, . . . , VM [λ] where X occupies the (M + 1)th position. By construction, the only components of this payoff stream that are replicable at date 0 are those that are null. Nonetheless, this payoff stream can be “replicated” by pursuing the following strategy: • • • • •

Buy λ at date 0 and keep it until date M. Liquidate the position at date M for VM [λ]. Use VM [λ] to settle X. Reinvest the difference VM [λ] − X in λ. Liquidate the new position at date T .

Note that no additional funding is required along the way. It therefore makes sense to view the strategy we have just described as a “replicating strategy” for X. From the above example it follows that trying to replicate a payoff stream by replicating the individual components is, in general, less effective than trying to replicate the whole payoff stream simultaneously. This is because the ability to liquidate as one pleases on the way to the terminal maturity provides an additional flexibility that is not available if each maturity is targeted individually. Note that, since we need to liquidate and settle payoffs at intermediate dates, the replicating strategies cannot be self-financing and we need to keep track of their acquisition and liquidation values separately.

322

18 Market-Consistent Prices for Payoff Streams

Definition 18.3.1 (Replicable Payoff Stream) Let t ∈ {0, . . . , T − 1}. A payoff stream X ∈ X is said to be replicable at date t if there exists a strategy λ ∈ S t such that: Liq

Acq

(1) Vu [λ] = Vu [λ] + Xu for every u ∈ {t + 1, . . . , T − 1}. Liq (2) VT [λ] = XT . The strategy λ is said to be a replicating strategy for X. The set of all payoff streams that are replicable at date t is denoted by Mt . The elements of Mt are also called marketed payoff streams.  We spell out the trading scheme behind a replicating strategy. Here, we consider a payoff stream X ∈ X that is replicable at date t ∈ {0, . . . , T − 1} with replicating strategy λ ∈ St : • If at date t the state of the economy is At ∈ Pt , then we set up the portfolio λt (At ) for Acq the amount Vt [λ](At ). • If at date t + 1 the economy is in the state At +1 ∈ Pt +1 , where At +1 ⊂ At , we Liq liquidate the portfolio λt (At ) obtaining the amount Vt +1 [λ](At +1). By condition (1), this is precisely the amount needed to settle the maturing payoff Xt +1 (At +1 ) and to purchase the portfolio λt +1 (At +1 ). • We continue in this way until we reach the final date T , at which, by condition (2), the last portfolio λT −1 (AT −1 ) generates precisely the payoff needed to match the terminal payoff XT (AT ). Remark 18.3.2 It can be proved that a payoff stream consisting of a single nonzero component is replicable at a given date if and only if its nonzero component is replicable at the same date. This shows that the above notion of replicability generalizes the notion of replicability for single-maturity payoffs; see also Proposition 18.3.5.  Remark 18.3.3 (On Replicating Strategies) Consider a payoff stream X ∈ X that is replicable at date t ∈ {0, . . . , T − 1} with replicating strategy λ ∈ S t . (i) Recall that, under our convention, when a payoff stream is transacted, the buyer receives the maturing payoff and the residual payoff stream. Hence, to fully “produce” the payoff stream, we need the “cash” amount corresponding to the maturing payoff together with a replicating strategy for the payoff stream. In this sense, it would be more correct to view the pair (Xt , λ) as a “replicating strategy”. However, it is clear that the replicability of a payoff stream only depends on the residual payoff stream, because we can always “replicate” at date t any payment that is due at that date. (ii) Note that, after setting up the initial portfolio, the proceeds from subsequent liquidations always precisely match what is needed to settle the maturing payoff and

18.3 Replicable Payoff Streams

323

to rebalance the portfolio as specified by the strategy λ. In this sense, a replicating strategy can be viewed as being “self-financing”.  We exploit terminal-payoff equivalents to derive a variety of properties of replicable payoff streams in this and in the next section. We start by showing that the replicability of a payoff stream is tantamount to the replicability of any of its terminal-payoff equivalents. This establishes in which sense a replicable payoff stream and its terminalpayoff equivalent are “equivalent”. Lemma 18.3.4 For every payoff stream X ∈ X and every t ∈ {0, . . . , T } the following statements hold: (i) If λ ∈ S t is a replicating strategy for X, then the strategy μ ∈ S t defined by μu = λu +

u  Xs ηu , u ∈ {t, . . . , T − 1}, V [η] s=t s

is a replicating strategy for Tt (X). (ii) If μ ∈ S t,T is a replicating strategy for Tt (X), then the strategy λ ∈ S t defined by λu = μu −

u  Xs ηu , u ∈ {t, . . . , T − 1}, V [η] s=t s

is a replicating strategy for X.

Proof To show (i), note first that, for every u ∈ {t + 1, . . . , T − 1}, we have

V^Acq_u[μ] = V^Acq_u[λ] + ∑_{s=t}^{u} (X_s / V_s[η]) V_u[η] = V^Liq_u[λ] + ∑_{s=t}^{u−1} (X_s / V_s[η]) V_u[η] = V^Liq_u[μ],

showing that μ is self-financing. Since

V^Liq_T[μ] = V^Liq_T[λ] + ∑_{s=t}^{T−1} (X_s / V_s[η]) V_T[η] = ∑_{s=t}^{T} (X_s / V_s[η]) V_T[η] = T_t(X),

we see that μ is a replicating strategy for T_t(X).


Next, we focus on (ii). To prove that λ is a replicating strategy for X note that

V^Liq_u[λ] = V^Liq_u[μ] − ∑_{s=t}^{u−1} (X_s / V_s[η]) V_u[η] = V^Acq_u[μ] − ∑_{s=t}^{u} (X_s / V_s[η]) V_u[η] + X_u = V^Acq_u[λ] + X_u

for every u ∈ {t + 1, . . . , T − 1} and

V^Liq_T[λ] = V^Liq_T[μ] − ∑_{s=t}^{T−1} (X_s / V_s[η]) V_T[η] = T_t(X) − ∑_{s=t}^{T−1} (X_s / V_s[η]) V_T[η] = X_T.

This proves that λ is a replicating strategy for X. □

Proposition 18.3.5 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the following statements are equivalent:

(a) X is replicable at date t.
(b) T_t(X) is replicable at date t.
(c) T_{t+1}(X) is replicable at date t.

Proof That (a) and (b) are equivalent follows immediately from Lemma 18.3.4. To see that (b) and (c) are also equivalent it suffices to observe that

T_t(X) − T_{t+1}(X) = (X_t / V_t[η]) V_T[η] ∈ M_{t,T}

by Corollary 14.3.5 and to recall that M_{t,T} is a linear space. □

The following result shows that the set of replicable payoff streams at any given date is a linear subspace of the space of payoff streams.

Proposition 18.3.6 Let t ∈ {0, . . . , T − 1}. The following statements hold for all strategies λ, μ ∈ S_t, all payoff streams X, Y ∈ M_t, every a ∈ R, and every Z ∈ X_t:

(i) If λ is a replicating strategy for X and μ is a replicating strategy for Y, then λ + μ is a replicating strategy for X + Y.
(ii) If λ is a replicating strategy for X, then aλ is a replicating strategy for aX.
(iii) If λ is a replicating strategy for X, then Zλ is a replicating strategy for ZX_{t:T}.


Proof The assertions easily follow from the linearity of the liquidation and acquisition maps and from an argument similar to that of the proof of Proposition 14.2.13. The explicit verification is left as an exercise. □

Corollary 18.3.7 For every t ∈ {0, . . . , T − 1} the set M_t is a linear subspace of X satisfying ZX_{t:T} ∈ M_t for all Z ∈ X_t and X ∈ M_t.

We conclude this section by highlighting two useful structural properties of spaces of replicable payoff streams. First, whenever all the components of a payoff stream are replicable at a given date, the payoff stream is itself replicable at that date. Second, whenever a payoff stream is replicable at a given date, it remains replicable at all later dates.

Proposition 18.3.8 For every t ∈ {0, . . . , T − 1} the following statements hold:

(i) Every payoff stream X ∈ X such that X_{t+1}, . . . , X_T are replicable at date t is also replicable at date t, i.e.,

{0} × X_1 × · · · × X_t × M_{t,t+1} × · · · × M_{t,T} ⊂ M_t.

(ii) Let s ∈ {0, . . . , t − 1}. Every payoff stream that is replicable at date s is also replicable at date t, i.e., M_s ⊂ M_t.

Proof Assertion (ii) is a direct consequence of the definition of a replicable payoff stream. To prove assertion (i), take a payoff stream X ∈ X and assume that X_M ∈ M_{t,M} for every date M ∈ {t + 1, . . . , T}. Note that each of the payoff streams X^1 = (0, X_1, 0, . . . , 0), . . . , X^t = (0, . . . , 0, X_t, 0, . . . , 0) is clearly replicable at date t. Moreover, each of the payoff streams X^{t+1} = (0, . . . , 0, X_{t+1}, 0, . . . , 0), . . . , X^T = (0, . . . , 0, X_T) is replicable at date t by Remark 18.3.2. Since X is just the sum of the above payoff streams, it follows that X is also replicable at date t. □
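For readers who wish to experiment, the terminal-payoff equivalent used throughout this section can be computed mechanically on a finite sample space. The following Python snippet is a minimal sketch, not part of the text: it assumes a reference strategy η with strictly-positive value process and represents random variables as arrays over Ω; all concrete data are illustrative.

```python
# A minimal sketch of T_t(X) = sum_{s=t}^{T} (X_s / V_s[eta]) * V_T[eta],
# assuming a finite sample space Omega = {0,...,5} and dates 0, 1, 2.
import numpy as np

n_states = 6

# Value process of eta: one array per date (here a riskless unit, so all ones).
V_eta = [np.ones(n_states), np.ones(n_states), np.ones(n_states)]

# A payoff stream X = (X_0, X_1, X_2), each X_u constant on the atoms of P_u.
X = [np.zeros(n_states),
     np.array([1., 1., 1., 0., 0., 0.]),   # pays 1 on {w1,w2,w3} at date 1
     np.array([1., 0., 0., 0., 0., 0.])]   # pays 1 on {w1} at date 2

def terminal_payoff_equivalent(X, V_eta, t):
    """T_t(X) = sum_{s=t}^{T} (X_s / V_s[eta]) * V_T[eta]."""
    T = len(X) - 1
    return sum(X[s] / V_eta[s] * V_eta[T] for s in range(t, T + 1))

print(terminal_payoff_equivalent(X, V_eta, 0))  # -> [2. 1. 1. 0. 0. 0.]
```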


As an easy consequence of the previous result we show that every payoff stream is replicable at any given date whenever the market is complete.

Proposition 18.3.9 The following statements are equivalent:

(a) The market is complete.
(b) Every payoff stream is replicable at any date, i.e., M_t = X for all t ∈ {0, . . . , T − 1}.
(c) Every payoff stream is replicable at date 0, i.e., M_0 = X.

Proof That (a) implies (b) follows immediately from Proposition 18.3.8. Next, it is clear that (b) implies (c). Finally, assume that (c) holds and take an arbitrary X ∈ X_T. By assumption, the payoff stream X′ ∈ X given by X′ = (0, . . . , 0, X) is replicable at date 0, which implies that X ∈ M_{0,T} by Remark 18.3.2. This shows that M_{0,T} = X_T. Since the market is arbitrage free and, thus, the Law of One Price holds, it follows from Proposition 15.2.6 that the market is complete, showing that (a) holds and concluding the proof of the equivalence. □
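Completeness itself can be checked mechanically on a finite tree: the market is complete precisely when, at every node, the one-period payoffs of the traded securities span all payoffs over the successor atoms. The sketch below is an illustration under hypothetical data (two securities, six states), not a general-purpose implementation.

```python
# A hedged sketch of the completeness check behind Proposition 18.3.9.
import numpy as np

partitions = [
    [list(range(6))],                     # P_0
    [[0, 1, 2], [3, 4, 5]],               # P_1
    [[0], [1], [2], [3], [4], [5]],       # P_2
]
# Price processes S[i][t]: arrays over Omega, constant on atoms of P_t.
S = [
    [np.ones(6), np.ones(6), np.ones(6)],                        # riskless
    [np.full(6, 3.), np.array([6., 6., 6., 2., 2., 2.]),
     np.array([8., 4., 6., 2., 3., 1.])],                        # risky
]

def is_complete(S, partitions):
    for t in range(len(partitions) - 1):
        for atom in partitions[t]:
            succ = [a for a in partitions[t + 1] if set(a) <= set(atom)]
            # one-period payoff matrix: rows = securities, cols = successors
            M = np.array([[s[t + 1][a[0]] for a in succ] for s in S])
            if np.linalg.matrix_rank(M) < len(succ):
                return False
    return True

print(is_complete(S, partitions))  # False: 2 securities, 3 successor atoms
```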

18.4 Market-Consistent Prices

In this section we extend to payoff streams the market-consistent pricing theory developed for single-maturity contracts. We start by introducing the notion of a market-consistent price for a replicable payoff stream and we later show how to generalize it to nonreplicable payoff streams.

Market-Consistent Prices for Replicable Payoff Streams

The key to defining a market-consistent price for a replicable payoff stream at a given date is to show that all its replicating strategies at that date have the same initial acquisition value.

Proposition 18.4.1 Let t ∈ {0, . . . , T − 1}. For every replicable payoff stream X ∈ M_t and for all replicating strategies λ, μ ∈ S_t for X we have V^Acq_t[λ] = V^Acq_t[μ].

Proof It follows from the definition of a replicating strategy for X that

V^Acq_u[λ − μ] = V^Acq_u[λ] − V^Acq_u[μ] = V^Liq_u[λ] − X_u − V^Liq_u[μ] + X_u = V^Liq_u[λ − μ]


for every u ∈ {t + 1, . . . , T − 1}, showing that λ − μ is self-financing. Since

V^Liq_T[λ − μ] = V^Liq_T[λ] − V^Liq_T[μ] = X_T − X_T = 0,

the Law of One Price implies that

V^Acq_t[λ] − V^Acq_t[μ] = V^Acq_t[λ − μ] = 0.

This establishes the desired claim. □

In view of the preceding result, the following definition of a market-consistent price for a replicable payoff stream is well-posed.

Definition 18.4.2 (Market-Consistent Price) Let t ∈ {0, . . . , T − 1}. For every replicable payoff stream X ∈ M_t we set

π_t(X) := X_t + V^Acq_t[λ],

where λ ∈ S_t is any replicating strategy for X. The quantity π_t(X), which is independent of the chosen replicating strategy, is called the market-consistent price of X at date t. For convenience, we also set π_T(X) := X_T.

Fix a date t ∈ {0, . . . , T − 1} and consider a replicable payoff stream X ∈ M_t with replicating strategy λ ∈ S_t. The market-consistent price π_t(X) consists, by definition, of two components:

• The component X_t represents the price of the maturing payoff.
• The component V^Acq_t[λ] can be viewed as the replication cost of the residual payoff stream.

The interpretation of π_t(X) is similar to the one for single-maturity payoffs. To see this, suppose X is transacted at date t for a price p ∈ X_t:

• If p(A) > π_t(X)(A) for some A ∈ P_t, then no rational buyer would engage in the transaction, since he or she could directly access the market and “buy” the maturing payoff X_t together with the strategy λ, which entitles him or her to the residual payoff stream, at a cheaper price.
• If p(A) < π_t(X)(A) for some A ∈ P_t, then no rational seller would want to engage in the transaction, since he or she could directly access the market and “sell” the maturing payoff X_t together with the strategy λ, which commits him or her to delivering the same payoff stream, at a higher price.


Remark 18.4.3 The market-consistent price of a replicable payoff stream consisting of a single nonzero component coincides with the market-consistent price of its nonzero component, which is also replicable by Remark 18.3.2. Hence, the above notion of market consistency generalizes the notion of market consistency introduced for single-maturity payoffs; see also Proposition 18.4.4.

We start by showing that the market-consistent price of a replicable payoff stream can be expressed in terms of any of its terminal-payoff equivalents.

Proposition 18.4.4 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the following statements are equivalent:

(a) X is replicable at date t.
(b) T_t(X) is replicable at date t.
(c) T_{t+1}(X) is replicable at date t.

In this case, we have π_t(X) = π_{t,T}(T_t(X)) = X_t + π_{t,T}(T_{t+1}(X)).

Proof The equivalence follows from Proposition 18.3.5. To show the last assertion, let λ ∈ S_t be a replicating strategy for X and let μ ∈ S_{t,T} be a replicating strategy for T_t(X) satisfying item (i) in Lemma 18.3.4. It is immediate to see that

V^Acq_t[μ] = V^Acq_t[λ] + X_t.

This yields π_{t,T}(T_t(X)) = π_t(X). The other equality is obvious. □

By combining the properties of standard pricing functionals with those of terminal-payoff equivalents as stated in Proposition 18.2.2 we can immediately derive the following result. The easy verification is left as an exercise.

Theorem 18.4.5 For every t ∈ {0, . . . , T − 1} the map π_t : M_t → X_t is a strictly-positive forward-looking P_t-conditional linear functional.

The following recursion formula for market-consistent prices of replicable payoff streams follows from the time consistency of the standard pricing functionals. Note that time consistency for non-adjacent dates does not hold in general; see Exercise 18.7.5.


Theorem 18.4.6 (Time Consistency) For every t ∈ {0, . . . , T − 1} and every replicable payoff stream X ∈ M_t we have π_{t+1}(X) ∈ M_{t,t+1} and

π_t(X) = X_t + π_{t,t+1}(π_{t+1}(X)).

Proof Recall that X ∈ M_{t+1}. Since T_{t+1}(X) ∈ M_{t,T} and π_{t+1}(X) = π_{t+1,T}(T_{t+1}(X)) by Proposition 18.4.4, it follows from Theorem 15.4.2 that π_{t+1}(X) ∈ M_{t,t+1} and

π_t(X) = X_t + π_{t,T}(T_{t+1}(X)) = X_t + π_{t,t+1}(π_{t+1,T}(T_{t+1}(X))) = X_t + π_{t,t+1}(π_{t+1}(X)).

This yields the desired equality. □

It should be clear that, if each component of a payoff stream is replicable, then its market-consistent price is just the sum of the market-consistent prices of the individual components.

Proposition 18.4.7 Let t ∈ {0, . . . , T − 1}. For every replicable payoff stream X ∈ M_t such that X_u is replicable at date t for every u ∈ {t + 1, . . . , T} we have

π_t(X) = ∑_{u=t}^{T} π_{t,u}(X_u).

Proof Note that X can be written as the sum of the payoff streams X^1 = (0, X_1, 0, . . . , 0), . . . , X^T = (0, . . . , 0, X_T), which are all replicable at date t by Remark 18.3.2. Then, by Remark 18.4.3 and the linearity of π_t,

π_t(X) = ∑_{u=1}^{T} π_t(X^u) = ∑_{u=t}^{T} π_{t,u}(X_u).

This delivers the desired equality and concludes the proof. □


The above decomposition result only makes sense if every component of a replicable payoff stream is itself replicable, which, as we have shown, need not be the case for replicable payoff streams in general. It is not surprising, but all the more useful, that, to obtain the market-consistent price of a replicable payoff stream, we can apply extensions of the standard pricing functionals to the individual components. Before doing that we introduce some useful notation.

Definition 18.4.8 For every arbitrage-free family of pricing extensions ψ ∈ E(π) and every t ∈ {0, . . . , T − 1} we define a map ψ_t : X → X_t by setting

ψ_t(X) := ∑_{u=t}^{T} ψ_{t,u}(X_u).

For convenience, we also set ψ_T(X) := X_T for every X ∈ X.

Theorem 18.4.9 Let ψ ∈ E(π) be an arbitrage-free family of pricing extensions. For every t ∈ {0, . . . , T − 1} and every replicable payoff stream X ∈ M_t we have

π_t(X) = ψ_t(X) = ∑_{u=t}^{T} ψ_{t,u}(X_u).

Proof Note that, for every u ∈ {t + 1, . . . , T},

ψ_{t,T}((X_u / V_u[η]) V_T[η]) = ψ_{t,u}(ψ_{u,T}((X_u / V_u[η]) V_T[η])) = ψ_{t,u}(X_u)

by time consistency and conditionality. Since π_t(X) = ψ_{t,T}(T_t(X)) by Proposition 18.4.4, we infer that

π_t(X) = ∑_{u=t}^{T} ψ_{t,T}((X_u / V_u[η]) V_T[η]) = ∑_{u=t}^{T} ψ_{t,u}(X_u) = ψ_t(X).

This yields the desired equality and concludes the proof. □

In light of the link between pricing extensions and pricing densities established in Proposition 16.2.3, it is straightforward to formulate the preceding representation of the pricing functional in terms of pricing densities. The explicit verification is left as an exercise.


Theorem 18.4.10 Let D ∈ D(π) be an arbitrage-free family of pricing densities. For every t ∈ {0, . . . , T − 1} and every replicable payoff stream X ∈ M_t we have

π_t(X) = ∑_{u=t}^{T} E_P[D_{t,u} X_u | P_t].
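On a finite sample space, the formula of Theorem 18.4.10 can be evaluated directly once conditional expectations with respect to the partition P_t are available. The following Python sketch is purely illustrative: the probability P, the densities D_{t,u}, and the stream X below are hypothetical inputs chosen only to show the mechanics of the formula.

```python
# A small sketch of pi_t(X) = sum_{u=t}^{T} E_P[D_{t,u} X_u | P_t].
import numpy as np

def cond_exp(Y, P, partition):
    """E_P[Y | P_t] as an array over Omega, constant on each atom."""
    out = np.empty_like(Y, dtype=float)
    for atom in partition:
        w = P[atom]
        out[atom] = np.dot(w, Y[atom]) / w.sum()
    return out

def price(X, D, P, partitions, t):
    T = len(X) - 1
    return sum(cond_exp(D[t][u] * X[u], P, partitions[t])
               for u in range(t, T + 1))

# Illustrative one-period data: four states, uniform P, a pricing density.
P = np.full(4, 0.25)
partitions = [[list(range(4))], [[0], [1], [2], [3]]]
D = {0: {0: np.ones(4), 1: np.array([1.2, 1.2, 0.8, 0.8])}}
X = [np.zeros(4), np.array([1., 0., 1., 0.])]
print(price(X, D, P, partitions, 0))  # -> 0.5 in every state
```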

We provide a third representation of market-consistent prices based on pricing measures. To this end, we have to express payments in units of a fixed numéraire strategy. As usual, we use the “tilde” notation to denote the corresponding rescaled payments. The representation follows by combining Proposition 16.3.2 and Theorem 18.4.10. The explicit verification is left as an exercise.

Theorem 18.4.11 Let θ ∈ S_{0,T} be the numéraire strategy and let Q ∈ Q_θ(π) be a pricing measure. For every t ∈ {0, . . . , T − 1} and every replicable rescaled payoff stream X̃ ∈ M̃_t we have

π̃_t(X̃) = ∑_{u=t}^{T} E_Q[X̃_u | P_t].

Remark 18.4.12 The preceding result can be equivalently formulated in the original unit of account as follows; see Exercise 18.7.10: For every t ∈ {0, . . . , T − 1} and every replicable payoff stream X ∈ M_t we have

π_t(X) = V_t[θ] ∑_{u=t}^{T} E_Q[X_u / V_u[θ] | P_t].

Market-Consistent Prices for General Payoff Streams

In this section we extend the notion of a market-consistent price to payoff streams that are not replicable.

Standing Assumption In this section we assume that the market is incomplete.

As in the case of single-maturity payoffs, we start by introducing the notion of marketed bounds for a payoff stream.

Definition 18.4.13 (Marketed Bound) Let t ∈ {0, . . . , T − 1} and consider a payoff stream X ∈ X. A replicable payoff stream Z ∈ M_t is said to be a marketed lower bound for X at date t if Z_{t:T} ≤ X_{t:T}. The set of marketed lower bounds for X at date t is denoted by M^−_t(X).


Similarly, a replicable payoff stream Z ∈ M_t is said to be a marketed upper bound for X at date t if Z_{t:T} ≥ X_{t:T}. The set of marketed upper bounds for X at date t is denoted by M^+_t(X).

It is easy to see that every payoff stream admits marketed bounds.

Proposition 18.4.14 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the sets M^−_t(X) and M^+_t(X) are nonempty.

Proof For any marketed payoff stream U ∈ M_t with strictly-positive components we have −aU ≤ X ≤ aU for a ∈ (0, ∞) sufficiently large, showing that both sets M^−_t(X) and M^+_t(X) are nonempty. □

The notion of a market-consistent price is a straightforward generalization of the corresponding notion for single-maturity payoffs.

Definition 18.4.15 (Market-Consistent Price) Let t ∈ {0, . . . , T − 1} and consider a payoff stream X ∈ X and a random variable p ∈ X_t. We say that p is a market-consistent seller price for X at date t if for every marketed lower bound Z ∈ M^−_t(X) and for every A ∈ P_t we have

Z_{t:T} ≠ X_{t:T} on A ⇒ p(A) > π_t(Z|A).

The set of market-consistent seller prices for X at date t is denoted by Π^s_t(X).

Similarly, we say that p is a market-consistent buyer price for X at date t if for every marketed upper bound Z ∈ M^+_t(X) and for every A ∈ P_t we have

Z_{t:T} ≠ X_{t:T} on A ⇒ p(A) < π_t(Z|A).

The set of market-consistent buyer prices for X at date t is denoted by Π^b_t(X).

Finally, we say that p is a market-consistent price for X at date t if it is both a market-consistent seller and buyer price for X at date t. The set of market-consistent prices for X at date t is denoted by Π_t(X).

In line with the interpretation provided so far, no agent should be willing to sell a payoff stream for a price that is not a market-consistent seller price. In that case, he or she would be better off by selling for a higher price a replicable payoff stream that is more attractive from a seller's perspective. Similarly, no agent should be willing to buy a payoff stream for a price that is not a market-consistent buyer price. In that case, the price would be too high because it would be possible to obtain a replicable payoff stream that is more attractive from a buyer's perspective at a lower price. Hence, market-consistent prices constitute precisely the range of prices that are not in conflict with the prices in the market


for the basic securities. As a result, a transaction between agents that have full access to the underlying market should never take place at prices that are not market consistent.

Remark 18.4.16 It is not difficult to show that the market-consistent prices of a payoff stream consisting of a single nonzero component correspond precisely to the market-consistent prices of its nonzero component. This shows that the above notion of market consistency generalizes the corresponding notion for single-maturity payoffs; see also Proposition 18.4.18.

Terminal-payoff equivalents are as effective in deriving results for general payoff streams as they were for replicable ones. This will be a consequence of our next result establishing a link between market-consistent prices of payoff streams and those of the corresponding terminal-payoff equivalents. This, in particular, shows that there is compatibility between the notions of market consistency for single-payment payoffs and payoff streams consisting of a single nonzero component.

Lemma 18.4.17 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the following statements hold:

(i) {T_t(Z) ; Z ∈ M^−_t(X)} = M^−_{t,T}(T_t(X)).
(ii) {T_t(Z) ; Z ∈ M^+_t(X)} = M^+_{t,T}(T_t(X)).

Proof We only prove assertion (i). Take first Z ∈ M^−_t(X) and note that T_t(Z) belongs to M_{t,T} by Proposition 18.4.4. Since we clearly have T_t(Z) ≤ T_t(X), it follows that the inclusion “⊂” holds.

Conversely, take a payoff W ∈ M^−_{t,T}(T_t(X)) and define a payoff stream Z ∈ X by

Z = (X_0, . . . , X_{T−1}, X_T + W − T_t(X)).

It is easy to verify that T_t(Z) = W. Then, it follows from Proposition 18.4.4 that Z belongs to M_t. Moreover, as W ≤ T_t(X), it is immediate to see that Z ≤ X. This proves that Z ∈ M^−_t(X) and establishes the inclusion “⊃”. □

Proposition 18.4.18 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the following statements hold:

(i) Π^s_t(X) = Π^s_{t,T}(T_t(X)).
(ii) Π^b_t(X) = Π^b_{t,T}(T_t(X)).
(iii) Π_t(X) = Π_{t,T}(T_t(X)).


Proof We only prove (i). To show the inclusion “⊂”, take a market-consistent price p ∈ Π^s_t(X). Next, take any replicable payoff W ∈ M^−_{t,T}(T_t(X)) and A ∈ P_t such that W ≠ T_t(X) on A. It follows from Lemma 18.4.17 that W = T_t(Z) for some marketed bound Z ∈ M^−_t(X). Since Z is easily seen to satisfy Z_{t:T} ≠ X_{t:T} on A, we infer from Proposition 18.4.4 that

p(A) > π_t(Z|A) = π_{t,T}(T_t(Z)|A) = π_{t,T}(W|A).

This proves that p belongs to Π^s_{t,T}(T_t(X)).

To prove the inclusion “⊃”, take a market-consistent price p ∈ Π^s_{t,T}(T_t(X)). Next, take any Z ∈ M^−_t(X) and any A ∈ P_t such that Z_{t:T} ≠ X_{t:T} on A. Since we have T_t(Z) ∈ M^−_{t,T}(T_t(X)) by Lemma 18.4.17 and clearly T_t(Z) ≠ T_t(X) on A as well, it follows from Proposition 18.4.4 that

p(A) > π_{t,T}(T_t(Z)|A) = π_t(Z|A).

This shows that p belongs to Π^s_t(X) and concludes the proof. □

In view of Proposition 18.4.18, we can immediately derive from Proposition 17.2.5 the following characterization of market-consistent seller and buyer prices in terms of market-consistent prices. This allows us to translate all the following results about market-consistent prices into their seller's and buyer's counterparts.

Proposition 18.4.19 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the following statements hold:

(i) Π^s_t(X) = {p ∈ X_t ; p ≥ q for some q ∈ Π_t(X)}.
(ii) Π^b_t(X) = {p ∈ X_t ; p ≤ q for some q ∈ Π_t(X)}.

18.5 Characterizing Market-Consistent Prices

In this section we exploit the link between payoff streams and their terminal-payoff equivalents to derive a variety of representations of market-consistent prices. As usual, our interest is in market-consistent prices, rather than their buyer's and seller's versions, since these are the prices at which a potential transaction can occur. The corresponding results for buyer's and seller's prices can be derived from Proposition 18.4.19.

Standing Assumption In this section we assume that the market is incomplete.


Market-Consistent Prices and Pricing Extensions

The first representation is in terms of arbitrage-free extensions. In particular, this result shows that, for a replicable payoff stream, the general notion of a market-consistent price coincides with that based on replicability.

Theorem 18.5.1 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} we have

Π_t(X) = {ψ_t(X) ; ψ ∈ E(π)}.

Proof The assertion follows from Theorem 17.3.1 and Proposition 18.4.18 once we observe that, for every arbitrage-free family ψ ∈ E(π), we have

ψ_{t,T}(T_t(X)) = ∑_{u=t}^{T} ψ_{t,u}(ψ_{u,T}((X_u / V_u[η]) V_T[η]))
             = ∑_{u=t}^{T} ψ_{t,u}((X_u / V_u[η]) ψ_{u,T}(V_T[η]))
             = ∑_{u=t}^{T} ψ_{t,u}(X_u)
             = ψ_t(X)

by time consistency and conditionality. □

Market-Consistent Prices and Pricing Densities

We can invoke Proposition 16.2.3 to reformulate the preceding theorem in terms of pricing densities. The explicit verification is left as an exercise.

Theorem 18.5.2 For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} we have

Π_t(X) = { ∑_{u=t}^{T} E_P[D_{t,u} X_u | P_t] ; D ∈ D(π) }.

Market-Consistent Prices and Pricing Measures

We establish a third characterization of market-consistent prices based on pricing measures. To this effect, we have to express payments in units of a given numéraire strategy. As usual, we use the “tilde” notation to denote the corresponding rescaled payments; see Remark 14.5.6. In view of the link between pricing densities and pricing measures established in Proposition 16.3.2, the statement follows immediately from Theorem 18.5.2.

Theorem 18.5.3 Let θ ∈ S_{0,T} be the numéraire strategy. For every payoff stream X̃ ∈ X̃ and every t ∈ {0, . . . , T − 1} we have

Π̃_t(X̃) = { ∑_{u=t}^{T} E_Q[X̃_u | P_t] ; Q ∈ Q_θ(π) }.

Remark 18.5.4 The preceding result can be equivalently formulated in the original unit of account as follows; see Exercise 18.7.10: For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} we have

Π_t(X) = { V_t[θ] ∑_{u=t}^{T} E_Q[X_u / V_u[θ] | P_t] ; Q ∈ Q_θ(π) }.

18.6 Sub- and Superreplication Prices

This section features a number of results for the bounds of the set of market-consistent prices of a payoff stream. As a preliminary result, we highlight that the set of market-consistent prices of a payoff stream is a bounded interval-like set in the sense specified below. This makes the study of its bounds interesting. The result follows from the corresponding result for single-maturity payoffs, namely Proposition 17.4.1, in view of Proposition 18.4.18.

Proposition 18.6.1 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the set Π_t(X) is bounded and satisfies the following “interval-like” property:

p, q ∈ Π_t(X), r ∈ X_t, p ≤ r ≤ q ⇒ r ∈ Π_t(X).

In particular, {p(A) ; p ∈ Π_t(X)} is a bounded interval for every A ∈ P_t.

In line with the terminology used so far, the bounds of the set of market-consistent prices of a payoff stream are called its sub- and superreplication price, respectively.

Definition 18.6.2 (Sub/Superreplication Price) Let t ∈ {0, . . . , T − 1} and consider a payoff stream X ∈ X. The subreplication price of X at date t is defined as

π^−_t(X) := inf Π_t(X).

The superreplication price of X at date t is defined as

π^+_t(X) := sup Π_t(X).

For convenience, we also set π^−_T(X) := π^+_T(X) := X_T.

We start by showing that sub- and superreplication prices can be equivalently expressed in terms of terminal-payoff equivalents. This is a direct consequence of Proposition 18.4.18.

Proposition 18.6.3 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the following statements hold:

(i) π^−_t(X) = π^−_{t,T}(T_t(X)) = X_t + π^−_{t,T}(T_{t+1}(X)).
(ii) π^+_t(X) = π^+_{t,T}(T_t(X)) = X_t + π^+_{t,T}(T_{t+1}(X)).

The next result justifies the terminology of sub- and superreplication price. It follows from the corresponding result for single-maturity payoffs, namely Proposition 17.4.3, in view of Lemma 18.4.17 and Proposition 18.6.3. The explicit verification is left as an exercise.

Proposition 18.6.4 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the following statements hold:

(i) π^−_t(X) = sup{π_t(Z) ; Z ∈ M^−_t(X)}.
(ii) π^+_t(X) = inf{π_t(Z) ; Z ∈ M^+_t(X)}.

Moreover, the supremum in (i) and the infimum in (ii) are attained.

A number of important properties of sub- and superreplication prices are collected in the next proposition. The result follows as a direct consequence of the characterization of market-consistent prices recorded in Theorem 18.5.1. Alternatively, it can be derived from the corresponding result for single-maturity payoffs, namely Proposition 17.4.4, by exploiting the link through terminal-payoff equivalents established in Proposition 18.6.3.

Proposition 18.6.5 Let t ∈ {0, . . . , T − 1}. The following statements hold:

(i) π^+_t(X) = −π^−_t(−X) for every X ∈ X.
(ii) π^−_t : X → X_t is a superlinear increasing forward-looking P_t-conditional extension of π_t such that:
    (a) π^−_t(X|A) < 0 for all X ∈ X and A ∈ P_t with X_{t:T} ≤ 0 and X_{t:T} ≠ 0 on A.
    (b) π^−_t(X + Z) = π^−_t(X) + π_t(Z) for all X ∈ X and Z ∈ M_t.
(iii) π^+_t : X → X_t is a sublinear increasing forward-looking P_t-conditional extension of π_t such that:
    (a) π^+_t(X|A) > 0 for all X ∈ X and A ∈ P_t with X_{t:T} ≥ 0 and X_{t:T} ≠ 0 on A.
    (b) π^+_t(X + Z) = π^+_t(X) + π_t(Z) for all X ∈ X and Z ∈ M_t.


The following recursion formula establishes the time consistency of sub- and superreplication prices and is a direct consequence of the corresponding result for single-maturity payoffs, namely Proposition 17.4.6, in view of Proposition 18.6.3. Note that time consistency for non-adjacent dates does not hold in general; see Exercise 18.7.8.

Proposition 18.6.6 (Time Consistency) Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the following statements hold:

(i) π^−_t(X) = X_t + π^−_{t,t+1}(π^−_{t+1}(X)).
(ii) π^+_t(X) = X_t + π^+_{t,t+1}(π^+_{t+1}(X)).

Representing Sub- and Superreplication Prices

The characterization of market-consistent prices recorded in Theorem 18.5.1 immediately implies the following representation of sub- and superreplication prices in terms of arbitrage-free families of pricing extensions.

Proposition 18.6.7 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the following statements hold:

(i) π^−_t(X) = inf{ψ_t(X) ; ψ ∈ E(π)}.
(ii) π^+_t(X) = sup{ψ_t(X) ; ψ ∈ E(π)}.

The reformulation of the preceding result in terms of pricing densities follows from Theorem 18.5.2.

Proposition 18.6.8 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the following statements hold:

(i) π^−_t(X) = inf{ ∑_{u=t}^{T} E_P[D_{t,u} X_u | P_t] ; D ∈ D(π) }.
(ii) π^+_t(X) = sup{ ∑_{u=t}^{T} E_P[D_{t,u} X_u | P_t] ; D ∈ D(π) }.

We establish a third representation of sub- and superreplication prices that follows from Theorem 18.5.3.

Proposition 18.6.9 Let θ ∈ S_{0,T} be the numéraire strategy and t ∈ {0, . . . , T − 1}. For every payoff stream X̃ ∈ X̃ the following statements hold:

(i) π̃^−_t(X̃) = inf{ ∑_{u=t}^{T} E_Q[X̃_u | P_t] ; Q ∈ Q_θ(π) }.
(ii) π̃^+_t(X̃) = sup{ ∑_{u=t}^{T} E_Q[X̃_u | P_t] ; Q ∈ Q_θ(π) }.
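The representation in Proposition 18.6.9 lends itself to direct computation on a finite tree: at a fixed date the sub- and superreplication prices are the optimal values of two linear programs over the set of pricing measures. The following minimal Python sketch, written under illustrative assumptions (a one-period market with a riskless unit of account and one risky security; all data hypothetical), uses scipy.optimize.linprog. Note that linear programming returns the closure of the price interval; by Theorem 18.6.11 the set of market-consistent prices is the corresponding open interval when the stream is not replicable.

```python
# Sub- and superreplication prices at t = 0 as linear programs over Q.
import numpy as np
from scipy.optimize import linprog

# One-period market, three states, zero interest (riskless price = 1).
S0, S1 = 2.0, np.array([3.0, 2.0, 1.0])   # risky security
X1 = np.maximum(S1 - 2.0, 0.0)            # payoff stream (0, X_1): a call

# Pricing measures: Q >= 0, sum Q = 1, E_Q[S_1] = S_0.
A_eq = np.vstack([np.ones(3), S1])
b_eq = np.array([1.0, S0])

lo = linprog(c=X1, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
hi = linprog(c=-X1, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print(lo.fun, -hi.fun)  # closure of {E_Q[X_1] : Q pricing measure} = [0, 0.5]
```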


Remark 18.6.10 The preceding result can be equivalently formulated in the original unit of account as follows; see Exercise 18.7.10: For every payoff stream X ∈ X and every t ∈ {0, . . . , T − 1} the following statements hold:

(i) π^−_t(X) = V_t[θ] inf{ ∑_{u=t}^{T} E_Q[X_u / V_u[θ] | P_t] ; Q ∈ Q_θ(π) }.
(ii) π^+_t(X) = V_t[θ] sup{ ∑_{u=t}^{T} E_Q[X_u / V_u[θ] | P_t] ; Q ∈ Q_θ(π) }.

Market-Consistent Prices and Replicability

The following additional characterization of the set of market-consistent prices for a payoff stream follows immediately by applying to terminal-payoff equivalents the corresponding result for single-maturity payoffs, namely Theorem 17.4.11. More precisely, it can be derived by combining Propositions 18.4.4 and 18.4.18.

Theorem 18.6.11 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X and every A ∈ P_t the following statements hold:

(i) If 1_A X_{t:T} is replicable at date t, then π^−_t(X|A) = π^+_t(X|A) and

{p(A) ; p ∈ Π_t(X)} = {π_t(1_A X_{t:T}|A)}.

(ii) If 1_A X_{t:T} is not replicable at date t, then π^−_t(X|A) < π^+_t(X|A) and

{p(A) ; p ∈ Π_t(X)} = (π^−_t(X|A), π^+_t(X|A)).

As a direct consequence of Theorem 18.6.11 we derive the following characterization of replicability for a payoff stream.

Proposition 18.6.12 Let t ∈ {0, . . . , T − 1}. For every payoff stream X ∈ X the following statements are equivalent:

(a) X is replicable at date t.
(b) X has a unique market-consistent price at date t.
(c) π^−_t(X) is a market-consistent price for X at date t.
(d) π^+_t(X) is a market-consistent price for X at date t.
(e) π^−_t(X) = π^+_t(X).

18.7 Exercises

In all exercises below we consider the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the market is arbitrage free.


Exercise 18.7.1 Let t ∈ {0, . . . , T}. Prove that for every strategy θ ∈ S_{0,T} with strictly-positive value process the map T_t : X → X_T is linear and satisfies the following properties:

(i) T_t(X) = T_t(X_{t:T}) for every X ∈ X.
(ii) T_t(X) ≥ 0 for every X ∈ X such that X_{t:T} ≥ 0.
(iii) T_t(X) ≥ 0 and T_t(X) ≠ 0 for every X ∈ X such that X_{t:T} ≥ 0 and X_{t:T} ≠ 0.
(iv) T_t(1_A X_{t:T}) = 1_A T_t(X) for all X ∈ X and A ∈ P_t.

Exercise 18.7.2 Let t ∈ {0, . . . , T − 1}. Show that the following statements hold for all strategies λ, μ ∈ S_t, all payoff streams X, Y ∈ M_t, every a ∈ R, and every Z ∈ X_t:

(i) If λ is a replicating strategy for X and μ is a replicating strategy for Y, then λ + μ is a replicating strategy for X + Y.
(ii) If λ is a replicating strategy for X, then aλ is a replicating strategy for aX.
(iii) If λ is a replicating strategy for X, then Zλ is a replicating strategy for ZX_{t:T}.

Exercise 18.7.3 Show that for some t ∈ {0, . . . , T − 1} it may happen that a payoff stream X ∈ X is replicable at date t even though none of the payoffs X_{t+1}, . . . , X_T is replicable at date t.

Exercise 18.7.4 Let t ∈ {0, . . . , T − 1}. Prove that the following statements are equivalent:

(a) A payoff stream is replicable at date t if and only if all its components maturing after date t are replicable at date t, i.e., we have

{0} × X_1 × · · · × X_t × M_{t,t+1} × · · · × M_{t,T} = M_t.

(b) The market is (t, M)-complete for every M ∈ {t + 1, . . . , T − 1}.

Exercise 18.7.5 Show that each of the following statements may hold for some payoff stream X ∈ X and some dates 0 ≤ s < t < T:

(i) X is replicable at date s but π_t(X) is not replicable at date s.
(ii) X and π_t(X) are both replicable at date s but π_{s,t}(π_t(X)) ≠ π_s(X).

This shows that the time-consistency formula for market-consistent prices of replicable payoffs recorded in Theorem 15.4.2 does not hold for payoff streams.


Exercise 18.7.6 Let t ∈ {0, . . . , T − 1}. Show that for every payoff stream X ∈ X the following statements hold:

(i) M^+_t(X) = −M^−_t(−X).
(ii) Π^b_t(X) = −Π^s_t(−X).
(iii) Π_t(−X) = −Π_t(X).

Exercise 18.7.7 Let 0 ≤ s < t < T. Show that for every payoff stream X ∈ X and every arbitrage-free extension ψ_{s,t} ∈ E(π_{s,t}) we have

p ∈ Π_t(X) ⇒ ψ_{s,t}(p) ∈ Π_s(X).

Exercise 18.7.8 Show that each of the following statements may hold for some payoff stream X ∈ X and some dates 0 ≤ s < t < T:

(i) π^−_{s,t}(π^−_t(X)) ≠ π^−_s(X).
(ii) π^+_{s,t}(π^+_t(X)) ≠ π^+_s(X).

This shows that the time-consistency formula for sub- and superreplication prices recorded in Proposition 17.4.6 does not hold for payoff streams.

Exercise 18.7.9 Let Ω = {ω_1, . . . , ω_6} and let P(ω_1) = · · · = P(ω_6) = 1/6. Consider a two-period market with information structure P given by

P_0 = {Ω}, P_1 = {{ω_1, ω_2, ω_3}, {ω_4, ω_5, ω_6}}, P_2 = {{ω_1}, {ω_2}, {ω_3}, {ω_4}, {ω_5}, {ω_6}}.

We consider two basic securities. The first is riskless:

S^1_0 = 1, S^1_1 = 1, S^1_2 = 1 in every state.

The second is risky:

S^2_0 = 3; S^2_1 = 6 on {ω_1, ω_2, ω_3} and S^2_1 = 2 on {ω_4, ω_5, ω_6}; S^2_2 = 8, 4, 6, 2, 3, 1 on ω_1, . . . , ω_6, respectively.

This market is incomplete and arbitrage free by Exercise 16.4.8. Consider the payoff stream X ∈ X given by X = (0, 1_{{ω_1,ω_2}}, 1_{{ω_1}}).

(i) Determine the set of market-consistent prices of X at date 0.
(ii) Determine the sub- and superreplication price of X at date 0.
(iii) Deduce whether X is replicable at date 0 or not.

18 Market-Consistent Prices for Payoff Streams

Exercise 18.7.10 Let R ∈ R and take t ∈ {0, . . . , T − 1}. Prove that 

  0 T X X *   Mt = X ∈ X ; ,..., ∈ Mt . R0 RT *t we have ∈M Show that for every replicable payoff stream X    = Rt π  πt X



0 T X X ,..., R0 RT

 .

 show that ∈X Finally, for every payoff stream X    = Rt t t X



T 0 X X ,..., R0 RT

 ,

   = Rt πt− X

 T  X0 X , ,..., R0 RT

   = Rt πt+  πt+ X

 T  X0 X ,..., . R0 RT

 πt−

19

Market-Consistent Prices for American Options

This chapter is devoted to extending the theory of market-consistent prices to American options. These are contracts that are specified by a stream of nonnegative payoffs and have the following special feature: At any date during the lifetime of the contract, the holder can either exercise the option to receive the payment specified for that date, giving up all future payoffs, or maintain the option to exercise later, giving up the payment specified for that date. This additional optionality makes dealing with American options a more subtle enterprise than dealing with plain payoff streams.

Standing Assumption Throughout the entire chapter we work in the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the market is arbitrage free.

19.1

American Options

An American option is a special financial contract that is characterized by a positive Padapted process C ∈ X of the form C = (0, C1 , . . . , CT ). However, in contrast to a regular payoff stream, these payoffs represent optional payments to the owner who receives them according to the following modality: • At date 1 the owner may exercise the option and receive the payoff C1 . In this case, the owner relinquishes any claim to future payments. © Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1_19

343

344

19 Market-Consistent Prices for American Options

• At date 2 the owner gets nothing if the option has already been exercised, and may otherwise exercise and receive the payoff C2 . In this case, the owner relinquishes any claim to future payments. • We proceed in the same way until date T , where the agent gets nothing if the option has been already exercised and the payoff CT otherwise. It follows that, although American options are uniquely determined by the same mathematical object as the payoff streams studied in the previous chapter, their financial interpretation is radically different. For this reason, we introduce the following special notation for American options. Definition 19.1.1 (American Option) An American option is a financial contract that is represented by a positive P-adapted process C = (0, C1 , . . . , CT ) and that has the optionality feature described in the preceding discussion. The set of American options is denoted by A.  Remark 19.1.2 (American vs European) The reason for using the expression “American” is to distinguish the payoff streams we will be studying in this chapter from the payoff streams studied in the previous chapter, which are sometimes called “European”. The use of the terms “European” and “American” originally applied to the options to either buy or sell a given asset at a pre-specified price. European options allow exercising only when the contract matures and American options at any date between inception and maturity. Nobel laureate Paul Samuelson recalls how he came up with these terms in a 2005 interview conducted by Robert Merton—like Samuelson also a Nobel laureate1 : The put and call dealers, who had a little trade association, were mostly European refugees in dingy offices outside of the high rent areas. I would go there and ask my innocent questions. But talking to [. . . ] one of those guys, he said, I don’t understand. Why are you here? What are you up to? I said I am trying to study the science of option pricing. He said, that’s hopeless. You will never succeed. I said why not? He said it takes a European kind of mind to understand this mystery. So in revenge, I gave the name, European option, to the simpler option and reverse for the American option.

In this chapter we use the term “American option” to refer to a generic contract that possesses the optionality features discussed above. 

1 A transcript of the interview can be downloaded from Robert Merton’s webpage robertcmerton.com/wp-content/uploads/2018/03/Samuelson-Interview.pdf.

19.1 American Options

345

As the next example shows, every single-maturity contract can be viewed as an American option. In other words, American options generalize the concept of a payoff with a fixed maturity. Example 19.1.3 (Single-Maturity Payoff) For a given positive payoff X ∈ XT define the American option C ∈ A by setting C = (0, . . . , 0, X). Since no rational agent will choose to exercise at any time other than T , the optionality embedded in C is only illusory and owning C is just equivalent to owning the payoff X. Of course, a similar construction can be made with positive payoffs maturing at any date M ∈ {1, . . . , T }. In Sect. 19.3 we discuss a different way of interpreting a replicable fixed maturity payoff as an American option which will prove extremely useful when discussing marketed bounds.  The most well-known genuine American options are American calls and puts. Example 19.1.4 (American Call/Put Option) Fix i ∈ {1, . . . , N} and p ∈ [0, ∞). The American call option on the ith basic security with strike price p is the American option C ∈ A defined by C = (0, max{S1i − p, 0}, . . . , max{STi − p, 0}). This contract is equivalent to the owner having the right to buy (call) the ith security at any date t ∈ {1, . . . , T } for the price p instead of Sti . Similarly, the American put option on the ith basic security with strike price p is the American option C ∈ A defined by C = (0, max{p − S1i , 0}, . . . , max{p − STi , 0}). This time, the contract is equivalent to the owner having the right to sell (put) the ith  security at any date t ∈ {1, . . . , T } for the price p instead of Sti . Remark 19.1.5 (Notation for Rescaled American Options) Let R ∈ R be a rescaling  the set of rescaled American options. The rescaled version process. We denote by A  ∈A  given by of C ∈ A is the American option C  := (0, R1 C1 , . . . , RT CT ). C As observed in previous chapters, this notation is imprecise because it does not make explicit reference to the dependence on the chosen rescaling process. However, this omission should cause no confusion because we only deal with one rescaling process at a time. 

346

19.2

19 Market-Consistent Prices for American Options

Exercise Strategies

The various possible exercise patterns the holder of an American option may adopt over time are called exercise strategies. Definition 19.2.1 (Exercise Strategy) Let t ∈ {0, . . . , T }. An exercise strategy starting at date t is a random variable τ ∈ L satisfying the following properties: (1) τ (ω) ∈ {t, . . . , T } for every ω ∈ . (2) {τ = u} ∈ Fu for every u ∈ {t, . . . , T }. The set of exercise strategies starting at date t is denoted by Tt .



Consider an American option C ∈ A and let t ∈ {0, . . . , T } be a given date. We attach the following operational interpretation to an exercise strategy τ ∈ Tt : • If at date t the state of the economy is At ∈ Pt and At ⊂ {τ = t}, then τ prescribes exercising at date t. Otherwise, exercise is delayed to a later date. • If at date t + 1 the economy is in the state At +1 ∈ Pt +1 , where At +1 ⊂ At , and At +1 ⊂ {τ = t + 1}, then τ prescribes exercising at date t + 1. Otherwise, exercise is delayed to a later date. • This process stops as soon as the option is exercised, which occurs, at the very latest, when the terminal date T is reached. Given this interpretation, it is clear why we need condition (2) in the definition of an exercise strategy: at each date, it should be possible to decide whether or not to stop based only on the information available up to that date. Remark 19.2.2 (i) Note that for all 0 ≤ s ≤ t ≤ T we have TT ⊂ Tt ⊂ Ts ⊂ T0 . (ii) Let C ∈ A be an American option and consider an exercise strategy τ ∈ T0 . Since {τ = 0} ∈ F0 and since P0 is the trivial partition of , we always have either {τ = 0} = ∅ or {τ = 0} = . Clearly, as C0 = 0, no rational buyer will contemplate exercising at time 0. (iii) In the theory of stochastic processes, a random variable τ :  → {0, . . . , T } such that {τ = u} ∈ Fu for every u ∈ {0, . . . , T } is called a stopping time with respect to P. Hence, an exercise strategy starting at date t ∈ {0, . . . , T } is nothing but a stopping time taking values into the set {t, . . . , T }. 

19.2 Exercise Strategies

347

The following result provides a useful characterization of exercise strategies. We use freely that, by Proposition 9.2.5, for every u ∈ {0, . . . , T }, the field Fu is closed under unions, intersections, and the building of complements. Proposition 19.2.3 Let t ∈ {0, . . . , T − 1}. For every random variable of the form τ :  → {t, . . . , T } the following statements are equivalent: (a) τ is an exercise strategy (in Tt ). (b) {τ ≤ u} ∈ Fu for every u ∈ {t, . . . , T }. Proof To prove that (a) implies (b), assume that τ is an exercise strategy starting at date t and take u ∈ {t, . . . , T }. Since {τ = s} belongs to Fu we immediately obtain that {τ ≤ u} =

u 

{τ = s} ∈ Fu .

s=t

To prove that (b) implies (a), assume that {τ ≤ u} ∈ Fu holds for every u ∈ {t, . . . , T }. Note that, since τ ∈ Tt , we have {τ = t} = {τ ≤ t} ∈ F0 . Moreover, for every date u ∈ {t + 1, . . . , T }, we have {τ = u} = {τ ≤ u} ∩ {τ ≤ u − 1}c ∈ Fu . This concludes the proof of the equivalence.



It is useful to observe that the class of exercise strategies starting from a given date is closed under a variety of basic operations. Here, we focus on operations involving two exercise strategies. The generalization to an arbitrary finite number of exercise strategies is straightforward. Proposition 19.2.4 For every t ∈ {0, . . . , T } and for all exercise strategies σ, τ ∈ Tt the following statements hold: (i) min{σ, τ } and max{σ, τ } are exercise strategies in Tt . (ii) 1E σ + 1E c τ is an exercise strategy in Tt for every E ∈ Ft .
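The characterization in Proposition 19.2.3 is easy to test numerically on a finite information structure: one only has to check that each event {τ ≤ u} is a union of atoms of P_u. The following Python sketch uses illustrative data.

```python
# Checking the exercise-strategy (stopping-time) property on a finite tree.
import numpy as np

partitions = [[list(range(6))], [[0, 1, 2], [3, 4, 5]],
              [[i] for i in range(6)]]

def is_exercise_strategy(tau, partitions, t):
    T = len(partitions) - 1
    if not all(t <= tau[w] <= T for w in range(len(tau))):
        return False
    for u in range(t, T + 1):
        event = {w for w in range(len(tau)) if tau[w] <= u}
        # {tau <= u} must be a union of atoms of P_u
        if any(set(atom) & event and not set(atom) <= event
               for atom in partitions[u]):
            return False
    return True

print(is_exercise_strategy(np.array([1, 1, 1, 2, 2, 2]), partitions, 1))  # True
print(is_exercise_strategy(np.array([1, 2, 1, 2, 2, 2]), partitions, 1))  # False
```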

348

19 Market-Consistent Prices for American Options

Proof To show (i), take u ∈ {t, . . . , T } and note that {σ ≤ u}, {τ ≤ u} ∈ Fu by Proposition 19.2.3. Then, we easily see that {min{σ, τ } ≤ u} = {σ ≤ u} ∪ {τ ≤ u} ∈ Fu , {max{σ, τ } ≤ u} = {σ ≤ u} ∩ {τ ≤ u} ∈ Fu . Applying Proposition 19.2.3 again delivers the desired assertions. To prove (ii), take a date u ∈ {t, . . . , T } and note that {σ ≤ u}, {τ ≤ u} ∈ Fu by Proposition 19.2.3. Moreover, take E ∈ Ft . Since we have {1E σ + 1E c τ ≤ u} = (E ∩ {σ ≤ u}) ∪ (E c ∩ {τ ≤ u}) ∈ Fu . The desired claim follows immediately from Proposition 19.2.3.



Exercise Strategies and Payoff Streams Once an exercise strategy has been chosen, an American option turns into a regular payoff stream. This simple observation plays a fundamental role in our treatment of American options. Definition 19.2.5 (Payoff Stream Associated to an Exercise Strategy) Consider an American option C ∈ A. For every exercise strategy τ ∈ T0 the payoff stream C(τ ) ∈ X defined by C(τ ) := (0, 1{τ =1} C1 , . . . , 1{τ =T } CT ) 

is called the payoff stream associated to τ .

Let C ∈ A be a given American option and take t ∈ {0, . . . , T }. For every exercise strategy τ ∈ Tt the payoff stream C(τ ) is precisely the payoff stream the holder of C would receive if he or she were to follow the exercise strategy τ : • For every u ∈ {0, . . . , t − 1} we have 1{τ =u} Cu = 0 because the exercise strategy starts at date t. • For every u ∈ {t, . . . , T } and every ω ∈  we have ⎧ ⎨C (ω) u (1{τ =u} Cu )(ω) = ⎩0

if τ (ω) = u, if τ (ω) = u,

19.2 Exercise Strategies

349

showing that, at time u, the holder of the option receives the payoff Cu if τ prescribes to exercise at time u and nothing otherwise. Note that the sets {τ = 0}, . . . , {τ = T } are pairwise disjoint. This is consistent with the fact that the owner of the option receives only one payment during the lifetime of the contract. American Options as “Baskets” of Payoff Streams The preceding discussion shows that, provided it has not yet been exercised, an American option C ∈ A at date t ∈ {0, . . . , T } can be thought of as giving the holder the choice of selecting one (and only one) of the payoff streams in the “basket” {C(τ ) ; τ ∈ Tt }. For the seller the problem is that he or she will only learn about the particular choice of the buyer after having sold the option (otherwise it wouldn’t be an option) and, in fact, only at the moment the holder chooses to exercise it. This makes the study of American options more difficult than the study of regular payoff streams, for which the flow of payments is unambiguous from the very beginning. Associated Payoff Streams and Pricing Extensions Recall from Definition 18.4.8 that, given an arbitrage-free family ψ ∈ E(π), for every payoff stream X ∈ X and for every t ∈ {0, . . . , T − 1} we use the compact notation ψt (X) =

T 

ψt,u (Xu )

u=t

and its “localized” version ψt (X|A) =

T 

ψt,u (Xu |A)

u=t

for every A ∈ Pt . Moreover, we set ψT (X) = XT . Even when dealing with replicable payoff streams, it is convenient to make a detour over arbitrage-free families and use ψt to determine prices. This is because, as we have pointed out in Chap. 18, the individual components of a replicable payoff stream may not be themselves replicable, but we can always apply an arbitrage-free extension to the individual components as shown in Theorem 18.4.9. It is useful to make the following observation in the specific context of American options and the payoff streams associated to individual exercise strategies. The statements are a direct consequence of Theorem 18.5.1 and Proposition 18.4.19.

350

19 Market-Consistent Prices for American Options

Proposition 19.2.6 Let t ∈ {0, . . . , T }. For every American option C ∈ A and every exercise strategy τ ∈ Tt the following statements hold: (i) st (C(τ )) = {p ∈ Xt ; p ≥ ψt (C(τ )) for some ψ ∈ E(π)}. (ii) bt (C(τ )) = {p ∈ Xt ; p ≤ ψt (C(τ )) for some ψ ∈ E(π )}. (iii) t (C(τ )) = {ψt (C(τ )) ; ψ ∈ E(π)}. We conclude this section with a result about the attainability of certain optimization problems over a subset of exercise strategies that will prove useful in several instances later on. Proposition 19.2.7 Let t ∈ {0, . . . , T − 1} and consider a set S ⊂ Tt satisfying σ, τ ∈ S, E ∈ Ft ⇒ 1E σ + 1E c τ ∈ S. For every American option C ∈ A the following statements hold: (i) For every arbitrage-free family ψ ∈ E(π ) there exists σ ∈ S such that ψt (C(σ )) = max ψt (C(τ )). τ ∈S

(ii) There exist σ − , σ + ∈ S such that πt− (C(σ − ) = max πt− (C(τ )), τ ∈S

πt+ (C(σ + ) = max πt+ (C(τ )). τ ∈S

Proof We first prove (i). Since S is finite, for every A ∈ Pt there exists σA ∈ S such that max ψt (C(τ )|A) = ψt (C(σA )|A). τ ∈S

Consider now the exercise strategy in Tt defined by σ =

 A∈Pt

1A σA .

19.3 Marketed Bounds

351

Note that σ ∈ S by our assumption on S. Moreover, by definition, for every A ∈ Pt we have that σ = σA on A. Hence, by conditionality, we have ψt (C(σ )|A) = ψt (C(σA )|A) = max ψt (C(τ )|A). τ ∈ Tt

This establishes the desired statement. To prove (ii), it suffices to substitute πt− , respec tively πt+ , for ψt in the proof of (i).

19.3

Marketed Bounds

The key question that will occupy us in the remainder of this chapter is: If an American option were to be transacted, which prices should the seller consider asking and which prices should the buyer consider paying? In line with our approach, we are only looking for a range of prices so that, if the option is transacted at those prices, neither seller nor buyer could have done better by directly trading in the market. In other words, we are looking for a notion of market consistency. In this section we prepare the ground for the definition of market-consistent prices. The Limits of Replication Viewing terminal payoffs as American options as in Example 19.1.3 is, of course, not particularly interesting but it serves to make the point that the theory to be developed here is an extension of the theory developed so far for single-maturity payoffs. However, when replicable, a terminal payoff can be represented as an American option in a more insightful way that highlights the option of the holder to sell the payoff prior to maturity. This representation, which is discussed in our next example, will play a critical role in much of our treatment of American options. Example 19.3.1 (Replicable Single-Maturity Payoff) Let t ∈ {0, . . . , T − 1} and consider a positive replicable payoff Z ∈ Mt,T . Since it is always possible to sell Z at each time prior to maturity, say u ∈ {t, . . . , T − 1}, for the price πu,T (Z), owning Z is equivalent to owning the American option C Z,t ∈ A given by C Z,t := (0, . . . , 0, πt,T (Z), . . . , πT −1,T (Z), Z). When there is no ambiguity about the date t, as will always be the case in the sequel, we simply write C Z . In Remark 19.5.2 we explain that, for a replicable terminal payoff, the representation in this example and in Example 19.1.3 are economically equivalent.  So far, the underlying principle behind the pricing of financial contracts has been that of replication. Roughly speaking, a financial contract is replicable if its payoff, or payoff

352

19 Market-Consistent Prices for American Options

stream, can be “produced” by dynamically trading in the basic traded securities in a selffinancing way, i.e., without injecting or withdrawing funds in the process. Since American options are essentially a basket of payoff streams, it may appear at first sight that an American option should be called replicable if it is “perfectly replicable”, i.e., if all the associated payoffs streams are replicable and have the same market-consistent price. The next result shows that the only American options that satisfy these two conditions are those associated to a marketed payoff as introduced in Example 19.3.1. Proposition 19.3.2 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements are equivalent: (a) For all exercise strategies σ, τ ∈ Tt the payoff streams C(σ ) and C(τ ) are replicable at date t and satisfy πt (C(σ )) = πt (C(τ )). (b) There exists a positive replicable payoff Z ∈ Mt,T such that C t :T = C Z . In this case, we have πt (C(τ )) = πt,T (Z) for every exercise strategy τ ∈ Tt . Proof First, assume that (a) holds and consider the positive payoff Z = CT . Note that, by taking the constant exercise strategy equal to T , we have C(T ) = (0, . . . , 0, Z). Since, by assumption, C(T ) is replicable at date t, it follows from Remark 18.3.2 that Z is also replicable at date t. We claim that Cu = πu,T (Z)

(19.1)

for every u ∈ {t, . . . , T }. To show this, set first E = {Cu > πu,T (Z)} ∈ Fu and assume that E is nonempty. Consider the exercise strategy τ ∈ Tu given by τ = u1E + T 1E c . Then, for every arbitrage-free family ψ ∈ E(π ) we have πt,T (Z) = ψt (C(T )) = ψt (C(τ )) = ψt,u (ψu (C(τ ))) = ψt,u ((1E Cu + ψu,T (1E c Z)))  ψt,u ((1E ψu,T (Z) + 1E c ψu,T (Z))) = ψt,u (ψu,T (Z)) = πt,T (Z)

19.3 Marketed Bounds

353

by time consistency, conditionality, and strict monotonicity. Since this is a clear contradiction, the event {Cu > πu,T (Z)} must be empty. In the same way one shows that the event {Cu < πu,T (Z)} must be empty as well. In conclusion, we infer that (19.1) holds and this establishes that (a) implies (b). Assume now that (b) holds for some positive marketed payoff Z ∈ Mt,T . Fix τ ∈ Tt and let ψ ∈ E(π) be an arbitrage-free family. Then, we have ψt (C(τ )) =

T 

ψt,u (1{τ =u} Cu ) =

u=t

=

T 

T 

ψt,u (1{τ =u} πu,T (Z))

u=t

ψt,u (πu,T (1{τ =u} Z)) =

u=t

T 

ψt,T (1{τ =u} Z)

u=t

= ψt,T (Z) = πt,T (Z) by conditionality and time consistency. It follows from Theorem 18.5.1 that C(τ ) has a unique market-consistent price at date t, namely πt,T (Z), and, hence, it is replicable at that date by Proposition 18.6.12. Thus, (b) implies (a)  The preceding result shows that attempting to price American options by generalizing the concept of replication in a naive way would allow us only to price the “perfectly replicable” American options described in Example 19.3.1. Instead of trying to artificially craft a surrogate for replicability, it is more meaningful to generalize the concept of a market-consistent price, which describes the range of prices at which a specific financial contract can be transacted so that neither seller nor buyer have “better alternatives” by trading directly in the market. The formal concept behind “better alternatives” is that of marketed bounds, which are used to compare general payoff streams to marketed payoff streams and are the basis for the definition of market-consistent prices. Marketed Bounds for Associated Payoff Streams Before defining the notion of a marketed bound and of a market-consistent price, we undertake a slight “reformulation” of these concepts for payoff streams associated with an American option. The key idea here is that, for payoff streams associated with an American option, there is a natural surrogate for marketed bounds that relies on the “perfectly replicable” American options introduced in Example 19.3.1. From a technical perspective, this “reformulation” works because the payments delivered by a payoff stream associated with an American option are mutually exclusive as discussed in Sect. 19.2.

354

19 Market-Consistent Prices for American Options

Proposition 19.3.3 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. Moreover, take an exercise strategy τ ∈ Tt , a state A ∈ Pt , and a random variable p ∈ Xt . (i) The following statements are equivalent: (a) There exists a replicable payoff stream Z ∈ M− t (C(τ )) such that πt (Z) = p and Z t :T  C(τ ) on A. (b) There exists a replicable payoff Z ∈ Mt,T such that πt,T (Z) = p, C Z (τ ) ≤ C(τ ), C Z (τ )  C(τ ) on A. (ii) The following statements are equivalent: (a) There exists a replicable payoff stream Z ∈ M+ t (C(τ )) such that πt (Z) = p and Z t :T  C(τ ) on A. (b) There exists a replicable payoff Z ∈ Mt,T such that πt,T (Z) = p C Z (τ ) ≥ C(τ ), C Z (τ )  C(τ ) on A. Proof We only prove (i). To this effect, assume that (a) holds and set Z = Tt (Z). It follows from Proposition 18.4.4 that Z belongs to Mt,T and satisfies πt,T (Z) = πt (Z). Now, take a date u ∈ {t, . . . , T } and note that Z≤

Cu VT [η] on {τ = u}. Vu [η]

This immediately yields πu,T (Z) ≤ Cu on {τ = u} by monotonicity and conditionality. Moreover, we must find u ∈ {t, . . . , T } such that Z

Cu VT [η] on A ∩ {τ = u}. Vu [η]

Hence, πu,T (Z)  Cu on A ∩ {τ = u} by strict monotonicity and conditionality. This shows that Z satisfies (b). Conversely, assume that (b) holds and set Z = C Z (τ ). It follows from Proposition 19.3.2 that Z ∈ Mt and πt (Z) = πt,T (Z). It is clear that Z is a marketed lower bound for C(τ ) that is not identical to C(τ ) on A, showing that (a) holds. 

19.3 Marketed Bounds

355

Marketed Bounds for American Options Viewing an American option C ∈ A at date t ∈ {0, . . . , T − 1} as the basket of payoff streams C(τ ) for τ ∈ Tt and recalling Proposition 19.3.3, we immediately get an idea of how marketed lower and upper bounds should be defined: • A marketed lower bound should correspond to a marketed payoff Z ∈ Mt,T such that, from a seller perspective, C Z (τ ) is “better” than C(τ ) for at at least one exercise strategy τ ∈ Tt . Indeed, “paying” the payoff stream of such a marketed lower bound is preferable to “paying” the payoff stream C(τ ), which is one of the possible payoff streams the owner of the option has the right to claim. • A marketed upper bound should correspond to a marketed payoff Z ∈ Mt,T such that, from a buyer perspective, C Z (τ ) is “better” than C(τ ) for every exercise strategy τ ∈ Tt . In this case, for every τ ∈ Tt , “receiving” the payoff stream C Z (τ ) is preferable to “receiving” the payoff stream C(τ ), to which owning C entitles. Note the natural asymmetry, reflected by the use of “at least one” and “every”, in the above intuitive description of marketed lower and upper bounds, respectively. Definition 19.3.4 (Marketed Bound) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A and a payoff Z ∈ Mt,T . We say that Z is a marketed lower bound for C at date t if C Z (τ ) ≤ C(τ ) for some τ ∈ Tt . The set of marketed lower bounds for C at date t is denoted by M− t (C). We say that Z is a marketed upper bound for C at date t if C Z (τ ) ≥ C(τ ) for every τ ∈ Tt . The set of marketed upper bounds for C at date t is denoted by M+ t (C).



Remark 19.3.5 Let C ∈ A be an American option and, for some t ∈ {0, . . . , T − 1}, consider a replicable payoff Z ∈ Mt,T . It is clear that Z is a marketed lower bound for C at date t if and only if there exists an exercise strategy τ ∈ Tt such that πu,T (Z) ≤ Cu on {τ = u} for every u ∈ {t, . . . , T }. Similarly, Z is a marketed upper bound for C at date t  if and only if πu,T (Z) ≥ Cu for every u ∈ {t, . . . , T }. It is easy to verify that every American option admits marketed bounds.

356

19 Market-Consistent Prices for American Options

Proposition 19.3.6 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the sets + M− t (C) and Mt (C) are both nonempty. Proof The zero payoff is easily seen to be a marketed lower bound for C at date t. Moreover, consider a strictly-positive marketed payoff U ∈ Mt,T and take a ∈ (0, ∞). Then, it is clear that aU is a marketed upper bound for C at date t provided that a is sufficiently large. 

19.4

Market-Consistent Prices

By defining marketed lower and upper bounds, we have already prepared the ground for a formalization of the concept of market-consistent prices. The idea is, of course, that a market-consistent seller price must be strictly higher than the price of any marketed lower bound that is “strictly worse” than the option. Similarly, a market-consistent buyer price must be strictly lower than the price of any marketed upper bound that is “strictly better” than the option. Definition 19.4.1 (Market-Consistent Price) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A and a random variable p ∈ Xt . We say that p is a marketconsistent seller price for C at date t if for every marketed lower bound Z ∈ M− t (C) and every A ∈ Pt we have C Z (τ )  C(τ ) on A for some τ ∈ Tt ⇒ p(A) < πt,T (Z|A). The set of market-consistent seller prices for C at date t is denoted by st (C). We say that p is a market-consistent buyer price for C at date t if for every marketed upper bound Z ∈ M+ t (C) and every A ∈ Pt we have C Z (τ )  C(τ ) on A for every τ ∈ Tt ⇒ p(A) < πt,T (Z|A). The set of market-consistent buyer prices for C at date t is denoted by bt (C). Finally, we say that p is a market-consistent price for C at date t if it is simultaneously a market-consistent seller and buyer price for C at date t. The set of market-consistent  prices for C at date t is denoted by t (C). Remark 19.4.2 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. In the definition of a market-consistent seller price, one may have expected that one needs to additionally require that the “test” exercise strategy τ ∈ Tt satisfies C Z (τ ) ≤ C(τ ). It

19.4 Market-Consistent Prices

357

is, however, an easy exercise to show that for a random variable p ∈ Xt the following conditions are equivalent: (a) For every marketed lower bound Z ∈ M− t (C), every A ∈ Pt , and every τ ∈ Tt such that C Z (τ ) ≤ C(τ ) we have C Z (τ )  C(τ ) on A ⇒ p(A) < πt,T (Z|A). (b) p is a market-consistent seller price for C at date t.



Remark 19.4.3 It is not difficult to verify that the market-consistent prices of an American option with a single nonzero component correspond precisely to the market-consistent prices of that component. This shows that the above notion of market consistency generalizes the corresponding notion introduced for single-maturity payoffs; see also Theorem 19.6.9.  We now show that every arbitrage-free family of extensions ψ ∈ E(π ) can be used to define to define a market-consistent price for an American option C via the expression (19.2). In fact, in Theorem 19.6.9, we will see that every market-consistent price for C can be written in that way. Proposition 19.4.4 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A and every arbitrage-free family ψ ∈ E(π) we have that max ψt (C(τ )) τ ∈ Tt

(19.2)

is a market-consistent price for C at date t. Proof Set p = maxτ ∈Tt ψt (C(τ )) ∈ Xt and let A ∈ Pt . First, take Z ∈ M− t (C) and A ∈ Pt and assume that C Z (τ )  C(τ ) on A for some exercise strategy τ ∈ Tt . The strict monotonicity of ψt,T and Proposition 19.3.2 imply p(A) ≥ ψt (C(τ )|A) > ψt (C Z (τ )|A) = πt (C Z (τ )|A) = πt,T (Z|A). This shows that p is a market-consistent seller price for C at date t. Z Take now Z ∈ M+ t (C) and A ∈ Pt and assume that C (τ )  C(τ ) on A for every exercise strategy τ ∈ Tt . Note that, by Proposition 19.2.7, there exists σ ∈ Tt such that ψt (C(σ )) = p. The strict monotonicity of ψt,T and Proposition 19.3.2 now imply that p(A) = ψt (C(σ )|A) < ψt (C Z (σ )|A) = πt (C Z (σ )|A) = πt,T (Z|A). This shows that p is a market-consistent buyer price for C at date t.



358

19 Market-Consistent Prices for American Options

The set of market-consistent prices corresponds to the intersection of the sets of marketconsistent seller and buyer prices. The following result shows that we can recover the two latter sets from the set of market-consistent prices. Proposition 19.4.5 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) st (C) = {p ∈ Xt ; p ≥ q for some q ∈ t (C)}. (ii) bt (C) = {p ∈ Xt ; p ≤ q for some q ∈ t (C)}. Proof We start by proving (i). The inclusion “⊃” is clear. To show the inclusion “⊂”, take any market-consistent seller price p ∈ st (C) and a market-consistent price r ∈ t (C) and set q = min{p, r}. It is easy to verify that q is a market-consistent price for C at date t. This establishes the desired inclusion. Next, we focus on (ii). The inclusion “⊃” is clear. To show the inclusion “⊂”, take any market-consistent buyer price p ∈ bt (C) and a market-consistent price r ∈ t (C) and set q = max{p, r}. It is easy to see that q is a market-consistent price for C at date t. This establishes the desired inclusion.  Seller Pricing Strategies and Lower-Optimal Strategies In this and the next section we introduce two classes of exercise strategies that will play an important role in our treatment of American options. On the one side, seller and buyer pricing strategies allow to express the seller and buyer market-consistent prices of the option in terms of the market-consistent prices of their associated payoff streams. On the other side, lower and upper optimal strategies allow to express the sub- and superreplication price of the option in terms of the sub- and superreplication price of their associated payoff streams. We start here with seller pricing strategies and lower optimal strategies. Definition 19.4.6 (Seller Pricing Strategy) Let t ∈ {0, . . . , T } and consider an American option C ∈ A. We say that an exercise strategy σ ∈ Tt is a seller pricing strategy for C at date t whenever

st (C(σ )) = st (C). The set of seller pricing strategies for C at date t is denoted by Tt s (C). If there is no  ambiguity, we simply write Tt s .

19.4 Market-Consistent Prices

359

Definition 19.4.7 (Lower-Optimal Strategy) Let t ∈ {0, . . . , T } and consider an American option C ∈ A. An exercise strategy σ ∈ Tt is said to be lower optimal for C at date t whenever πt− (C(σ )) = max πt− (C(τ )). τ ∈ Tt

The set of lower-optimal strategies for C at date t is denoted by Tt − (C) or just Tt − if there is no ambiguity about C.  While it follows immediately from Proposition 19.2.7 that every American option admits lower-optimal strategies, establishing the existence of seller pricing strategies requires more work. The proof of our existence result relies on the following lemma. Lemma 19.4.8 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A there exists a lower-optimal strategy σ ∈ Tt − satisfying πt+ (C(σ )) = max πt+ (C(τ )).

(19.3)

τ ∈ Tt −

Proof We first prove the existence result. From Proposition 19.2.7 applied to S = Tt we know that Tt − is nonempty. Now, consider two exercise strategies σ, τ ∈ Tt − and take E ∈ Ft . Note that C(1E σ + 1E c τ ) = 1E C(σ ) + 1E c C(τ ). By conditionality, we get πt− (C(1E σ + 1E c τ )) = 1E πt− (C(σ )) + 1E c πt− (C(τ )) = max πt− (C(ρ)). ρ∈Tt

This establishes that 1E σ + 1E c τ ∈ Tt − . As a result, we may again apply Proposition 19.2.7, this time to S = Tt − , to obtain that there always exists an exercise strategy σ ∈ Tt − satisfying (19.3).  We are now ready to establish the existence of seller pricing strategies for an American option C ∈ A. In addition, for every t ∈ {0, . . . , T − 1}, we show that every seller pricing strategy for C at date t is automatically lower optimal for C at date t and establish the identity

st (C) =



st (C(τ )).

τ ∈ Tt

This says that, for an agent to consider selling the “basket” of payoff streams associated to C, the price must be a market-consistent seller price for every payoff stream in the basket. This makes sense because the buyer is entitled to choose any of these payoff streams from the basket and the seller would not want to have sold the chosen payoff stream too cheaply.

360

19 Market-Consistent Prices for American Options

Theorem 19.4.9 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the set Tt s is nonempty and for every seller pricing strategy σ ∈ Tt s the following statements hold: (i) st (C(σ )) = st (C) = τ ∈Tt st (C(τ )). (ii) σ is lower optimal for C at date t. Proof We fix σ ∈ Tt − as in Lemma 19.4.8. The first thing to show is that

st (C(σ )) =



st (C(τ )).

(19.4)

τ ∈ Tt

Clearly, it suffices to establish the inclusion “⊂”. To this effect, take an arbitrary marketconsistent price p ∈ st (C(σ )) and τ ∈ Tt . Note that p ≥ πt− (C(σ )) ≥ πt− (C(τ )) because σ belongs to Tt − . As a result, for every A ∈ Pt there are two possibilities: Either p(A) > πt− (C(τ )|A) or p(A) = πt− (C(τ )|A). Note that, in the second case, we must have p(A) = πt− (C(σ )|A) and therefore 1A C(σ ) is replicable at date t by Proposition 18.6.12. In this case, we infer from (19.3) that πt− (C(τ )|A) = p(A) = πt− (C(σ )|A) = πt+ (C(σ )|A) ≥ πt+ (C(τ )|A), which implies that 1A C(τ ) is replicable at date t again by Proposition 18.6.12. Hence, we can use Theorem 18.6.11 to conclude that p is a market-consistent seller price for C(τ ) at date t. Since τ ∈ Tt was arbitrary, this delivers the desired inclusion and establishes (19.4). By (19.4), to prove that σ belongs to Tt s we need to show that

st (C) =



st (C(τ )).

(19.5)

τ ∈ Tt

To prove the inclusion “⊂”, let p ∈ st (C) and take an arbitrary τ ∈ Tt and a payoff stream Z ∈ M− t (C(τ )) such that Z t :T  C(τ ) on some A ∈ Pt . By Proposition 19.3.3, we find Z a marketed payoff Z ∈ M− t (C) such that C (τ )  C(τ ) on A and πt,T (Z) = πt (Z). Then, we get p(A) > πt,T (Z|A) = πt (Z|A) by market consistency. This shows that p is a market-consistent seller price for C(τ ) at date t. To establish the inclusion “⊃”, assume that p ∈ st (C(τ )) for every τ ∈ Tt . Take a Z marketed lower bound Z ∈ M− t (C) such that C (ρ)  C(ρ) on A for some ρ ∈ Tt

19.4 Market-Consistent Prices

361

and A ∈ Pt . It follows from Proposition 19.3.3 that there exists a suitable payoff stream Z ∈ M− t (C(ρ)) such that Z t :T  C(ρ) on A and πt (Z) = πt,T (Z). We infer from market consistency that p(A) > πt (Z|A) = πt,T (Z|A). This shows that p is a market-consistent seller price for C at date t and concludes the proof of (19.5). Statement (i) follows immediately from (19.5) and the definition of a seller pricing strategy. Statement (ii) is a straightforward consequence of (i).  We have just seen that a seller pricing strategy is automatically lower optimal. As illustrated by the following example, the converse implication does not hold in general. Example 19.4.10 (A Lower-Optimal Strategy That is Not a Seller Pricing Strategy) Consider the sample space  = {ω1 , . . . ω4 } and the two-period market with binomial information structure P given by P0 = {}, P1 = {{ω1 , ω2 }, {ω3 , ω4 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }}. We consider the basic security and the American option defined by 1 1 1 1

1 1

: 1 1

0 : 0 0

1 0 1 0

In this simple one-security setting market-consistent prices can be easily determined. In particular, it follows from Theorem 19.4.9 that

s1 (C) =



s1 (C(τ )) = {p ∈ X1 ; p > 0}.

τ ∈ T1

Moreover, we have max π1− (C(τ )) = 0.

τ ∈ T1

The constant exercise strategy σ = 1 ∈ T1 satisfies s1 (C(σ )) = {p ∈ X1 ; p ≥ 0}, showing that σ is not a seller pricing strategy for C at date 1. At the same time, we have  π1− (C(σ )) = 0, implying that σ is lower optimal for C at date 1.

362

19 Market-Consistent Prices for American Options

Buyer Pricing Strategies and Upper-Optimal Strategies We now take the buyer’s perspective and introduce the notion of a pricing strategy and an optimal strategy from a buyer’s perspective. Definition 19.4.11 (Buyer Pricing Strategy) Let t ∈ {0, . . . , T } and consider an American option C ∈ A. We say that an exercise strategy σ ∈ Tt is a buyer pricing strategy for C at date t whenever

bt (C(σ )) = bt (C). The set of buyer pricing strategies for C at date t is denoted by Tt b (C). If there is no  ambiguity, then we simply write Tt b . Definition 19.4.12 (Upper-Optimal Strategy) Let t ∈ {0, . . . , T } and consider an American option C ∈ A. An exercise strategy σ ∈ Tt is said to be upper optimal for C at date t whenever πt+ (C(σ )) = max πt+ (C(τ )). τ ∈ Tt

The set of upper-optimal strategies for C at date t is denoted by Tt + (C) or just Tt + if there is no ambiguity about C.  As for the corresponding seller’s notions, the existence of an upper-optimal strategy is a direct consequence of Proposition 19.2.7. However, establishing the existence of buyer pricing strategies requires a bit more preparation than the corresponding statement for seller benchmark strategies. We first need the following result on marketed upper bounds, which is of independent interest and has a variety of other applications; see Sect. 19.5. Proposition 19.4.13 Let t ∈ {0, . . . , T }. For every American option C ∈ A there exists a payoff X ∈ M+ t (C) such that πt,T (X) = max πt+ (C(τ )). τ ∈ Tt

Proof We prove the statement by backward induction. Base Step If t = T , then the statement is clearly satisfied by X = CT . Inductive Step Assume the statement holds for some date t ∈ {1, . . . , T } and consider a payoff Y ∈ M+ t (C) satisfying πt,T (Y ) = max πt+ (C(τ )). τ ∈ Tt

19.4 Market-Consistent Prices

363

+ By Proposition 17.4.3 we find W ∈ M+ t −1,T (Y ) such that πt −1,T (W ) = πt −1,T (Y ). Note that 

+ + + + πt −1,T (Y ) = πt −1,t (πt,T (Y )) = πt −1,t max πt (C(τ ) τ ∈ Tt

by time consistency. Note also that πt+−1,t

max πt+ (C(τ ) τ ∈ Tt



= max πt+−1 (C(τ )). τ ∈ Tt

The inequality “≤” is an immediate consequence of monotonicity. The inequality “≥” follows from Proposition 19.2.7. As a result, we get πt+−1,T (Y ) = max πt+−1 (C(τ )). τ ∈ Tt

Now, set E = {Ct −1 > maxτ ∈Tt πt+−1 (C(τ ))} ∈ Ft −1 and define Z = W + 1E

Ct −1 − πt −1,T (W ) VT [η] ∈ Mt −1,T . Vt −1 [η]

It is not difficult to see that max πt+−1 (C(τ )) =

τ ∈Tt−1

⎧ ⎨C

on E,

⎩maxτ ∈T π + (C(τ ))) t t −1

on E c .

t −1

As a consequence, we obtain πt −1,T (Z) = 1E Ct −1 + 1E c πt −1,T (W ) = max πt+−1 (C(τ )). τ ∈Tt−1

This implies, in particular, that πt −1,T (Z) ≥ πt+−1 (C(t − 1)) = Ct −1 . Moreover, since Ct −1 > πt+−1,T (Y ) = πt −1,T (W ) on E, we have Z ≥ W ≥ Y and for every u ∈ {t, . . . , T } we can therefore write πu,T (Z) ≥ πu,T (Y ) ≥ Cu . This shows that Z ∈ M+ t (C) and concludes the induction argument.



We establish a second preliminary result needed to prove the existence of buyer pricing strategies.

364

19 Market-Consistent Prices for American Options

Lemma 19.4.14 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A there exists an upper-optimal strategy σ ∈ Tt + satisfying πt− (C(σ )) = max πt− (C(τ )).

(19.6)

τ ∈ Tt +

Proof It follows from Proposition 19.2.7 applied to S = Tt that Tt + is nonempty. Now, let σ, τ ∈ Tt + and E ∈ Ft . Note that C(1E σ + 1E c τ ) = 1E C(σ ) + 1E c C(τ ). Then, it follows from conditionality that πt+ (C(1E σ + 1E c τ )) = 1E πt+ (C(σ )) + 1E c πt+ (C(τ )) = max πt− (C(τ )). τ ∈ Tt

This establishes that 1E σ + 1E c τ ∈ Tt + . Proposition 19.2.7 applied to S = Tt + now implies that there always exists an exercise strategy σ ∈ Tt + satisfying (19.6).  We are now in a position to establish the existence of buyer pricing strategies for an American option C ∈ A. In addition, for every t ∈ {0, . . . , T − 1}, we show that every buyer pricing strategy for C at date t is automatically upper optimal for C at date t and establish the identity

bt (C) =



bt (C(τ )).

τ ∈ Tt

This has an intuitive financial interpretation provided that we view C as a basket of payoff streams: The price of C, for an agent to consider buying it, must be a market-consistent buyer price for at least one of the payoff streams in the basket. This is because the buyer will not want to pay too much for whichever payoff stream he or she ends up choosing from the basket. Theorem 19.4.15 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the set Tt b is nonempty and for every buyer pricing strategy σ ∈ Tt b the following statements hold: (i) bt (C(σ )) = bt (C) = τ ∈Tt bt (C(τ )). (ii) σ is upper optimal for C at date t. Proof We fix σ ∈ Tt − as in Lemma 19.4.14 and start by showing that

bt (C) ⊂ bt (C(σ )).

(19.7)

19.4 Market-Consistent Prices

365

To this effect, take p ∈ bt (C) and consider a payoff stream Z ∈ M+ t (C(σ )) such that + Z t :T  C(σ ) on some A ∈ Pt . Since σ ∈ Tt , it follows from Proposition 19.4.13 that there exists a payoff W ∈ M+ t (C) such that πt,T (W ) = max πt+ (C(τ )) = πt+ (C(σ )) ≤ πt (Z). τ ∈ Tt

Clearly, the replicable payoff Z ∈ Mt,T given by Z=W+

πt (Z) − πt,T (W ) VT [η] Vt [η]

belongs to M+ t (C) and satisfies πt,T (Z) = πt (Z). We claim that for every τ ∈ Tt C Z (τ )  C(τ ) on A.

(19.8)

If (19.8) is not true, then we must have C Z (τ ) = C(τ ) on A for some τ ∈ Tt . It follows from Proposition 19.3.2 that 1A C(τ ) is replicable at date t and πt− (C(τ )|A) = πt+ (C(τ )|A) = πt,T (Z|A) ≥ πt+ (C(σ )|A).

(19.9)

Set ρ = 1A τ + 1Ac σ ∈ Tt . Since σ ∈ Tt + , we infer from conditionality and from (19.9) that ρ ∈ Tt + as well. As a consequence of (19.6), we see that πt− (C(σ )|A) ≥ πt− (C(ρ)|A) = πt− (C(τ )|A) ≥ πt+ (C(σ )|A).

(19.10)

Hence, Proposition 18.6.12 implies that 1A C(σ ) is replicable at date t. However, this entails πt,T (Z|A) = πt (Z|A) > πt (1A C(σ )|A) = πt− (C(τ )|A) = πt,T (Z|A) by using the strict monotonicity of πt and by combining (19.8) and (19.9). This is clearly not possible and we therefore conclude that (19.8) must hold. As p is a market-consistent buyer price for C at date t, we infer that p(A) < πt,T (Z|A) = πt (Z|A). This shows that p is also a market-consistent buyer price for C(σ ) at date t and concludes the proof of (19.7).

366

19 Market-Consistent Prices for American Options

As our next step, we establish that 

bt (C(τ )) ⊂ bt (C).

(19.11)

τ ∈ Tt

To this effect, let p ∈ bt (C(τ )) for some τ ∈ Tt and take Z ∈ M+ t (C) and A ∈ Pt such that C Z (ρ)  C(ρ) on A for every ρ ∈ Tt . In particular, this holds for ρ = τ . Then, by Proposition 19.3.3, we find a payoff stream Z ∈ M+ t (C(τ )) such that Z t :T  C(τ ) on A and πt (Z) = πt,T (Z). As a result of market consistency, we get p(A) < πt (Z|A) = πt,T (Z|A), showing that p is a market-consistent buyer price for C at date t. This concludes the proof of (19.11). It follows immediately from (19.7) and (19.11) that (i) holds. Statement (ii) is a straightforward consequence of (i).  We have seen that a buyer pricing strategy is automatically upper optimal. The next example shows that the contrary does not hold in general. Example 19.4.16 (An Upper-Optimal Strategy That is Not a Buyer Pricing Strategy) In the setting of Example 19.4.10 consider the American option 1 : 0 1

1 0 1 0

It follows from Theorem 19.4.15 that

b1 (C) =



b1 (C(τ )) = {p ∈ X1 ; p ≤ 1}.

τ ∈ T1

Moreover, we easily see that max π1+ (C(τ )) = 1.

τ ∈ T1

The constant exercise strategy σ = 2 ∈ T1 satisfies b1 (C(σ )) = {p ∈ X1 ; p < 1}, showing that σ is not a buyer pricing strategy for C at date 1. At the same time, we have  π1+ (C(σ )) = 1, implying that σ is upper optimal for C at date 1.

19.4 Market-Consistent Prices

367

Optionality Has No Market Value At first sight one would expect that a buyer of an American option would be prepared to offer more than the buyer price of any of its associated payoff streams and that a seller would want to ask more than the seller price of any of its associated payoff streams. After all the owner of the American option has the option to choose any of these payoff streams and, surely, this optionality must cost something, i.e. command a premium over and above the market-consistent prices of the individual payoff streams to which the option entitles. In view of the above results on benchmark strategies, this is, however, not the case. To see this, consider an American option C ∈ A at any date t ∈ {0, . . . , T − 1}: • The existence of a seller pricing strategy σ ∈ Tt s means that the market-consistent seller prices for the option coincide with the market-consistent seller prices for the individual payoff stream C(σ ). This makes clear that market prices do not imply a premium for the seller of C to charge for the optionality in C. • Similarly, the existence of a buyer pricing strategy σ ∈ Tt b means that the marketconsistent buyer prices for the option coincide with the market-consistent buyer prices for the individual payoff stream C(σ ). Hence, market prices do not imply that the buyer of C needs to pay a premium for the optionality in C. Remark 19.4.17 (American Options and Individual Preferences) The preceding discussion shows that the willingness of an agent to sell, respectively buy, an American option instead of a payoff stream associated to a seller, respectively buyer, pricing strategy cannot be justified in terms of market prices alone but must be explained in view of the agent’s individual preferences.  Market-Consistent Prices in Complete Markets If the market is complete, then for every t ∈ {0, . . . , T − 1} the market-consistent price price πt (X) is defined for every payoff stream X ∈ X and coincides with both the subreplication price πt− (X) and the superreplication price πt+ (X). As a result, for every American option C ∈ A we have max πt (C(τ )) = max πt− (C(τ )) = max πt+ (C(τ )). τ ∈ Tt

τ ∈ Tt

τ ∈ Tt

It follows that upper and lower optimal exercise strategies coincide in a complete market. This motivates the following definition. Definition 19.4.18 (Optimal Exercise Strategy) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. If the market is complete, then every exercise strategy σ ∈ Tt

368

19 Market-Consistent Prices for American Options

satisfying πt (C(σ )) = max πt (C(τ )) τ ∈ Tt

is said to be optimal for C at date t.



The following proposition summarizes all there is to say about the valuation of American options in complete markets. Proposition 19.4.19 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. If the market is complete, then for every σ ∈ Tt the following statements are equivalent: (a) σ is a seller pricing strategy for C at date t. (b) σ is a buyer pricing strategy for C at date t. (c) σ is optimal for C at date t. In this case, we have t (C) = t (C(σ )). In particular, C has a a unique marketconsistent price given by πt (C(σ )) = max πt (C(τ )). τ ∈ Tt

(19.12)

Proof Let σ ∈ Tt be optimal for C at date t. Then, since all payoff streams in a complete market are replicable at date t by Proposition 18.3.9, we have

st (C(σ )) = {p ∈ Xt ; p ≥ πt (C(σ ))} ⊂ {p ∈ Xt ; p ≥ πt (C(τ ))} = st (C(τ )) for every τ ∈ Tt . By Theorem 19.4.9, this shows that σ is a seller pricing strategy for C at date t and, hence, that (c) implies (a). Similarly, we have

bt (C(σ )) = {p ∈ Xt ; p ≤ πt (C(σ ))} ⊃ {p ∈ Xt ; p ≤ πt (C(τ ))} = bt (C(τ )) for every τ ∈ Tt . By Theorem 19.4.15, this shows that σ is a buyer pricing strategy for C at date t and, hence, that (c) implies (b). We know already from Theorems 19.4.9 and 19.4.15 that a seller pricing strategy is lower optimal and a buyer pricing strategy is upper optimal. Clearly, in a complete market, lower and upper optimality coincide with optimality by Proposition 18.6.12. Hence, we conclude that (a) and (b) both imply (c).

19.5 Replicable American Options

369

To conclude the proof, assume that σ ∈ Tt is optimal for C at date t. We have just proved that σ is both seller and a buyer pricing strategy for C at date t. Hence, % (

t (C) = s (C(σ )) ∩ bt (C(σ )) = {πt (C(σ ))} = max πt (C(τ )) . τ ∈ Tt

This shows that (19.12) holds and concludes the proof



As a result of the above proposition, it is not necessary to distinguish between seller and buyer pricing strategies in a complete market. Hence, the following definition makes sense. Definition 19.4.20 (Pricing Strategy) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. If the market is complete, then we say that an exercise strategy σ ∈ Tt is a pricing strategy for C at date t if any of the equivalent conditions in Proposition 19.4.19 holds. 

19.5

Replicable American Options

Recall that, by Proposition 18.6.12, the replicability of a payoff stream is equivalent to the existence of a unique market-consistent price. Motivated by this fact, it has become customary to refer to those American options that admit a single market-consistent price as replicable, or attainable, American options. Given the above discussion about the limits of replicability, this terminology is not ideal. We have nevertheless retained it because of its widespread use in the literature. In this section we will define replicable American options and show why they are superfluous in the sense to be specified below. Definition 19.5.1 (Replicable American Option) Let t ∈ {0, . . . , T − 1}. We say that an American option C ∈ A is replicable at date t whenever it admits a unique market-consistent price at date t, which is denoted by πt (C). For convenience, we also set πT (C) := CT .  Remark 19.5.2 (i) It is immediate to verify that the notion of replicability we have just introduced generalizes the corresponding notion introduced for single-maturity payoffs. Indeed, let X ∈ XT be a positive payoff and consider the American option C ∈ A given by C = (0, . . . , 0, X).

370

19 Market-Consistent Prices for American Options

Then, for every t ∈ {0, . . . , T − 1}, we have that C is replicable at date t if and only if X is replicable at date t. In this case, πt (C) = πt,T (X) holds. The case of payoffs maturing at intermediate dates is analogous; see Example 19.1.3. (ii) Let t ∈ {0, . . . , T − 1} and consider a positive replicable payoff Z ∈ Mt,T . Using Proposition 19.3.2 it is easy to show that the American option C Z ∈ A introduced in Example 19.3.1 is replicable at date t and satisfies πt (C Z ) = πt,T (Z). Moreover, every exercise strategy τ ∈ Tt satisfies πt (C Z (τ )) = πt,T (Z) and is, therefore, a seller and a buyer pricing strategy for C Z . We will see in our next proposition that this is a general feature of replicable options: seller and buyer pricing strategies always coincide. (iii) The preceding remark shows the link between Examples 19.1.3 and 19.3.1. More precisely, for every t ∈ {0, . . . , T −1} and every replicable payoff Z ∈ Mt,T , it shows that, in economic terms, owning the (replicable) American option C ∈ A given by C = (0, . . . , 0, Z) is equivalent to owning the (replicable) American option C Z ∈ A. Indeed, exercising C Z at any date u ∈ {t, . . . , T } delivers the same proceeds as selling  C at date u, namely πu,T (Z). As an immediate consequence of Proposition 19.4.19 we conclude that every American option in a complete market is replicable. Proposition 19.5.3 Let t ∈ {0, . . . , T −1}. If the market is complete, then every American option C ∈ A is replicable at date t. It is worthwhile noting that, contrary to the case of European contracts, an American option that is replicable at a given date need not be replicable at a later date. This is because, as time progresses, there are fewer and fewer exercise strategies available and condition (e) in Proposition 19.5.6 may cease to hold. Example 19.5.4 (Losing Replicability Through Time) Let  = {ω1 , . . . , ω8 } and consider the three-period market equipped with the binomial information structure P given by P0 = {}, P1 = {{ω1 , . . . , ω4 }, {ω5 , · · · , ω8 }}, P2 = {{ω1 , ω2 }, . . . , {ω7 , ω8 }}, P3 = {{ω1 }, . . . , {ω8 }}.

19.5 Replicable American Options

371

We consider the basic security and the American option defined by 1 1 1 1

: 1 1 1 1

1 1 1 1 1 1 1 1

0 1 0 : 0 0 1 0

1 0 0 0 0 0 0 0

Set Z = S31 ∈ M1,3 and σ = 1 ∈ T1 . It is immediate to see that Z ∈ M+ 1 (C) and C Z (σ ) = C(σ ). Hence, C is replicable at date 1 by Proposition 19.5.6. Now, consider the event A = {ω1 , ω2 , ω3 , ω4 } and note that a payoff X ∈ X3 belongs to M+ 2 (C) if and only if X = a + 2b1A for some a ∈ [0, ∞) and b ∈ [(1 − a)/2, ∞). However, in this case, there exists no exercise strategy σ ∈ T2 such that C X (σ ) = C(σ ). The explicit verification is left to the reader. As a result, we can use again Proposition 19.5.6 to infer that C is not replicable at date 2.  Our first result provides a characterization of the market-consistent price of replicable American options which is reminiscent of the representation for the market-consistent price of an American option in a complete market given in Proposition 19.4.19. Proposition 19.5.5 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A that is replicable at date t and every arbitrage-free family ψ ∈ E(π ) we have πt (C) = max ψt (C(τ )). τ ∈ Tt

Proof From Proposition 19.4.4 we know that maxτ ∈Tt ψt (C(τ )) is a market-consistent price for C at date t for every ψ ∈ E(π). The desired result now follows since C admits a unique market-consistent price at date t.  The following proposition records a number of equivalent conditions for an American option to be replicable. Proposition 19.5.6 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements are equivalent: (a) C is replicable at date t. (b) C(σ ) is replicable at date t for every buyer pricing strategy σ ∈ Tt b . (c) C(σ ) is replicable at date t for some buyer pricing strategy σ ∈ Tt b .

372

19 Market-Consistent Prices for American Options

(d) C(σ ) is replicable at date t for some upper-optimal strategy σ ∈ Tt + . Z (e) There exist Z ∈ M+ t (C) and σ ∈ Tt such that C (σ ) = C(σ ). In this case, for every buyer pricing strategy σ ∈ Tt b and every payoff Z ∈ Mt,T as in (e) we have πt (C) = πt (C(σ )) = πt,T (Z).

(19.13)

Proof First of all, assume that (a) holds and take any buyer pricing strategy σ ∈ Tt b . It follows from Proposition 19.4.5 that

bt (C(σ )) = bt (C) = {p ∈ Xt ; p ≤ πt (C)}. Then, we infer from Theorem 18.6.11 that C(σ ) must be replicable at date t, showing that (b) holds. It is clear that (b) implies (c) and it follows from Theorem 19.4.15 that (c) implies (d). Now, assume that (d) holds for some upper-optimal strategy σ ∈ Tt + . Then, the marketed bound Z ∈ M+ t (C) from Proposition 19.4.13 satisfies πt,T (Z) = max πt+ (C(τ )) = πt (C(σ )). τ ∈ Tt

(19.14)

Note that C Z (σ ) ≥ C(σ ). If C Z (σ )  C(σ ) on some A ∈ Pt , then we would get πt,T (Z|A) = πt (C Z (σ )|A) > πt (C(σ )|A) by Proposition 19.3.2 and by strict monotonicity. This is, however, in contrast to (19.14). As a result, we must have C Z (σ ) = C(σ ), showing that (e) holds. To conclude the proof of the equivalence, assume that condition (e) holds for some Z ∈ M+ t (C) and σ ∈ Tt . It follows from Proposition 19.3.2 that C(σ ) is replicable at date t and satisfies πt (C(σ )) = πt (C Z (σ )) = πt,T (Z). Since C Z (τ ) ≥ C(τ ) for every τ ∈ Tt , we have πt,T (Z) = πt (C Z (τ )) ≥ πt+ (C(τ )) ≥ πt− (C(τ )) for every τ ∈ that σ belongs Tt b by Lemma by Proposition

Tt by Proposition 19.3.2 and by monotonicity. As a result, we infer to Tt + and satisfies (19.4.14). This implies that σ belongs even to 19.4.14. Since C(σ ) has a unique market-consistent price at date t 18.6.12 and we have t (C) ⊂ t (C(σ )) by Theorem 19.4.9 and

19.5 Replicable American Options

373

Theorem 19.4.15, we conclude that (a) holds. In particular, πt (C) = πt (C(σ )) holds. This concludes the proof of the equivalence and establishes (19.13).  We conclude this section by showing that, for a replicable American option, the notions of seller and buyer pricing strategies coincide. Moreover, they also essentially coincide with lower and upper optimal exercise strategies. Proposition 19.5.7 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A that is replicable at date t and for every σ ∈ Tt the following statements are equivalent: (a) (b) (c) (d)

σ σ σ σ

is a buyer pricing strategy for C at date t. is upper optimal for C at date t and C(σ ) is replicable at date t. is a seller pricing strategy for C at date t. is lower optimal for C at date t.

Proof It follows from Theorem 19.4.15 and Proposition 19.5.6 that (a) implies (b). Next, assume that (b) holds and note that πt− (C(σ )) = πt+ (C(σ )) = max πt+ (C(τ )) ≥ max πt− (C(τ )). τ ∈ Tt

τ ∈ Tt

This shows that σ is lower optimal for C at date t. Moreover, we have πt+ (C(σ )) = max πt+ (C(τ )). τ ∈ Tt −

Hence, we can argue as in the proof of Theorem 19.4.9 to conclude that σ is a seller pricing strategy for C at date t, proving (c). That (c) implies (d) follows from Theorem 19.4.9. Finally, assume that (d) holds. By Proposition 19.5.6, there exists an upper-optimal strategy ρ ∈ Tt + such that C(ρ) is replicable. Then, we see that πt+ (C(σ )) ≥ πt− (C(σ )) ≥ πt− (C(ρ)) = πt+ (C(ρ)) = max πt+ (C(τ )). τ ∈ Tt

This shows that σ is upper optimal for C at date t. Moreover, we clearly have πt− (C(σ )) = max πt− (C(τ )). τ ∈ Tt +

Hence, we can argue as in the proof of Theorem 19.4.15 to conclude that σ is a buyer pricing strategy for C at date t. This establishes (a). 

374

19 Market-Consistent Prices for American Options

From Proposition 19.5.7 we conclude that, when an American option is replicable, it is not necessary to distinguish between seller and buying pricing strategies and, thus, the following definition is meaningful. Definition 19.5.8 (Pricing Strategy) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A that is replicable at date t. We say that an exercise strategy σ ∈ Tt is a pricing strategy for C at date t if any of the equivalent conditions in Proposition 19.5.7 holds.  Our next example shows that the replicability condition in point (b) of Proposition 19.5.7 is necessary. In other words, for a replicable American option, an exercise strategy may be upper optimal without being a buyer pricing strategy. This also highlights the different roles played by pricing strategies and optimal strategies in Proposition 19.5.6. Example 19.5.9 (An Upper-Optimal Strategy That is Not a Pricing Strategy) In the setting of Example 19.4.10 consider the American option 1 : 0 1

1 0 1 0

It is clear from Proposition 19.5.6 that C is replicable at date 1 because the payoff Z = S21 Z belongs to M+ 1 (C) and satisfies C (1) = C(1). In Example 19.4.16 we have shown that the constant exercise strategy σ = 2 ∈ T1 is upper optimal but is not a buyer pricing strategy for C at date 1. In line with Proposition 19.5.7, it can be easily checked that C(σ ) is not replicable at date 1.  Never Buy a Replicable American Option Proposition 19.5.6 allows us to make a very strong statement about American options that are replicable: They are not worth buying! Indeed, when transacted, a replicable American option should change hands at its unique market-consistent price. But Proposition 19.5.6 implies that the potential buyer can instead buy for the same price a marketed upper bound, which is at least as good regardless of the particular exercise strategy that is chosen. To spell this out, consider an American option C ∈ A and assume that C is replicable at date t ∈ {0, . . . , T − 1}. If an agent buys C, then it will have to be at the price πt (C). The fact that this is a market-consistent price means that from a market perspective buying C at this price is not “foolish”. But, is it reasonable? Since the option is replicable, we have the choice of spending πt (C) to buy C or, by Proposition 19.5.6, which critically depends on Proposition 19.4.13, a replicable payoff Z ∈ Mt,T satisfying πu,T (Z) ≥ Cu

19.6 Characterizing Market-Consistent Prices

375

for every u ∈ {t, . . . , T }. If the potential buyer cares about the optionality, then, clearly, he or she is better off buying Z. It follows that it does not make sense to buy the replicable American option C unless it coincides with C Z , in which case buying C amounts to buying Z. If the buyer does not particularly care about the optionality, then he or she is no worse off by buying Z than C. Hence, in both cases the buyer could just as well buy Z. The moral of the story is: Anything you can do with a replicable American option you can do better with a replicable payoff maturing at the terminal date T . Since in a complete market all American options are replicable, the above discussion implies that, from an economic perspective, American options in a complete market are entirely superfluous.

19.6

Characterizing Market-Consistent Prices

Let C ∈ A be an American option and fix t ∈ {0, . . . , T − 1}. In Proposition 19.4.4 we established that, for every arbitrage-free family ψ ∈ E(π), the quantity max ψt (C(τ )) τ ∈ Tt

(19.15)

is a market-consistent price for C at date t. This section continues the study of marketconsistent prices by showing that, in fact, every market-consistent price can be written in this form. In the proofs below we will repeatedly use the following simple result showing how to construct suitable arbitrage-free families of pricing extensions by way of composition and conditional convex combinations. Lemma 19.6.1 Let t ∈ {0, . . . , T − 1}. For all arbitrage-free families ϕ, ψ ∈ E(π ) the following statements hold: (i) There exists an arbitrage-free family φ ∈ E(π ) such that φt = ϕt,t +1 ◦ ψt +1 . (ii) For every Z ∈ Xt such that 0 ≤ Z ≤ 1 there exists φ ∈ E(π ) such that φt = Zϕt + (1 − Z)ψt . Proof To establish (i), consider the one-step arbitrage-free extensions ϕ0,1 , . . . , ϕt,t +1 , ψt +1,t +2 , . . . , ψT −1,T .

376

19 Market-Consistent Prices for American Options

As in the proof of Theorem 16.1.6, we can use them to easily construct, by way of composition, an arbitrage-free family φ ∈ E(π ) such that φs,s+1 = ϕs,s+1 for every s ∈ {0, . . . , t} and φs,s+1 = ψs,s+1 for every s ∈ {t + 1, . . . , T − 1}. It is immediate to verify that φt = ϕt,t +1 ◦ ψt +1 holds. Next, we focus on (ii). Clearly, the map Zϕu,u+1 + (1 − Z)ψu,u+1 is an arbitrage-free extension of πu,u+1 for every u ∈ {t, . . . , T −1}. Now, consider the one-step arbitrage-free extensions ϕ0,1 , . . . , ϕt −1,t , Zϕt,t +1 + (1 − Z)ψt,t +1 , . . . , ZϕT −1,T + (1 − Z)ψT −1,T . The desired assertion follows by arguing as in the proof of item (i).



Characterizing Market-Consistent Seller Prices We now introduce a special exercise strategy that will be shown to be a seller pricing strategy and that will be useful to prove our characterization of market-consistent seller prices. Definition 19.6.2 Let t ∈ {0, . . . , T }. For every American option C ∈ A we define the random variable % ( τt− (C) := min u ∈ {t, . . . , T } ; ∃ψ ∈ E(π ) : Cu = max ψu (C(τ )) . τ ∈ Tu

If the reference to C is unambiguous, we simply write τt− instead of τt− (C).



We first verify that τt− is a well-defined exercise strategy. Lemma 19.6.3 Let t ∈ {0, . . . , T }. For every American option C ∈ A the random variable τt− is a well-defined exercise strategy in Tt . Moreover, for every t ∈ {0, . . . , T −1} we have τt− = τt−+1 on {τt− ≥ t + 1}. Proof For every u ∈ {t, . . . , T } we can express the event {τt− ≤ u} as a union of Pu observable events as {τt− ≤ u} =

u  

{Cs = max ψs (C(τ ))}.

s=0 ψ∈E (π)

τ ∈ Ts

Since there are finitely many Pu -observable events, we infer that {τt− ≤ u} ∈ Fu and it follows from Proposition 19.2.3 that τt− is an exercise strategy in Tt . The recursive relation  is obvious from the definition of τt− .

19.6 Characterizing Market-Consistent Prices

377

We next show that the exercise strategy defined above is a seller benchmark strategy and provide the announced description of market-consistent seller prices. Theorem 19.6.4 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A we have % (

st (C) = st (C(τt− )) = p ∈ Xt ; p ≥ max ψt (C(τ )) for some ψ ∈ E(π) . τ ∈ Tt

In particular, τt− is a seller pricing strategy for C at date t. Proof By Proposition 19.4.4 and Theorem 19.4.9, we only have to establish the inclusion % (

st (C(τt− )) ⊂ p ∈ Xt ; p ≥ max ψt (C(τ )) ; ψ ∈ E(π ) . τ ∈ Tt

We prove this statement by backward induction on t ∈ {0, . . . , T }. To this aim, we set

sT (C(τT− )) = {CT }. Base Step The statement is obvious if t = T by our convention. Induction Step Assume the statement holds for t ∈ {1, . . . , T }. Take a market-consistent price p ∈ st (C(τt−−1 )) and set E = {τt−−1 = t − 1} ∈ Ft −1. Note that, on E, every market-consistent price for C(τt−−1 ) coincides with Ct −1 . Hence, by the definition of τt−−1 , p ≥ Ct −1 ≥ max ψtE (C(τ )) on E τ ∈ Tt

(19.16)

for some arbitrage-free family ψ E ∈ E(π ). Since p ∈ st (C(τt−−1 )), it follows from Proposition 19.2.6 that p ≥ ϕt −1 (C(τt−−1 )) for some arbitrage-free family ϕ ∈ E(π ). Then, using that τt−−1 = τt− on E c , we have p ≥ ϕt −1 (C(τt− )) = ϕt −1,t (ϕt (C(τt− ))) on E c

(19.17)

by conditionality and time consistency. As ϕt (C(τt− )) is a market-consistent price for C(τt− ) by Proposition 19.2.6, our induction hypothesis implies that ϕt (C(τt− )) ≥ max χt (C(τ )) τ ∈ Tt

(19.18)

378

19 Market-Consistent Prices for American Options

for a suitable arbitrage-free family χ ∈ E(π ). We infer from (19.17) and (19.18) and from monotonicity that p ≥ max ϕt −1,t (χt (C(τ ))) on E c . τ ∈ Tt

c

c

Let ψ E ∈ E(π ) be an arbitrage-free family such that ψtE−1 = φt −1,t ◦ ξt , which exists by Lemma 19.6.3. Then, we can write c

p ≥ max ψtE−1 (C(τ )) on E c . τ ∈ Tt

(19.19)

Finally, let ψ ∈ E(π ) be an arbitrage-free family satisfying c

ψt −1 = 1E ψtE−1 + 1E c ψtE−1 , which exists again by Lemma 19.6.3. As an immediate consequence of (19.16) and (19.19) p ≥ max ψt −1 ((C(τ ))). τ ∈ Tt

This concludes the induction argument and establishes the desired inclusion.



Characterizing Market-Consistent Buyer Prices We start by introducing a special buyer pricing strategy that will play a key role when characterizing market-consistent buyer prices. Definition 19.6.5 Let t ∈ {0, . . . , T }. For every American option C ∈ A we define the random variable % ( τt+ (C) := min u ∈ {t, . . . , T − 1} ; Cu > max ψu (C(τ )), ∀ψ ∈ E(π ) ∧ T . τ ∈Tu+1

If the reference to C is unambiguous, we simply write τt+ instead of τt+ (C).



Remark 19.6.6 Note that, in the above definition, we had to cap the values of τt+ at T to ensure that it can take only finite values in the range {0, . . . , T }.  Lemma 19.6.7 Let t ∈ {0, . . . , T }. For every American option C ∈ A the random variable τt+ is a well-defined exercise strategy in Tt . Moreover, for every t ∈ {0, . . . , T −1} we have τt+ = τt++1 on {τt+ ≥ t + 1} and there exists ψ ∈ E(π) such that Ct ≤ ψt,t +1



 max ψt +1 (C(τ ))

τ ∈Tt+1

on {τt+ ≥ t + 1}.

19.6 Characterizing Market-Consistent Prices

379

Proof For every u ∈ {t, . . . , T − 1} we can express the event {τt+ ≤ u} as a combination of Pu -observable events as {τt+ ≤ u} =

u  

{Cs > max ψs (C(τ ))}. τ ∈Ts+1

s=0 ψ∈E (π)

Since there are finitely many Pu -observable events, we infer that {τt+ ≤ u} ∈ Fu . Moreover, that {τt+ ≤ T } =  ∈ FT . It follows from Proposition 19.2.3 that τt+ is an exercise strategy in Tt . The recursive relation is obvious from the definition of τt+ . To show the last assertion, fix t ∈ {0, . . . , T − 1} and note that for every A ∈ Pt such that A ⊂ {τt+ ≥ t + 1} there exists an arbitrage-free family ψ A ∈ E(π ) satisfying A Ct ≤ max ψtA (C(τ )) ≤ ψt,t +1 τ ∈Tt+1



max ψtA+1 (C(τ ))

τ ∈Tt+1

 on A,

where the right-hand inequality is due to time consistency and monotonicity. Now, take ϕ ∈ E(π). By Lemma 19.6.1, we can find an arbitrage-free family ψ ∈ E(π) such that ψt =



1A ψtA + 1{τ + =t } ϕt . t

A∈Pt A⊂{τt+ ≥t+1}

Such a family clearly satisfies Ct ≤ max ψt (C(τ )) ≤ ψt,t +1 τ ∈Tt+1



 max ψt +1 (C(τ ))

τ ∈Tt+1

on {τt+ ≥ t + 1}

by time consistency and monotonicity. This concludes the proof.



The next result records the buyer counterpart to Theorem 19.6.4 by showing that τt+ is a buyer pricing strategy and by providing a description of market-consistent buyer prices. Theorem 19.6.8 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A we have % (

bt (C) = bt (C(τt+ )) = p ∈ Xt ; p ≤ max ψt (C(τ )) for some ψ ∈ E(π ) . τ ∈ Tt

In particular, τt+ is a buyer pricing strategy for C at date t. Proof By Proposition 19.4.4 and Theorem 19.4.9, we only have to show that

bt (C) ⊂ bt (C(τt+ )).

380

19 Market-Consistent Prices for American Options

We prove this statement by backward induction on t ∈ {0, . . . , T }. To this effect, we set

bT (C) = bT (C(τT+ )) = {CT }. Base Step In view of our convention, the statement is obvious if t = T . Induction Step Assume the statement holds for t ∈ {1, . . . , T } and take a marketconsistent price p ∈ bt −1 (C). By Theorem 19.4.15, p belongs to bt −1 (C(σ )) for some exercise strategy σ ∈ Tt −1 . Hence, by Proposition 19.2.6, we can find an arbitrage-free family ϕ ∈ E(π ) such that p ≤ ϕt −1 (C(σ )). We decompose  into the three sets E, F, G ∈ Ft −1 defined by E = {τt+−1 = t − 1}, F = {τt+−1 ≥ t} ∩ {σ = t − 1}, G = {τt+−1 ≥ t} ∩ {σ ≥ t}. We first focus on E and take any arbitrage-free family ψ E ∈ E(π). It is not difficult to show that p ≤ ϕt −1(C(σ )) ≤ Ct −1 = ψtE−1 (C(τt+−1 )) on E

(19.20)

by the definition of τt+−1 . Next we look at the behaviour on F . By Lemma 19.6.7, we can find an arbitrage-free family ϕ F ∈ E(π ) such that   Ct −1 ≤ ϕtF−1,t max ϕtF (C(τ )) on F. τ ∈ Tt

Note that maxτ ∈Tt ϕtF (C(τ )) belongs to bt (C) by Proposition 19.4.4. Hence, we can use the induction hypothesis and Proposition 19.2.6 to infer that max ϕtF (C(τ )) ≤ χtF (C(τt+ )) τ ∈ Tt

for some arbitrage-free family χ F ∈ E(π). Since τt+−1 = τt+ on F , we get Ct −1 ≤ ϕtF−1,t (χtF (C(τt+−1 ))) on F by time consistency and monotonicity. Let ψ F ∈ E(π ) be an arbitrage-free family such that ψtF−1 = ϕtF−1,t ◦ χtF , which exists by Lemma 19.6.3. Then, we have p ≤ ϕt −1 (C(σ )) = Ct −1 ≤ ψtF−1 (C(τt+−1 )) on F.

(19.21)

19.6 Characterizing Market-Consistent Prices

381

Finally, we focus on G. Note that σ = σ ∨ t on G. Hence, by time consistency and monotonicity, we can write   ϕt −1 (C(σ )) = ϕt −1,t (ϕt (C(σ ∨ t))) ≤ ϕt −1,t max ϕt (C(τ )) on G. τ ∈ Tt

As maxτ ∈Tt ϕt (C(τ )) belongs to bt (C) by Proposition 19.4.4, we can use the induction hypothesis and Proposition 19.2.6 to infer that max ϕt (C(τ )) ≤ χtG (C(τt+ )) τ ∈ Tt

for some arbitrage-free family χ G ∈ E(π ). Consequently, ϕt −1 (C(σ )) ≤ ϕt −1,t (χtG (C(τt+−1 ))) on G by time consistency and monotonicity, where we used that τt+−1 = τt+ on G. Now, let ψ G ∈ E(π ) be an arbitrage-free family such that ψtG−1 = ϕt −1,t ◦ χtG , which exists by Lemma 19.6.3. Then, we conclude that p ≤ ϕt −1 (C(σ )) ≤ ψtG−1 (C(τt+−1 )) on G.

(19.22)

Finally, let ψ ∈ E(π ) be an arbitrage-free family satisfying ψt −1 = 1E ψtE−1 + 1F ψtF−1 + 1G ψtG−1 , which exists again by Lemma 19.6.3. As an immediate consequence of (19.20), (19.21), and (19.22) we get p ≤ ψt −1 (C(τt+−1 )), showing that p belongs to bt −1 (C(τt+−1 )) by Proposition 19.2.6. This concludes the induction argument and establishes the desired inclusion.  Market-Consistent Prices and Pricing Extensions Recall from Proposition 16.1.2 that arbitrage-free families of pricing extensions are in one-to-one correspondence with families of pricing functionals of a complete arbitragefree market that extends the original market. This allowed us to interpret any marketconsistent price (of a single-maturity payoff as well as of a payoff stream) as a marketconsistent price in a complete arbitrage-free market extending the original market. In view of Proposition 19.5.5, the next result extends this interpretation to American options.

382

19 Market-Consistent Prices for American Options

Theorem 19.6.9 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A we have % (

t (C) = max ψt (C(τ )) ; ψ ∈ E(π ) . τ ∈ Tt

Proof The inclusion “⊃” was established in Proposition 19.4.4. To show the inclusion “⊂”, take a market-consistent price p ∈ t (C). It follows from Theorems 19.6.4 and 19.6.8 that max ϕt (C(τ )) ≤ p ≤ max ψt (C(τ )) τ ∈ Tt

τ ∈ Tt

for two arbitrage-free families ψ ∈ E(π ). Now, for every A ∈ Pt consider the function fA : [0, 1] → R defined by fA (a) = max{aψt (C(τ )|A) + (1 − a)ϕt (C(τ )|A)}. τ ∈ Tt

Being the maximum of finitely many real-valued continuous functions on [0, 1], the function fA is easily seen to be continuous and to satisfy fA (0) = max ϕt (C(τ )|A), fA (1) = max ψt (C(τ )|A). τ ∈ Tt

τ ∈ Tt

It follows that p(A) = fA (zA ) for a suitable scalar zA ∈ [0, 1]. Setting Z=



zA 1A ∈ Xt ,

A∈Pt

we can equivalently write p = max{Zψt (C(τ )) + (1 − Z)ϕt (C(τ ))}. τ ∈ Tt

By Lemma 19.6.1, there exists a family χ ∈ E(π ) satisfying χt = Zϕt + (1 − Z)ψt . Then, we can rewrite p as p = max χt (C(τ )). τ ∈ Tt

This delivers the desired inclusion and concludes the proof.



Market-Consistent Prices and Pricing Densities We can invoke Proposition 16.2.3 to reformulate the preceding theorem in terms of pricing densities. The explicit verification is left as an exercise.

19.7 Sub- and Superreplication Prices

383

Theorem 19.6.10 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A we have

t (C) = max τ ∈ Tt

T 

EP [Dt,u 1{τ =u} Cu |Pt ] ; D ∈ D(π ) .

u=t

Market-Consistent Prices and Pricing Measures The link between pricing densities and pricing measures established in Proposition 16.3.2 allows us to immediately derive from Theorem 19.6.10 the following characterization of market-consistent prices in terms of pricing measures. To do this, we have to express payments in units of a given numéraire strategy. As usual, we adopt the “tilde” notation to denote the corresponding rescaled payments; see Remark 14.5.6. Theorem 19.6.11 Let θ ∈ S 0,T be the numéraire strategy and t ∈ {0, . . . , T − 1}. For  ∈A  we have every American option C

T    ! "  = max u |Pt ; Q ∈ Qθ (π) . t C

EQ 1{τ =u} C τ ∈ Tt

u=t

Remark 19.6.12 The preceding result can be equivalently formulated in the original unit of account as follows, see Exercise 19.10.11: Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A we have

t (C) = Vt [θ ] max τ ∈ Tt

19.7

T  u=t

 EQ

) 1{τ =u} Cu )) Pt ; Q ∈ Qθ (π ) . Vu [θ] )



Sub- and Superreplication Prices

This section is devoted to the study of sub- and superreplication prices for an American option, which are defined as the lower, respectively upper, bound of the corresponding set of market-consistent prices. As a preliminary step, it is useful to highlight the “interval-like” quality of the set of market-consistent prices. Proposition 19.7.1 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the set

t (C) is bounded and satisfies the following “interval-like” property: p, q ∈ t (C), r ∈ Xt , p ≤ r ≤ q ⇒ r ∈ t (C). In particular, {p(A) ; p ∈ t (C)} is a bounded interval for every A ∈ Pt .

384

19 Market-Consistent Prices for American Options

Proof To show that t (C) is bounded, take a marketed bound Z ∈ M+ t (C) such that πu,T (Z) > Cu for every u ∈ {t, . . . , T }. Moreover, set W = −VT [η] and note that W belongs to M− t (C) and satisfies πu,T (W ) < Cu for every u ∈ {t, . . . , T }. Then, every market-consistent price p ∈ t (C) must satisfy πt,T (W |A) < p(A) < πt,M (Z|A) for every A ∈ Pt . This establishes that t (C) is bounded. The “interval-like” property is immediate to verify.  The preceding result ensures that the sub- and superreplication prices introduced below are well-defined and can be interpreted as the boundaries of the “interval” of marketconsistent prices. Definition 19.7.2 (Sub/Superreplication Price) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. The subreplication price of C at date t is defined as πt− (C) := inf t (C). The superreplication price of C at date t is defined as πt+ (C) := sup t (C). For convenience, we also set πT− (C) := πT+ (C) := CT .



We start by showing some simple but useful characterizations of sub- and superreplication prices. Proposition 19.7.3 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) (ii) (iii) (iv)

πt− (C) = πt− (C(σ )) for every seller pricing strategy σ ∈ Tt s . πt− (C) = maxτ ∈Tt πt− (C(τ )). πt+ (C) = πt+ (C(σ )) for every buyer pricing strategy σ ∈ Tt b . πt+ (C) = maxτ ∈Tt πt+ (C(τ )).

Proof It is clear, see e.g. Proposition 19.4.5, that inf Πt (C) = inf Πts (C) and sup Πt (C) = sup Πtb (C). As a result, assertions (i) and (iii) follow from the definition of a seller, respectively buyer, pricing strategy. In view of items (i) and (iii), the remaining assertions follow immediately from Theorem 19.4.9 and Theorem 19.4.15, respectively.  A powerful and important result is that there always exist a marketed lower bound “realizing” the subreplication price and a marketed upper bound “realizing” the superreplication price. In particular, this result provides a justification for the language of suband superreplication prices.

19.7 Sub- and Superreplication Prices

385

Proposition 19.7.4 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) πt− (C) = sup{πt,T (Z) ; Z ∈ M− t (C)}. (ii) πt+ (C) = inf{πt,T (Z) ; Z ∈ M+ t (C)}. Moreover, the supremum in (i) and the infimum in (ii) are attained. Proof First, we focus on (i). We start by showing the inequality “≥”. To this effect, take a market-consistent price p ∈ t (C) and a marketed lower bound Z ∈ M− t (C). Note that we have C Z (τ ) ≤ C(τ ) for some exercise strategy τ ∈ Tt . Since p is a market-consistent seller price for C(τ ) at date t by Theorem 19.4.9, p ≥ πt (C Z (τ )) = πt,T (Z), where the equality is due to Proposition 19.3.2. Taking the infimum over p and the supremum over Z delivers the desired inequality. To conclude the proof, it suffices to − exhibit a marketed lower bound Z ∈ M− t (C) such that πt,T (Z) = πt (C). To this − s end, take a seller pricing strategy σ ∈ Tt and recall that πt (C) = πt− (C(σ )) by Proposition 19.7.3. Now, take a marketed lower bound Z ∈ M− t,T (Tt (C(σ ))) such that πt,T (Z) = πt− (C(σ )), which exists by Propositions 17.4.3 and 18.6.3. It remains to observe that C Z (σ ) ≤ C(σ ), showing that Z belongs to M− t (C). Now, we focus on (ii). We first establish the inequality “≤”. To this end, take a market-consistent price p ∈ t (C) and a marketed upper bound Z ∈ M+ t (C). By Theorem 19.4.15, there exists an exercise strategy τ ∈ Tt such that p is a market-consistent buyer price for C(τ ) at date t. Since C Z (τ ) ≥ C(τ ), we get p ≤ πt (C Z (τ )) = πt,T (Z) by Proposition 19.3.2. Taking the supremum over p and the infimum over Z delivers the desired inequality. To conclude the proof, it suffices to exhibit a marketed upper bound + Z ∈ M+ t (C) such that πt,T (Z) = πt (C), which is done by Propositions 19.4.13 and 19.7.3.  As a direct consequence of Proposition 19.7.3, sub- and superreplication prices are seen to enjoy the following “conditionality” property. (Since the set A does not carry a vector space structure, we are not entitled to use the language introduced for functionals defined on spaces of stochastic processes. This is why we put the term “conditional” between quotation marks.)

386

19 Market-Consistent Prices for American Options

Proposition 19.7.5 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A and every A ∈ Pt the following statements hold: (i) πt− (1A C t :T ) = 1A πt− (C). (ii) πt+ (1A C t :T ) = 1A πt+ (C). There is a simple and useful recursion formula for sub- and superreplication prices. To formulate it properly, note that every American option C ∈ A can be decomposed into the following two terms at any date t ∈ {0, . . . , T − 1}: • the exercise payoff Ct , • the residual option C t +1:T . The recursive formula, which captures a form of “time consistency”, says that the subreplication price of an American option is the maximum between the exercise payoff and the subreplication price of the residual option. Similarly, the superreplication price of an American option is the maximum between the exercise payoff and the superreplication price of the residual option. Proposition 19.7.6 (Time Consistency) Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: − − (i) πt− (C) = max{Ct , πt− (C t +1:T )} = max{Ct , πt,t +1 (πt +1 (C))}. + + (ii) πt+ (C) = max{Ct , πt+ (C t +1:T )} = max{Ct , πt,t +1 (πt +1 (C))}.

Proof We only prove (i). First of all, note that C t +1:T (τ ) ≤ C(τ ∨ (t + 1)) for every τ ∈ Tt . Hence, it is readily seen that sup πt− (C t +1:T (τ )) = sup πt− (C(τ )).

τ ∈ Tt

τ ∈Tt+1

Moreover, − sup πt− (C(τ )) = πt,t +1

τ ∈Tt+1



 sup πt−+1 (C(τ )) .

τ ∈Tt+1

The inequality “≤” is an immediate consequence of monotonicity. The inequality “≥” follows from Proposition 19.2.7. As a result, we have − sup πt− (C t +1:T (τ )) = πt,t +1

τ ∈ Tt



 sup πt−+1(C(τ )) .

τ ∈Tt+1

(19.23)

19.7 Sub- and Superreplication Prices

387

Now, we claim that % ( sup πt− (C(τ )) = max Ct , sup πt− (C(τ )) .

τ ∈ Tt

τ ∈Tt+1

(19.24)

The inequality “≥” is clear. To show the converse inequality, take any exercise strategy τ ∈ Tt and note that C(τ ) ≤ C(τ ∨ (t + 1)) on the event {τ ≥ t + 1}. Then, we have πt− (C(τ )) = 1{τ =t } Ct + 1{τ ≥t +1} πt− (C(τ )) ≤ 1{τ =t } Ct + 1{τ ≥t +1} πt− (C(τ ∨ (t + 1))) ≤ max{Ct , πt− (C(τ ∨ (t + 1)))} by monotonicity. This delivers the inequality “≤”. In view of Proposition 19.7.3, the desired assertions follow immediately from (19.23) and (19.24).  Representing Sub- and Superreplication Prices As a direct consequence of Theorem 19.6.9 we derive the following representation of suband superreplication prices. Proposition 19.7.7 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) πt− (C) = inf{maxτ ∈Tt ψt (C(τ )) ; ψ ∈ E(π )}. (ii) πt+ (C) = sup{maxτ ∈Tt ψt (C(τ )) ; ψ ∈ E(π )}. The representation of sub- and superreplication prices in terms of pricing densities follows immediately from Theorem 19.6.10. Proposition 19.7.8 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: % (  (i) πt− (C) = inf maxτ ∈Tt Tu=t EP [Dt,u 1{τ =u} Cu |Pt ] ; D ∈ D(π) . % (  (ii) πt+ (C) = sup maxτ ∈Tt Tu=t EP [Dt,u 1{τ =u} Cu |Pt ] ; D ∈ D(π ) . Finally, we can derive from Theorem 19.6.11 the following representation of sub- and superreplication prices in terms of pricing measures.

388

19 Market-Consistent Prices for American Options

Proposition 19.7.9 Let θ ∈ S 0,T be the numéraire strategy and t ∈ {0, . . . , T − 1}. For  ∈A  the following statements hold: every American option C ( %   ! " T  = inf maxτ ∈T  1 ; Q ∈ Q E |P (π ) . (i)  πt− C C {τ =u} u t Q θ u=t t ( %  ! "   T  = sup maxτ ∈T  (ii)  πt+ C u=t EQ 1{τ =u} Cu |Pt ; Q ∈ Qθ (π ) . t Remark 19.7.10 The preceding result can be equivalently formulated in the original unit of account as follows, see Exercise 19.10.11: Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: ) ' & % (  1 Cu ) ; Q ∈ Q (π ) . (i) πt− (C) = Vt [θ ] inf maxτ ∈Tt Tu=t EQ {τV=u} )P t θ & u [θ] ) ' % (  ) 1 C u (ii) πt+ (C) = Vt [θ ] sup maxτ ∈Tt Tu=t EQ {τV=u} )Pt ; Q ∈ Qθ (π ) . u [θ]



Market-Consistent Prices and Replicability The next result records another characterization of market-consistent prices that highlights when the bounds of the set of market-consistent prices are themselves market-consistent prices. Theorem 19.7.11 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A, every seller pricing strategy σ ∈ Tt s , and every A ∈ Pt the following statements hold: (i) If 1A C t :T is replicable at date t, then πt− (C|A) = πt+ (C|A) and {p(A) ; p ∈ t (C)} = {πt (1A C t :T |A)}. (ii) If 1A C t :T is not replicable at date t, then πt− (C|A) < πt+ (C|A) and {p(A) ; p ∈ t (C)} = [πt− (C|A), πt+ (C|A)) whenever 1A C(σ ) is replicable at date t, while {p(A) ; p ∈ t (C)} = (πt− (C|A), πt+ (C|A)) whenever 1A C(σ ) is not replicable at date t. Proof Assertion (i) is a direct consequence of Proposition 19.7.5. Then, we focus on (ii). It follows from Proposition 19.7.5 that πt− (C|A) < πt+ (C|A). Now, take a buyer pricing strategy τ ∈ Tt b and set C A = 1A C t :T ∈ A. The exercise strategy τA = 1A τ +1Ac T ∈ Tt is upper optimal for C A at date t by Proposition 19.7.5. Since C A (τA ) = 1A C(τ ), we infer

19.7 Sub- and Superreplication Prices

389

from Proposition 19.5.6 that 1A C(τ ) is not replicable at date t, so that πt+ (C|A) = πt+ (C(τ )|A) ∈ / {p(A) ; p ∈ bt (C(τ ))} by Theorem 18.6.11 and Proposition 19.7.3. As t (C) ⊂ bt (C(τ )), we conclude that / {p(A) ; p ∈ t (C)}. πt+ (C|A) ∈ Similarly, if 1A C(σ ) is not replicable at date t, then πt− (C|A) = πt− (C(σ )|A) ∈ / {p(A) ; p ∈ st (C(σ ))} by Theorem 18.6.11 and Proposition 19.7.3. As t (C) ⊂ st (C(σ )), we have / {p(A) ; p ∈ t (C)}. πt− (C|A) ∈ To conclude the proof, assume that 1A C(σ ) is replicable at date t. This implies that πt− (C|A) = πt− (C(σ )|A) ∈ {p(A) ; p ∈ t (C(σ ))}. again by Theorem 18.6.11 and Proposition 19.7.3. It now remains to observe that the inclusion t (C(σ )) ⊂ t (C) holds by Theorems 19.4.9 and 19.4.15.  In the case of single-maturity payoffs or payoff streams we saw that the only time that sub- and superreplication prices were market-consistent prices was when they coincided, i.e., when the contracts were replicable. The preceding result shows that, with American options, the situation is different. It is still true that, when sub- and superreplication prices coincide, then they represent the unique market-consistent price. In fact, this is the only circumstance under which the superreplication price can be a market-consistent price. This follows from Proposition 19.5.6 and Theorem 19.7.11. Proposition 19.7.12 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements are equivalent: (a) (b) (c) (d) (e) (f)

C is replicable at date t. πt+ (C) is a market-consistent price for C at date t. C(σ ) is replicable at date t for every buyer pricing strategy σ ∈ Tt b . C(σ ) is replicable at date t for some buyer pricing strategy σ ∈ Tt b . C(σ ) is replicable at date t for some upper-optimal strategy σ ∈ Tt + . πt− (C) = πt+ (C).

390

19 Market-Consistent Prices for American Options

It is, however, possible for the subreplication price to be market-consistent even though it does not coincide with the superreplication price. The next proposition lists a number of conditions for this to hold. Proposition 19.7.13 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements are equivalent: (a) (b) (c) (d)

πt− (C) is a market-consistent price for C at date t. C(σ ) is replicable at date t for every seller pricing strategy σ ∈ Tt s . C(σ ) is replicable at date t for some seller pricing strategy σ ∈ Tt s . C(σ ) is replicable at date t for every lower-optimal strategy σ ∈ Tt − .

Proof It follows from Theorem 19.7.11 that (a) implies (b). Clearly, (b) implies (c). Now, suppose that (c) holds and let σ ∈ Tt s be a seller pricing strategy such that C(σ ) is replicable at date t. Take any exercise strategy τ ∈ Tt − . Then, we can use Proposition 18.6.12 to get πt− (C(τ )) = πt− (C(σ )) ∈ st (C(σ )) ⊂ st (C(τ )), where the first equality and the inclusion follow from Theorem 19.4.9. This shows that C(τ ) is replicable at date t again by Proposition 18.6.12 and establishes (d). Finally, assume that (d) holds and take any seller pricing strategy σ ∈ Tt s . Since σ belongs to Tt − by Theorem 19.4.9, C(σ ) is replicable at date t by assumption. Now Theorem 19.7.11 ensures that (a) holds.  We show with a concrete example that the replicability of an American option has no impact on whether the corresponding subreplication price is a market-consistent price or not. Example 19.7.14 (Replicability of C Versus Market Consistency of πt− (C)) In the setting of Example 19.4.10 consider the American option a : 0 a

2 0 2 0

with a ∈ {0, 1}. It follows from Proposition 19.7.6 that − + π1− (C) = max{C1 , π1,2 (C2 )} = a, π1+ (C) = max{C1 , π1,2 (C2 )} = 2.

19.8 Market-Consistent Strategies

391

This shows that C is not replicable at date 1. Now, take a = 0 and note that

s1 (C) =



s1 (C(τ )) = {p ∈ X1 ; p > 0}

τ ∈ T1

by Theorem 19.4.9. This shows that π1− (C) is not a market-consistent price for C at date 1. In line with Proposition 19.7.13, the exercise strategy σ = 2 ∈ T1 is lower optimal for C at date 1, but C(σ ) is not replicable at date 1. Next, take a = 1. It follows from Theorem 19.4.9 that

s1 (C) =



s1 (C(τ )) = {p ∈ X1 ; p ≥ 1},

τ ∈ T1

showing that π1− (C) is a market-consistent price for C at date 1. In line with Proposition 19.7.13, the exercise strategy σ = 1 ∈ T1 is a seller pricing strategy for C at date 1 and C(σ ) is not replicable at date 1. 

19.8

Market-Consistent Strategies

In the preceding sections we focused on the study of the range of prices at which an agent should consider buying or selling an American option. However, an equally important question is: Which exercise strategies should the owner of an American option consider adopting? This question, which is new and needs to be asked because of the optionality of the new contracts, leads to the concept of a market-consistent exercise strategy. It is important to emphasize that we are not asking which particular strategy the owner should ultimately choose. In line with our approach, we are only looking for a set of exercise strategies such that, if the option were exercised according to one of them, the buyer could not have done better by trading directly in the market. Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. When evaluating the merits of a particular exercise strategy σ ∈ Tt , we need to imagine that the option holder commits to executing it and gives up all alternative exercise strategies. In addition, we have to bear in mind that exercising the option at some date u ∈ {t, . . . , T − 1} means receiving the exercise payoff Cu and giving up the residual option C t +1:T . The following two critical observations will motivate our definition of a market-consistent strategy: • On the event {σ = u} the holder will exercise the option. This is financially equivalent to selling (giving up) the residual option C u+1:T for the amount Cu . For this to be reasonable, Cu has to be a market-consistent seller price for C u+1:T . • On the event {σ > u} the holder will not exercise the option. Since the holder is committed to exercising according to σ , this is financially equivalent to buying the

392

19 Market-Consistent Prices for American Options

payoff stream C(τ ) for the price Cu , which is only reasonable provided Cu is a marketconsistent buyer price for C(τ ). The preceding discussion naturally leads to the following formal definition. Definition 19.8.1 (Market-Consistent Exercise Strategy) Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. We say that an exercise strategy σ ∈ Tt is market consistent for C at date t if the following properties hold: (1) For every u ∈ {t, . . . , T − 1} there is a market-consistent price p ∈ u (C u+1:T ) such that Cu ≥ p on {σ = u}. (2) For every u ∈ {t, . . . , T − 1} there is a market-consistent price p ∈ u (C(σ )) such that Cu ≤ p on {σ ≥ u + 1}.  Remark 19.8.2 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A and a market-consistent exercise strategy σ ∈ Tt . Then, by Theorem 19.4.15, condition (2) in Definition 19.8.1 implies (2 ) For every u ∈ {t, . . . , T − 1} there is a market-consistent price p ∈ u (C u+1:T ) such that Cu ≤ p on {σ ≥ u + 1}. This condition means that, on {σ ≥ u + 1}, the exercise payoff Cu should be a marketconsistent buyer price for the residual option C u+1:T . This clearly makes sense: Not exercising at time u can be interpreted as the holder effectively paying Cu to keep C u+1:T . One may thus wonder whether conditions (2) and (2 ) are equivalent. However, this is not true because (2 ) fails to capture the fact that the holder has committed to exercising according to σ . The following example shows that, as a result of this failure, (2 ) does not imply (2). Take the setting of Example 19.5.4 and consider the American option 2 3 2 : 0 2 3 2

4 0 4 0 4 0 4 0

It is easy to show that every strategy in T1 satisfies Condition (2 ). Moreover, it is also easy to see that every strategy in T1 satisfies Condition (1) in Definition 19.8.1. In particular,

19.8 Market-Consistent Strategies

393

the constant strategy σ = 2 ∈ T1 satisfies (1) and (2 ). However, σ fails to satisfy Condition (2).  Earliest and Latest Market-Consistent Exercise We start with a straightforward reformulation of the two special exercise strategies τt− and τt+ that we introduced in Sect. 19.6. From it we see that τt− corresponds to the first date at which the exercise payoff becomes a market-consistent seller price for the residual option, i.e., the earliest date at which the owner should consider exercising. By contrast, τt+ coincides with the first date at which the exercise payoff is no longer a market-consistent buyer price for the residual option, i.e., the latest date at which the agent should definitely exercise. Proposition 19.8.3 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) τt− = inf{u ∈ {t, . . . , T − 1} ; ∃p ∈ u (C u+1:T ) : Cu ≥ p} ∧ T . (ii) τt+ = inf{u ∈ {t, . . . , T − 1} ; Cu > p, ∀p ∈ u (C u+1:T )} ∧ T . (iii) τt− ≤ σ ≤ τt+ for every market-consistent strategy σ ∈ Tt at date t. Proof Assertions (i) and (ii) follow immediately from Theorem 19.6.9. To prove (iii), let σ ∈ Tt be market consistent for C at date t and take u ∈ {t, . . . , T − 1}. Since on the event {σ = u} ∩ {τt− ≥ u + 1} we have max ψu (C(τ )) ≤ Cu < max ψu (C(τ ))

τ ∈Tu+1

τ ∈ Tu

for some ψ ∈ E(π ) by Theorem 18.5.1, we infer that τt− ≤ σ . Similarly, since on the event {τt+ = u} ∩ {σ ≥ u + 1} we have max ψu (C(τ )) < Cu ≤ ψu (C(σ ))

τ ∈Tu+1

for some ψ ∈ E(π ) by Theorem 18.5.1, we infer that σ ≤ τt+ .



In fact, much more is true: For every American option C ∈ A and for every date t ∈ {0, . . . , T − 1}, the exercise strategy τt− , respectively τt+ , is the earliest, respectively the latest, market-consistent strategy for C at date t. Proposition 19.8.4 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) τt− is market consistent for C at date t. (ii) τt+ is market consistent for C at date t.

394

19 Market-Consistent Prices for American Options

Proof To establish (i), take any u ∈ {t, . . . , T − 1} and note that Cu ≥ max ψu (C(τ )) τ ∈Tu+1

on {τt− = u} for some ψ ∈ E(π). In addition, as τt− = τu− on {τt− ≥ u + 1}, it follows from Theorem 19.6.4 that Cu ≤ max ϕu (C(τ )) ≤ ψu (C(τu− )) = ψu (C(τt− )) τ ∈ Tu

on {τt− ≥ u + 1} for some ϕ ∈ E(π). This implies that τt− is market consistent for C at date t. Similarly, to show (ii), take any u ∈ {t, . . . , T − 1} and note that Cu ≥ max ψu (C(τ )) τ ∈Tu+1

on {τt+ = u} for some (in fact, every) ψ ∈ E(π ). In addition, as τt+ = τu+ on {τt+ ≥ u+1}, it follows from Theorem 19.6.8 that Cu ≤ max ψu (C(τ )) ≤ ϕu (C(τu+ )) = ϕu (C(τt+ )) τ ∈ Tu

on {τt+ ≥ u + 1} for some ϕ ∈ E(π ). Hence, τt+ is market consistent for C at date t.



Remark 19.8.5 It is easy to see that the set of market-consistent exercise strategies for an American option is not an “interval” in the sense that not all strategies that prescribe exercising after the earliest and before the latest market-consistent strategies are themselves market consistent.  A More Reasonable Last Date to Exercise in a Market-Consistent Way? In Proposition 19.4.13 we saw that, for every t ∈ {0, . . . , T } and every American option C ∈ A, there is a marketed upper bound X ∈ M+ t (C) such that πt,T (X) = max πt+ (C(τ )), τ ∈ Tt

or equivalently, by Proposition 19.7.3, such that πt,T (X) = πt+ (C). Since owning the marketed upper bound X is clearly more attractive than owning C, it would make sense to exercise as soon as the exercise payoff is sufficiently high to acquire

19.8 Market-Consistent Strategies

395

such an upper bound. This naturally leads to considering the following special exercise strategy. Definition 19.8.6 Let t ∈ {0, . . . , T }. For every American option C ∈ A, we define the random variable τt∗ (C) := min{u ∈ {t, . . . , T } ; Cu = πu+ (C)}. If the reference to the underlying American option is unambiguous, we simply write τt∗ instead of τt∗ (C).  As usual, we first verify that the above expression defines an exercise strategy. Lemma 19.8.7 Let t ∈ {0, . . . , T }. For every American option C ∈ A the random variable τt∗ is a well-defined exercise strategy in Tt . Moreover, for every t ∈ {0, . . . , T −1} we have τt∗ = τt∗+1 on {τt∗ ≥ t + 1}. Proof For every u ∈ {t, . . . , T } we can express the event {τt∗ ≤ u} as a union of Pu observable events as {τt∗ ≤ u} =

u 

{Cs = πs+ (C)}.

s=0

This immediately shows that {τt∗ ≤ u} ∈ Fu . By Proposition 19.2.3, τt∗ is an exercise  strategy in Tt . The recursive relation is obvious from the definition of τt∗ . The next proposition records some properties of the above special exercise strategy. In particular, it shows that it is both a market-consistent strategy and a buyer benchmark strategy and that it coincides with the earliest market-consistent strategy whenever the underlying American option is replicable. Proposition 19.8.8 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A the following statements hold: (i) τt∗ is a buyer pricing strategy for C at date t. (ii) τt∗ is a market-consistent strategy for C at date t. (iii) If C is replicable at date t, then τt∗ = τt− . Proof To prove (i), take any arbitrage-free family ψ ∈ E(π ). Since τt+ ≥ τt∗ by Proposition 19.7.7, for every u ∈ {t, . . . , T } we have Cu = πu+ (C) ≥ ψu (C(τu+ )) = ψu (C(τt+ )) on {τt∗ = u}

396

19 Market-Consistent Prices for American Options

by Proposition 19.7.7 and by conditionality. Therefore, ψt (τt∗ )

=1

{τt∗ =t }

Ct +

T 

ψt,u (1{τt∗ =u} Cu )

u=t +1

≥1

{τt∗ =t }

ψt (C(τt+ )) +

T 

ψt,u (1{τt∗ =u} ψu (C(τt+ )))

u=t +1

=1

{τt∗ =t }

ψt (C(τt+ )) +

T 

ψt,u (ψu (1{τt∗ =u} C(τt+ )))

u=t +1

=1

{τt∗ =t }

ψt (C(τt+ )) +

T 

ψt (1{τt∗ =u} C(τt+ ))

u=t +1

=

ψt (τt+ ).

Here, we have used the fact that for every u ∈ {t + 1, . . . , T } the payoff stream 1{τt∗ =u} C(τt+ ) is P-adapted because τt+ ≥ τt∗ . It follows from Proposition 19.2.6 that

bt (C(τt+ )) ⊂ bt (C(τt∗ )), in view of Theorems 19.4.15 and 19.6.8 yields the desired assertion. To show (ii), fix u ∈ {t, . . . , T − 1}. On the one side, note that for every marketconsistent price p ∈ u (C u+1:T ) Cu = πu+ (C) ≥ p on {τt∗ = u}. On the other side, since τt∗ = τu∗ on {τt∗ ≥ u + 1} and τu∗ is a buyer pricing strategy for C at date u by point (i), it follows from Proposition 19.7.3 that Cu < πu+ (C) = πu+ (C(τu∗ )) = πu+ (C(τt∗ )) on {τt∗ ≥ u + 1}. As a consequence, for every A ∈ Pu such that A ⊂ {τt∗ ≥ u + 1} we find a marketconsistent price pA ∈ u (C(τt∗ )) satisfying pA ≥ Cu (A). Take any p ∈ u (C(τt∗ )). Then, the market-consistent price q=



1A pA + 1{τt∗ ≤u} p ∈ u (C(τt∗ ))

A∈Pu A⊂{τt∗ ≥u+1}

is easily seen to satisfy q ≥ Cu on {τt∗ ≥ u + 1}. This shows that τt∗ is a market-consistent strategy for C at date t. Finally, we focus on (iii). Since τt∗ is a buyer pricing strategy for C at date t by point (i), we can apply Proposition 19.5.6 to infer that C(τt∗ ) is replicable at date t. Now, fix a

19.8 Market-Consistent Strategies

397

date u ∈ {t, . . . , T − 1} and set E = {τt− = u} ∩ {τt∗ ≥ u + 1} ∈ Fu . Moreover, take any A ∈ Pu such that A ⊂ E. Since τt∗ = τu∗ on A and C(τt∗ ) is replicable at date u, it follows that 1A C(τu∗ ) is replicable at date u. Now, by definition of τt− , we find an arbitrage-free family ψ ∈ E(π ) satisfying Cu = max ψu (C(τ )) ≥ ψu (C(τu∗ )) = πu+ (C(τu∗ )) on A. τ ∈ Tu

As τu∗ is a buyer pricing strategy for C at date u by point (i), Proposition 19.7.3 yields Cu ≥ πu+ (C) on A. This implies that τt∗ ≤ u on A, which is, however, impossible. As a consequence, the set E must be empty and {τt− = u} ⊂ {τt∗ ≤ u} must hold. This yields τt− ≥ τt∗ . Since the converse inequality always holds by market consistency, the proof is complete.  Remark 19.8.9 It is easy to show that the converse implication in point (iii) of Proposition 19.8.8 fails to hold in general; see Exercise 19.10.2.  ψ-Optimal Exercise Strategies We now exhibit a class of market-consistent exercise strategies, which, as we shall prove in Theorem 19.9.6, will turn out to encompass all market-consistent exercise strategies. Definition 19.8.10 (ψ-Optimal Strategy) Consider an American option C ∈ A and let t ∈ {0, . . . , T − 1}. Moreover, take an arbitrage-free family ψ ∈ E(π). An exercise strategy σ ∈ Tt is said to be ψ-optimal for C at date t whenever ψt (C(σ )) = max ψt (C(τ )). τ ∈ Tt

ψ

ψ

The set of ψ-optimal exercise strategies is denoted by Tt (C) or simply Tt if there is no ambiguity about C.  Proving that ψ-optimal exercise strategies are market consistent will require the following simple lemma, which we single out because it will also be used in the proof of Theorem 19.9.6. Lemma 19.8.11 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A and for every arbitrage-free family ψ ∈ E(π) we have

max ψt (C(τ )) = ψt,t +1

τ ∈Tt+1

 max ψt +1 (C(τ )) .

τ ∈Tt+1

398

19 Market-Consistent Prices for American Options

Proof Note that ψt (C(τ )) = ψt,t +1 (ψt +1 (C(τ ))) for every τ ∈ Tt +1 by time consistency. As a result, the inequality “≤” follows from the monotonicity of ψt,t +1 while the inequality “≥” is a direct consequence of Proposition 19.2.7.  We will also use the following characterization of ψ-optimal strategies. Proposition 19.8.12 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. For every ψ ∈ E(π ) and every σ ∈ Tt the following statements are equivalent: (a) σ is ψ-optimal for C at date t. (b) ψu (C(σ )) = maxτ ∈Tu ψu (C(τ )) on {σ ≥ u} for every u ∈ {t, . . . , T − 1}. (c) Cu = maxτ ∈Tu ψu (C(τ )) on {σ = u} and Cu ≤ ψu (C(σ )) on {σ ≥ u + 1} for every u ∈ {t, . . . , T − 1}. Proof It is clear that (b) implies (a). To show the converse implication, assume that (a) holds so that σ is ψ-optimal for C at date t, but there exist u ∈ {t, . . . , T − 1} and A ∈ Pu with A ⊂ {σ ≥ u} such that ψu (C(σ )|A) < max ψu (C(τ )|A). τ ∈ Tu

Let τ ∗ ∈ Tu be such that ψu (C(τ ∗ )|A) = max ψu (C(τ )|A). τ ∈ Tu

Setting τ = 1A τ ∗ + 1Ac σ ∈ Tt , it is readily seen that ψu (C(τ ∗ )) = 1A ψu (C(τ ∗ )) + 1Ac ψu (C(σ ))  ψu (C(σ )). Thanks to the strict monotonicity of ψt,u , we get ψt (C(τ ∗ ))  ψt (C(σ )). Since this is impossible due to the ψ-optimality of σ , we conclude that (b) must hold. Hence, (a) and (b) are equivalent. Note that (b) implies (c). To conclude the proof, we have to show the converse implication. To this end, assume that (c) holds. We prove that ψu (C(σ )) = max ψu (C(τ )) on {σ ≥ u} τ ∈ Tu

by backward induction on u ∈ {t, . . . , T }.

19.9 Characterizing Market-Consistent Strategies

399

Base Step The assertion is trivial when u = T . Induction Step Assume that for some u ∈ {t + 1, . . . , T } we have ψu (C(σ )) = max ψu (C(τ )) on {σ ≥ u} τ ∈ Tu

Then, by time consistency and conditionality, we get ψu−1 (C(σ )) = 1{σ =u−1} Cu−1 + 1{σ ≥u} ψu−1,u (ψu (C(σ )))   = 1{σ =u−1} Cu−1 + 1{σ ≥u} ψu−1,u max ψu (C(τ )) τ ∈ Tu

= 1{σ =u−1} Cu−1 + 1{σ ≥u} max ψu−1 (C(τ )) τ ∈ Tu

on {σ ≥ u−1} by induction hypothesis and Proposition 19.8.11. In view of our assumption on σ , we infer that ψu−1 (C(σ )) = max ψu−1 (C(τ )) on {σ ≥ u − 1}. τ ∈Tu−1

This concludes the induction argument and establishes that (b) holds.



In view of Theorems 19.2.6 and 19.6.9, the characterization of a ψ-optimal strategy recorded in condition (c) in the above proposition immediately implies that any exercise strategy that is ψ-optimal for some arbitrage-free system is market consistent. Corollary 19.8.13 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. If σ ∈ Tt is ψ-optimal for C at date t for some ψ ∈ E(π), then σ is market consistent for C at date t.

19.9

Characterizing Market-Consistent Strategies

In this final section we provide a characterization of market-consistent exercise strategies in terms of arbitrage-free extensions of the pricing functionals that is close in spirit to our characterization of market-consistent prices. The Replicable Case To better highlight the financial interpretation of the general characterization of marketconsistent strategies, we first provide a characterization in the replicable case. We start with the following simple result.

400

19 Market-Consistent Prices for American Options

Lemma 19.9.1 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A and every exercise strategy σ ∈ Tt that is market-consistent for C at date t we have πt− (C(σ )) ≥ πt− (C(τt+ )). Proof We establish this by showing that for every u ∈ {t, . . . , T } we have πu− (C(σ )) ≥ πu− (C(τt+ )) on {σ ≥ u}. We proceed by backward induction on u ∈ {t, . . . , T }. Base Step As πT− (C(σ )) = πT− (C(τt+ )) = CT , the assertion is clear when u = T . Induction Step Let u ∈ {t + 1, . . . , T } and assume that πu− (C(σ )) ≥ πu− (C(τt+ )) on the event {σ ≥ u}. Recall that τt+ ≥ σ by Proposition 19.8.4. Thanks to time consistency and conditionality, we get − − (C(σ )) = 1{σ =u−1} Cu−1 + 1{σ ≥u} πu−1,u (πu− (C(σ ))) πu−1 − ≥ 1{σ =u−1} ψu−1 (C(τu+ )) + 1{σ ≥u} πu−1,u (πu− (C(τt+ ))) − = 1{σ =u−1} ψu−1 (C(τt+ )) + 1{σ ≥u} πu−1 (C(τt+ )) − ≥ πu−1 (C(τt+ ))

for some arbitrage-free family ψ ∈ E(π ) by market consistency of σ and by Proposition 18.6.7. This concludes the induction argument.  We can now show that, for a replicable option, an exercise strategy is market consistent precisely when it is a pricing strategy. Proposition 19.9.2 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A that is replicable at date t. For every σ ∈ Tt the following statements are equivalent: (a) (b) (c) (d) (e) (f)

σ σ σ σ σ σ

is market consistent for C at date t. is a pricing strategy for C at date t. is upper optimal for C at date t and C(σ ) is replicable. is lower optimal for C at date t. is ψ-optimal for C at date t for every ψ ∈ E(π ). is ψ-optimal for C at date t for some ψ ∈ E(π ).

Proof Proposition 19.5.7 established that (b), (c), and (d) are equivalent and Corollary 19.8.13 that (f) implies (a). Moreover, (e) clearly implies (f).

19.9 Characterizing Market-Consistent Strategies

401

To prove that (a) implies (d), assume that σ is market consistent for C at date t. Since τt+ is a buyer pricing strategy for C at date t by Theorem 19.6.8, we infer from Proposition 19.5.6 that C(τt+ ) is replicable at date t. Hence, Lemma 19.9.1 yields πt− (C(σ )) ≥ πt− (C(τt+ )) = πt+ (C(τt+ )) = max πt+ (C(τ )) ≥ max πt− (C(τ )). τ ∈ Tt

τ ∈ Tt

This shows that σ is lower optimal for C at date t. Finally, to prove that (d) implies (e), assume that σ is lower optimal for C at date t and note that πt− (C(σ )) = πt− (C) = πt+ (C) ≥ πt+ (C(σ )) by Proposition 19.7.3. This shows that C(σ ) is replicable at date t by Proposition 18.6.12. As a result, for every ψ ∈ E(π ) we have ψt (C(σ )) = πt+ (C) ≥ max ψt (C(τ )) τ ∈ Tt

by Proposition 19.7.7, showing that σ is ψ-optimal for C at date t.



Remark 19.9.3 It was shown in Example 19.5.9 that the replicability condition in item (c) of Proposition 19.9.2 is necessary unless the market is complete.  An immediate consequence of Proposition 19.9.2 is a characterization of marketconsistent exercise strategies for American options in a complete market. Note that, in this case, the optimality and ψ-optimality of an exercise strategy clearly coincide. Corollary 19.9.4 Let t ∈ {0, . . . , T − 1} and consider an American option C ∈ A. If the market is complete, then for every σ ∈ Tt the following statements are equivalent: (a) σ is market consistent for C at date t. (b) σ is a pricing strategy for C at date t. (c) σ is optimal for C at date t. The General Case We now provide our announced characterization of market-consistent exercise strategies in terms of arbitrage-free extensions of the pricing functionals. More precisely, we show that a necessary and sufficient condition for an exercise strategy to be market consistent is that it is an optimal strategy in some of the complete arbitrage-free markets extending the original market.

402

19 Market-Consistent Prices for American Options

We start with the following preliminary result that will prove useful in our main proof below. Lemma 19.9.5 Let t ∈ {0, . . . , T } and consider a payoff stream X ∈ X . For all arbitrage-free families ϕ, ψ ∈ E(π ) there exists an arbitrage-free family χ ∈ E(π ) such that for every u ∈ {t, . . . , T } we have χu (X) ≥ max{ϕu (X), ψu (X)}. Proof We prove the statement by backward induction on t ∈ {0, . . . , T }. Base Step The assertion is trivial when t = T . Induction Step Let t ∈ {1, . . . , T } and assume that χu (X) ≥ max{ϕu (X), ψu (X)} for every u ∈ {t, . . . , T }, for some χ ∈ E(π ). Set E = {ϕt −1,t (χt (X)) ≥ ψt −1,t (χt (X))} ∈ Ft −1 . Define the arbitrage-free family φ ∈ E(π) by prescribing the one-step functionals as follows

φu,u+1 =

⎧ ⎪ ⎪ ⎨χu,u+1 , ⎪ ⎪ ⎩

if u ∈ {0, . . . , t − 2},

1E ϕt −1,t + 1E c ψt −1,t ,

if u = t − 1,

χu,u+1 ,

if u ∈ {t, . . . , T − 1}.

It is clear that φu (X) = χu (X) ≥ max{ϕu (X), ψu (X)} for every u ∈ {t, . . . , T }. Moreover, φt −1 (X) = Xt −1 + 1E ϕt −1,t (χt (X)) + 1E c ψt −1,t (χt (X)) ≥ Xt −1 + 1E ϕt −1,t (χt (X)) + 1E c ψt −1,t (χt (X)) = Xt −1 + max{ϕt −1,t (χt (X)), ψt −1,t (χt (X))} ≥ Xt −1 + max{ϕt −1,t (ϕt (X)), ψt −1,t (ψt (X))} = max{ϕt −1 (X), ψt −1 (X)} by the monotonicity of ϕt −1,t and ψt −1,t . This concludes the induction argument.



19.9 Characterizing Market-Consistent Strategies

403

Theorem 19.9.6 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A and for every σ ∈ Tt the following statements are equivalent: (a) σ is market consistent for C at date t. (b) σ is ψ-optimal for C at date t for some ψ ∈ E(π ). Proof In view of Proposition 19.8.12, it suffices to show that (a) implies (b). To this end, assume that σ is a market-consistent strategy for C at date t. We split the proof into three steps. Step 1. We claim that there exists ψ ∈ E(π ) such that for every u ∈ {t, . . . , T − 1} Cu = max ψu (C(τ )) on {σ = u}. τ ∈ Tu

To prove this, note first that for every u ∈ {t, . . . , T − 1} there exists an arbitrage-free family ψ u ∈ E(π ) such that Cu ≥ maxτ ∈Tu+1 ψuu (C(τ )) on {σ = u} by Theorem 19.6.9. In particular, for every u ∈ {t, . . . , T − 1} we have Cu = max ψuu (C(τ )) on {σ = u}. τ ∈ Tu

By composition, there exists ψ ∈ E(π ) with one-step functionals defined by

ψu,u+1

⎧ ⎨ψ t , = u,u+1 u−1 ⎩ 1{σ =s} ψ s s=t

u,u+1

if u ∈ {0, . . . , t}, u + 1{σ ≥u} ψu,u+1 ,

if u ∈ {t + 1, . . . , T − 1}.

Then, it is not difficult to see that for every u ∈ {t, . . . , T − 1} and every X ∈ X we have ψu (X) = ψuu (X) on {σ = u}, which yields Cu = max ψuu (C(τ )) = max ψu (C(τ )) on {σ = u}. τ ∈ Tu

τ ∈ Tu

This establishes the desired claim. Step 2. We claim that there exists ϕ ∈ E(π) such that for every u ∈ {t, . . . , T − 1} Cu ≤ ϕu (C(σ )) on {σ ≥ u + 1}. We prove this by backward induction on u ∈ {t, . . . , T − 1}. Note first that for every u ∈ {t, . . . , T − 1} there exists ϕ u ∈ E(π) satisfying Cu ≤ ϕuu (C(σ )) on {σ ≥ u + 1}.

404

19 Market-Consistent Prices for American Options

Base Step If u = T − 1, then it suffices to take ϕ = ϕ T −1 . Induction Step Let u ∈ {t + 1, . . . , T − 1} and assume that there exists ϕ ∈ E(π ) such that Cs ≤ ϕs (C(σ )) on {σ ≥ s + 1} for every s ∈ {u, . . . , T − 1}. Then, by Lemma 19.9.5 we can find χ ∈ E(π ) satisfying Cs ≤ max{ϕs (C(σ )), ϕsu−1 (C(σ ))} ≤ χs (C(σ )) on {σ ≥ s + 1} for every s ∈ {u − 1, . . . , T − 1}. This concludes the induction argument. Step 3. We claim that there exists an arbitrage-free family φ ∈ E(π ) such that for every u ∈ {t, . . . , T } φu (C(σ )) = ϕu (C(σ )) on {σ ≥ u}. By composition, there exists φ ∈ E(π ) with one-step functionals defined by

φu,u+1

⎧ ⎨ψ if u ∈ {0, . . . , t − 1}, u,u+1 , = ⎩1{σ ≤u} ψu,u+1 + 1{σ ≥u+1} ϕu,u+1 , if u ∈ {t, . . . , T − 1}.

We prove that φ satisfies the claim by backward induction on u ∈ {t, . . . , T }. Base Step The assertion is trivial when u = T . Induction Step Let u ∈ {t + 1, . . . , T } and assume that φu (C(σ )) = ϕu (C(σ )) on the event {σ ≥ u}. Then, we easily see that φu−1 (C(σ )) = 1{σ =u−1} Cu−1 + 1{σ ≥u} φu−1,u (φu (C(σ ))) = 1{σ =u−1} Cu−1 + 1{σ ≥u} ϕu−1,u (ϕu (C(σ ))) = ϕu−1 (C(σ )) on {σ ≥ u − 1} by time consistency and conditionality. This concludes the induction argument.

19.9 Characterizing Market-Consistent Strategies

405

By Proposition 19.8.12, the desired assertion will be proved if we show that for every u ∈ {t, . . . , T } we have φu (C(σ )) = max φu (C(τ )) on {σ ≥ u}.

(19.25)

τ ∈ Tu

We establish the claim by backward induction on u ∈ {t, . . . , T }. Base Step The assertion is trivial when u = T . Induction Step Assume that for some u ∈ {t + 1, . . . , T } we have φu (C(σ )) = max φu (C(τ )) on {σ ≥ u}. τ ∈ Tu

Note that for every u ∈ {t, . . . , T − 1} and every X ∈ X we have φu (X) = ψu (X) on {σ = u}. Then, by time consistency and conditionality and by Lemma 19.8.11, φu−1 (C(σ )) = 1{σ =u−1} Cu−1 + 1{σ ≥u} φu−1,u (φu (C(σ )))   = 1{σ =u−1} max ψu (C(τ )) + 1{σ ≥u} φu−1,u max φu (C(τ )) τ ∈ Tu

τ ∈ Tu

= 1{σ =u−1} max φu (C(τ )) + 1{σ ≥u} max φu−1 (C(τ )) τ ∈ Tu

τ ∈ Tu

on {σ ≥ u − 1}. This implies that φu−1 (C(σ )) ≥ max φu−1 (C(τ )) on {σ ≥ u − 1}. τ ∈ Tu

As φu−1 (C(σ )) ≥ Cu−1 on {σ ≥ u − 1}, we actually have φu−1 (C(σ )) = max φu−1 (C(τ )) on {σ ≥ u − 1}. τ ∈Tu−1

This concludes the induction argument and establishes the desired assertion.



In Proposition 19.9.2 we have seen that, for a replicable American option, the notion of a market-consistent strategy is equivalent to that of a pricing strategy. We conclude by showing that these two notions are independent in the case of a nonreplicable option.

406

19 Market-Consistent Prices for American Options

Example 19.9.7 (Not All Market-Consistent Strategies Are Pricing Strategies) In the setting of Example 19.4.10 consider the American option 2 0 2 0

1 : 0 1

It is immediate to verify that τ1− = 1 and τ1+ = 2, so that π1− (C) = π1− (C(τ1− )) = C1 = 1, + π1+ (C) = π1+ (C(τ1+ )) = π1,2 (C2 ) = 2,

by Proposition 19.7.3. Now, set A = {ω1 , ω2 } and consider the exercise strategy given by σ = 1A + 21Ac ∈ T1 . It is easy to see that σ is market consistent for C at date 1 and satisfies − π1− (C(σ )) = 1A C1 + 1Ac π1,2 (C2 ) = 1A , + π1+ (C(σ )) = 1A C1 + 1Ac π1,2 (C2 ) = 1A + 21Ac .

Consequently, σ is neither lower, nor upper optimal for C at date 1. In particular, σ is neither a seller, nor a buyer pricing strategy for C at date 1 by Theorems 19.4.9 and 19.4.15.  Example 19.9.8 (Not All Pricing Strategies Are Market Consistent) In the setting of Example 19.5.4 consider the American option 2 0 1 : 0 1 0 1

2 1 0 0 0 0 0 0

19.10 Exercises

407

Set A = {ω1 , ω2 } and B = {ω1 , . . . , ω4 }. It can be easily verified that τ1− = τ1+ = 2. Since τ1− is a seller pricing strategy for C at date 1 by Theorem 19.6.4, we get

s1 (C) = s1 (C(τ1− )) = s1,2 (C2 ) = {p ∈ X1 ; p(B) > 1, p(B c ) ≥ 1}. Similarly, since τ1+ is a buyer pricing strategy for C at date 1 by Theorem 19.6.8, we get

b1 (C) = b1 (C(τ1+ )) = b1,2 (C2 ) = {p ∈ X1 ; p(B) < 2, p(B c ) ≤ 1}. Now, consider the exercise strategy τ = 31A + 21Ac ∈ T1 and note that

s1 (C(τ )) = s1,3 (1A C3 + 1Ac C2 ) = {p ∈ X1 ; p(B) > 1, p(B c ) ≥ 1},

b1 (C(τ )) = b1,3 (1A C3 + 1Ac C2 ) = {p ∈ X1 ; p(B) < 2, p(B c ) ≤ 1}. We infer that τ is both a seller and a buyer pricing strategy for C at date 1. However, τ is clearly not market consistent for C at date 1 by Proposition 19.8.3, because it requires to exercise later than τ1+ on the event A. 

19.10 Exercises In all exercises below we consider the multi-period economy described in Chap. 14 and adhere to the market specifications introduced there. In addition, we assume that the market is arbitrage free. Exercise 19.10.1 Assume that the market is complete and consider an American option C ∈ A. Show that for every t ∈ {0, . . . , T − 1} we have πt (C) = max{Ct , πt,t +1 (πt +1 (C))}. This means that the market-consistent price of the option at date t is the maximum between the exercise value Ct and the market-consistent price of the residual option πt,t +1(πt +1 (C)), which is sometimes called the continuation value of the option at date t. Exercise 19.10.2 Let t ∈ {0, . . . , T − 1}. For every American option C ∈ A consider the following statements: (i) C is replicable at date t. (ii) τt− = τt+ .

408

19 Market-Consistent Prices for American Options

Show that neither of the above assertions implies the other. Exercise 19.10.3 Let 0 ≤ s < t < T . Show that for an American option C ∈ A and an arbitrage-free extension ψs,t ∈ E(πs,t ) we may have ψs,t (p) ∈ s (C). p ∈ t (C) ⇒ Exercise 19.10.4 Let t ∈ {0, . . . , T − 1} and take a positive replicable payoff Z ∈ Mt,T . (i) Show that every exercise strategy in Tt is a market-consistent exercise strategy for C Z at date t. (ii) Deduce that τt− (C Z ) = t and τt+ (C Z ) = T . Exercise 19.10.5 Let t ∈ {0, . . . , T } and consider an American option C ∈ A. Show that an exercise strategy σ ∈ Tt may fail to be market consistent for C at date t even though τt− ≤ σ ≤ τt+ . Exercise 19.10.6 Let t ∈ {0, . . . , T } and consider an American option C ∈ A. For every −,ψ :  → {t, . . . , T } by ψ ∈ E(π ) define a random variable τt −,ψ

τt

% ( := min u ∈ {t, . . . , T } ; Cu = max ψu (C(τ )) . τ ∈ Tu

+,ψ

Similarly, define a random variable τt +,ψ

τt

:  → {t, . . . , T } by

% ( := min u ∈ {t, . . . , T − 1} ; Cu > max ψu (C(τ )) ∧ T . τ ∈Tu+1

−,ψ

+,ψ

and τt are well-defined exercise strategies in Tt . (i) Show that τt −,ψ +,ψ (ii) Show that τt and τt are both ψ-optimal for C at date t. (iii) For every ψ-optimal strategy σ ∈ Tt for C at date t we have −,ψ

τt

+,ψ

≤ σ ≤ τt

.

−,ψ

; ψ ∈ E(π )}. (iv) Show that τt− = inf{τt +,ψ ; ψ ∈ E(π)}. (v) Show that τt+ = sup{τt (vi) Show that the infimum in item (iii) and the supremum in item (iv) are attained. Exercise 19.10.7 Let t ∈ {0, . . . , T } and consider an American option C ∈ A. Show that for every σ ∈ Tt and every ψ ∈ E(π ) the following statements are equivalent: (a) σ is ψ-optimal for C at date t.

19.10 Exercises

409

(b) For every u ∈ {t, . . . , T − 1} the following conditions hold: (1) Cu = maxτ ∈Tu ψu (C(τ )) on {σ = u}. (2) maxτ ∈Tu ψu (C(τ )) = ψu,u+1 (maxτ ∈Tu+1 ψu+1 (C(τ ))) on {σ ≥ u + 1}. (c) For every u ∈ {t, . . . , T − 1} the following conditions hold: (1) Cu = maxτ ∈Tu ψu (C(τ )) on {σ = u}. +,ψ (2) σ ≤ τt . Exercise 19.10.8 Let t ∈ {0, . . . , T }. For every American option C ∈ A define τt−− := min{u ∈ {t, . . . , T } ; Cu = πu− (C)}. (i) (ii) (iii) (iv) (v)

Show that τt−− is a well-defined exercise strategy in Tt . Show that always τt−− ≤ τt− . Show that in general τt−− = τt− . Deduce that in general τt−− is not market consistent for C at date t. Show that τt−− = τt− if the market is complete.

Exercise 19.10.9 Let r ∈ (−1, ∞) and assume that S 1 satisfies St1 = (1 + r)t for every t ∈ {0, . . . , T }. Let i ∈ {2, . . . , N} and p ∈ (0, ∞) and consider the American call option C ∈ A given by C = (0, max{S1i − p, 0}, . . . , max{STi − p, 0}). Assume that {STi > p} is nonempty. Moreover, fix t ∈ {0, . . . , T − 1}. (i) Show that for every arbitrage-free family ψ ∈ E(π) we have Sti − p < ψt,T (max{STi − p, 0}) − ψt,T (max{p − STi , 0}). (ii) Deduce that for every arbitrage-free family ψ ∈ E(π ) we have Ct  ψt,T (CT ). (iii) Use this to infer that τt− = T . Deduce that the only market-consistent strategy for C at date t is the constant strategy τ = T ∈ Tt . In words, exercising an American call option before maturity is not market consistent. Show that the same conclusion may not hold for an American put option.

410

19 Market-Consistent Prices for American Options

Exercise 19.10.10 Let  = {ω1 , . . . , ω6 } and let P(ω1 ) = · · · = P(ω6 ) = 16 . Consider a two-period market with information structure P given by P0 = {}, P1 = {{ω1 , ω2 , ω3 }, {ω4 , ω5 , ω6 }}, P2 = {{ω1 }, {ω2 }, {ω3 }, {ω4 }, {ω5 }, {ω6 }}. We consider two basic securities specified by 1 1 1 1 1 1

1 1

: 1

1 1

6 2

4

: 3

2

8 4 6 2 3 1

We know from Exercise 16.4.8 that the market is incomplete and arbitrage free. Consider the American option 1 : 0

0 1

(i) (ii) (iii) (iv)

2 0 1 2 0 1

Determine the set of market-consistent prices for C at date 0. Determine the sub- and superreplication price for C at date 0. Deduce whether C is replicable at date 0 or not. Identify all the market-consistent exercise strategies for C at date 0.

Exercise 19.10.11 Let t ∈ {0, . . . , T − 1} and let R ∈ R be a rescaling process. Show  that is replicable at date t we have  ∈A that for every American option C    = Rt π  πt C

 T  C0 C ,..., . R0 RT

19.10 Exercises

411

 show that  ∈A Moreover, for every American option C    = Rt t t C



 T  C0 C ,..., , R0 RT

   = Rt πt−  πt− C

 T  C0 C ,..., , R0 RT

   = Rt πt+ C

 T  C0 C . ,..., R0 RT

 πt+

Exercise 19.10.12 Let t ∈ {0, . . . , T } and let θ ∈ S 0,T be the numéraire strategy. For T by  ∈A  and every τ ∈ Tt define C τ ∈ X every American option C τ (ω) (ω). τ (ω) := C C Show that τ = C

T 

u . 1{τ =u} C

u=t

Deduce that the following assertions hold: (   % ! "  = max EQ C τ |Pt ; Q ∈ Qθ (π) , t C

τ ∈ Tt

   =  πt− C    =  πt+ C

inf

! " τ |Pt , max EQ C

sup

! " τ |Pt . max EQ C

Q∈Qθ (π) τ ∈Tt

Q∈Qθ (π) τ ∈Tt

Moreover, if the market is complete, then    = UtC  πt C   under the (unique) for every t ∈ {0, . . . , T }, where U C ∈ X is the P-Snell envelope of C pricing measure Q ∈ Qθ (π ), as defined in Exercise 13.6.8.

A

Sets and Maps

We assume from the reader a working acquaintance with the basic language of sets and mappings. This appendix explains the particular set theoretic notation and terminology, as well as some standard logical symbols, we use in this book. We do this from an informal and “naive” perspective, which is, however, sufficient for our purposes. A nice treatment of set theory as is required for most purposes in mathematics can be found in Halmos [12]. We also recall some basic facts from combinatorics, a very intuitive account of which can be found in Hausner [15], Chung and AitSahlia [4], or Ross [21].

A.1

Logical Symbols

If S and T are two statements, then “S ⇒ T ” means that “statement S implies statement T ” and “S ⇐⇒ T ” that “statement S is equivalent to statement T ”. Moreover, if E is an expression, then F := E means that the symbol F is defined by the expression E. For a set A, the expression “x ∈ A” means that “x is an element of A” and “x ∈ / A” that x is not an element of A. Let P be a property which may or may not hold for an element x ∈ A. Then, • • • •

{x ∈ A ; P (x)} denotes the set of all elements in A for which property P holds. “∀x ∈ A : P (x) . . . ” means “for every x ∈ A such that property P holds . . . ”. “∃x ∈ A : P (x)” means “there exists an x ∈ A for which property P holds”. “P (x), ∀x ∈ A” or, shorter, “P (x), x ∈ A” both mean “property P holds for every x ∈ A”.

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1

413

414

A.2

A Sets and Maps

Sets and Maps

We use the following standard notation for the two sets of numbers we encounter in this book: • N is the set of natural numbers (without zero). • R is the set of real numbers. The empty set is denoted by ∅. If S is a subset of A, then we write S ⊂ A. Subsets of A that are not identical to A are said to be proper. Let I be a set. If Ai is a set for every i ∈ I , then we denote by 

Ai

i∈I

the union of the sets Ai , i.e. the set consisting precisely of all the elements of the sets Ai . Similarly, we denote by 

Ai

i∈I

the intersection of the sets Ai , i.e., the set consisting precisely of all the elements belonging to each of the sets Ai . The (Cartesian) product of a finite number of sets A1 , . . . , An is the set A1 × · · · × An consisting of all n-tuples (x1 , . . . , xn ) with x1 ∈ A1 , . . . , xn ∈ An . For every set A we write · · × A. . An := A + × ·,n

times

We say that the sets A and B are disjoint whenever A ∩ B = ∅. If I is an index set and Ai is a set for every i ∈ I , then we say that the sets Ai are pairwise disjoint if Ai and Aj are disjoint for all distinct i, j ∈ I . The set difference of the sets A and B is defined by A \ B := {x ∈ A ; x ∈ / B}. If S is a subset of the set A, the complement of S relative to A is the set S c := A \ S. Most of the times it is clear from the context what A is and we just speak of the “complement of S”. The following result is used frequently.

A

Sets and Maps

415

Proposition A.2.1 (De Morgan’s Laws) Let I be an index set and let Ai be a set for every i ∈ I . Then, the following statements hold:  c (i) A = Ac .  i∈I i c i∈I ic (ii) = i∈I Ai . i∈I Ai Let A and B be two sets. If f : A → B is a map from A to B we denote by f (x) the value or image of f at x ∈ A. Sometimes we write x → f (x) to indicate that f assigns the value f (x) to x ∈ A. The sets A and B are the domain and the codomain of f , respectively. We use the terms “map”, “mapping”, and “function” synonymously. Recall that the map f : A → B is injective if f (x1 ) = f (x2 ) implies x1 = x2 for all x1 , x2 ∈ A, and that f is surjective if for each y ∈ B there exists x ∈ A such that f (x) = y. If f is injective and surjective, then we say that f is bijective or, equivalently, that it is a one-toone correspondence between A and B. In this case, the map f −1 : B → A defined by f −1 (y) := x, where x is the unique element of A such that f (x) = y, is well defined and is called the inverse of f . Consider a third set C and maps f : A → B and g : B → C. The composition of f and g is the map g ◦ f : A → C defined by g ◦ f (x) := g(f (x)). If f : A → B is a map and S ⊂ A, then the image of S under f is the set f (S) := {f (x) ; x ∈ S}. The image of f is the set f (A). For E ⊂ B, the pre-image of E under f is the set f −1 (E) := {x ∈ A ; ∃y ∈ E : f (x) = y}. We say that two sets A and B have the same cardinality if there exists a one-to-one correspondence between A and B. Two finite sets have the same cardinality if and only if they have the same number of elements. The number of elements of a finite set, its cardinality, is denoted by card(A). The concept of cardinality for infinite sets is more subtle, but we will not need it here. We will only need the following differentiation of infinite sets. A set A is said to be countable provided there exists a one-to-one correspondence between A and N. If an infinite set is not countable, it is said to be uncountable.

416

A.3

A Sets and Maps

Basic Combinatorics

In this section we recall some elementary results from combinatorics that we illustrate using urn models. Consider a set A with n elements representing an urn containing n different balls. The type of problem we look at now involves counting the number of ways in which one can take a sample of, or draw, k balls from A. There are two features of sampling experiments that need to be specified to make the problem well defined: • Replacement: Are the balls replaced, i.e., returned to the urn, after each draw? In this case the same ball can be drawn several times. • Order relevance: Does the order in which we draw the different balls matters? Sampling with Replacement (Order Matters) The simplest case is when we sample with replacement in such a way that order matters. A typical outcome of this sampling experiment is an ordered tuple that can be viewed as the element of a Cartesian product. The Basic Counting Principle tells us what is the cardinality of a Cartesian product (involving finite sets). Proposition A.3.1 (Basic Counting Principle) Let A1 , . . . , An be finite sets. Then, we have card(A1 × · · · × An ) =

n 

card(Ai ).

i=1

The Basic Counting Principle tells us how many different outcomes are possible in a sampling experiment with replacement in which order matters. Proposition A.3.2 There are nk ways to draw k balls from an urn containing n balls if, after being drawn, balls are replaced in the urn, and if the order in which the balls are drawn matters. Sampling Without Replacement (Order Matters) The next simplest case is when we sample without replacement, but still keep track of the order in which the balls are drawn. Before stating the result, we recall that, for each n ∈ N, the n-factorial is the number n! := 1 · 2 · · · · · (n − 1) · n. For n = 0, we set 0! := 1. n! ways to draw k balls from an urn containing n balls Proposition A.3.3 There are (n−k)! if, after being drawn, balls are not replaced in the urn, and if the order in which the balls are drawn matters.

A

Sets and Maps

417

n! A special case arises when k = n. Then (n−k)! = n! and we can view n! as the number of possible permutations or ordered arrangements of the n balls.

Sampling Without Replacement (Order Does Not Matter) We now draw k balls from the urn without replacing them and without paying attention to the order in which they are drawn. Recall that, for n ∈ N and k ∈ {0, . . . , n}, the kth binomial coefficient at level n is defined by

 n n! . := (n − k)! k! k   Proposition A.3.4 There are nk ways to draw k balls from an urn containing n balls if, after being drawn, balls are not replaced in the urn, and if the order in which the balls are drawn does not matter. This sampling experiment is equivalent to selecting a subset of A with k elements. Hence, counting the number of possible outcomes in this sampling experiment is the same as counting the number of subsets of A that contain k elements. Corollary A.3.5 Let A be a set with cardinality n.  every k ∈ {0, . . . , n} the number  For of subsets of A that contain k elements is equal to nk . The following proposition records a variety of useful properties of binomial coefficients and explains where the term “binomial coefficient” comes from. Proposition A.3.6 (Binomial Formula) For every n ∈ N and for all a, b ∈ R the following statements hold:    (i) nk=0 nk a k bn−k = (a + b)n . n n k n−k (ii) k=0 k ka b = na(a + b)n−1 . n n 2 k n−k = na(a + b)n−1 (1 + (n − 1)a). (iii) k=0 k k a b Sampling with Replacement (Order Does Not Matter) The sampling experiment with replacement in which the order does not matter is the most difficult of the sampling experiments.   Proposition A.3.7 There are n+k−1 ways to draw k balls from an urn containing n balls k if, after being drawn, balls are replaced in the urn, and if the order in which the balls are drawn does not matter.

B

Vector Spaces

Vector spaces are the subject matter of linear algebra. The classical example is the Euclidean space. Vector spaces are one of the basic mathematical structures and the language of linear algebra is required in virtually all areas of applied mathematics. In this appendix we recall the notion of a vector space and list some basic results from linear algebra that are used frequently in the book. We mainly focus on the case of finitedimensional vector spaces. For a detailed treatment of linear algebra in a finite-dimensional setting we refer to Axler [2] or the classical Halmos [12].

B.1

The Vector Space Axioms

In this book we only consider real vector spaces, whose definition we recall next. Definition B.1.1 (Vector Space) A set V is called a vector space, or linear space, over R if it is equipped with an operation, called sum, that assigns to all x, y ∈ V a unique element x + y ∈ V , and an operation, called product, that assigns to all a ∈ R and x ∈ V a unique element ax ∈ V , such that the following axioms hold for all x, y, z ∈ V and a, b ∈ R: (1) (2) (3) (4) (5)

x + y = y + x (commutative law). (x + y) + z = x + (y + z) (associative law). x + 0 = x for some element 0 ∈ V (existence of a zero vector). x + (−x) = 0 for some element −x ∈ V (existence of an inverse). a(x + y) = ax + ay (distributive law for sum).

© Springer Nature Switzerland AG 2020 P. Koch-Medina, C. Munari, Market-Consistent Prices, https://doi.org/10.1007/978-3-030-39724-1

419

420

B Vector Spaces

(6) (a + b)x = ax + bx (distributive law for addition of scalars). (7) (ab)x = a(bx) (distributive law for product of scalars). (8) ax = x whenever a = 1 (normalization of product). For any x, y ∈ V we simply write y − x instead of y + (−x). In particular, x − x = 0. The elements of V are called vectors. In this context, real numbers are called scalars.  A key operation in a vector space is the building of linear combinations. Definition B.1.2 (Linear Combination) For all x1 , . . . , xm ∈ V and a1 , . . . , am ∈ R the vector m 

ai xi := a1 x1 + · · · + am xm

i=1

is called a linear combination of x1 , . . . , xm . The scalars a1 , . . . , am are called the coefficients of the linear combination. 

B.2

Linear Subspaces

Standing Assumption Throughout this section we fix a vector space V .

A subset of V is called a linear subspace if it is “closed” under the vector space operations, i.e., if it is itself a vector space when equipped with the operations it inherits from V . Definition B.2.1 (Linear Subspace) A subset M ⊂ V is said to be a linear subspace of V if x + y ∈ M for all x, y ∈ M and ax ∈ M for all a ∈ R and x ∈ M.  The algebraic operations on V can be used to define subsets of V in a natural way. Definition B.2.2 (Operations for Sets) Let A, B ⊂ V and take a ∈ R. The sum of A and B is the set A + B := {x + y ; x ∈ A, y ∈ B}. The product of A by a is the set aA := {ax ; x ∈ A}.



B Vector Spaces

421

Proposition B.2.3 Let M and N be linear subspaces of V . Then, M + N is also a linear subspace of V . Moreover, aM is also a linear subspace of V for every a ∈ R. Given a nonempty subset A of V , the set span(A) is the smallest linear subspace containing A. Here is a constructive definition. Definition B.2.4 (Span) The set of all linear combinations of vectors in a set A ⊂ V is called the span of A and is denoted by span(A), i.e., span(A) :=

m 

ai xi ; a1 , . . . , am ∈ R, x1 , . . . , xm ∈ A, m ∈ N .

i=1

If A consists of a single vector x, then we simply write span(x) instead of span({x}). If M = span(A), then we say that M is spanned, or generated, by A. 

B.3

Bases and Dimensions

Standing Assumption Throughout this section we fix a vector space V .

A basis of V is a “minimal” set of vectors whose span coincides with V . Definition B.3.1 (Linear Independence, Basis) We say that x1 , . . . , xm ∈ V are linearly independent if, for every choice of a1 , . . . , am ∈ R, we have m 

ai xi = 0 ⇒ a1 = · · · = am = 0 .

i=1

A subset A ⊂ V is said to be linearly independent if x1 , . . . , xm are linearly independent for every finite selection of elements x1 , . . . , xm ∈ A. A linearly independent set that spans V is called a basis for V .  Proposition B.3.2 (Existence of a Basis) Every vector space V admits a basis and every basis has the same cardinality. The previous result makes the following definition meaningful.

422

B Vector Spaces

Definition B.3.3 (Finite-Dimensional Vector Space) We say that V is finite dimensional if it is spanned by a finite number of vectors. In this case, the dimension of V is defined by dim(V ) := card B, where B is an arbitrary basis of V . If V is not finite dimensional, then it is said to be infinite dimensional. 

Standing Assumption For the remainder of this section V is a finite-dimensional vector space.

Proposition B.3.4 (Coordinates) Let B = {x1 , . . . , xm } be a basis for V . Then, for every x ∈ V , there exist unique scalars a1 (x), . . . , am (x) ∈ R such that x=

m 

ai (x)xi .

i=1

The above coefficients are called the coordinates of x with respect to B. We can always extract a basis from a set that spans the entire vector space. Proposition B.3.5 Assume that A ⊂ V satisfies span(A) = V . Then, A contains a basis of V . Conversely, every linearly-independent set can be completed to a basis. Proposition B.3.6 Let A ⊂ V be a linearly-independent set. Then, there exists a basis B for V such that A ⊂ B. The following proposition provides a useful formula for the dimension of the sum of two linear subspaces. Proposition B.3.7 For all linear subspaces M, N ⊂ V we have dim(M + N) = dim(M) + dim(N) − dim(M ∩ N). In particular, if M ∩ N = {0}, then dim(M + N) = dim(M) + dim(N).

B Vector Spaces

423

Definition B.3.8 (Linear Complement) Let M, N ⊂ V be linear subspaces. We say that N is a linear complement of M (in V ) if: (1) M + N = V . (2) M ∩ N = {0}. In this case, V is also said to be the direct sum of M and N and one writes V = M ⊕ N.  Proposition B.3.9 Let M ⊂ V be a proper linear subspace of V . Then, M admits a linear complement (in fact, infinitely many). If N ⊂ V is a linear complement of M, then every x ∈ V can be written in a unique way as x = xM + xN with xM ∈ M and xN ∈ N. Moreover, dim(N) = dim(V ) − dim(M).

B.4

Linear Maps

Standing Assumption Throughout this section we fix two finite-dimensional vector spaces V and W .

Linear maps are maps between vector spaces that preserve the vector space operations in the sense recalled below.

Definition B.4.1 (Linear Map) A map ϕ : V → W is said to be linear if it satisfies the following properties for all x, y ∈ V and a ∈ R:

(1) ϕ(x + y) = ϕ(x) + ϕ(y) (additivity).
(2) ϕ(ax) = aϕ(x) (homogeneity).

If W = R, then we say that ϕ is a linear functional. The set of all linear maps between V and W is denoted by L(V, W).
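For instance, ϕ : R² → R given by ϕ(x, y) = 2x + 3y is a linear functional: ϕ((x, y) + (u, v)) = 2(x + u) + 3(y + v) = ϕ(x, y) + ϕ(u, v) and ϕ(a(x, y)) = 2ax + 3ay = aϕ(x, y). By contrast, ψ(x, y) = x + 1 fails additivity and is therefore not linear.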


The vector space operations on V and W can be used to define vector space operations on L(V, W) in a natural way.

Definition B.4.2 (Operations) Consider two maps ϕ : V → W and ψ : V → W. The sum of ϕ and ψ is the map ϕ + ψ : V → W defined by (ϕ + ψ)(x) := ϕ(x) + ψ(x). Similarly, if a ∈ R, the product of ϕ by a is the map aϕ : V → W defined by (aϕ)(x) := aϕ(x).

Proposition B.4.3 The set L(V, W) is a vector space and dim(L(V, W)) = dim(V) dim(W). In particular, we have dim(L(V, R)) = dim(V).

The set of all vectors in V that are mapped to the zero vector in W by a given linear map is a linear subspace of V.

Definition B.4.4 (Kernel) Consider a map ϕ : V → W. The kernel of ϕ is defined by ker(ϕ) := {x ∈ V ; ϕ(x) = 0}.

Proposition B.4.5 For every linear map ϕ : V → W the set ker(ϕ) is a linear subspace of V.

The next result shows that the image of a linear map is a linear subspace of the codomain.

Proposition B.4.6 For every linear map ϕ : V → W the set ϕ(V) is a linear subspace of W. Moreover, if B is a basis for V, then ϕ(V) = span(ϕ(B)).

We have the following relationship between the dimension of the kernel and the dimension of the image of a linear map.

Proposition B.4.7 For every linear map ϕ : V → W we have

dim(V) = dim(ker(ϕ)) + dim(ϕ(V)).

In particular, for every nonzero linear functional ϕ : V → R we have dim(V) = dim(ker(ϕ)) + 1.
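For instance, the linear functional ϕ : R³ → R with ϕ(x, y, z) = x has kernel {(0, y, z) ; y, z ∈ R}, a subspace of dimension 2, and image R, of dimension 1, in line with dim(R³) = 2 + 1.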

C Normed Spaces

A normed space is a vector space equipped with a function, the norm, that assigns to every vector a scalar number, called the length, or norm, of the vector. There is a natural notion of distance in every normed space, which allows us to introduce topological notions such as open, closed, or compact sets and continuous functions. The classical example of a finite-dimensional normed space is the Euclidean space equipped with the Euclidean norm. This appendix reviews some basic results from the theory of finite-dimensional normed spaces. For a thorough treatment of the topic we refer to Amann and Escher [1], Axler [2], Halmos [12], and Kreyszig [17].

C.1 The Normed Space Axioms

Definition C.1.1 (Normed Space) A normed (vector) space is a vector space V equipped with a function ‖·‖ : V → R, called the norm on V, satisfying the following axioms for all x, y ∈ V and for every a ∈ R:

(1) ‖x‖ ≥ 0 (positivity).
(2) ‖x‖ = 0 if and only if x = 0 (discrimination).
(3) ‖ax‖ = |a| ‖x‖ (positive homogeneity).
(4) ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality).
All possible norms that can be defined on a finite-dimensional vector space are equivalent in the sense specified by the proposition below. In fact, one can show that the equivalence of all possible norms characterizes the finite-dimensionality of normed spaces.


Proposition C.1.2 Let ‖·‖1 and ‖·‖2 be two norms on a finite-dimensional vector space V. Then, there exist two constants c1, c2 ∈ (0, ∞) such that for every x ∈ V we have

c1 ‖x‖2 ≤ ‖x‖1 ≤ c2 ‖x‖2.

Amongst the many ways to define a norm on the Cartesian product of two normed spaces, we highlight two in the next proposition. We focus on the product of two normed spaces; the extension to an arbitrary finite number of normed spaces is obvious.

Proposition C.1.3 Let V and W be finite-dimensional normed spaces with respective norms ‖·‖V and ‖·‖W. For every p ∈ [1, ∞) the map ‖·‖p : V × W → R defined as

‖(x, u)‖p := (‖x‖V^p + ‖u‖W^p)^(1/p)

is a norm on V × W. Moreover, the map ‖·‖max : V × W → R defined as

‖(x, u)‖max := max{‖x‖V, ‖u‖W}

is also a norm on V × W.

C.2 Norm and Topology

Standing Assumption Throughout this section we fix a finite-dimensional normed space V with norm ‖·‖.

There is a natural notion of distance between two vectors in a normed space V.

Definition C.2.1 (Distance) For all vectors x, y ∈ V we define the distance between x and y by d(x, y) := ‖x − y‖.

The next proposition shows that every normed space is a metric space with respect to the distance function induced by the norm.


Proposition C.2.2 The following statements hold for all x, y, z ∈ V:

(i) d(x, y) ≥ 0 (positivity).
(ii) d(x, y) = 0 if and only if x = y (discrimination).
(iii) d(x, y) = d(y, x) (symmetry).
(iv) d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality).

Definition C.2.3 (Ball, Bounded Set) For every x ∈ V and every r ∈ (0, ∞) the ball of radius r centered at x is the set

Br(x) := {y ∈ V ; d(x, y) ≤ r}.

A set A ⊂ V is said to be bounded if A ⊂ Br(0) for some r ∈ (0, ∞).

Normed spaces carry a natural topological structure, i.e., we can define open and closed sets in a natural way.

Definition C.2.4 Let A be a subset of V. The set A is said to be open whenever for each x ∈ A there exists r ∈ (0, ∞) such that Br(x) ⊂ A. The set A is said to be closed whenever Aᶜ is open.

Arbitrary unions and finite intersections of open sets are open. Similarly, arbitrary intersections and finite unions of closed sets are closed.

Proposition C.2.5 Let I be an arbitrary index set and let Ai be a subset of V for every i ∈ I. Then, the following statements hold:

(i) If Ai is open for every i ∈ I, then ∪i∈I Ai is also open. If, in addition, I is finite, then ∩i∈I Ai is also open.
(ii) If Ai is closed for every i ∈ I, then ∩i∈I Ai is also closed. If, in addition, I is finite, then ∪i∈I Ai is also closed.

The last topological concept we require is that of compactness.

Definition C.2.6 (Compact Set) Let A be a subset of V. An open cover for A is a family A = {Ai ; i ∈ I} consisting of open subsets of V such that A ⊂ ∪i∈I Ai. If A is an open cover for A and J ⊂ I, then we say that B = {Aj ; j ∈ J} is a subcover for A if A ⊂ ∪j∈J Aj. The set A is said to be compact if every open cover for A admits a finite subcover.

The following result is a consequence of V being finite dimensional.


Proposition C.2.7 (Heine-Borel Theorem) A subset of V is compact if and only if it is closed and bounded.
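For instance, in V = R with the norm given by the absolute value, the interval (0, 1) is open, the interval [0, 1] is closed and bounded, and the set (0, 1] is neither open nor closed. By the Heine-Borel Theorem, [0, 1] is compact, while (0, 1] is not, since it fails to be closed.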

C.3 Sequences and Convergence

Let A be a subset of V. Recall that a sequence taking values in A is a map f : N → A. We follow the custom of setting xn = f(n) for every n ∈ N and denoting the sequence by (xn). We write (xn) ⊂ A to indicate that (xn) is a sequence taking values in A. A subsequence of (xn) is a sequence (yn) such that yn = xα(n) for some map α : N → N that is strictly increasing, i.e., α(m) < α(n) for all m, n ∈ N such that m < n.

In normed spaces there is a natural notion of convergence of sequences.

Definition C.3.1 (Convergence) A sequence (xn) ⊂ V is said to converge (with respect to ‖·‖) to a vector x ∈ V if for every r ∈ (0, ∞) there exists a number nr ∈ N such that xn ∈ Br(x) for every n ∈ N with n ≥ nr. If (xn) converges to x, then we write xn → x.

Proposition C.3.2 (Limit) A sequence (xn) ⊂ V can converge to at most one vector in V, which is then called the limit of (xn). If this is the case, every subsequence of (xn) converges to the same limit.

A sequence converges to some vector if and only if the distance between its elements and the limit vector goes to 0 as specified below.

Proposition C.3.3 For every sequence (xn) ⊂ V and every vector x ∈ V the following statements are equivalent:

(i) xn → x.
(ii) ‖xn − x‖ → 0.

Convergence in a finite-dimensional normed space is essentially a "convergence by coordinates"; see Proposition B.3.4.

Proposition C.3.4 Let B be a basis of V. Set m = dim(V) and for every x ∈ V denote by a1(x), . . . , am(x) ∈ R the coordinates of x with respect to B. Then, for every sequence (xn) ⊂ V and every x ∈ V the following statements are equivalent:

(a) ai(xn) → ai(x) for every i ∈ {1, . . . , m}.
(b) xn → x.
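For instance, in R² the sequence xn = (1/n, 1 − 1/n) converges to x = (0, 1), since both coordinate sequences converge: 1/n → 0 and 1 − 1/n → 1.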


The following result captures the fact that the algebraic and topological structures in a normed vector space are "consistent".

Proposition C.3.5 For all sequences (xn), (yn) ⊂ V and (an) ⊂ R and for all x, y ∈ V and a ∈ R the following statements hold:

(i) If xn → x and yn → y, then xn + yn → x + y.
(ii) If xn → x and an → a, then an xn → ax.

The fact that V is a metric space allows us to characterize closedness and compactness in terms of sequences.

Proposition C.3.6 For every set A ⊂ V the following statements hold:

(i) A is closed if and only if it is sequentially closed, i.e., for every x ∈ V and every sequence (xn) ⊂ A we have

xn → x ⇒ x ∈ A.

(ii) A is compact if and only if it is sequentially compact, i.e., every sequence in A admits a subsequence converging to an element of A.

C.4 Continuous Maps

Standing Assumption Throughout this section we fix two finite-dimensional normed spaces V and W with respective norms ‖·‖V and ‖·‖W.

Definition C.4.1 (Continuous Map) Let x ∈ V. A map ϕ : V → W is said to be continuous at x if, for every ε ∈ (0, ∞), there exists r ∈ (0, ∞) such that ‖ϕ(x) − ϕ(y)‖W ≤ ε holds for every y ∈ Br(x). We say that ϕ is continuous (on V) if it is continuous at every point of V.

As recorded next, continuity can be characterized in terms of sequences.


Proposition C.4.2 Let x ∈ V. For every map ϕ : V → W the following statements are equivalent:

(a) For every sequence (xn) ⊂ V we have xn → x ⇒ ϕ(xn) → ϕ(x).
(b) ϕ is continuous at x.

It is easy to see that a linear map is continuous if and only if it is continuous at 0. Hence, the following result implies the automatic continuity of linear maps between finite-dimensional normed spaces.

Proposition C.4.3 For every linear map ϕ : V → W there exists a constant c ∈ (0, ∞) such that ‖ϕ(x)‖W ≤ c‖x‖V for every x ∈ V. In particular, ϕ is continuous.

The image of a compact set under a continuous map is again compact.

Proposition C.4.4 For every compact set A ⊂ V and every continuous map ϕ : V → W the set ϕ(A) is compact.

A famous and extremely useful corollary to the preceding result is that continuous functionals attain their maximum and their minimum on a compact set.

Proposition C.4.5 (Weierstrass Theorem) For every compact set A ⊂ V and every continuous functional ϕ : V → R there exist xmin, xmax ∈ A such that

ϕ(xmax) = sup{ϕ(x) ; x ∈ A} and ϕ(xmin) = inf{ϕ(x) ; x ∈ A}.
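For instance, the continuous functional ϕ(x) = x(1 − x) on the compact set A = [0, 1] ⊂ R attains its maximum 1/4 at xmax = 1/2 and its minimum 0 at xmin ∈ {0, 1}. On the non-compact set (0, 1), by contrast, the infimum 0 of the same functional is not attained.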

D Inner-Product Spaces

An inner-product space is a vector space with an additional operation assigning to every pair of vectors a scalar number, called the inner product of the vectors. Every inner product induces a norm in a natural way so that the length of a vector is well defined. The inner product allows us, in addition, to define the notion of the angle between two vectors. Inner-product spaces generalize the Euclidean space, where the inner product is called the scalar product. This appendix reviews some basic results from the theory of inner-product spaces. For a comprehensive treatment of the topic we refer to Amann and Escher [1], Axler [2], Halmos [12], and Kreyszig [17].

D.1 The Inner-Product Space Axioms

Definition D.1.1 (Inner-Product Space) An inner-product space is a vector space V equipped with a function (· , ·) : V × V → R, called the inner product, satisfying the following axioms for all x, y, z ∈ V and a ∈ R:

(1) (x, x) ≥ 0 (positivity).
(2) (x, x) = 0 if and only if x = 0 (discrimination).
(3) (x, y) = (y, x) (symmetry).
(4) (x + z, y) = (x, y) + (z, y) (additivity).
(5) (ax, y) = a(x, y) (homogeneity).

Every inner product induces a norm. Hence, every inner-product space is automatically a normed space.


Proposition D.1.2 Let (· , ·) be an inner product on the vector space V. Then, the function ‖·‖ : V → R defined by ‖x‖ := √(x, x) is a norm, called the norm induced by (· , ·).

The next result highlights further connections between an inner product and the associated norm.

Proposition D.1.3 Let (· , ·) be an inner product on the vector space V and let ‖·‖ be its induced norm. The following statements hold for all x, y ∈ V:

(i) (x, y) = (‖x + y‖² − ‖x‖² − ‖y‖²)/2.
(ii) |(x, y)| ≤ ‖x‖ ‖y‖.

The inequality in (ii), which is referred to as the Cauchy-Schwarz inequality, is strict if and only if x and y are linearly independent.

The Cartesian product of inner-product spaces carries a natural inner-product structure. Here, we consider the product of two spaces; the definition extends to an arbitrary finite number of spaces in an obvious way.

Proposition D.1.4 Let V and W be finite-dimensional inner-product spaces with respective inner products (· , ·)V and (· , ·)W. Then, the map (· , ·)2 : (V × W) × (V × W) → R defined as

((x, u), (y, v))2 := (x, y)V + (u, v)W

is an inner product on V × W. Moreover, its associated norm is the ‖·‖2 norm defined in Proposition C.1.3.
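For instance, for x = (1, 2) and y = (3, 1) in R² with the scalar product, (x, y) = 5, while ‖x‖ ‖y‖ = √5 · √10 = √50 ≈ 7.07, so the Cauchy-Schwarz inequality is strict here, as it must be, since x and y are linearly independent.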

D.2 Orthogonal Vectors

Standing Assumption We assume that V is a finite-dimensional inner-product space with inner product (· , ·) and associated norm ‖·‖.


The notion of an angle between two vectors is motivated by the fact that, by the Cauchy-Schwarz inequality, for all nonzero vectors x, y ∈ V we have

−1 ≤ (x, y) / (‖x‖ ‖y‖) ≤ 1.

Definition D.2.1 (Orthogonal Vectors) Consider two nonzero vectors x, y ∈ V. The angle formed by x and y is the unique number ϑ(x, y) ∈ [0, π] such that

cos ϑ(x, y) = (x, y) / (‖x‖ ‖y‖).

We say that x and y are orthogonal if they form an angle equal to π/2, i.e., if (x, y) = 0 holds.

The following result provides a generalization of the Pythagorean theorem to the context of inner-product spaces.

Proposition D.2.2 (Pythagorean Theorem) For all orthogonal vectors x, y ∈ V we have

‖x + y‖² = ‖x‖² + ‖y‖².

Definition D.2.3 (Orthonormal System) A finite set A = {x1, . . . , xm} ⊂ V is said to be an orthonormal system if ‖xi‖ = 1 for every i ∈ {1, . . . , m} and (xi, xj) = 0 for all i, j ∈ {1, . . . , m} with i ≠ j. An orthonormal basis for V is an orthonormal system that is also a basis.

Proposition D.2.4 If A = {x1, . . . , xm} ⊂ V is an orthonormal system, then m ≤ dim(V) and A is a linearly-independent set. Moreover, A is contained in an orthonormal basis for V. In particular, V admits an orthonormal basis.

D.3 Orthogonal Projections

Standing Assumption We assume that V is a finite-dimensional inner-product space with inner product (· , ·) and associated norm ‖·‖.


Definition D.3.1 (Orthogonal Complement) Let A be a subset of V. The orthogonal complement of A is the set

A⊥ := {x ∈ V ; (x, y) = 0 for all y ∈ A}.

The following result collects some crucial properties of the orthogonal complement of a linear subspace.

Proposition D.3.2 For every linear subspace M ⊂ V the set M⊥ is a linear subspace satisfying the following properties:

(i) V = M + M⊥.
(ii) M ∩ M⊥ = {0}.
(iii) (M⊥)⊥ = M.

In particular, M⊥ is a linear complement of M and, hence, each vector x ∈ V can be uniquely decomposed into x = xM + xM⊥ for suitable xM ∈ M and xM⊥ ∈ M⊥.

Proposition D.3.3 For every set A ⊂ V the following statements are equivalent:

(a) A⊥ = {0}.
(b) span(A) = V.

Based on Proposition D.3.2 we can define the notion of an orthogonal projection.

Definition D.3.4 (Orthogonal Projection) Let M ⊂ V be a linear subspace. The map PM : V → M defined by setting PM(x) := xM, where xM is defined as in Proposition D.3.2, is called the orthogonal projection of V onto M.

We provide a simple characterization of orthogonal projections.


Proposition D.3.5 Let M ⊂ V be a linear subspace. Then, PM is the only map from V to M such that:

(i) PM is linear.
(ii) PM(x) = x for every x ∈ M.
(iii) PM(x) = 0 for every x ∈ M⊥.

In particular, we have ker(PM) = M⊥ and PM(V) = M.

The orthogonal projection enjoys the following important optimality property.

Proposition D.3.6 Let M ⊂ V be a linear subspace. Then, for every x ∈ V we have

‖PM(x) − x‖ = inf{‖y − x‖ ; y ∈ M}.
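For instance, take M = span((1, 1)) in R² with the scalar product and x = (1, 2). Writing x = xM + xM⊥ with xM = (3/2, 3/2) ∈ M and xM⊥ = (−1/2, 1/2) ∈ M⊥, we obtain PM(x) = (3/2, 3/2), and ‖PM(x) − x‖ = 1/√2 is precisely the distance from x to M.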

D.4 Riesz Representation

Standing Assumption We assume that V is a finite-dimensional inner-product space with inner product (· , ·) and associated norm ‖·‖.

It is clear that for every vector y ∈ V the mapping x → (x, y) from V to R defines a linear functional on V. The Riesz Representation Theorem states that, in fact, these are the only linear functionals on V.

Proposition D.4.1 (Riesz Representation) For every linear functional ϕ : V → R there exists a unique vector z ∈ V such that ϕ(x) = (z, x) for every x ∈ V. Moreover, we have z = 0 if ϕ is the zero functional and

z = (ϕ(y) / ‖y‖²) y

for any nonzero y ∈ ker(ϕ)⊥ if ϕ is nonzero.
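For instance, the linear functional ϕ(x1, x2) = 2x1 − x2 on R² with the scalar product is represented by z = (2, −1): indeed, ker(ϕ) = span((1, 2)), and choosing y = (2, −1) ∈ ker(ϕ)⊥ gives ϕ(y) = 5 = ‖y‖², hence z = y and ϕ(x) = (z, x).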

Bibliography

1. H. Amann, J. Escher, Analysis I (Birkhäuser, Boston, 2005)
2. S. Axler, Linear Algebra Done Right (Springer, Cham, 2015)
3. F. Black, M. Scholes, The Pricing of Options and Corporate Liabilities, J. Polit. Econ. 81, 637–654 (1973)
4. K.L. Chung, F. AitSahlia, Elementary Probability Theory: With Stochastic Processes and an Introduction to Mathematical Finance (Springer, New York, 2006)
5. N.J. Cutland, A. Roux, Derivative Pricing in Discrete Time (Springer, London, 2013)
6. D. Duffie, Dynamic Asset Pricing Theory, 3rd edn. (Princeton University Press, 2010)
7. Ph. Dybvig, S.A. Ross, Arbitrage, in The New Palgrave Dictionary of Economics, ed. by J. Eatwell, M. Milgate, P. Newman, vol. I (Macmillan, New York, 1987), pp. 100–106
8. R.J. Elliott, P.E. Kopp, Mathematics of Financial Markets (Springer, 2004)
9. H. Föllmer, A. Schied, Stochastic Finance: An Introduction in Discrete Time (De Gruyter, Berlin, 2017)
10. D. Gillies, Philosophical Theories of Probability (Routledge, London, 2000)
11. I. Hacking, The Emergence of Probability: A Philosophical Study of Early Ideas About Probability, Induction and Statistical Inference (Cambridge University Press, Cambridge, 2006)
12. P.R. Halmos, Finite-Dimensional Vector Spaces (Springer, New York, 1974)
13. J.M. Harrison, D.M. Kreps, Martingales and Arbitrage in Multiperiod Securities Markets, J. Econ. Theory 20, 381–408 (1979)
14. J.M. Harrison, S.R. Pliska, Martingales and Stochastic Integrals in the Theory of Continuous Trading, Stoch. Process. Appl. 11, 215–260 (1981)
15. M. Hausner, Elementary Probability Theory (Springer, New York, 1977)
16. P. Koch-Medina, S. Merino, Mathematical Finance and Probability: A Discrete Introduction (Birkhäuser, Basel, 2003)
17. E. Kreyszig, Advanced Engineering Mathematics (Wiley, Hoboken, 2011)
18. R. Mansuy, The Origins of the Word "Martingale", Electron. J. Hist. Probab. Stat. 5, 1–10 (2009)
19. R.C. Merton, Theory of Rational Option Pricing, Bell J. Econ. Manag. Sci. 4, 141–183 (1973)
20. S.R. Pliska, Introduction to Mathematical Finance (Blackwell, 1997)
21. S.M. Ross, A First Course in Probability (Pearson, London, 2012)
22. S. Shreve, Stochastic Calculus for Finance I: The Binomial Asset Pricing Model (Springer, 2005)
23. J. Van der Hoek, R.J. Elliott, Binomial Models in Finance (Springer, 2006)


Index

A American option, 344 Arbitrage payoff, 127, 263 price, 126, 262 Arbitrage-free extension, 137, 281 market, 128, 263 Arbitrage-free family pricing densities, 287 pricing extensions, 281 Arrow-Debreu market, 116, 251 payoff, 115, 251 Atom, 160

B Ball, 62, 427 Basic security, 106, 239 Basis, 421 Binomial coefficient, 417 formula, 417 Bond, 108 Boundedness, 62, 427

C Call option, 109 Cardinality, 415 Cartesian product, 414 Cauchy-Schwarz inequality, 74, 432 Claim, 105, 238 Closedness, 64, 427

Compactness, 66, 427 Complement, 414 Composition, 415 Conditional expectation, 178, 180 functional, 190 probability, 177 Cone, 10 hull, 10 Continuity, 69, 429 Convergence, 62, 428 Convex combination, 8 functional, 12 hull, 8 set, 7 Convexity, 7, 12, 195 Coordinate, 422 Correlation, 48 Countable set, 415 Covariance, 47

D Density pricing, 138, 287 Radon-Nikodym, 51 Riesz, 96, 213 Derivative, 109 Dimension, 422 Directed family, 19 Distance, 62, 426 Distribution, 54


E Economy, 103, 235 Event, 1 Exchange option, 110 Exercise strategy, 346 market-consistent, 392 optimal, 359, 362, 367, 397 pricing, 358, 362, 369, 374 Expectation, 40 conditional, 178, 180 Expected value, 40 Extension, 87, 205 arbitrage-free, 137, 281 pricing, 137, 281

I Independence, 34, 37 Independent events, 34 random variables, 37 Indicator, 3 Inequalities for processes, 224 for random variables, 15 Infimum, 17 Information structure, 222 Inner product, 74, 431 Inner-product space, 74, 431 Intersection, 414

F Factorial, 416 Field, 163 generated, 169, 171 generated by partition, 162 Filtration, 222 Financial contract, 105, 238 Forward-looking truncation, 319 Functional, 5 concave, 12 conditional, 190 continuous, 69, 429 convex, 12 extension, 87 linear, 6, 423 monotone, 20 operations, 6 positive, 21 positively homogeneous, 13 pricing, 130, 270, 327 strictly-monotone, 20 strictly-positive, 21 subadditive, 13 sublinear, 13 superadditive, 13 superlinear, 13

J Jensen Inequality, 57

H Hahn-Banach Theorem, 85 Heine-Borel Theorem, 428

K Kernel, 86, 424 Kolmogorov Theorem, 180

L Law of One Price, 128, 266 Law of Total Expectation, 179 Law of Total Probability, 178 Limit, 62, 428 Linear combination, 420 complement, 423 functional, 6, 423 independence, 421 map, 423 subspace, 420 Linearity, 6, 194, 423 Localization, 192, 228

M Map concave, 195 continuous, 429 convex, 195 injective, 415

linear, 194, 423 monotone, 197 one-to-one, 415 positive, 198 positively homogeneous, 195 strictly-monotone, 197 strictly-positive, 198 subadditive, 195 sublinear, 195 superadditive, 195 superlinear, 195 surjective, 415 Market, 106, 239 arbitrage-free, 128, 263 Arrow-Debreu, 116, 251 complete, 115, 249 Cox-Ross-Rubinstein, 257 multinomial, 257 Market-consistent exercise strategy, 392 price, 130, 149, 270, 305, 327, 332, 356 Marketed bound, 148, 304, 331, 355 payoff, 112, 246 payoff stream, 322 space, 112, 246 Martingale, 226 Maximum, 16 Mean square error, 44 Measurability, 166 Measure pricing, 141 probability, 25 risk-neutral, 142 Minimum, 16 Monotonicity, 20, 197

N Norm, 60, 425 Normed space, 60, 425 Numéraire, 104, 238 portfolio, 117 strategy, 252

O Observable event, 162 One-to-one correspondence, 415

441 Operations for maps, 190, 228 for processes, 223 for random variables, 4 Option American, 344 European call, 109 European put, 109 exchange, 110 Orthogonal complement, 76, 434 projection, 78, 434 Orthogonality, 76, 433 Orthonormality, 433

P Partition, 160 coarser, 165 discrete, 161 finer, 165 generated, 169, 171 trivial, 161 Payoff, 105, 238 Payoff stream, 318 Portfolio, 110, 241 market, 112 numéraire, 117 replicating, 112 Positive cone, 16 functional, 21 process, 224 random variable, 15 Positive homogeneity, 13, 195 Positivity, 21, 198 Price, 105, 238 Pricing density, 138, 287 extension, 137, 281 functional, 130, 270, 327 measure, 141, 294 Probability conditional, 177 distribution, 29, 54 mass function, 28 measure, 25 space, 25 support, 25

442 Probability measure dominated, 50 equivalent, 50 Process, 223 adapted, 225 positive, 224 rescaling, 118, 251 strictly-positive, 224 Product, 4, 6, 190, 223, 228 Proper subset, 414 Put-call parity, 121 Put option, 109 Pythagorean Theorem, 76, 433

Q Quotient, 4

R Radon-Nikodym density, 51 theorem, 52 Random variable, 3 Bernoulli, 30 binomial, 30 constant, 3 distribution, 29 expected value, 40 independent, 37 measurable, 166 nonzero positive, 15 orthogonal, 76 positive, 15 strictly positive, 15 uncorrelated, 48 variance, 45 Redundant security, 113 Replicable American option, 369 payoff, 112, 246 payoff stream, 322 Rescaling process, 118, 251 Riesz density, 96, 213 representation, 96, 435 Risk-neutral measure, 142

Index S Sample space, 1 Scalarization, 204 Self-financing strategy, 244 Sequence, 428 Set bounded, 62, 427 closed, 64, 427 compact, 66, 427 cone, 10 convex, 7 open, 64, 427 Share, 108 Snell envelope, 231 Span, 421 Stock, 108 Strategy exercise, 346 trading, 241 Strictly-positive functional, 21 process, 224 random variable, 15 Strict monotonicity, 20, 197 Strict positivity, 21, 198 Subadditivity, 13, 195 Sublinearity, 13, 195 Subreplication price, 153, 310, 336, 384 Subsequence, 428 Sum, 4, 6, 190, 223, 228 Superadditivity, 13, 195 Superlinearity, 13, 195 Superreplication price, 153, 310, 336, 384 Supremum, 17

T Terminal payoff equivalent, 320 Theorem Hahn-Banach, 85 Kolmogorov, 180 Pythagorean, 76 Weierstrass, 69 Time consistency, 271, 280, 287, 311, 329, 338, 386 Tower property, 184 Trading strategy, 241 acquisition value, 242 liquidation value, 242

Index market, 243 numéraire, 252 replicating, 246, 322 self-financing, 244 value, 245 Truncation, 319

U Union, 414 Unit of account, 104, 238

443 V Variance, 45 Vector space, 5, 419

W Weierstrass Theorem, 69, 430

Index of Symbols

∅, 414 ⊂, 414 ∪, 414 ∩, 414 ×, 414 Aⁿ, 414 Aᶜ, 414 f : A → B, 415 ◦, 415 +, 4, 223 =, 15, 224 ≠, 15, 224 ≥, 15, 224 ≩, 15, 224 >, 15, 224 ≪, 50 ∼, 50 n!, 416 (n k), 417 1E, 3 ‖X‖p, 60, 425 (X, Y)P, 74, 431 (Xn), 18, 428 Xn → X, 63, 428 A, 344 Br(X), 62, 427 Ber(p), 30 Bin(p, m), 30 card(A), 415 co(A), 8 cone(A), 10 COVP[X, Y], 47 C(τ), 348 CZ, 351 dQ/dP, 51 D(π), 138 Dt,M, 287 D, 287 D(πt,M), 287 D(π), 287 dim(M), 422 E, 1 E(Ω), 1 EP[X], 40 EP[X|A], 178 EP[X|P], 180 Eπ(L), 88 E+π(L), 89 E(π), 137 E(πt,M), 281 E(π), 281 F(P), 162 F, 222 Ft, 237 inf, 17 K, 104, 236 ker(π), 86, 424 L, 3 L+, 16 L(P), 166 L, 223 L(P), 225 Lt, 237 M, 112 M⊥, 76, 434 M−(X), 148 M+(X), 148 Mt,M, 246


M−t,M(X), 304 M+t,M(X), 304 Mt, 322 M−t(X), 331 M+t(X), 331 M−t(C), 355 M+t(C), 355 max, 16 min, 16 MSEP[X, Y], 44 N, 106, 239 N, 414 P, 160 P, 222, 236 Pt, 236 P, 25, 104, 237 PX, 28 P(E|A), 177 PM, 78, 434 Qθ(π), 141 Qθ(π), 294 R, 414 R, 118, 251 Si, 106 Si, 239 St, 241 St,M, 244 span(A), 421 sup, 17 supp(P), 25 T, 235 Tt(X), 320 Tθt(X), 320 Tt, 346 Tst, 358 Tbt, 362 T−t, 359 T+t, 362 Vt[λ], 111 VAcqt[λ], 242 VLiqt[λ], 242 Vt[λ], 245 VARP[X], 45 X, 105 XM, 239 X, 318 Xt:T, 319 απ, 204 η, 112 η, 243 π(X), 130 π−(X), 153 π+(X), 153 Πs(X), 149 Πb(X), 149 Π(X), 149 πt,M(X), 270 π−t,M(X), 310 π+t,M(X), 310 Πst,M(X), 305 Πbt,M(X), 305 Πt,M(X), 305 πt(X), 327 π−t(X), 336 π+t(X), 336 Πst(X), 332 Πbt(X), 332 Πt(X), 332 πt(C), 369 π−t(C), 384 π+t(C), 384 Πst(C), 356 Πbt(C), 356 Πt(C), 356 π(X|A), 192 π(X|A), 228 τ−t, 376 τ+t, 378 τ∗t, 395 ψt,M, 280 ψt, 330 ψ, 280 Ω, 1, 104, 236