The Probabilistic Foundations of Rational Learning

Table of Contents
Contents
List of Figures
2.1 Two-armed bandit problem.
3.1 Taking Turns game.
3.2 Number of transitions.
3.3 The Shapley game.
3.4 Payoffs based on Markov chain.
Preface and Acknowledgments
Introduction
Abstract Models of Learning
1 Consistency and Symmetry
1.1 Probability
1.2 Pragmatic Approaches
1.3 Epistemic Approaches
1.4 Conditioning and Dynamic Consistency
1.5 Symmetry and Inductive Inference
1.6 Summary and Outlook
2 Bounded Rationality
2.1 Fictitious Play
2.2 Bandit Problems
2.3 Payoff-Based Learning Procedures
2.4 The Basic Model of Reinforcement Learning
2.5 Luce’s Choice Axiom
2.6 Commutative Learning Operators
2.7 A Minimal Model
2.8 Rationality and Learning
3 Pattern Learning
3.1 Taking Turns
3.2 Markov Fictitious Play
3.3 Markov Exchangeability
3.4 Cycles
3.5 Markov Reinforcement Learning
3.6 Markov Learning Operators
3.7 The Complexity of Learning
4 Large Worlds
4.1 It’s a Large World (After All)
4.2 Small World Rationality
4.3 Learning the Unknown
4.4 Exchangeable Random Partitions
4.5 Predicting the Unpredictable
4.6 Generalizing Fictitious Play
4.7 Generalizing Reinforcement Learning
4.8 Learning in Large Worlds with Luce’s Choice Axiom
5 Radical Probabilism
5.1 Prior Probabilities
5.2 Probability Kinematics
5.3 Radical Probabilism
5.4 Dynamically Consistent Models
5.5 Martingales
5.6 Conditional Probability and Conditional Expectation
5.7 Predicting Choices
6 Reflection
6.1 Probabilities of Future Probabilities
6.2 Dynamic Consistency
6.3 Expected Accuracy
6.4 Best Estimates
6.5 General Distance Measures
6.6 The Value of Knowledge
6.7 Genuine Learning
6.8 Massaging Degrees of Belief
6.9 Countable Additivity
7 Disagreement
7.1 Agreeing to Disagree
7.2 Diverging Opinions
7.3 Learning from Others
7.4 Averaging and Inductive Logic
7.5 Generalizations
7.6 Global Updates
7.7 Alternatives
7.8 Conclusion
8 Consensus
8.1 Convergence to the Truth
8.2 Merging of Opinions
8.3 Nash Equilibrium
8.4 Merging and Probability Kinematics
8.5 Divergence and Probability Kinematics
8.6 Alternative Approaches
8.7 Rational Disagreement
Appendix A Inductive Logic
A.1 The Johnson–Carnap Continuum of Inductive Methods
A.2 De Finetti Representation
A.3 Bandit Problems
Appendix B Partial Exchangeability
B.1 Partial Exchangeability
B.2 Representations of Partially Exchangeable Arrays
B.3 Average Reinforcement Learning
B.4 Regret Learning
Appendix C Marley’s Axioms
C.1 Abstract Families
C.2 Marley’s Theorem
C.3 The Basic Model
Bibliography
Index


The Probabilistic Foundations of Rational Learning

According to Bayesian epistemology, rational learning from experience is consistent learning, that is, learning should incorporate new information consistently into one’s old system of beliefs. Simon Huttegger argues that this core idea can be transferred to situations where the learner’s informational inputs are much more limited than conventional Bayesianism assumes, thereby significantly expanding the reach of a Bayesian type of epistemology. What results from this is a unified account of probabilistic learning in the tradition of Richard Jeffrey’s “radical probabilism”. Along the way, Huttegger addresses a number of debates in epistemology and the philosophy of science, including the status of prior probabilities, whether Bayes’ rule is the only legitimate form of learning from experience, and whether rational agents can have sustained disagreements. His book will be of interest to students and scholars of epistemology, of game and decision theory, and of cognitive, economic, and computer sciences.

SIMON M. HUTTEGGER is Professor of Logic and Philosophy of Science at the University of California, Irvine. His work focuses on game and decision theory, probability, and the philosophy of science, and has been published in numerous journals.


The Probabilistic Foundations of Rational Learning
SIMON M. HUTTEGGER

University of California, Irvine


University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
4843/24, 2nd Floor, Ansari Road, Daryaganj, Delhi – 110002, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107115323
DOI: 10.1017/9781316335789

© Simon M. Huttegger 2017

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2017
Printed in the United Kingdom by Clays, St Ives plc
A catalogue record for this publication is available from the British Library.
ISBN 978-1-107-11532-3 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


For my parents, Maria and Simon



Preface and Acknowledgments

The work presented here develops a comprehensive probabilistic approach to learning from experience. The central question I try to answer is: “What is a correct response to some new piece of information?” This question calls for an evaluative analysis of learning which tells us whether, or when, a learning procedure is rational. At its core, this book embraces a Bayesian approach to rational learning, which is prominent in economics, philosophy of science, statistics, and epistemology. Bayesian rational learning rests on two pillars: consistency and symmetry. Consistency requires that beliefs are probabilities and that new information is incorporated consistently into one’s old beliefs. Symmetry leads to tractable models of how to update probabilities. I will endorse this approach to rational learning, but my main objective is to extend it to models of learning that seem to fall outside the Bayesian purview – in particular, to models of so-called “bounded rationality.” While these models may often not be reconciled with Bayesian decision theory (maximization of expected utility), I hope to show that they are governed by consistency and symmetry; as it turns out, many bounded learning models can be derived from first principles in the same way as Bayesian learning models. This project is a continuation of Richard Jeffrey’s epistemological program of radical probabilism. Radical probabilism holds that a proper Bayesian epistemology should be broad enough to encompass many different forms of learning from experience besides conditioning on factual evidence, the standard form of Bayesian updating. The fact that boundedly rational learning can be treated in a Bayesian manner, by using consistency and symmetry, allows us to bring them under the umbrella of radical probabilism; in a sense, a broadly conceived Bayesian approach provides us with “the one ring to rule them all” (copyright Jeff Barrett). As a consequence, the difference between high rationality models and bounded rationality models of learning is not as large as it is sometimes thought to be; rather than residing in the core principles of rational learning, it originates in the type of information used for updating. Many friends and colleagues have helped with working out the ideas presented here. Jeff Barrett (who contributed much more than the ring
metaphor), Brian Skyrms, and Kevin Zollman have provided immensely helpful feedback prior to as well as throughout the process of writing this book. My late friend Werner Callebaut introduced me to Herbert Simon’s ideas about bounded rationality. Hannah Rubin spotted a number of weaknesses in my arguments. Gregor Grehslehner, Sabine Kunrath, and Gerard Rothfus read the entire manuscript very carefully and gave detailed comments. Many others have provided important feedback: Johannes Brandl, Justin Bruner, Kenny Easwaran, Jim Joyce, Theo Kuipers, Louis Narens, Samir Okasha, Jan-Willem Romeijn, Teddy Seidenfeld, Bas van Fraassen, and Carl Wagner. I have also profited from presenting material at the University of Groningen, the University of Salzburg, the University of Munich, the University of Bielefeld, and the University of Michigan, and from conversations with Albert Anglberger, Brad Armendt, Cristina Bicchieri, Peter Brössel, Jake Chandler, Christian Feldbacher, Patrick Forber, Norbert Gratzl, Josef Hofbauer, Hannes Leitgeb, Arthur Merin, Cailin O’Connor, Richard Pettigrew, Gerhard Schurz, Reuben Stern, Peter Vanderschraaf, Kai Wehmeier, Paul Weingartner, Charlotte Werndl, Greg Wheeler, Sandy Zabell, and Francesca Zaffora Blando. I would, moreover, like to thank the team at Cambridge University Press and two anonymous referees. UC Irvine provided time for a much needed sabbatical leave in 2013–14, which I spent writing the first third of this book by commuting between the Department of Philosophy at Salzburg and the Munich Center for Mathematical Philosophy. I’d like to thank these two institutions for their hospitality, as well as Laura Perna and Volker Springel for allowing me to live in their beautiful and wonderfully quiet Munich apartment. Some parts of the book rely on previously published articles. Material from “Inductive Learning in Small and Large Worlds” (Philosophy and Phenomenological Research) is spread out over Chapters 2, 4, and 5; Chapter 6 is mostly based on “In Defense of Reflection” (Philosophy of Science) and “Learning Experiences and the Value of Knowledge” (Philosophical Studies); and Chapter 8 draws on my “Merging of Opinions and Probability Kinematics” (The Review of Symbolic Logic). I thank the publishers for permission to reproduce this material here. My greatest personal thanks go to a number of people whose generosity and help have been essential for putting me in the position to write this book. Back in Salzburg, I’m particularly indebted to Hans Czermak and Georg Dorn; without Georg I would have left philosophy, and without Hans I wouldn’t have learned any interesting mathematics. Since I first came to Irvine, the members of the Department of Logic and Philosophy
of Science, and especially Brian Skyrms, have taken an interest in my intellectual development and my career that has gone far beyond the call of duty. The unwavering support of my parents, to whom I dedicate this book, has been invaluable; I learned from them that meaningfulness and deep engagement are more important than mere achievement or success. My sisters and my brother have been my earliest companions and friends, and they still are among my best. Finally, I thank Sabine, Teresa, and Benedikt for their love.


Introduction
Abstract Models of Learning

Learning is something we are all very familiar with. As children we learn to recognize faces, to walk, to speak, to climb trees and ride bikes, and so many other things that it would be a hopeless task to continue the list. Later we learn how to read and write; we learn arithmetic, calculus, and foreign languages; we learn how to cook spaghetti, how to drive a car, or what’s the best response to telemarketing calls. Even as adults, when many of our beliefs have become entrenched and our behaviors often are habitual, there are new alternatives to explore if we wish to do so; and sometimes we even may revise long-held beliefs or change our conduct based on something we have learned. So learning is a very important part of our lives. But it is not restricted to humans, assuming we understand it sufficiently broadly. Animals learn when they adjust their behavior to external stimuli. Even plants and very simple forms of life like bacteria can be said to “learn” in the sense of responding to information from their environment, as do some of the machines and computer programs created by us; search engines learn a lot about you from your search history (leading to the funky marketing idea that the underlying algorithms know more about you than you do yourself). Thus, learning covers a wide variety of phenomena that share a particular pattern: some old state of an individual (what you believe, how you act, etc.) is altered in response to new information. This general description encompasses many distinct ways of learning, but it is too broad to characterize learning events. There are all kinds of epistemically irrelevant or even harmful factors that can have an influence on how an individual’s state is altered. In order to better understand learning events and what sets them apart from other kinds of events, this book uses abstract models of learning, that is, precise mathematical representations of learning protocols. Abstract models of learning are studied in many fields, such as decision and game theory, mathematical psychology, and computer science. I will explore some learning models that I take to be especially interesting. But this should by no means suggest that this book provides a comprehensive overview of learning models. A cursory look into the literature already
reveals a great and sometimes bewildering variety of learning methods, which are applied in many different contexts for various purposes. My hope is, of course, that the ideas put forward in this book will also help to illuminate other models of learning. One reason to study mathematical representations of learning has to do with finding descriptively adequate models of human or animal learning. In contrast, the question of how to justify particular methods of learning takes center stage if we wish to study the philosophical foundations of learning. Here we are not asking whether a learning model describes a real individual, but whether the learning method expressed by the model is rational. A theory of rational learning allows us to evaluate which of various learning methods is the correct one to use. The descriptive function of abstract learning models is of obvious importance, and there certainly is an interplay between the descriptive and the evaluative levels – after all, we do think that sometimes we actually learn from new information according to a correct scheme. In this book, however, I will mostly focus on the evaluative side. Before I explain how I hope to achieve this goal, let me clarify two immediate points of concern. First, it is tempting to speak of rational learning only in cases where the learner has reflective attitudes about her own modes of learning. By this I mean that a learner has the cognitive abilities and the language to analyze and evaluate her own learning process. If rational learning is restricted in this way, a theory of rational learning can only be developed for very sophisticated agents; organisms or machines who lack these self-reflective abilities could never learn rationally, by definition. I’m not going to follow this very narrow understanding of rational learning, for a simple reason: even if an agent lacks sophisticated reflective abilities, it is at least in principle possible to evaluate her learning process from her perspective; that is to say, we can ask whether, or under which circumstances, it is rational for an agent to adopt this learning procedure in the light of a set of evaluative criteria. This allows us to investigate many otherwise unintelligible ways of updating on new information. The second point is a rather obvious fact about learning models, which is nonetheless easily forgotten (for this reason I’m going to highlight it at several points in the book). Abstract models of learning, like all models, involve idealizations. For the descriptive function of learning models this means that many of the complexities of a real individual’s learning behavior are ignored in a learning model in order to make it tractable. If we wanted a model that captures each and every aspect of a learner, then we might as well forego the modeling process and stick to the original. Something
similar is true for matters of rational learning. Without idealizing and simplifying assumptions it is impossible to obtain sharp results, and without sharp results it is difficult to have a focused discussion of the problems associated with rational learning. This is not to deny that idealizations require critical scrutiny; they certainly do, but scrutiny has to come from a plausible point of view. One point of view from which to examine idealizations in models of rational learning will come up repeatedly in our discussion of updating procedures. Rational models express an ideal according to a set of evaluative standards. In practice, such an ideal might be difficult to attain. But this by itself does not speak against the evaluative standards. A rational model conveys a set of standards in its pure form, and this helps in evaluating real learning processes even if they fail to meet those standards completely.

In philosophy, the question of which learning procedures are rational is closely connected to the problem of induction. The problem of induction suggests that there is no unconditional justification for our most cherished patterns of inductive inference, namely those that project past regularities into the future. Chapter 1 presents what I take to be the Bayesian establishment view of the problem of induction, which mostly goes back to the groundbreaking work of Bruno de Finetti. The Bayesian treatment builds on the idea that the rationality of inductive procedures is a conditional, relative one. Inductive learning methods are evaluated against the background of particular inductive assumptions, which describe the fundamental beliefs of an agent about a learning situation. The Bayesian program consists in identifying a class of rational learning rules for each salient set of inductive assumptions, while acknowledging the fact that inductive assumptions may not themselves be justified unconditionally.

The two fundamental ideas underlying this program are consistency and symmetry. Chapter 1 shows how consistency fixes the basic structure of learning models: static consistency requires rational degrees of belief to be probabilities; dynamic consistency requires that rational learning policies incorporate new information consistently into an agent’s old system of beliefs. Symmetries are used to capture inductive assumptions that fine-tune the basic structure of consistent learning models to fit specific epistemic situations. The most famous example is exchangeability, which says that probabilities are order invariant. Exchangeability is the basic building block of de Finetti’s theory of inductive inference and Rudolf Carnap’s inductive logic. Combining de Finetti’s and Carnap’s works leads to a subjective inductive logic which successfully solves the problem of how
to learn from observations in a special but important type of epistemic situation. In my opinion, the Bayesian establishment view is entirely satisfactory for the kinds of learning situations de Finetti and his successors were concerned with. The main drawback is its range of applicability. Richard Jeffrey shed light on this matter by challenging an implicit assumption of the orthodox theory: that new information always comes as learning the truth value of an observational proposition, such as whether a coin lands heads or tails. Working from another direction, Herbert Simon noted that the Bayesian model only applies to very sophisticated agents. In particular, standard Bayesian learning often violates plausible procedural and informational bounds of real-world agents. But there are other learning procedures that respect those bounds, at least to some extent. One of the most important ones is reinforcement learning. Reinforcement learning has a bad reputation in some circles because of its association with behaviorism. However, it exhibits quite interesting and robust properties in learning situations where an agent has no observational access to states of the world, but only to realized payoffs, which determine success. Reinforcement learning requires agents to choose acts with higher probability if they were successful in the past. While this suggests some kind of rationality, reinforcement learning seems to fall short of the Bayesian ideal of choosing an act that maximizes expected utility with respect to a system of beliefs. The same is true of other boundedly rational learning procedures. Are bounded rationality learning procedures therefore irrational, full stop? Or do they live up to some standards of rationality? An affirmative answer to the first question would run against the inclusive view I wish to promote in this book: evaluating the virtues of bounded rationality learning procedures will be blocked if rational learning is the exclusive province of classical Bayesian agents. What brings us closer to an affirmative answer to the second question is to keep separate Bayesian decision theory and Bayesian learning theory. A learning procedure may fail to maximize expected payoffs while adhering to the two basic principles of rational Bayesian learning, consistency and symmetry; just think of a model that chooses acts with the help of the conditional probabilities of Bayesian updating, but uses them in other ways than maximizing expected utility. This indicates that models of learning that are incompatible with Bayesian decision theory may nonetheless be rational in a way that is similar to Bayesian updating. Various aspects of this idea will be developed in Chapters 2–6, where I hope to show how Bayesian principles of consistency and symmetry
apply to boundedly rational learning rules and to probabilistic learning in general. Thus, I will argue that there is a rational core to learning that encompasses both classical Bayesian updating and other probabilistic models of learning.

In Chapter 2, I consider the class of payoff-based learning models. This class is of special importance in decision and game theory because payoff-based processes do not need any information about states of the world. Most of the chapter focuses on a particular reinforcement learning model, called the basic model of reinforcement learning. The basic model is conceptually challenging because it is based on the notions of choice probability and propensity, which depart quite significantly from the elements of the classical Bayesian model. The key to developing the foundations of the basic model is Duncan Luce’s seminal work on individual choice behavior. In particular, Luce’s choice axiom and the theory of commutative learning operators can be used to establish principles of consistency and symmetry for the basic model of reinforcement learning. At the same time, this will provide a template for analyzing other models.

The learning procedures discussed in Chapters 1 and 2 have a very simple structure, since their symmetries express order invariance: that is, the order in which new pieces of evidence arrive has no effect on how an agent updates. However, order invariance makes it impossible to detect patterns – a criticism Hilary Putnam has brought against Carnap’s inductive logic. Chapter 3 shows that de Finetti’s ideas on generalizing exchangeability can be used to solve these problems. Order invariant learning procedures can be modified in a way that allows them to detect patterns. Besides deflating Putnam’s criticism of inductive logic, this demonstrates that there is a sense in which learning rules can be successful in learning environments of arbitrary finite complexity.

The topic of Chapter 4 is the problem of learning in large worlds. An abstract model of learning operates within its own small world. What I mean by this is that the inferences drawn within the model are not based on all the information that one might deem relevant in a learning situation. A description of the learning situation is a large world if it includes all relevant distinctions one can possibly think of. Consequently, conceptualizing the large world is a forbidding task even under the most favorable circumstances. Since learning does usually take place in a small world, the rationality of consistent inductive inferences drawn within the learning model is called into question. Without any clear understanding of the large world, it seems difficult to judge whether small world inductive inferences would also be judged rational in the large world.


The problem of large worlds has also been discussed in decision theory by Leonard Savage, Jim Joyce, and others. By taking a clue from Joyce’s discussion we will be able to clarify one aspect of the problem of learning in large worlds. The main idea is to require learning processes to be consistently embeddable into larger worlds. Consistent embeddability guarantees that one’s coarse-grained inferences stay the same in larger worlds. I demonstrate the usefulness of this idea with two examples of large world learning procedures. One is a generalization of Carnap’s inductive logic to situations where types of observations are not known in advance. The prototypical example of this is what is known in statistics as the “sampling of species process,” in which one may observe hitherto unknown species. A similar process can be used to modify the basic model of reinforcement learning. Both models exhibit a particular invariance that renders them robust in large worlds. This invariance is based on Luce’s choice axiom. I will argue that, while this does not give us a fully general solution to the problem of learning in large worlds, it does provide us with some guidance as to how to approach learning in complex situations that involve many unknowns. The first four chapters present a variety of learning procedures. However, with the exception of standard Bayesian learning, it is unclear why they deserve to be called learning procedures. At a purely formal level all we have is a sequence of quantities that describe how the state of an agent changes over time. Bayesian updating proceeds from learning the truth of an observational proposition, but there are no such evidential propositions in the other models. Now, obviously, a change in belief or behavior may be due to all kinds of influences (having too many drinks, low blood sugar level, forgetting, etc.). How can we make sure that individuals update on genuine information if information cannot be captured by a factual proposition? I try to answer this question in Chapters 5 and 6. Chapter 5 lays the groundwork by embedding abstract models of learning into Jeffrey’s epistemology of radical probabilism. As mentioned above, Jeffrey has argued that standard Bayesian learning is too narrow because it does not take into account uncertain evidence – evidence that cannot be neatly summarized by a factual proposition. Jeffrey extended Bayesian updating to what he called probability kinematics, also known as Jeffrey conditioning. I will argue that we should think of other probabilistic learning procedures along the same lines. My argument relies on criteria for generalized probabilistic learning studied by Michael Goldstein, Bas van Fraassen, and Brian Skyrms. Generalized probabilistic learning can be thought of in terms of a black box, where nothing at all is assumed about the structure of the learning event.


Despite this lack of structure, consistency requires that generalized learning observes reflection and martingale principles. These principles say, roughly speaking, that an agent’s new opinions need to cohere with her old opinions. Such principles of dynamic consistency are rather controversial. For this reason, Chapter 6 presents an extended discussion of three ways of justifying reflection principles and their proper place as principles of epistemic rationality. In the final two chapters, I switch gears and turn to applications of rational learning to social settings. That learning often does take place in a social context is a commonplace observation. One question that arises out of the concerns of this book is whether there can be disagreement among rational agents. Chapters 7 and 8 examine two aspects of that question: learning from others and learning from the same evidence. Chapter 7 treats the problem of expert disagreement in terms of Bayesian rational learning. The main question is how one should respond to learning that epistemic peers disagree with one another; epistemic peers are, roughly speaking, equally qualified to judge the matter at hand. Proposals range from conciliatory views (meeting midway between the opinions of peers) to extreme views (stick close to one opinion). I present a reconstruction of the peer disagreement problem in terms of Carnapian inductive logic that explains the epistemic conditions under which an agent should respond to disagreement in a conciliatory or a steadfast way. Chapter 8 develops some consequences of rational learning from the same evidence. This topic is of interest to Bayesian philosophy of science, since a couple of convergence theorems in Bayesian statistics demonstrate the irrelevance of prior opinions in the long run: even if our initial beliefs disagree, under certain conditions our beliefs come closer as we update on the same evidence. Besides discussing the implications of this result, I will drop one of its main assumptions – that evidence is learned with certainty – to see whether convergence also holds for Jeffrey conditioning. The answer depends on which kind of uncertain evidence is available. There is a solid kind of uncertain evidence that implies convergence in the proper circumstances. However, there also is a fluid kind of uncertain evidence that allows agents to have sustained, long-run disagreements even though they are updating on the same evidence. Overall, Chapters 7 and 8 show that whether there is rational disagreement depends on the epistemic circumstances; they also suggest that there are plausible epistemic circumstances in which rational learning is compatible with deep disagreements. I finish Chapter 8 by arguing that there is nothing wrong with this view.
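For readers who want the rule behind these remarks in front of them, Jeffrey’s probability kinematics (a standard formulation, spelled out in detail in Chapter 5) generalizes conditioning as follows: if experience shifts the probabilities of the members of a partition E1, E2, . . . to new values Pnew(Ei), the new degrees of belief are

Pnew(A) = ∑i P(A | Ei) Pnew(Ei).

Ordinary conditioning is the special case in which some member of the partition receives probability 1.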


If you have gotten this far through the introduction, you would probably like to know more about the background required for reading this book. A little background in probability theory and decision theory is desirable, but otherwise I have tried to make the book rather accessible. My emphasis is on revealing the main ideas without getting lost in technicalities, but also without distorting them. Some of the more technical material has been published in journal articles, and the rest can be found in the appendices. The material covered in this book can sometimes be fairly dry and abstract. This is partly an inherent feature of a foundational study that seeks to unravel the rational principles underlying abstract models of learning. I try to counteract this tendency by using examples from decision and game theory so that one can see learning processes in action. However, I also trust that the reader will find some joy in the austere charm of formal modeling.


1 Consistency and Symmetry

From the theoretical, mathematical point of view, even the fact that the evaluation of probability expresses somebody’s opinion is then irrelevant. It is purely a question of studying it and saying whether it is coherent or not; i.e., whether it is free of, or affected by, intrinsic contradictions. In the same way, in the logic of certainty one ascertains the correctness of the deductions but not the accuracy of the factual data assumed as premises. Bruno de Finetti Theory of Probability I Symmetry arguments are tools of great power; therein lies not only their utility and attraction, but also their potential treachery. When they are invoked one may find, as did the sorcerer’s apprentice, that the results somewhat exceed one’s expectations. Sandy Zabell Symmetry and Its Discontents

This chapter is a short introduction to the philosophy of inductive inference. After motivating the issues at stake, I’m going to focus on the two ideas that will be developed in this book: consistency and symmetry. Consistency is a minimal requirement for rational beliefs. It comes in two forms: static consistency guarantees that one’s degrees of beliefs are not self-contradictory, and dynamic consistency requires that new information is incorporated consistently into one’s system of beliefs. I am not going to present consistency arguments in full detail; my goal is, rather, to give a concise account of the ideas that underlie the standard theory of probabilistic learning, known as Bayesian conditioning or conditionalization, in order to set the stage for generalizing these ideas in subsequent chapters. Bayesian conditioning provides the basic framework for rational learning from factual propositions, but it does not always give rise to tractable models of inductive inference. In practice, nontrivial inductive inference requires degrees of beliefs to exhibit some kind of symmetry. Symmetries are useful because they simplify a domain of inquiry by distinguishing some of its features as invariant. In this chapter, we examine the most famous probabilistic symmetry, which is known as exchangeability and was studied extensively by Bruno de Finetti in his work on inductive inference.


Exchangeability also plays an important role in the works of W. E. Johnson and Rudolf Carnap, which together with de Finetti’s contributions give rise to a very plausible model of inductive reasoning. Although there is no new material in this chapter, some parts of it – in particular, dynamic consistency – are not wholly uncontroversial. The reader who is familiar with these ideas and essentially agrees with a broadly Bayesian point of view is encouraged to skip ahead to Chapter 2. Everyone else, please stay with me.
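Since exchangeability will do a great deal of work in what follows, it may help to have the standard formulation on the table right away (stated here in generic notation; the detailed treatment comes later in the chapter): a sequence of random variables X1, X2, . . . is exchangeable just in case, for every n and every permutation π of {1, . . . , n},

P(X1 = x1, . . . , Xn = xn) = P(X1 = xπ(1), . . . , Xn = xπ(n)).

Informally, the probability of a finite sequence of observations depends only on how many outcomes of each type it contains, not on the order in which they occur.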

1.1 Probability

In our lives we can’t help but adopt certain patterns of inductive reasoning. Will my son be sick tomorrow? What will the outcome of the next presidential election be? How much confidence should we have in the standard model of particle physics? These sorts of questions challenge us to form an opinion based on the information available to us. If most children in my son’s preschool are sick in addition to him being unusually tired and mopish, I conclude that he will most likely be sick tomorrow. When predicting the outcomes of elections, we look at past elections, at polls, at the opinions of experts, and other sources of evidence. In order to gauge the empirical correctness of scientific theories, we examine the relevant experimental data. Despite many differences, there is a common theme in these examples: one aims to evaluate the probability of events or hypotheses in the light of one’s current information.

We usually feel quite comfortable making such inferences because they seem perfectly valid to us. What is known as “Hume’s problem of induction” might therefore seem dispiriting. David Hume presented a remarkably simple and robust argument leading to the conclusion that there is no unqualified rational justification for our inductive inferences. The logic of inductive reasoning can neither be justified by deductive reasoning nor by inductive reasoning (at least, not without begging the question).1 Many philosophers have taken Hume’s conclusion as a call to arms, perceiving it as a challenge to come up with a genuine solution – an unqualified and fully general justification of induction that somehow bypasses Hume’s arguments. If this is what we understand by a solution, it seems fair to say that none has been forthcoming.2

1 The argument can be found in Hume (1739) and in Hume (1748). Skyrms (1986) provides a very accessible introduction.
2 This is argued in detail by Howson (2000).



There is a sense in which this is as it should be (or, at least, as it has to be): according to the subjective Bayesianism of Frank Ramsey, Bruno de Finetti, and Leonard Savage, an absolute foundation of inductive reasoning would be a little bit like magic. The subjectivist tradition has no problem with Hume’s problem. Savage puts it as follows:

In fact, Hume’s arguments, and modern variants of them such as Goodman’s discussion of “bleen” and “grue,” appeal to me as correct and realistic. That all my beliefs are but my personal opinions, no matter how well some of them may coincide with opinions of others, seems to me not a paradox but a truism.3

The theory of inductive inference created by de Finetti, Savage, and their successors is more than a mere consolation prize, though. While it does not exhibit the kind of ultimate, blank-slate rationality that has been exposed as illusory by Hume, the theory is far from being arbitrary, for it provides us with qualified and local justifications of inductive reasoning.

Inductive inference, according to the Bayesian school of thought, is about our beliefs and opinions and how they change in the light of new information. More specifically, Bayesians take beliefs to be partial or graded judgments. That these epistemic states exist and are of considerable importance for our epistemic lives is fairly uncontroversial. What’s equally uncontroversial is that partial beliefs sometimes change. But what might be less clear is how to model partial beliefs and their dynamics. Leaving aside the fine print, Bayesians hold that partial beliefs are best modeled by assigning probabilities to propositions, and that the dynamics of partial beliefs should proceed by updating probability assignments by conditionalization. Together, these basic premises are known as probabilism.

Let me be a bit more precise. Throughout this book I will use the most common framework for representing partial beliefs: an agent’s epistemic state is given by a measurable space consisting of a set of basic events, or atoms, Ω, and a σ-algebra, F, of subsets of Ω.4 The elements of Ω can be thought of as “possible worlds” (not in a metaphysically loaded sense), and the elements of F may be referred to as “events,” “states of affairs,” or “propositions.” The conditions under which elements of F are the case are the objects of an epistemic agent’s partial beliefs. I will model partial beliefs as a probability measure P which assigns probabilities to all elements of F. I also take P to be countably additive (since this is a somewhat controversial assumption in the philosophy of probability, I’ll provide some comments in Chapters 6 and 8). The triple (Ω, F, P) is a probability space.

There is a good deal of idealization that goes into representing an agent’s partial beliefs by a probability space. Probabilism, though, does not require all these idealizations. In particular, an agent’s best judgments need not always be representable by a unique probability measure. Richard Jeffrey explains the issue with characteristic sharpness:

Probabilism does not insist that you have a precise judgment in every case. Thus, a perfectly intelligible judgmental state is one in which you take rain to be more probable than snow and less probable than fair weather but cannot put numbers to any of the three because there is no fact of the matter. (It’s not that there are numbers in your mind but it’s too dark in there for you to read them.)5

3 Savage (1967, p. 602). Goodman’s grue paradox highlights another problem of inductive inference: how to justify which properties we project into the future; see Goodman (1955).
4 A σ-algebra of subsets of Ω is a class of sets closed under complementation and under taking countable unions. If a class of subsets of Ω is only closed under taking finite unions (as well as complementation), it is an algebra.

Probabilism is about the partial beliefs of epistemic agents, and not all partial beliefs lend themselves to being represented by a probability space. A probability space has a rich mathematical structure that sometimes is too precise for the actual epistemic state of an agent. Probabilism has ways of dealing with epistemic states of this sort, such as comparative probabilities or interval-valued probability assignments, which greatly enhance its applicability. Still, I think the standard approach is a good compromise; probability spaces are plausible approximations of actual epistemic states, but they are also tractable and allow us to derive precise results. In the next two sections, I will briefly consider a couple of more principled reasons why we should represent partial beliefs as probabilities. Besides providing some insights as to how probabilities should be understood, this serves to introduce the idea of consistency. In its manifestation as dynamic consistency, we will see that this idea proves to be the key to understanding the second element of probabilism: how partial beliefs change over time.
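To summarize the formal setup of this section in one place (a standard statement of the two components of probabilism, recorded here for later reference): static consistency requires that the measure P on (Ω, F) satisfy

P(A) ≥ 0 for all A in F,   P(Ω) = 1,   and   P(A1 ∪ A2 ∪ . . .) = P(A1) + P(A2) + . . .

for any sequence of pairwise disjoint events A1, A2, . . . in F. The second component, updating by conditionalization, says that upon learning a proposition E with P(E) > 0 the new degrees of belief are

Pnew(A) = P(A | E) = P(A ∩ E) / P(E)

for every A in F. Both components are examined in more detail in the sections that follow.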

1.2 Pragmatic Approaches The connection between probability and fairness has been a central aspect of probability theory since Pascal, Fermat, and Huygens.6 So it is perhaps no surprise that fair betting odds constitute the best known bridge between 5 Jeffrey (1992, p. 48). 6 Pascal and Fermat discuss the fair distribution of stakes in gambles that are prematurely

terminated; see the letters translated in Smith (1984).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 07:58:27, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.003

1.2 Pragmatic Approaches

13

probabilities and beliefs. Ramsey mentioned the possibility of using fair betting odds to show that rational degrees of belief must be probabilities, and de Finetti actually carried out the project; a similar argument can already be found in Bayes’ essay.7 The basic premise of the betting approach is that your fair odds for a proposition A – the odds at which you see no advantage in either side of a bet on A – can be taken as a measure of your partial belief that A is true. While this way of measuring degrees of belief might not always work, the underlying idea is very plausible. When do you regard one side of a bet on A to be more advantageous? Answer: if your partial beliefs tilt toward it. Thus, odds are unfair if your partial beliefs favor one side of the bet over the other. By adjusting the bet, we can in principle find your fair odds for A (at least as long as your partial beliefs are sufficiently determinate). Given your beliefs, you don’t prefer one side of a bet at fair odds over the other. This is the sense in which fair odds represent your partial belief that A is true.8 The Dutch book theorem asserts a tight connection between fair betting odds and probabilities. It says, in effect, that whenever fair betting odds fail to behave like probabilities, there exist fair bets which together lead to a sure loss – a loss no matter how the world turns out to be; the converse is also true.9 Thus, fair odds can be exploited unless they are probabilities. Being led into a sure loss is unfortunate for one’s financial bottom line. What is more important, however, is the type of epistemic defect the Dutch book theorem indicates. Partial beliefs that give rise to exploitable fair odds are inconsistent. This insight goes back to one of Ramsey’s passing remarks and has been elaborated more fully by Brian Skyrms, Brad Armendt, and David Christensen.10 What makes sure loss possible is the fact that different numerical beliefs (fair betting odds) are assigned to propositions with exactly the same truth conditions. For example, assigning degrees of belief to two mutually exclusive events and their union in a way that violates additivity is tantamount to assigning two distinct degrees of belief to one and the same event (the union).11 7 Ramsey mentions the approach to probability through fair bets in Ramsey (1931). De Finetti

8

9 10 11

(1937) gives a thorough account of an approach that he developed a few years earlier (e.g., de Finetti, 1931). The thought that Bayes already had the essential argument is advanced in Howson (2000); see also Bayes (1763). I ignore many details here. In particular, I assume that fair betting odds exist and are unique. Because our beliefs often are vague or ambiguous, this need not be the case. Even if they exist, they need not be unique, in which case we might have upper and lower probabilities instead. Kemeny (1955). See Ramsey (1931), Skyrms (1987b), Armendt (1993), and Christensen (1996). There are other ways to explain the Dutch book argument in terms of inconsistency; they do not get at the inconsistency of beliefs, though, and might therefore be taken as only indirectly

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 07:58:27, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.003

14

Consistency and Symmetry

Partial beliefs that exhibit such inconsistencies are irrational. An analogy with deductive logic might be used to illustrate this point. Truth value assignments can be thought of as expressions of full belief.12 There is a consensus among epistemologists that consistency is a minimal requirement for the rationality of full beliefs; the reason is that no inconsistent set of full beliefs is satisfiable (it does not have a model).13 As a result, inconsistent beliefs are self-undermining: some beliefs necessarily defeat others. Partial beliefs that are not represented by a probability model are self-undermining in a similar way, since the numerical beliefs they correspond to contradict one another. It is important to note that classical logic plays a crucial role in these considerations. Partial beliefs that fail to give rise to probabilities assign distinct numerical beliefs to propositions that are equivalent according to the underlying Boolean logic. If we instead use some alternative logic, such as intuitionistic logic, a calculus of numerical beliefs may emerge that is different from the probability calculus.14 There is nothing wrong with this. Propositions that are classically equivalent need not be equivalent according to the standards of nonclassical logics, and thus beliefs that are formed with an eye toward nonclassical standards need not be probabilities. Conceived in this way, the Dutch book approach is, in my view, highly plausible.15 Like any idealized model it has some drawbacks. The most important limitation of the Dutch book approach is its reliance on measuring beliefs as fair betting odds. This connection is crucial: probabilism is an epistemological theory, so it is the partial beliefs of an epistemic agent, and not her betting behavior, which are probabilism’s primary concern. If odds don’t reflect partial beliefs, then inconsistent odds are just that – inconsistent evaluations of betting schemes – and not indicators of inconsistent beliefs; inconsistent odds would in this case fail to diagnose any epistemic defect.

12 Although there are more nuanced views on full beliefs, see, e.g., Leitgeb (2017).
13 There are dissenters though; for instance, Christensen (2004) argues against logical consistency as a rationality constraint on full beliefs because of, for example, the preface paradox.
14 As indeed it does; see Weatherson (2003).
15 I have discussed this in more detail in Huttegger (2013).


It is well known that the connection between odds and beliefs can be distorted in many ways. For example, going back to the earlier quote by Jeffrey, your beliefs may be incomplete or not fully articulated. Forcing you to announce the precise odds at which you are willing or unwilling to bet does not, of course, say much about such a hazy state of opinion. Another limitation is that beliefs are measured on a monetary scale; this might distort our judgments because we usually don’t just care about money. This problem can be solved by moving to utility scales. More generally, starting with Ramsey, decision theorists have developed joint axiomatizations of utility and probability.16 In this framework, probabilities are embedded into a structure of consistent preferences among acts that are not restricted to betting arrangements. Partial beliefs are again required to be free of internal contradictions, which would result in inconsistent preferences.

1.3 Epistemic Approaches

The two approaches of the previous section are pragmatic: they are based on the idea that belief manifests itself in action, and that, at least sometimes, an epistemic agent's behavior can be used to say something about her opinions. Some probabilists, such as Savage, subscribe to the view that an agent's beliefs are basically reducible to her choices or preferences:

Revolving as it does around pleasure and pain, profit and loss, the preference theory is sometimes thought to be too mundane to guide pure science or idle curiosity. Should there indeed be a world of action and a separate world of the intellect and should the preference theory be a valid guide for the one, yet utterly inferior to some other guide for the other, then even its limited range of applicability would be vast in interest and importance; but this dualistic possibility is for me implausible on the face of it and not supported by the theories advanced in its name.17

Similarly, Ramsey stipulates that partial beliefs should be understood as dispositions to act.18 The pragmatism of Ramsey and Savage suggests that belief cannot remain a meaningful concept if it is separated from decision making.

16 Savage has worked out Ramsey's ideas by combining them with the work of John von Neumann and Oskar Morgenstern (von Neumann and Morgenstern, 1944; Savage, 1954). See also Jeffrey (1965).
17 Savage (1967, p. 599). See also Savage (1954).
18 Ramsey (1931).


There is a sense in which this view must be too narrow. There are nonpragmatic theories for representing partial beliefs by probabilities – that is, theories that don't include preferences or choices as primitive concepts. One such theory, which was originally put forward by de Finetti, is based on qualitative probability. A qualitative probability order summarizes an epistemic agent's judgments of likelihood in terms of the two-place relation "more probable than." As Jeffrey has mentioned in the remark quoted earlier, qualitative probability does not always give rise to numerical probability; but it does so if it satisfies certain axioms, some of which, such as transitivity, are consistency conditions for partial beliefs. De Finetti himself thought of qualitative probability as less artificial than the Dutch book argument.19

The basic idea of another nonpragmatic approach to probabilism also goes back to de Finetti.20 According to this approach, partial beliefs are estimates of truth values. The truth value of a proposition A of the σ-algebra F can be represented by its indicator, $I_A$, which is a random variable with $I_A(\omega) = 1$ if A is true ($\omega \in A$) and $I_A(\omega) = 0$ otherwise ($\omega \notin A$). Overall, the best estimate of $I_A$ is of course $I_A$ itself. But the truth value $I_A$ is often unavailable as an estimate, unless one knows whether A is true. In general, an epistemic agent should choose a best estimate of $I_A$ from among those estimates that are available to her. A best estimate is the agent's best judgment, all things considered, as to the truth of A. The set of available estimates depends on the background information the agent has.

In order to see which estimates may be best estimates one can use loss functions. Loss functions evaluate estimates by penalizing them according to their distance from indicators. Your best estimates are those that you think are closest to indicators. This idea gives rise to the central norm of accuracy epistemology, which requires you to have opinions that, in your best judgement, are as close as possible to the truth.21 The best-known loss function is the quadratic loss function, which penalizes the estimate $D_A$ of $I_A$ as $(D_A - I_A)^2$. However, as Jim Joyce has shown, the main result about best estimates holds for a large range of loss functions, and not only the quadratic loss function.22 This result says that estimates which fail to be probabilities are strictly dominated by other estimates: that is to say, there are estimates with a strictly lower loss no matter how the world turns out to be. What this shows is that the very idea of non-probabilistic best estimates of truth values leads to contradictions: if you evaluate your estimates according to Joyce's class of loss functions, non-probabilistic estimates cannot be vindicated as best estimates.

19 See de Finetti (1931). De Finetti was not the first one to study qualitative probability; see Bernstein (1917). For an excellent general introduction to axiomatizations of qualitative probability and their representation, see Krantz et al. (1971).
20 See de Finetti (1974).
21 See Joyce (1998). For a book-length treatment of the epistemology of accuracy see Pettigrew (2016).
22 See Joyce (1998, 2009), where he argues that these loss functions capture the concept of epistemic accuracy.
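The dominance result can be illustrated with a small computation under the quadratic loss function; the particular numbers below are my own example rather than anything taken from the text.

```python
# A small check of accuracy dominance with the quadratic (Brier) loss.  The
# non-probabilistic estimates (0.6, 0.6) for a proposition A and its negation
# violate additivity; the probabilistic estimates (0.5, 0.5) incur a strictly
# lower loss in every possible world, so the former cannot be best estimates.

def quadratic_loss(estimates, truth_values):
    """Sum of squared distances between estimates and indicator truth values."""
    return sum((d - i) ** 2 for d, i in zip(estimates, truth_values))

nonprob = (0.6, 0.6)   # estimates for A and for not-A
prob    = (0.5, 0.5)   # a probabilistically consistent alternative

for world, truth in [("A true", (1, 0)), ("A false", (0, 1))]:
    print(world,
          "non-probabilistic loss:", round(quadratic_loss(nonprob, truth), 2),
          "probabilistic loss:", round(quadratic_loss(prob, truth), 2))
# In both worlds the probabilistic estimates do better (0.50 versus 0.52),
# an instance of strict dominance.
```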


Pragmatic and epistemic approaches are sometimes pitted against one another. As mentioned above, some pragmatic views regard beliefs to be meaningless outside a decision context. On the other hand, pragmatic considerations are sometimes dismissed as irrelevant for epistemic rationality. Hannes Leitgeb and Richard Pettigrew, for example, dramatize the pragmatic–epistemic divide in terms of a trade-off:

Despite the obvious joys and dangers of betting, and despite the practical consequences of disastrous betting outcomes, an agent would be irrational qua epistemic being if she were to value her invincibility to Dutch Books so greatly that she would not sacrifice it in favor of a belief function that she expects to be more accurate.23

Similar sentiments are voiced by Ralph Kennedy and Charles Chihara and by Roger Rosenkrantz.24 Other proponents of epistemic approaches strike a more conciliatory tone. Joyce writes: I have suggested that the laws of rational belief are ultimately grounded not in facts about the relationship between belief and desire, but in considerations that have to do with the pursuit of truth. No matter what our practical concerns might be, I maintain, we all have a (defeasible) epistemic obligation to try our best to believe truths as strongly as possible and falsehoods as weakly as possible. Thus we should look to epistemology rather than decision theory to find the laws of rational belief. Just to make the point clear, I am not denying that we can learn important and interesting things about the nature of rational belief by considering its relationship to rational desire and action. What I am denying is the radical pragmatist’s claim that this is the only, or even the most fruitful, way to approach such issues.25

Like Joyce, I don't see a fundamental conflict between pragmatic and epistemic approaches. They are conceptually different ways to move toward the same underlying issues. Epistemic accuracy takes rational beliefs to be best estimates of indicators. But the same is true for the Dutch book approach; the only difference is that indicators are given a decision theoretic interpretation as stakes of bets (up to multiplicative constants).26

23 Leitgeb and Pettigrew (2010b, pp. 244–245).
24 See Kennedy and Chihara (1979) and Rosenkrantz (1981).
25 Joyce (1999, p. 90).


But this is not where the common ground ends. The quality of your decisions – in gambling as in more general choice situations – obviously depends on the quality of your beliefs. No decision maker would think of her choices as fully rational unless her beliefs are her best estimates of what is truly going on, just as required by epistemic accuracy. The epistemic norm of accuracy is implicit in what it means to make good decisions. Thus, the pragmatic analysis of rational beliefs is typically going to be fully compatible with the epistemic analysis of the very same beliefs.

Another point of contact between epistemic and pragmatic approaches is the type of strategy they use for justifying probabilities. Beliefs are evaluated according to some – pragmatic or epistemic – standard. A minimal criterion of adequacy for a set of beliefs is that, with respect to that standard, they can in principle be at least as good as any other set of beliefs. For instance, under some states of affairs they should be more accurate or lead to better decisions. Non-probabilistic beliefs turn out to be self-undermining because some other set of beliefs is uniformly superior. Pragmatic and epistemic ways of thinking emphasize different, yet complementary, aspects of this basic insight. The two approaches have so much in common that I think a more catholic view is called for: all roads lead to Rome!27

The complementarity of the approaches also speaks to the issue of fruitfulness raised by Joyce. Whether a pragmatic or an epistemic approach is more fruitful depends on what our goal is. An accuracy approach is preferable if we want to understand what it means for beliefs to be epistemically rational. But for other purposes, accuracy is not of much help. I'm thinking in particular of measuring beliefs. An accuracy epistemology provides no guidance as to how we should assign numerical beliefs. Numerical assignments are simply assumed to be given. This is a strong assumption because these numbers are not a given for actual epistemic agents. Beliefs are often ambiguous and vague, and they may need some massaging to yield a numerical representation. Having no tool besides the guiding principle of epistemic accuracy can make this process as difficult as trying to drive a nail into a wall without a hammer.

26 Joyce appropriately calls his approach an "epistemic Dutch book argument" (Joyce, 1998, p. 588).
27 Many others have a similar point of view. For example, de Finetti (1974) develops the Dutch book argument alongside an approach via losses which shows that both can be used to capture the geometry of convex sets. A similar idea is developed in Williams (2012).


Fair prices of bets are one tool for measuring beliefs. Evaluating the fairness of betting arrangements – even if they are only hypothetical – can often give us a good sense of how strongly we believe something. Qualitative probability can also be used to this end, as long as the measurable space of propositions is sufficiently rich to allow for fine-grained comparisons. Tying belief to action has an additional advantage, though: because something is at stake, it can serve as an incentive to have a system of beliefs that represent one’s best judgments. Thus, besides providing a tool for measuring beliefs, a pragmatic setting also supports the epistemic norm of accuracy by encouraging norm-abiding behavior. Taken together, then, pragmatic and epistemic arguments give rise to a robust case for the claim that rational partial beliefs are probabilities. They also show that probabilistic models have a distinctively normative flavor. Our actual beliefs often are just snap judgments, but probabilism requires an epistemic agent to hold a system of beliefs that represents her best judgments of the issues at hand.

1.4 Conditioning and Dynamic Consistency

Just as consistency puts constraints on rational beliefs, it also regulates learning – that is, how beliefs ought to change in response to new information. The best-known learning method is Bayesian conditioning or conditionalization. In the simplest case, conditioning demands that you update your current probability measure, P, to the new probability measure, Q, given by Q[A] = P[A|B], provided that B has positive probability and that it is the strongest proposition you have learned to be true. The rationality principle that underwrites conditioning is dynamic consistency. Since dynamic consistency is more controversial than the arguments of the two preceding sections, I am going to explain and defend dynamic consistency for conditioning in somewhat more detail here. I hope to dissolve some doubts about dynamic consistency right upfront, but I will also set aside some larger issues until we have developed a general theory of probabilistic learning (see Chapters 5 and 6).

The conceptual difficulties associated with conditioning already appear in Ramsey's essay "Truth and Probability."28 On the one hand, Ramsey writes:

28 This point is discussed by Howson (2000, p. 145). My discussion is closely aligned with Binmore (2009, p. 134).


This [the conditional probability of p given q] is not the same as the degree to which he would believe p, if he believed q for certain; for knowledge of q might for psychological reasons profoundly alter his whole system of beliefs.29

One way to understand this remark is that Ramsey describes a process in which learning the truth of a proposition changes an agent's beliefs. What he seems to suggest here is that the new beliefs are completely unconstrained by the agent's previous conditional beliefs. Yet, strangely, in a later passage Ramsey apparently arrives at the opposite conclusion:

We have therefore to explain how exactly the observation should modify my degrees of belief; obviously if p is the fact observed, my degree of belief in q after the observation should be equal to my degree of belief in q given p before, or by the multiplication law to the quotient of my degree of belief in pq by my degree of belief in p. When my degrees of belief change in this way we can say that they have been changed consistently by my observation.30

Whereas in the foregoing quote he seems to say that anything goes after learning the truth of a proposition, here it looks as though Ramsey thinks of Bayesian conditioning as a form of rational learning. What is going on? Is Ramsey just confused? Or is there a way to reconcile the two passages? No answer will be forthcoming unless we understand what exactly the learning situation is that Ramsey had in mind. To this end, we may consider the best known justification of Bayesian conditioning, the dynamic or diachronic Dutch book argument, which is due to David Lewis.31

The epistemic situation underlying Lewis's dynamic Dutch book argument is a slight generalization of the situation Ramsey mentioned in his essay. You are about to learn which member of a finite partition of propositions P = {B1, . . . , Bn} is true. Let's assume, for simplicity, that P is a partition of factual propositions whose truth values can be determined by observation. An update rule is a mapping that assigns a posterior probability to each member of the partition. Such a rule is thus a complete contingency plan for the learning situation given by P. Lewis's dynamic Dutch book argument shows that conditioning is the only dynamically consistent update rule in this learning situation. For any other update rule there exists a set of bets that leads to a loss come what may, with all betting odds being fair according to the agent's prior or according to the posterior which is determined by the update rule.

29 Ramsey (1931, p. 180).
30 See Ramsey (1931, p. 192), my emphasis.
31 See Teller (1973). Skyrms (1987a) provides a clear account of the argument.
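To see how such a betting package works in a concrete case, here is a small numerical check of the dynamic Dutch book; the prior, the deviant posterior, and the stakes are my own choices for illustration and are not taken from the text.

```python
# A numerical check of the dynamic Dutch book against a non-conditioning
# update rule.  Prior: P(B) = 0.5 and P(A|B) = 0.8, so P(A & B) = 0.4.
# The agent's rule says that upon learning B her new probability for A will
# be 0.6 rather than 0.8.  Every bet below is fair by her prior or by her
# announced posterior, yet she loses 0.1 in every possible world.

def net_payoff(a, b):
    total = 0.0
    # Bet 1 (at the prior): buy 1 unit on (A and B) at price 0.4.
    total += (1 if (a and b) else 0) - 0.4
    # Bet 2 (at the prior): buy 0.6 units on (not B) at price 0.5 per unit.
    total += 0.6 * ((1 if not b else 0) - 0.5)
    # Bet 3 (after learning B, at the posterior 0.6): sell 1 unit on A at 0.6.
    if b:
        total += 0.6 - (1 if a else 0)
    return total

for a, b in [(True, True), (False, True), (True, False), (False, False)]:
    print("A =", a, " B =", b, " net payoff:", round(net_payoff(a, b), 2))
# Output: -0.1 in every case; the sure loss is traceable to the update rule.
```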


Because both the prior and the posteriors are probabilities, the update rule is the source of the economic malaise that befalls a dynamically inconsistent agent.

Some critics of the dynamic Dutch book argument – notably Isaac Levi, David Christensen, and Colin Howson and Peter Urbach32 – maintain that the inconsistency is only apparent: there really can be no inconsistency. Fair betting odds can only be regarded as internally contradictory if they are simultaneously accepted as fair by an agent. Having two different betting odds for equivalent propositions at two different times is not self-contradictory. Thus no dynamic Dutch book argument could ever succeed in showing that an agent is inconsistent.

This line of reasoning misses how update rules are being evaluated in Lewis's argument. For a correct understanding of the argument we need to distinguish between evaluations that are made ex ante (before the learning event) and those that are made ex post (after the learning event). For example, when making decisions we have to make a choice before we know which outcomes result from our choice. After we've chosen an act, the true outcomes are revealed. If it is not what we had hoped for, we regret our choice (the familiar "if only I had known . . . "). That is, we tend to evaluate acts differently ex post, after we've experienced their consequences. However, that is of no help when making a choice; a choice must be made ex ante without this kind of ex post knowledge.

In Lewis's dynamic Dutch book argument, update rules are evaluated ex ante. An epistemic agent adopts an update rule before observing which member of the partition P is true. This is implicit in the setup: the agent is assumed to accept bets that are fair not just according to her prior, but also those bets that will be fair in the future according to her update rule. Thus, she does commit to all these fair odds simultaneously from an ex ante perspective. This is the point Howson and Urbach, Christensen, and other critics of dynamic Dutch book arguments deny because they think of fair odds (and the partial beliefs they accompany) from an ex post perspective. After the learning event we focus purely on the outcome of learning and not on how the outcome came about. From this point of view, the only requirement is that my beliefs represent my best judgments given everything I know after having observed which member of P is true. This explains why critics of dynamic consistency think of beliefs in a Lewisian learning situation just as beliefs at different times.

32 See Levi (1987), Christensen (1991), Howson and Urbach (1993) and Howson (2000).


Whether update rules should be evaluated ex ante or ex post is, I think, the main point of contention in the debate about dynamic consistency. Ex ante our future opinions are required to cohere with our present opinions, and ex post anything goes. What is the right point of view? There are good reasons to prefer the ex ante approach. In particular, it can be argued that for epistemically rational agents an ex post evaluation cannot differ from an ex ante evaluation. In order to explain why, let me start with a remark by de Finetti; in this remark, he refers to “previsions,” a term that includes partial beliefs but also more general types of opinions: If, on the basis of observations, and, in particular, observed frequencies, one formulates new and different previsions for future events whose outcome is unknown, it is not a question of correction. It is simply a question of a new evaluation, cohering with the previous one, and making use – by means of Bayes’s theorem – of the new results which enrich one’s state of information, drawing out of this the evaluations corresponding to this new state of information. For the person making them (You, me, some other individual), these evaluations are as correct now, as were, and are, the preceding one’s, thought of then. There is no contradiction in saying that my watch is correct because it now says 10.05 p.m., and that it was also correct four hours ago, although it then said 6.05 p.m.33

It is well known that de Finetti did not have a dynamic Dutch book argument for conditioning.34 But this quote shows quite clearly that he understood the underlying issues very well. He distinguishes an evaluation that coheres "with the previous one" from "correcting previous evaluations" based on what he later calls "wisdom after the event" – an ex post evaluation. This distinction, according to de Finetti, is "of genuine relevance to the conceptual and mathematical construction of the theory of probability."35

I have chosen to quote this passage because it can be used to illustrate what is involved when ex ante beliefs and ex post beliefs diverge. An

33 De Finetti (1974, p. 208), his emphasis.
34 Hacking (1967).
35 See de Finetti (1974, p. 208). Many other authors have developed a view of consistent updating along similar lines. One instance is Good's device of imaginary observations (Good, 1950). Savage considers decisions as complete contingency plans that are made in advance of sequences of events (Savage, 1954). This prompts Binmore (2009) in his comments on Savage's system to view a subjective probability space as the result of a "massaging" process of an agent's beliefs where she already now considers the effect of all possible future observations. For the dynamic Dutch book argument, Skyrms (1987a), for instance, makes it very clear that an agent considers the effects of learning from her current point of view. It also seems to me that in the appendix of Kadane et al. (2008) the authors understand conditioning in essentially the same way as here. See also Lane and Sudderth (1984) for a particularly clear statement of coherence over time.


epistemic agent’s ex ante beliefs include her prior probabilities before learning which proposition of the observational partition P is true. In particular, they include her probabilities conditional on the members of that partition. De Finetti speaks of “correct evaluations,” which is a little misleading because it suggests that there is something like a “true” system of probabilities that is adopted by the agent. Since de Finetti vigorously opposed the idea of true or objective probabilities throughout his career, this cannot be what he had in mind. What de Finetti calls a correct evaluation corresponds to what we have earlier referred to as “best judgments” or “best estimates.” A system of opinions represents an agent’s best judgments if it takes into account all the information she has at a time. In other words, nothing short of new information would change her beliefs. That her beliefs are an agent’s best judgments clearly is a necessary condition for epistemic rationality. Suppose now that the agent is epistemically rational: her ex ante beliefs are her best judgments before the learning event. If the agent updates to a posterior that is incompatible with her prior conditional probabilities, she cannot endorse that posterior ex post without contradicting the assumption that her prior probabilities are best judgments ex ante. To put it another way, if the agent is epistemically rational her probabilities conditional on members of the partition are her best estimates given what she knows before the learning event together with the information from the learning event; therefore a deviating posterior cannot represent her best estimates given the very same information as a matter of consistency (otherwise, she would have two distinct best estimates). This is what drives Lewis’s dynamic Dutch book argument, and this is also why de Finetti concludes that a rational epistemic agent – that is, an agent who makes best judgments before and after the learning event – is dynamically consistent. Since ex post evaluations cannot disagree with ex ante evaluations in a Lewisian learning event unless the agent is epistemically irrational, the ex ante point of view seems entirely adequate for analyzing rational update rules. This is not to say that ex post and ex ante perspectives can never come apart. The arguments above presuppose that nothing unanticipated happens. This is an especially stringent assumption for the dynamic Dutch book argument considered here, since the Lewisian learning event is restricted to an observational partition P. There is much besides learning which member of P is true that can happen: we might obtain unanticipated information or derive unanticipated conclusions from what has been observed. There are many ways in which learning the truth of a proposition can, in Ramsey’s words, “profoundly alter” a system of beliefs. So, returning to Ramsey’s views about belief change, in the first quote Ramsey plausibly


referred to situations that go beyond a simple Lewisian learning event. The second quote, though, is clearly compatible with the ex ante perspective of Lewis’s dynamic Dutch book. At this point we do not have the conceptual resources to model situations that go beyond a Lewisian learning event. After having developed those resources over the next several chapters, we are going to see that dynamic consistency is not just crucially important for conditioning, but also for other classes of learning models. In fact, all probabilistic models of learning are dynamically consistent in the sense that they incorporate new information consistently into an agent’s old system of opinions, regardless of how “information” and “opinions” are being represented in the model. Dynamic consistency establishes a deep connection among probabilistic learning models. I now turn to another such connection: probabilistic symmetries.

1.5 Symmetry and Inductive Inference

Learning from an observation is usually not an isolated event, but part of a larger observational investigation. When learning proceeds sequentially, observations reveal an increasing amount of evidence that allows an agent to adjust her opinions, which thereby become increasingly informed. Taking the conditioning model of the previous section as our basic building block leads to Bayesian inductive inference along a sequence of learning events. The sequence of learning events is often assumed to be infinite – not because the agent actually makes infinitely many observations, but in order to approximate the case of having a large, but finite, sequence of observations.

At this point, though, Bayesian inductive inference runs into a practical problem: while the conditioning model is in principle applicable to an infinite sequence of learning events, the specification of a full probability measure over the measurable space of all learning events is, in general, forbiddingly complex. In order to illustrate this point, consider the canonical example of flipping a coin infinitely often. This process can be represented by the set of all infinite sequences of heads and tails together with the standard Borel σ-algebra of measurable sets of those sequences, which is the smallest σ-algebra that includes all finite events. A probability measure needs to assign probabilities to all those sets, for otherwise conditional probabilities (and, hence, conditioning) will sometimes be undefined. Without any principles that guide the assignment of probabilities, it seems that finite minds could never be modeled as rational epistemic agents when flipping coins infinitely often.


What gets us out of this predicament is a time-honored strategy against oppressive complexity: the use of symmetries. Symmetry considerations have been immensely successful in the sciences and mathematics because they simplify a domain of inquiry by identifying those of its features that are invariant in some appropriate sense.36 In the example of flipping a coin infinitely often, the simplest invariances are the ones we are most familiar with: the coin is fair and the probabilities of heads and tails do not depend on the past – that is, coin tosses are independently and equally distributed. Giving up one invariance – equal probabilities – but retaining the other – fixed probabilities of heads and tails regardless of the past – leads to independently and identically distributed (i.i.d.) coin tosses, which may be biased toward heads or tails. These symmetries are very strong; in particular, they make it impossible to learn from experience: even after having observed a thousand heads with a fair coin, the probability of heads on the next trial still is only one-half.

Inductive learning becomes possible for i.i.d. coin flips if the chance of heads is unknown. Since Thomas Bayes and Pierre Simon de Laplace such learning situations have been modeled in terms of chance priors – that is, distributions over the set of possible chances of heads.37 A chance prior can be updated by conditioning in response to observations. The chance posterior, then, expresses an agent's new opinions about chances. Consider, for instance, the uniform chance prior, which judges all chance hypotheses to be equally likely. In this case, an agent's new probability of observing heads on the next trial given that she has observed h heads in the first n trials is equal to

$$\frac{h+1}{n+2}.$$

This inductive procedure is due to Laplace. John Venn, in an attempt to ridicule Laplace's approach, called it "Laplace's rule of succession," and the name stuck.38 In modern parlance, the uniform prior is a special case of a beta distribution. The family of beta distributions is parametrized by two positive parameters, α and β, which together determine the shape of the distribution. By setting α = β = 1, we get the uniform distribution; other specifications express different kinds of prior opinions about chances.

36 For a superb introduction to symmetry and invariance, see Weyl (1952). For a philosophical treatise of symmetries, see van Fraassen (1989).
37 See Bayes (1763) and Laplace (1774).
38 Venn (1866).


Given a beta distribution, Laplace's rule of succession generalizes to the following conditional probability of observing heads on the next trial:

$$\frac{h+\alpha}{n+\alpha+\beta}.$$

The parameters α and β regulate both the initial probabilities of heads and tails and the speed of learning. Prior to having made any observations, the initial probability of heads is $\alpha/(\alpha+\beta)$. The values of α and β determine how many observations it takes to outweigh an agent's initial opinions. However, regardless of α and β, in the limit of infinitely many coin flips the agent's conditional probabilities of heads converge to its relative frequency.

There is nothing special about coin flips. The Bayes–Laplace model can be extended to infinite sequences of observations with any finite number m of outcomes. Such a process is represented by an infinite sequence of random variables X1, X2, . . ., each of which takes on values in the set {1, . . . , m} that represents m possible outcomes. Let ni be the number of times an outcome i has been observed in the first n trials, X1, . . . , Xn. If an epistemic agent updates according to a generalized rule of succession, there are positive numbers αj for each outcome j such that the agent's conditional probabilities satisfy the following equation for all i and n:

$$P[X_{n+1} = i \mid X_1, \ldots, X_n] = \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}. \tag{1.1}$$
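As a quick illustration, the generalized rule of succession (1.1) is easy to compute directly from counts and parameters; the following sketch and its numbers are my own, not the book's.

```python
# Predictive probabilities from observed counts n_i and parameters alpha_i,
# as in equation (1.1).  With two outcomes and alpha = (1, 1) this reduces
# to Laplace's rule (h + 1) / (n + 2).

def rule_of_succession(counts, alphas):
    """Predictive probability of each outcome given counts and parameters."""
    total = sum(counts) + sum(alphas)
    return [(c + a) / total for c, a in zip(counts, alphas)]

# Laplace's special case: 7 heads in 10 tosses with a uniform (Beta(1,1)) prior.
print(rule_of_succession([7, 3], [1, 1]))        # [0.666..., 0.333...]
# Three outcomes with asymmetric prior weights.
print(rule_of_succession([5, 0, 2], [2, 1, 1]))  # [7/11, 1/11, 3/11]
```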

Within the Bayes–Laplace framework, a generalized rule of succession is a consequence of the following two conditions: (i) Trials are i.i.d. with unknown chances p1 , . . . , pm of obtaining outcomes 1, . . . , m, and (ii) chances are distributed according to a Dirichlet distribution. The first condition defines Bayes–Laplace models; it says that they are the counterpart to classical models of statistical inference, which rely on i.i.d. processes with known chances. The restriction to Dirichlet priors in (ii) is necessary for conditional probabilities to be given by a generalized rule of succession. Dirichlet priors are the natural extension of beta priors to models with more than two outcomes. They are parametrized by m parameters, αj , 1 ≤ j ≤ m, which together determine the shape of the distribution. Both (i) and (ii) rely on the substantive assumption that chances are properties of physical objects, such as coins. But can chances be unequivocally ascribed to physical objects in an observationally determinate sense?


This seems to be difficult: flipping a coin finitely often is, after all, compatible with almost any chance hypothesis. Whatever one's take is on this question, using objective chances to derive something as down-to-earth as a rule of succession is a bit like cracking a nut with a sledgehammer. It should be possible to derive rules of succession by just appealing to an agent's beliefs about the observational process X1, X2, etc.

De Finetti, who, as already mentioned, was a vigorous critic of objective probabilities, developed a foundation for Bayes–Laplace models that doesn't rely on chances. His approach is based on studying the symmetry that characterizes the i.i.d. chance setup. This symmetry is generally called exchangeability. A probability measure is exchangeable if it is order invariant; reordering a finite sequence of outcomes does not alter its probability. Exchangeability is a property of an agent's beliefs. It says that the agent believes the order in which outcomes are observed is irrelevant. Notice that exchangeability does not refer to unobservable properties, such as objective chances, and is thus observationally meaningful.

Exchangeability is closely tied to Bayes–Laplace models. It is easy to see that i.i.d. processes with unknown chances are exchangeable. Less trivially, de Finetti proved that the converse is also true: If your probabilities for infinite sequences of outcomes are exchangeable, then they can be represented uniquely as an i.i.d. process with unknown chances. This is the main content of de Finetti's celebrated representation theorem. The theorem also shows that relative frequencies of outcomes converge almost surely, and that they coincide with the chances of the i.i.d. process.39

De Finetti's theorem is a genuine philosophical success. It constitutes a reconciliation of the three ways in which probabilities have traditionally been interpreted – namely, beliefs, chances, and relative frequencies. If your beliefs are exchangeable, then you may think of the observational process as being governed by a chance setup in which chances agree with limiting relative frequencies. Conversely, if you already think of the process as being governed by such a chance setup, your beliefs are exchangeable. The probabilist who does not wish to commit herself to the existence of objective chances is, by virtue of de Finetti's theorem, entitled to use the full power of Bayes–Laplace models while regarding chances and limiting relative frequencies as mere mathematical idealizations.

So de Finetti has shown that chance setups are, to some extent, superfluous. However, his representation theorem does not provide a fully satisfying foundation for generalized rules of succession.

39 See de Finetti (1937). For a survey of de Finetti's theorem and its generalizations, see Aldous (1985).
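Both directions of this picture can be illustrated by simulation; the following sketch, with an arbitrary seed and sample size and a uniform Beta(1, 1) chance prior, is my own and not an example from the text.

```python
# Mixing i.i.d. coin flips over a Beta prior yields an exchangeable sequence,
# and along each such sequence the relative frequency of heads settles down
# to the chance that was drawn.
import random

random.seed(0)
chance = random.betavariate(1, 1)        # "unknown chance" drawn from a uniform prior
flips = [random.random() < chance for _ in range(50_000)]
print("drawn chance:", round(chance, 3))
print("relative frequency of heads:", round(sum(flips) / len(flips), 3))
# Exchangeability: any reordering of a finite initial segment of the sequence
# has the same probability, since the flips are i.i.d. conditional on the chance.
```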


Exchangeability provides a foundation for i.i.d. processes with unknown chances, but it doesn't say anything about Dirichlet priors. In fact, de Finetti emphasized the qualitative aspects of his representation theorem – that is, those aspects that don't depend on a particular choice of a chance prior (the prior he refers to in the following quote):

It must be pointed out that precise applications, in which [the chance prior] would have a determinate analytic expression, do not appear to be of much interest: as in the case of exchangeability, the principal interest of the present methods resides in the fact that the conclusions depend on a gross, qualitative knowledge of [the chance prior], the only sort we can reasonably suppose to be given (except in artificial examples).40

It would seem, then, that there is no principled justification for generalized rules of succession unless one is willing, after all, to buy into the existence of chances in order to stipulate Dirichlet priors. As it turns out, another symmetry assumption can be used to characterize the family of Dirichlet priors without explicit reference to the chance setup. This symmetry was popularized as "Johnson's sufficientness postulate" by I. J. Good, in reference to W. E. Johnson, who was the first to introduce it in a paper published posthumously in 1932. The same symmetry was independently used by Rudolf Carnap in his work on inductive logic.41 Unlike exchangeability, which is defined in terms of unconditional probabilities for sequences of outcomes, Johnson's sufficientness postulate is a symmetry of conditional probabilities for sequences of outcomes. In its most general form, which was studied by Carnap and Sandy Zabell, the sufficientness postulate requires the conditional probability of an outcome i given all past observations to be a function of i, the number of times ni it has been observed, and the total sample size n:

$$P[X_{n+1} = i \mid X_1, \ldots, X_n] = f_i(n_i, n). \tag{1.2}$$

This says that all other information about the sample – in particular, how often outcomes other than i have been observed or the patterns among outcomes – is irrelevant for i's predictive probability.

If an epistemic agent updates according to a generalized rule of succession, her beliefs clearly are both exchangeable and satisfy Johnson's

40 See de Finetti (1938, p. 203).
41 See Johnson (1932). Johnson also introduced exchangeability before de Finetti did; see the "permutation postulate" in Johnson (1924). On Carnap, see Carnap (1950, 1952, 1971, 1980). Kuipers (1978) is an excellent overview of the Carnapian program. Zabell (1982) provides a precise reconstruction of Johnson's work.


sufficientness postulate. The converse is also true if we assume that the agent's probabilities are regular (that is, every finite sequence of outcomes has positive prior probability). More precisely, suppose we substitute the following two conditions for (i) and (ii) above: (i′) prior probabilities are exchangeable; and (ii′) conditional predictive probabilities satisfy Johnson's sufficientness postulate (1.2). Then it can be shown that conditional probabilities are given by a generalized rule of succession, as in (1.1). Since the crucial contribution to this result is due to Johnson and Carnap, generalized rules of succession are often referred to as the Johnson–Carnap continuum of inductive methods in inductive logic.

In the foregoing remarks I have tried to avoid technical details. But since the approach to inductive inference outlined here will play an important role in later chapters, I invite readers to take a look at Appendix A, where I provide more details on exchangeability and inductive logic. The upshot of this section is that conditionalization specializes to the Johnson–Carnap continuum of inductive methods if an agent's beliefs satisfy two plausible symmetry assumptions – exchangeability and the sufficientness postulate. By de Finetti's representation theorem, this is tantamount to assuming an i.i.d. chance setup that generates the sequence of observations, with chances being chosen according to a Dirichlet distribution.

1.6 Summary and Outlook

The Johnson–Carnap continuum of inductive methods is rightly regarded as the most fundamental family of probabilistic learning models. What we have discussed so far shows that this model is a natural consequence of the basic elements of probabilism:

(a) Probability measures, modeling partial beliefs.
(b) Conditioning on observational propositions.
(c) Symmetry assumptions on sequences of observations.

Both (a) and (b) are, in my view, principles of epistemic rationality. How symmetry assumptions should be understood is one of the main topics of this book. Now, symmetries require certain events to have the same probability.


Assignments of equal probabilities are sometimes justified by some form of the principle of indifference, which says that certain events have the same probability in the absence of evidence to the contrary. If a principle of indifference could support (c), then the Johnson–Carnap continuum would follow from rationality assumptions alone. However, there are serious reasons to doubt the cogency of principles of indifference. I shall return to this topic in Chapter 5. For now, let me just note that it would be surprising to have a principle for assigning sharp probabilities regardless of one's epistemic circumstances. As an alternative, we can view symmetries as the inductive assumptions of an agent, which express her basic beliefs about the structure of a learning situation.42 On this understanding of symmetry assumptions, the Johnson–Carnap continuum follows from rationality considerations (the two consistency requirements (a) and (b)) together with substantive beliefs about the world. This approach offers us the kind of qualified and local justification of inductive inference mentioned at the beginning of this chapter. A method of updating beliefs, such as the Johnson–Carnap continuum, is never unconditionally justified, but only justified with respect to an underlying set of inductive assumptions. To put it differently, inductive reasoning is not justified by a particularly rational starting point, but from rationally incorporating new information into one's system of beliefs, which is itself not required to be unconditionally justified.

Richard Jeffrey pointed out that the model of Bayesian conditioning, referred to in (b), is often too restrictive.43 Conditioning is the correct way of updating only in what we have called Lewisian learning situations. Jeffrey's deep insight was that changing one's opinions can also be epistemically rational in other types of learning situations. His primary model is learning from uncertain observations, which is known as probability kinematics or Jeffrey conditioning. But Jeffrey by no means thought that probability kinematics is the only alternative to conditioning. New information can come in many forms other than certain or uncertain observations. This insight becomes especially important in the light of considerations of "bounded rationality." The criticisms Herbert Simon has directed against classical decision theory apply verbatim to classical Bayesian models of learning, which also ignore informational, procedural, and other bounding aspects of learning processes.44

42 I borrow the term "inductive assumptions" from Howson (2000) and Romeijn (2004).
43 Jeffrey (1957, 1965, 1968).


Weakening the assumptions of classical models gives rise to learning procedures that combine Jeffrey's ideas with considerations of symmetry. In the following chapters, we explore some salient models and try to connect them to the classical Bayesian theory.

44 See Simon (1955, 1956).


2 Bounded Rationality

It is surprising, and perhaps a reflection of a certain provincialism in philosophy, that the problem of induction is so seldom linked to learning. On the face of it, an animal in a changing environment faces problems no different in general principle from those that we as ordinary humans or as specialized scientists face in trying to make predictions about the future.

Patrick Suppes, Learning and Projectibility


This chapter applies the ideas developed in the preceding chapter to a class of bounded resource learning procedures known as payoff-based models. Payoff-based models are alternatives to classical Bayesian models that reduce the complexity of a learning situation by disregarding information about states of the world. I am going to focus on one particular payoff-based model, the “basic model of reinforcement learning,” which captures in a precise and mathematically elegant way the idea that acts which are deemed more successful (according to some specific criterion) are more likely to be chosen. What we are going to see is that the basic model can be derived from certain symmetry principles, analogous to the derivation of Carnap’s family of inductive methods. Studying the symmetries involved in this derivation leads into a corner of decision theory that is relatively unknown in philosophy. Duncan Luce, in the late 1950s, introduced a thoroughly probabilistic theory of individual choice behavior in which preferences are replaced by choice probabilities. A basic constraint on choice probabilities, known as “Luce’s choice axiom,” together with the theory of commutative learning operators, provides us with the fundamental principles governing the basic model of reinforcement learning. Our exploration of the basic model does not, of course, exhaust the study of payoff-based and other learning models. I indicate some other possible models throughout the chapter and in the appendices. The main conclusion is that learning procedures that stay within a broadly probabilistic framework often arise from symmetry principles in a way that is analogous to Bayesian models.


2.1 Fictitious Play

Learning can be seen as a good in itself, independent of all the other aims we might have. But learning can also help with choosing what to do. We typically expect to make better decisions after having obtained more information about the issues at hand. Taking this thought a little bit further, we typically expect to choose optimally when we have attained a maximally informed opinion about a learning situation. These ideas are simplifications, no doubt. But they bear out to some extent within the basic theory developed in the preceding chapter.

Suppose there are k states, S1, . . . , Sk, and m acts, A1, . . . , Am. Each pair of states and acts, A&S, has a cardinal utility, u(A&S), which represents the desirability of the outcome A&S for the agent. This is the standard setup of classical decision theory as developed, for instance, by Savage.1 Let's suppose, for simplicity, that the decision problem is repeated infinitely often. Let's also assume that the agent has a prior probability over the measurable space of all infinite sequences of states. She updates the prior to a posterior by conditioning on observed states. At the next stage of the process she chooses an act that maximizes some sort of expected utility with respect to the posterior. What kind of expected utility is being maximized depends on how sophisticated the agent is supposed to be. A very sophisticated agent may contemplate the effects of choices on future payoffs and maximize a discounted future expected utility. At the other end of the spectrum, a myopic agent chooses an act that only maximizes immediate expected utility.

The simplest implementation of this idea, known as fictitious play, combines myopic choice behavior with the Johnson–Carnap continuum of inductive methods.2 A fictitious player's conditional probability of observing state Si at the (n + 1)st stage is given by a generalized rule of succession (ni is the number of times Si has been observed thus far):

$$\frac{n_i + \alpha_i}{n + \sum_j \alpha_j}.$$

Before the true state is revealed, she chooses an act A that maximizes expected utility relative to predictive probabilities:

$$\sum_i u(A \& S_i)\, \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}.$$

1 Savage (1954).
2 Fictitious play was introduced in Brown (1951). For more information on fictitious play, see Fudenberg and Levine (1998) and Young (2004).


Any such act A is called a best response. If there is more than one best response, one of them is chosen according to some rule for breaking ties (e.g., by choosing a best response at random).

Fictitious play is rather simple compared to its more sophisticated Bayesian cousins. It still is successful in certain learning environments, though. If the sequence of states of the world is generated by an i.i.d. chance setup – that is, if the learning environment is indeed order invariant – fictitious play will converge to choosing a best response to the chance distribution. This is an immediate consequence of the law of large numbers. For each state Si, if pi denotes the chance of Si, then any generalized rule of succession converges to pi with probability one. Since fictitious play chooses a best response to the probabilities at each stage, it converges with probability one to choosing an act A that maximizes expected utility with respect to the chances p1, . . . , pk:

$$\sum_i u(A \& S_i)\, p_i.$$

Fictitious play thus exemplifies the idea that inductive learning helps us make good decisions. Needless to say, it comes with inductive assumptions (the learning environment is assumed to be order invariant). On top of this, there are assumptions about choice behavior. A fictitious player chooses in a way that is consistent with maximizing expected utility. This commits us to consider the agent as conforming to Savage’s theory of preferences or a similar system.
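A short simulation can illustrate this convergence behavior; the environment, the utilities, and the prior parameters below are stipulated for the example and are not taken from the text.

```python
# Fictitious play against an i.i.d. environment: states are drawn with fixed
# chances, the agent predicts by the generalized rule of succession, and she
# picks a myopic best response.  Her predictive probabilities approach the
# true chances, so eventually she plays a best response to them.
import random

chances = [0.2, 0.5, 0.3]              # true (unknown) chances of states S1, S2, S3
utilities = [[1.0, 0.0, 0.0],          # u(A1 & S_i)
             [0.4, 0.4, 0.4]]          # u(A2 & S_i): a "safe" act
alphas = [1.0, 1.0, 1.0]               # prior parameters of the rule of succession
counts = [0, 0, 0]

for t in range(10_000):
    total = sum(counts) + sum(alphas)
    predictive = [(c + a) / total for c, a in zip(counts, alphas)]
    # myopic best response: maximize expected utility under the predictive probabilities
    act = max(range(len(utilities)),
              key=lambda a: sum(u * p for u, p in zip(utilities[a], predictive)))
    state = random.choices(range(3), weights=chances)[0]
    counts[state] += 1

print("final predictive probabilities:", [round(p, 3) for p in predictive])
print("act chosen at the end:", act)   # expected utilities are 0.2 for A1 and 0.4 for A2
```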

2.2 Bandit Problems

Fictitious play involves being presented with information about states of the world and choosing acts based on that information. This works because in Savage's theory acts and states are independent. But not all repeated decision situations have this structure. Consider a class of sequential decision situations known as bandit problems. The paradigmatic example of a bandit problem is a slot machine with multiple arms (or, equivalently, multiple one-armed slot machines) with unknown payoff distribution. In the simplest case there are two arms, L (left) and R (right), and two payoffs, 0 (failure) and 1 (success). The success probability of L is p, and the success probability of R is q. In general, the values p and q are unknown.


Figure 2.1 A two-armed bandit with unknown success probabilities p (left) and q (right).

The extensive form of the two-armed bandit problem is shown in Figure 2.1. Clearly, L is the better choice if p > q and R is the better choice if p < q. Bandit problems are not just relevant for gambling, but have also been investigated in statistics and in computer science.3 They are instances of a widely applicable scheme of sequential decision problems in which nature moves after a decision maker has chosen an act. One of the most significant applications of bandit problems is the design of sequential clinical trials: testing a new treatment is like choosing the arm of a bandit with unknown distribution of success.4 Bandit problems also have applications in philosophy of science, where they can be used to model the allocation of research projects in scientific communities.5

One way in which bandit problems differ from Savage decision problems is that states of the world are not directly observable. States are given by the possible distributions with which nature chooses payoffs. In the two-armed bandit problem of Figure 2.1, states are pairs of real numbers (p, q), 0 ≤ p, q ≤ 1, that represent the success probabilities for the first and the second arm, respectively. These states are only indirectly accessible through observed payoffs.

There is another difference between Savage decision problems and bandit problems. Since observing the payoff consequences of an act is only possible after choosing that act, evidence about states and future payoff consequences can only be obtained by choosing the corresponding act. In order to illustrate this point, consider a method analogous to fictitious play.6 Let Ai be the ith act and πj the jth payoff (utility). The conditional probability of obtaining payoff πj given that Ai is chosen on the (n + 1)st trial is

$$\frac{n_{ij} + \alpha_{ij}}{n_i + \sum_j \alpha_{ij}}, \tag{2.1}$$

3 In their present form bandit problems were introduced by Robbins (1952). Berry and Fristedt (1985) is a canonical reference.
4 E.g., Press (2009).
5 See Zollman (2007, 2010).
6 For more information on how to derive this model, see Appendix B and Huttegger (2017).


$$\frac{n_{ij} + \alpha_{ij}}{n_i + \sum_j \alpha_{ij}}\,, \qquad (2.1)$$

where nij is the number of times πj was obtained after having chosen Ai, αij is a positive parameter, ni = Σj nij, and n = Σj nj. Clearly, this quantity is updated only in case Ai is chosen. This has important consequences for learning from experience. The expected payoff of Ai is given by

$$\sum_j \frac{n_{ij} + \alpha_{ij}}{n_i + \sum_j \alpha_{ij}}\,\pi_j = \frac{1}{n_i + \sum_j \alpha_{ij}} \sum_j \pi_j (n_{ij} + \alpha_{ij}). \qquad (2.2)$$
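As a rough illustration of (2.1) and (2.2), the following sketch runs the analogue of fictitious play on the two-armed bandit of Figure 2.1. The success probabilities p and q and the prior parameters αij are illustrative assumptions.

```python
import random

# Average reinforcement learning in the two-armed bandit (a sketch; the
# numbers p, q and the prior parameters alpha are illustrative assumptions).
p, q = 0.6, 0.4                        # unknown success probabilities of arms L and R
arms, payoffs = ["L", "R"], [0, 1]
alpha = {(a, pi): 1.0 for a in arms for pi in payoffs}   # prior parameters alpha_ij
n = {(a, pi): 0 for a in arms for pi in payoffs}         # counts n_ij

def expected_payoff(arm):
    # Equation (2.2): sum_j pi_j (n_ij + alpha_ij) / (n_i + sum_j alpha_ij)
    denom = sum(n[arm, pi] + alpha[arm, pi] for pi in payoffs)
    return sum(pi * (n[arm, pi] + alpha[arm, pi]) for pi in payoffs) / denom

for _ in range(1000):
    arm = max(arms, key=expected_payoff)          # exploit: choose what seems best on average
    success = random.random() < (p if arm == "L" else q)
    n[arm, int(success)] += 1                     # only the chosen arm's counts are updated

# With pure exploitation the agent may lock onto the inferior arm (see below).
print({a: expected_payoff(a) for a in arms})
```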

When choosing an act that maximizes this quantity, an agent chooses what seems best on average given the payoffs thus far. In this regard, such an agent is similar to a fictitious player. If the bandit problem is repeated, the agent faces a tradeoff between exploiting her information by choosing a best response to her subjective estimates of success probabilities, and exploring payoff consequences to get more information. When choosing an act solely based on the evidence one has obtained – exploitation – one foregoes the opportunity to get more information about the payoff consequences of other acts – exploration. This tradeoff will strike most of us as a basic feature of the human condition. As a result of choosing what seems best to us now we give up on discovering something that might be even better; on the other hand, by always looking for something better we fail to choose what in our best judgment is good for us now. There is no escape.

In the previous section, we saw that under certain conditions fictitious play converges to choosing the optimal act in infinitely repeated Savage decision problems. This result fails for bandit problems. Michael Rothschild has shown that even very sophisticated Bayesian learners converge with positive probability to choosing the suboptimal arm in the two-armed bandit problem of Figure 2.1.7 This happens in situations where the agent's current evidence suggests that R is better, which can lead her to stop choosing L for good. As a consequence of choosing L only finitely often, the agent's estimate of p does not converge to its true value. Her expectations are self-fulfilling, and R is chosen in the limit even though p > q. This result reveals that for fully rational Bayesian agents the tension between exploitation and exploration may persist indefinitely.8

7 Rothschild (1974). By sophisticated I mean that the Bayesian agent takes into account effects of choices on future payoffs by properly discounting them.
8 A general strategy for choosing optimal acts in bandit problems is given by the Gittins index; see Gittins (1979).

On the one
hand, this does not seem to be a big problem, since the agent chooses optimally given the evidence. This does not mean that her choices are in fact correct; it only means that they are correct in her current best judgment. However, from the long-run perspective there clearly is a problem: the agent may be unable to learn the true success probabilities and thus fail to choose optimally in the limit. In order to evade this result, an agent needs to take precautions against becoming too complacent based on the available evidence. One way is for her to choose nearly optimally – that is, to sometimes perturb choices so as to gather more information about acts that fail to maximize expected payoffs. Take our analogue of fictitious play as an example, and suppose that in each period the agent chooses the arm having lower expected payoff (2.2) with some small positive probability. In Appendix A, I explain that certain perturbations allow the analogue of fictitious play to almost surely converge to the correct success probabilities and to choosing optimal acts. The key property is that perturbations decrease to zero at an appropriate rate, which guarantees that every act is chosen infinitely often. Perturbing one’s choices in this way is not necessarily irrational, assuming we take into account the opportunity cost of failing to acquire information about payoff consequences. However, any general solution to the problem of how to perturb optimally would be immensely complex. By contrast, slightly perturbing a choice rule is less complex and seems to be good enough, even though it might not be optimal. These considerations are compatible with boundedly rational decision making, about which I shall say more in a moment. Notice that an agent who does not always choose optimally may still update her degrees of belief rationally. This suggests that there are other reasonable learning processes that exhibit dynamic rationality – in terms of learning consistently from experience – but don’t maximize expected utility. The models I have in mind first and foremost are the payoff-based learning models introduced in the next section.
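The following sketch adds a perturbation of the kind just described to a simple bandit learner: with a probability that decreases to zero over time, the agent ignores her estimates and picks an arm at random. The decay rate used here is an illustrative assumption, not the rate discussed in Appendix A; the point is only that a slowly vanishing perturbation keeps every act being chosen.

```python
import random

# Sketch of a perturbed choice rule in the two-armed bandit: choose the arm
# with the higher estimated success probability, but with probability eps_t
# (decreasing to zero) pick an arm at random.  The decay rate below is an
# illustrative assumption; what matters is that the perturbation vanishes
# slowly enough for both arms to be chosen again and again.
p, q = 0.4, 0.6                                   # unknown success probabilities
counts = {"L": [1.0, 1.0], "R": [1.0, 1.0]}       # [failures, successes] plus prior weights

def estimate(arm):
    fail, succ = counts[arm]
    return succ / (fail + succ)

for t in range(1, 100_001):
    eps = t ** -0.5
    if random.random() < eps:
        arm = random.choice(["L", "R"])           # perturbed (exploratory) choice
    else:
        arm = max(("L", "R"), key=estimate)       # choice that seems best now
    success = random.random() < (p if arm == "L" else q)
    counts[arm][1 if success else 0] += 1

print(estimate("L"), estimate("R"))               # estimates of p and q improve over time
```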

2.3 Payoff-Based Learning Procedures

A payoff-based model of learning incorporates information about acts and payoffs, but has no information about states of the world.9 In a bandit problem one cannot but adopt a payoff-based procedure in virtue of the problem's structure. Besides bandit problems, a more important motivation for studying payoff-based procedures is that they speak to the worries that motivated Herbert Simon's work on bounded rationality in decision making.10 Simon was a vigorous critic of the assumptions of classical decision theory, which focuses on what he has termed substantive rationality. Substantive rationality looks at a decision maker from an external point of view and tries to determine whether she can be thought of as maximizing expected utility in a given environment, regardless of how she in fact arrives at a decision. While the substantive perspective is, in my view, important for understanding rational choice, Simon correctly points out that it fails to capture all relevant aspects of rationality in choice situations. In particular, if we wish to evaluate choices not just from an external, but also from the agent's own perspective, we cannot avoid considering procedural aspects of how she arrives at a decision:

Broadly stated, the task is to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist.11

9 See Fudenberg and Levine (1998) and Young (2004).

Since agents are typically bounded in various ways, the processes by which they arrive at a decision will often be different from the optimizing procedures of classical decision theory. Simon's work has resulted in a variety of models, but there is no unified theory of boundedly rational decision making.12 This should not be surprising: considerations of procedural rationality draw from many distinct sources and can accordingly affect models in many different ways. Independently of Simon, procedural limitations have also been pointed out by Bayesian authors, most notably I. J. Good, who has introduced a distinction between what he calls Type I and Type II rationality. Type I rationality is the ideal of Bayesian decision theory, while Type II rationality takes into account the time and costs of analyzing a problem and thus may depart from the Type I ideal. According to Good, Type II rationality often justifies a compromise between Bayesian and non-Bayesian methods in statistics.

10 Simon (1955, 1957, 1976, 1986).
11 Simon (1955, p. 99).
12 See Aumann (1997) and Rubinstein (1998).

We will see something similar happening when learning is viewed through the lens of Type II rationality, although the compromise
will to some extent turn out to lead to a generalization of some Bayesian principles.13

Bayesian models of learning, including fictitious play, also ignore many of the bounding aspects of learning and reasoning processes.14 Payoff-based learning procedures relax the notorious assumption of having knowledge of states of the world. For instance, in order to implement a fictitious play process, a learner is assumed to know and observe the relevant states. This may fail for any number of reasons. The number of states might be too large, they might be difficult to identify, or too complex or costly to keep track of. In some situations, agents (animals or simpler organisms, but also us) might not possess the cognitive capacities to observe the relevant states of the world. In contrast, payoff consequences of acts are far more accessible; and because they represent how an agent is affected by her own choices, they provide information about the value of acts. This information can be used by boundedly rational learners.

Many payoff-based learning procedures follow a reinforcement learning scheme. The basic idea of reinforcement learning goes back to Edward Thorndike's law of effect,15 which says that acts are more likely to be chosen in the future if they have been successful in the past. The learning procedure (2.2) mentioned in the preceding section is one instance of this idea. The success of an act, Ai, is identified with its cumulative payoff, Σj nij πj + αij πj, where the second term of the sum can be thought of as the initial propensity for choosing Ai. Dividing the cumulative payoff by the total number of times Ai has been chosen plus its prior weight, ni + Σj αij, guarantees that an act does not seem good merely because it has been chosen a large number of times. Because (2.2) includes this type of averaging, I will refer to it as average reinforcement learning.

13 See, for instance, Chapters 2 and 4 of Good (1983) on Type I and Type II rationality. Savage was also concerned with procedural limitations, as shown in his discussion of small and large worlds in Savage (1954). We will consider this topic in Chapter 4.
14 Selten (2001) emphasizes the role of learning and deliberation for boundedly rational decision making.
15 Thorndike (1911, 1927).
16 See Camerer and Ho (1999).

Average reinforcement learning updates on realized payoffs: it only has access to information about payoffs that were actually obtained after an act has been chosen. The same is true of other payoff-based learning procedures. Fictitious play, on the other hand, can be thought of as a hypothetical reinforcement learning process: in addition to realized payoffs, it also updates on counterfactual payoffs.16 Fictitious play has access to counterfactual
payoffs because once the state of the world, S, has been revealed it can determine the payoff u(A&S) for every act A, and not just the one that was chosen. What this shows is that payoff-based learning models treat every decision situation as a bandit problem. As a result, they are more broadly applicable than fictitious play, because the latter requires additional information about counterfactual payoffs. On the other hand, fictitious play is not subject to the exploitation-exploration tradeoff. Adopting bounded learning procedures comes with both advantages and disadvantages. Much of the literature on learning in decisions and games focuses on the question whether bounded learning procedures can be as successful as traditional Bayesian methods. Results on when simple learning procedures converge in normal form games that are played repeatedly are one instance of this program.17 In several classes of games, boundedly rational learning procedures typically agree with Bayesian models with respect to the long run.18 This indicates that there are deep connections between different learning models. The new avenue for studying those connections that I want to propose here is to identify the fundamental inductive principles underlying different learning models. For fictitious play, which adopts the Johnson– Carnap continuum of inductive methods, this was done in the previous chapter. Average reinforcement learning can be treated similarly.19 The key concept is a generalization of exchangeability called partial exchangeability. De Finetti introduced partial exchangeability in order to capture inductive situations that are not governed by order invariance, but by more complex symmetries, which require exchangeability only within distinct types of observations and not across different types. In the context of average reinforcement learning, types are given by acts. Each act results in one of a number of payoff consequences. The sequence of payoffs is partially exchangeable if each subsequence of payoffs associated with an act is exchangeable. In other words, the payoff consequences of each act are exchangeable, whereas exchangeability of payoffs need not hold for different acts. Together with a modification of Johnson’s sufficientness postulate and a regularity assumption, partial exchangeability is sufficient for deriving the predictive probabilities, (2.1), of average reinforcement learning.

17 See, e.g., Samuelson (1997) and Hofbauer and Sigmund (1998) on some relevant results in evolutionary game theory.
18 An excellent case study is Hopkins (2002).
19 See Huttegger (2017) for details.


The choice rule, (2.2), then follows from the usual assumption about maximizing behavior. (More information on partial exchangeability can be found in Appendix B.) What this shows is that the foundations of a bounded learning procedure – average reinforcement learning – can be developed along exactly the same lines as the foundations of more traditional Bayesian models of learning. Average reinforcement learning, however, is still very close to the classical Bayesian paradigm in that it maximizes expected utility with respect to predictive probabilities of payoffs. The question to which I turn now is whether we can do something similar for payoff-based models that depart more radically from Bayesian models.

2.4 The Basic Model of Reinforcement Learning

The basic model of reinforcement learning, which is regarded by many as one of the main competitors of fictitious play for describing human decision behavior in games, is more robust in bandit problems than average reinforcement learning: it exploits and explores in a well-tempered way and converges to choosing the optimal act with probability one. On top of that, the basic model of reinforcement learning has a number of desirable convergence properties in normal form and extensive form games.20 In the basic model, each act, Ai, has an intrinsic propensity, Qi(n), of being chosen at time n. Propensities determine choice probabilities according to the following rule:

$$P_i(n) = \frac{Q_i(n)}{\sum_j Q_j(n)}. \qquad (2.3)$$

Thus, choice probabilities are proportional to propensities. Propensities are updated by the following reinforcement scheme:

$$Q_i(n+1) = \begin{cases} Q_i(n) + \pi(n) & \text{if } A_i \text{ is chosen at time } n \\ Q_i(n) & \text{otherwise.} \end{cases} \qquad (2.4)$$

The random variable π(n) denotes the payoff obtained at trial n. The payoff can depend on Ai, but also on other factors. These elements give rise to a stochastic process, provided that payoffs and propensities are positive.

20 The basic model goes back to Richard Herrnstein and made it into behavioral economics due to the work of Alvin Roth and Ido Erev; see Herrnstein (1970), Roth and Erev (1995), and Erev and Roth (1998). Hopkins (2002) compares fictitious play and the basic model. With regard to convergence properties, see Laslier et al. (2001), Beggs (2005), and Hopkins and Posch (2005).
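A minimal sketch of the basic model, equations (2.3) and (2.4), in a two-armed bandit setting; the initial propensities and the (positive) payoff values are illustrative assumptions.

```python
import random

# Sketch of the basic model of reinforcement learning, equations (2.3)-(2.4):
# choice probabilities are proportional to propensities, and the chosen act's
# propensity is incremented by the positive payoff it earned.
# Initial propensities and payoff values are illustrative assumptions.
acts = ["A1", "A2"]
Q = {"A1": 1.0, "A2": 1.0}                        # initial propensities Q_i(1)

def payoff(act):
    # Random positive payoffs with unknown success probabilities (illustrative).
    return 1.0 if random.random() < (0.7 if act == "A1" else 0.3) else 0.1

for _ in range(10_000):
    total = sum(Q.values())
    probs = [Q[a] / total for a in acts]          # equation (2.3)
    act = random.choices(acts, weights=probs)[0]
    Q[act] += payoff(act)                         # equation (2.4)

# The choice probabilities concentrate on the act with the higher average payoff.
print({a: Q[a] / sum(Q.values()) for a in acts})
```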


Whenever that is the case, each choice probability is positive in all periods n. This creates a kind of intrinsic randomness that makes the basic model successful in bandit problems and other learning situations.

What the propensities and payoffs are, exactly, is unclear. They obviously cannot be von Neumann–Morgenstern utilities, like the payoffs involved in fictitious play and average reinforcement learning. Von Neumann–Morgenstern utilities are measured on an interval scale. For an interval scale the zero point and the unit can be fixed conventionally (like in the Fahrenheit and Celsius temperature scales). In terms of invariance, this means that interval scales are equivalent up to positive affine transformations (multiplication by a positive number and addition of a real number). If payoffs, π(n), and therefore propensities, Qi(n), were measured on an interval scale, choice probabilities, Pi(n), would not be invariant under scale transformations.21 Thus, the standard interpretation of payoffs cannot be applied to the basic model.

In order for the definition of choice probabilities in equation (2.3) to be meaningful, payoffs and propensities must be measurable on a ratio scale. In contrast to interval scales, for a ratio scale only the choice of unit is conventional (like in the measurement of length or weight). In terms of invariance, this means that scales are equivalent up to multiplication by a positive number.22 To see why, suppose that Qi and Qi′ are both admissible representations of a choice probability Pi, and let S and S′ be the corresponding sums of propensities: Pi = Qi/S = Qi′/S′. Then there exists a positive real number a (namely a = S/S′) such that Qi = aQi′ for all i. Conversely, if there exists such a number a for all propensities, then

$$\frac{Q_i}{S} = \frac{a Q_i'}{a S'} = \frac{Q_i'}{S'}.$$

It follows that all admissible representations of propensities, Qi and Qi′, must satisfy the equation Qi = aQi′ for some positive real a.

If we take the ratio scale of propensities for granted, it is not too difficult to see that the principles underlying the basic model are similar to the symmetries of the Johnson–Carnap continuum of inductive methods. The Johnson–Carnap continuum is an instance of the reinforcement learning scheme given by (2.3) and (2.4); we just need to identify propensities with the number of times an act has been chosen (modulo prior parameters) by taking the payoff to be equal to one whenever an act is chosen.

21 See Börgers and Sarin (1997) for a discussion of payoffs in reinforcement learning processes.
22 See Krantz et al. (1971) for more on the properties of interval scales and ratio scales.

The choice probabilities of the basic model do not generally give rise to an
exchangeable probability distribution, but a certain kind of order invariance does hold. Moreover, the choice probability of an act depends only on its propensity and the sum of all propensities, which is an analogue of Johnson's sufficientness postulate. I do not follow this approach any further here, because it starts with the premise that propensities are already given on a ratio scale. That premise is problematic; in contrast to von Neumann–Morgenstern utilities, it is not clear where such a ratio scale should come from. To obtain an answer we need to look beyond the decision theories of von Neumann, Morgenstern, and Savage.

2.5 Luce's Choice Axiom

In his famous monograph Individual Choice Behavior,23 which has not received the attention in philosophy that it deserves, Duncan Luce developed a conceptually novel approach to choice behavior:

A basic presupposition of this book is that choice behavior is best described as a probabilistic, and not an algebraic, phenomenon. That is to say, at any instant when a person reaches a decision between, say, a and b we will assume that there is a probability P(a, b) that the choice will be a rather than b. These probabilities will generally be different from 0 and 1, although these extreme (and important) cases will not be excluded. The alternative is to suppose that the probabilities are always 0 and 1 and that the observed choices tell us which it is; in this case the algebraic theory of relations seems to be the most appropriate mathematical tool.24

The most prominent decision theories – including Savage's theory and Jeffrey's logic of decision25 – are what Luce calls "algebraic." He proposes to use choice probabilities instead of preference relations. To see what this amounts to, let T be the (finite) set of available alternatives and pT(i) the probability of choosing the ith element from T. The assumption that choices are probabilistic is summarized by the following requirements:

$$p_T(i) \geq 0 \text{ for all } i \in T, \qquad \sum_{i \in T} p_T(i) = 1.$$

23 Luce (1959).
24 Luce (1959, p. 2).
25 Savage (1954) and Jeffrey (1965).

This framework is sufficiently flexible to encompass not just decision theory, but also psychophysics (choices among stimuli), learning, and other
applications.26 Thus, the alternatives in the set T can be interpreted in a number of different ways. What is common to all these applications, though, is that choice probabilities are not arbitrary; they are an agent's estimates of how "good" an alternative is according to standards that depend on the context.

The requirement that choices be governed by probabilities imposes some consistency constraints on an agent's decisions, but not enough to yield an interesting theory. Luce added a further constraint, which has become known as Luce's choice axiom. The choice axiom requires that the probabilities of choosing from different subsets be consistent with each other.27 To state this more precisely, let p(R, S) be the probability of choosing an alternative from R when the set of alternatives S is available, where R ⊂ S ⊂ T, and let p(i, j) be short for p(i, {i, j}). Luce's choice axiom has the following two parts:

(i) If p(i, j) ≠ 0 for all i, j ∈ T, then

$$p(R, T) = p(R, S)\,p(S, T) \qquad (2.5)$$

for all R ⊂ S ⊂ T;

(ii) If p(i, j) = 0 for some i, j ∈ T, then p(S, T) = p(S − {i}, T − {i}) for all S ⊂ T (where '−' denotes set-theoretic difference).

The second part just says that we can effectively ignore alternatives that have probability zero of being chosen in the presence of some other alternative. The first part is more substantive. It asserts that the probability of choosing from R is unaffected by what else is available. The connection between choice probabilities and conditional probabilities might be helpful for understanding this point. If, as required by case (i), p(i, j) ≠ 0 for all i, j ∈ T, then p(S, T) > 0 for all S ⊂ T. Thus, the conditional probability

$$p_T(R \mid S) = \frac{p(R, T)}{p(S, T)}$$

is well defined. By part (i) of Luce's choice axiom, p(R, S) = pT(R|S).

26 See Luce (1977).
27 The consistency aspect is further explored in Saari (2005). The qualitative principles underlying the choice axiom are developed by Louis Narens in Narens (2003, 2007).
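A small numerical check of part (i) of the choice axiom: if choice probabilities are generated from fixed propensities, then choosing from R out of T directly agrees with first narrowing down to S and then choosing from R. The propensity values are illustrative assumptions.

```python
# Sketch of Luce's choice axiom, equation (2.5): with choice probabilities
# generated by fixed propensities q, p(R, T) = p(R, S) p(S, T) for R ⊂ S ⊂ T.
# The propensity values are illustrative assumptions.
q = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 4.0}

def p(R, S):
    # Probability of choosing an alternative from R when the set S is available.
    return sum(q[i] for i in R) / sum(q[j] for j in S)

T = {"a", "b", "c", "d"}
S = {"a", "b", "c"}
R = {"a", "b"}

assert abs(p(R, T) - p(R, S) * p(S, T)) < 1e-12   # part (i) of the choice axiom
print(p(R, T), p(R, S) * p(S, T))
```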


This equation is reminiscent of conditionalization and says something similar: the probability of choosing from S is consistent with the "prior" probabilities of choosing from T; there is nothing about choices from subsets of T that is not already captured by how the agent evaluates choices from T.

The most important consequence of Luce's choice axiom is the existence of a ratio scale that represents choice probabilities. Suppose that p(i, j) > 0 for all i, j ∈ T. Then there exists a ratio scale q(i) such that

$$p(i, T) = \frac{q(i)}{\sum_{j \in T} q(j)}.^{28}$$

Obviously, p(i, T) is one such scale, but infinitely many others can be created by multiplying p(i, T) with a positive constant a. The significance of this representation lies in the fact that q only depends on i and not on any other alternatives. For this reason q(i) is the propensity (also called "intrinsic likelihood" and "response strength") of choosing i. In decision problems it can be thought of as measuring the desirability of acts.

This result provides a partial answer to a question raised in the previous section: what are the propensities and payoffs in the basic model of reinforcement learning? If the choice probabilities in the basic model, Pi(n), observe Luce's choice axiom for each act Ai and each n, then each Ai has an intrinsic likelihood, Qi(n), of being chosen that only depends on i and n, and Qi(n) is a ratio scale just as required by (2.3). The propensities of the basic model can thus be viewed as expressing the intrinsic likelihood of choosing the corresponding alternatives. This is only a partial answer, though. Luce's choice axiom says nothing about how propensities are altered by payoffs over time. I will develop this topic in the next section.

Luce's choice axiom is not completely uncontroversial. The reason is that it implies independence from irrelevant alternatives: the ratio of two choice probabilities is the same regardless of which other alternatives are present:

$$\frac{p(i, S)}{p(j, S)} = \frac{q(i)}{q(j)}$$

for all S ⊂ T. Independence from irrelevant alternatives is of course not universally valid: other alternatives are not always irrelevant.29

28 In fact, this holds for any subset S of T whenever p(i, j) > 0 for all i, j ∈ S.
29 Luce was of course aware of this; see Luce (1959, V.B). See also Debreu (1960), on which the counterexample is modeled.

As an example, consider a choice between a blueberry pie on a white plate, a blueberry
pie on a blue plate, and a banana cream pie on a green plate. Suppose I am indifferent as to the color of plates, but also as keen on blueberry pie as I am fond of banana cream pie. Thus, in pairwise comparisons I would choose each plate with equal probability 1/2. Luce's choice axiom implies that the propensities for each plate are equal, from which it follows that the probability of choosing a plate when all three alternatives are offered to me is 1/3. Intuitively, though, one would expect the choice probability for banana cream pie to be equal to 1/2: the basic choice is between blueberry pie and banana cream pie, and plate color is irrelevant. Hence, the ratio of the choice probabilities for the banana cream pie and a plate of blueberry pie depends on whether or not the other plate of blueberry pie is available, and so independence of irrelevant alternatives, and for that matter Luce's choice axiom, fails.

As Luce and Suppes point out in their well-known survey paper on decision theory, such examples are not just directed against Luce's choice axiom, but at a pervasive feature of most theories of choice.30 Luce's choice axiom is similar to the independence assumption in the theory of von Neumann and Morgenstern, and to Savage's sure thing principle. What these requirements have in common is that local evaluations don't change with the context, so that they fully determine how prospects are evaluated globally. Independence assumptions sometimes fail, but they still provide the simplest possible relationship among the basic elements of a theory that guarantees overall consistency – especially when it comes to dynamic choice, where context independence is the key to dynamically consistent choice behavior.31 Thus, from the point of view of rational choice behavior, Luce's choice axiom is a very plausible principle. In the more complex situations where – as in the example above – some alternatives share certain aspects, it can be generalized along the lines suggested by Amos Tversky in his theory of choice by elimination.32

30 See Luce and Suppes (1965, p. 337).
31 This has been studied extensively for classical decision theory; see, e.g., Hammond (1988) and Seidenfeld (1988). For a recent decision theory that relaxes independence in an insightful way, see Buchak (2013).
32 Tversky (1972).

A major motivation for a fully probabilistic treatment of choice behavior in the style of Luce is the fact that in many decision situations preferences are ambiguous and incomplete. We are often unable to make definite judgments as to which prospects are more or less desirable. This is certainly not a failure of rationality; sometimes goods are simply incommensurable; at other times our preferences are indefinite because we lack crucial
information that would allow us to complete our preference ordering.33 Choice probabilities can be an appropriate model under these kinds of circumstances. They provide us with the resources to represent uncertainty with regard to preferences since, typically, the probability of choosing an act, Ai , will be larger the more an agent believes that Ai is the best among the alternatives in T. This connection between choice probabilities and preferences is made more precise by considering how Luce’s choice axiom is related to random utility models. These models go back to Louis Thurstone’s work. Thurstone treated utilities as random variables with a multivariate Gaussian distribution, with the probability of choosing an alternative being the probability that its utility is maximal.34 Thus, instead of choices being random as in Luce’s approach, utilities are subject to uncertainty. Despite this difference, the two approaches often make nearly the same numerical predictions. The reason is that Luce’s model is equivalent to a Thurstone model in which utilities of alternatives are determined by random variables that have a double exponential distribution.35 The potential usefulness of probabilistic choice theories can be illustrated with an example that has recently been put forward as an argument against standard decision theory by L. A. Paul.36 Suppose you have to choose whether or not to become a parent. According to Paul, this choice involves a transformative experience: the consequences of becoming a parent can only be fully understood by becoming a parent. In other words, prior to actually becoming a parent there is no way one could arrive at a fully considered and complete preference ordering. It appears, then, that classical decision theory has little advice to give on whether or not to become a parent. But since the consequences of becoming a parent are subject to uncertainty, Luce’s probabilistic choice theory may be applicable.

33 See Joyce (1999, pp. 98–103) for an extended discussion of these points.
34 See Thurstone (1927). For an overview of random utility models and their connection to choice probabilities, see Luce and Suppes (1965).
35 A simple proof, due to E. Holman and A. A. J. Marley, can be found in Luce and Suppes (1965). See also Block and Marschak (1960). See McFadden (1974) and Yellott (1977) for further developments.
36 Paul (2014).

2.6 Commutative Learning Operators

Luce's choice axiom provides a foundation for the particular form assumed by choice probabilities in the basic model of reinforcement learning. But it
does not identify the principles which give rise to the basic model's additive update procedure, (2.4). To this effect, we consider the sequence of choice probabilities for each act Ai, Pi(1), Pi(2), . . . , representing how an agent updates choice probabilities, together with a sequence of random variables, X1, X2, . . ., representing the agent's sequence of choices. We assume that the choice probabilities satisfy Luce's choice axiom at each stage. The stochastic process so defined can be thought of as a basic reinforcement learning process if there exists a sequence of propensities, Qi(1), Qi(2), . . . , for each act Ai that satisfies (2.3) and (2.4) for some set of payoffs.

The process of updating choice probabilities becomes more manageable when viewed in terms of propensities (propensities, unlike probabilities, remain unaffected by how other propensities are updated). This insight led Luce to study some simple models in which learning operators map old propensities to new propensities.37 The action of a learning operator, L, determines how a particular learning event modifies the propensity to choose an alternative. In Luce's beta model, which he derives from his choice axiom and some other requirements, the new propensity, L(q), is obtained by multiplying the old propensity, q, with a positive real number, β: L(q) = βq. In a subsequent paper, Luce extended this model to a model of successive "quasi-multiplicative" updating.38

Although the basic model of reinforcement learning is not multiplicative, these two models share a feature of crucial importance: the order of updating has no effect on propensities. This idea is captured by the concept of commutative learning operators. Let O be a set of outcomes and {La}a∈O a family of learning operators indexed by outcomes. Each learning operator, La, acts on propensities, Qi(n), and associates with it a new propensity after choosing an alternative has resulted in outcome a:

$$L_a(Q_i(n)) = Q_i(n+1).$$

37 See Chapter 4 of Luce (1959).
38 Luce (1964).


The family of learning operators is commutative if

$$L_a L_b (Q_i(n)) = L_b L_a (Q_i(n)) \qquad (2.6)$$

for all a, b ∈ O, acts Ai, and n. It follows that propensities are invariant under permuting the order of any finite sequence of outcomes. A quick look at the basic model of reinforcement learning confirms that commutativity is a necessary condition for the additive update rule (2.4). This indicates that commutative learning operators may give rise to additive models in the presence of other axioms – as, in fact, they do.

The requirement that learning operators be commutative, initially studied by Luce, has subsequently been applied to quasi-additive models by A. A. J. Marley.39 The sequences of propensities, together with the set of outcomes, are quasi-additive if there exists a function h from the range of propensities into the reals, and a function k from the set of outcomes, O, into the reals, such that h is strictly increasing and

$$h(L_a(Q_i(n))) = h(Q_i(n)) + k(a) \qquad (2.7)$$

holds for all a, Ai, and n, where k is a ratio scale. By applying quasi-additivity sequentially, it follows that

$$h(Q_i(n+1)) = h(L_{a_n}(Q_i(n))) = h(Q_i(0)) + k(a_1) + \cdots + k(a_n),$$

where ai, 1 ≤ i ≤ n, is the outcome obtained in the ith period.

Quasi-additivity is not exactly what the basic model of reinforcement learning requires, since, in the basic model, h is the identity function. As I explain in more detail in Appendix C, this will be the case if the payoff function k maps into the set of propensities. If k behaves differently, h guarantees that updates do not lead outside the set of propensities. Quasi-additive models are thus equivalent to the basic model's update rule up to the functions h and k.

In Appendix C, I also review how quasi-additivity can be derived from a set of axioms that captures how a family of learning operators updates propensities. These axioms are due to Marley. A slight modification of Marley's axioms, which is dictated by the basic model's insistence on nonnegative payoffs, guarantees that k is nonnegative. Marley's model is more general in that it allows reinforcements to decrease.

39 Marley (1967).

Commutativity, (2.6), is the crucial axiom, though. Marley's other postulates only guarantee that acts and learning operators fit the structure
given by real valued propensities. Commutativity is a symmetry condition that makes a substantive assumption about the learning process. According to the prior probability measure corresponding to an additive or quasi-additive reinforcement learning process, commutativity, (2.6), must hold with probability one. Such a probability measure judges the order of outcomes to have no effect on propensities. Commutativity is thus analogous to exchangeability; it transfers the idea of order invariance from sequences of observations to sequences of choice propensities. And just like exchangeability, commutativity expresses an inductive assumption about the learning process – the implicit or explicit judgment of an agent about the general structure of the learning situation.
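To see what commutativity does and does not allow, the following sketch compares an additive family of learning operators with a recency-weighted one; the payoff function k and the weight are illustrative assumptions.

```python
# Sketch of commutativity, equation (2.6): for additive operators
# L_a(q) = q + k(a), the order of outcomes does not affect the propensity;
# for a recency-weighted operator it does.  k and the weight are illustrative.
k = {"success": 2.0, "failure": 0.5}

def L_add(a, q):
    return q + k[a]                      # additive update: commutative

def L_recency(a, q, w=0.9):
    return w * q + k[a]                  # discounts the past: not commutative

q0 = 1.0
print(L_add("success", L_add("failure", q0)),
      L_add("failure", L_add("success", q0)))          # equal: order does not matter
print(L_recency("success", L_recency("failure", q0)),
      L_recency("failure", L_recency("success", q0)))  # differ: order matters
```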

2.7 A Minimal Model

Reinforcement learning is one type of boundedly rational learning, since it requires no inputs about states of the world. What it does require, however, is memory of payoffs, even those in the distant past. In this section I briefly consider one more learning procedure that is much more limited. What I wish to indicate is that even such a severely bounded learning procedure is based on underlying inductive assumptions.

The kinds of rules I have in mind sometimes try new acts, compare their payoffs, and choose acts with higher payoffs. There are a number of simple trial and error learning models that are instances of this qualitative scheme. The learning model I consider here is known as probe and adjust.40 Probe and adjust has two parts: during recreational periods it chooses the same alternative as in the previous round; otherwise, probe and adjust experiments by choosing a random alternative and comparing its payoff to the payoff obtained with the previous choice. If its payoff is higher, the new alternative is chosen in the next period; if its payoff is lower, the agent returns to choosing the old alternative; in case of a tie, one of the two alternatives is chosen at random.

Probe and adjust can be thought of as implementing Simon's idea of satisficing. A search process is satisficing if it aims at discovering alternatives that are good enough – for example, by being above a certain threshold – without necessarily being optimal. In probe and adjust, an alternative is chosen because it seems better than another and not because it appears to be the best one overall.

40 See Marden et al. (2009), Young (2009), Skyrms (2010), who introduces probe and adjust, and Huttegger et al. (2014).


For the analysis of probe and adjust, recreational periods can be ignored. The nth learning event $E^n_{ij}$ compares two alternatives – the previously chosen one, Aj, and the new one, Ai – with respect to their payoffs – ψ(n − 1) and ψ(n), respectively. The probe and adjust process is given by two sequences: the sequence of choices, X1, X2, etc., and the sequence of payoffs, ψ(1), ψ(2), etc. As before, payoffs are random variables that might be influenced by many factors besides an agent's choices, but they don't need to be on a ratio or an interval scale; ordinal information is sufficient for probe and adjust. This makes probe and adjust more broadly applicable than any of the learning models we have considered thus far.

I now attempt a derivation of probe and adjust in terms of choice probabilities. The following postulate says the choice behavior of probe and adjust depends only on the ordering of current payoffs (which I denote by ψi(n) for each alternative i):

Choice. At the end of the learning event $E^n_{ij}$, the choice probabilities observe the following condition:

$$p_i(n) = \begin{cases} 1 & \text{if } \psi_i(n) > \psi_j(n) \\ 0 & \text{if } \psi_i(n) < \psi_j(n) \\ \tfrac{1}{2} & \text{if } \psi_i(n) = \psi_j(n). \end{cases}$$

Moreover, pj(n) = 1 − pi(n). It follows that after a learning event $E^n_{ij}$, the choice probability of every alternative other than i or j is equal to zero.

The choice postulate is reasonable if payoff relations don't change from one period to the next. Notice, however, that the choice postulate makes use of counterfactual payoff information: a probe and adjust agent does not know the payoffs ψk(n) of alternatives k not chosen at time n; she only has access to ψ(n − 1), ψ(n) and Xn−1, Xn. The gap between what the postulate requires and what a probe and adjust agent knows can be bridged by the following symmetry assumption:

Uniformity. The payoffs for the learning event $E^n_{ij}$ observe the following relations:

(1) ψi(n) > ψj(n) if ψi(n) > ψj(n − 1)
(2) ψi(n) < ψj(n) if ψi(n) < ψj(n − 1)
(3) ψi(n) = ψj(n) if ψi(n) = ψj(n − 1).

Uniformity asserts that the learning environment is stable; payoffs don't change radically from one period to the next. The two postulates immediately imply that, on the event $E^n_{ij}$,

$$p_i(n) = \begin{cases} 1 & \text{if } \psi_i(n) > \psi_j(n-1) \\ 0 & \text{if } \psi_i(n) < \psi_j(n-1) \\ \tfrac{1}{2} & \text{if } \psi_i(n) = \psi_j(n-1). \end{cases}$$

This captures the mechanical learning part of probe and adjust. Together, the uniformity and choice postulates express stringent inductive assumptions about the learning environment. They require that payoff relations don't change much. Whenever this is the case, probe and adjust is quite successful. Take a simple example where payoffs are independent of n, ψi(n) = ri for each i and all n. Suppose there is a unique maximum payoff. Given enough time, probe and adjust will discover the best alternative and stay with it nearly always. Bandit problems are examples where the inductive assumptions of probe and adjust fail. In a bandit problem payoff relations need not be stable over consecutive periods. As a consequence, probe and adjust is not very successful in this learning environment.41

41 Cf. Skyrms (2012).
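A minimal sketch of probe and adjust in the simple case just described, where payoffs ψi(n) = ri are constant; the payoff values and the probability of a probing period are illustrative assumptions.

```python
import random

# Sketch of probe and adjust with constant payoffs r_i (illustrative values).
# In recreational periods the agent repeats her last choice; otherwise she
# probes a random other alternative and keeps it only if its payoff is higher
# (ties broken at random), as in the choice postulate.
r = {"A1": 1.0, "A2": 3.0, "A3": 2.0}     # fixed payoffs; A2 is uniquely best
probe_prob = 0.2                          # probability of a probing period (assumption)

current = "A1"
for _ in range(10_000):
    if random.random() < probe_prob:
        new = random.choice([a for a in r if a != current])
        if r[new] > r[current]:
            current = new                 # higher payoff: keep the probed act
        elif r[new] == r[current] and random.random() < 0.5:
            current = new                 # tie: choose one of the two at random
        # lower payoff: return to the old act (current unchanged)

print(current)                            # with constant payoffs, the best act is found and kept
```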

2.8 Rationality and Learning

The axiomatic methodology outlined in this chapter can also be applied to other learning models.42 I am not going to offer a comprehensive overview of how all these applications work, but in the next two chapters I will consider some models that differ in interesting ways from those we have looked at thus far. Before touching on new subjects, I want to conclude this chapter with some remarks on where we stand with respect to the project of developing a general theory of rational learning.

42 An example is Gilboa and Schmeidler's case-based reasoning (Gilboa and Schmeidler, 2001, 2003). They provide an axiomatic foundation that can be interpreted in terms of learning. Interestingly, some of their axioms also require order invariance. For more recent work, see Gilboa et al. (2013).

In the preceding chapter, we saw how one of the paradigm cases of inductive learning – the Johnson–Carnap continuum of inductive methods – can be derived from a set of inductive assumptions – exchangeability and Johnson's sufficientness postulate. The epistemological significance of this result is that it provides us with a position from which to make evaluative claims
about epistemic agents. In particular, we would call an agent irrational if, on the one hand, she fails to update according to the Johnson–Carnap continuum, but, on the other, she has beliefs that conform to the continuum's inductive assumptions. The agent is irrational because her method of updating does not pull in the same direction as her fundamental assumptions about the learning situation. By contrast, an agent whose update procedure and inductive assumptions cohere with each other is, prima facie, rational.

Identifying the principles underlying bounded learning procedures – such as the basic model of reinforcement learning or probe and adjust – allows us to make similar evaluative claims about bounded epistemic agents. Let's focus on the basic model. Suppose an epistemic agent's choice probabilities observe Luce's choice axiom, and that her method for updating propensities can be captured by a family of learning operators. If those learning operators are commutative, such an agent would be irrational if she fails to alter propensities according to a (quasi) additive update rule; again, her inductive assumptions and her mode of updating would be incompatible. But she is, at least prima facie, epistemically rational if inductive assumptions and updating are in line with each other.

Note that epistemic irrationality does not in general imply that an agent should change either her update procedure or her inductive assumptions. Epistemic attitudes, such as beliefs, cannot always be changed at will. Inconsistencies between updating and inductive assumptions tell us that an agent should change something only if it is in her power to do so; and even then she might choose not to, for instance because epistemic inconsistencies have negligible practical consequences.43 These normative implications only arise after having evaluated an agent's epistemic rationality, however. The two concerns can thus be kept separate.

Our emphasis on evaluating an agent's epistemic rationality and setting aside the question of what follows normatively from the evaluation also allows us to dissolve a thorny issue that might arise otherwise. An advantage of models of bounded rationality is that they apply more broadly to humans and other organisms. This means, in particular, that our discussion applies to low-level agents that don't have full access, or no access at all, to their own updating procedures. Thus, such agents usually cannot be thought of as reflecting on and evaluating their procedures. It does not follow from this, however, that their procedures cannot be evaluated.

43 Pettigrew (2016) draws a similar distinction between evaluative and normative claims.

The standards of evaluation proposed here – based upon updating and inductive assumptions – don't require that the agent herself is performing an
evaluation. As long as we can describe an agent adequately, we can study whether, according to that model, she is epistemically rational. That being said, it is often convenient to think of an agent as if she were reflecting on her learning procedures, and there is no loss in doing so because we can focus on learning that is rational from the agent's own perspective. In the basic model of reinforcement learning, for instance, we can view choice probabilities, Pi(n), as an agent's subjective probabilities about her own choices at time n; this is justified if the agent behaves as if she had these choice probabilities. Consequently, propensities, Qi(n), can be thought of as subjective judgments as well. Such an agent could consider herself rational if her inductive assumptions about the learning situation cohere with the way she updates choice probabilities.

Summing up, there clearly is an analogy between classical Bayesian models of learning and learning models of bounded rationality. But the analogy is not perfect. First, I wish to emphasize that there remain a lot of differences between Bayesian models and their alternatives; most importantly, the latter go without maximizing procedures and instead typically move to choosing what seems better from their present point of view.44 Second – and more importantly – Bayesian models are clearly models of learning. The Johnson–Carnap continuum, for example, updates on factual propositions – that is, propositions whose truth value can be verified by observational means. This provides us with a robust idea of what is being learned by Bayesian methods. Contrast this with the basic model of reinforcement learning (or probe and adjust, or average reinforcement learning). While I have tried to stipulate that choice probabilities represent what an agent has learned (they express how successful acts have been so far), there appears to be nothing in the model that would support this stipulation. We cannot say exactly what it is that a reinforcement learner has, in fact, learned. Thus, it is unclear whether the basic model of reinforcement learning and similar models should be called "learning processes." In concrete cases we may have reasons to think that a process is a learning process. This is what Luce refers to when he requires that choice probabilities be linked to a particular comparative dimension along which alternatives are evaluated. Still, it would be desirable to have general principles for identifying learning processes.

44 This adaptive property is shared by many learning dynamics; see Hofbauer and Sigmund (1998).

Dynamic consistency is, in my view, such a principle. Recall that dynamic consistency requires that new information be included consistently into
one’s old beliefs: what an agent learns is thereby consistently incorporated into what she has learned before. A reason why one might think that dynamic consistency plays a role in capturing general learning processes is that our paradigm model of learning, Bayesian conditioning, is dynamically consistent (see Chapter 1). By suitably generalizing dynamic consistency, it can also be applied to choice probabilities and other quantities that represent a learning process. As a consequence, we are able to determine when generalized learning processes include new information in a way that is consistent with an agent’s old information. Developing this idea in all its nuances requires more space. I will present a detailed account of dynamic consistency and its significance for generalized learning in Chapters 5 and 6. Before I turn to dynamic consistency, I’d like to push the analysis begun in this chapter a bit further. The symmetries we have focused on so far – exchangeability, partial exchangeability, commutativity – are very simple inductive assumptions, for they require that order has no effects whatsoever on the learning process. In many learning situations this is clearly false. In addition, the models of this and the previous chapter suffer from another limitation: they operate within a fixed conceptual framework that cannot be changed by learning. In the next two chapters, we are going to lift these limitations.


3

Pattern Learning

As I have already indicated, we may think of a system of inductive logic as a design for a "learning machine": that is to say, a design for a computing machine that can extrapolate certain kinds of empirical regularities from the data with which it is supplied. Then the criticism of the so-far-constructed "c-functions" is that they correspond to "learning machines" of very low power. They can extrapolate the simplest possible empirical generalizations, for example: "approximately nine-tenths of the balls are red," but they cannot extrapolate so simple a regularity as "every other ball is red."
Hilary Putnam, Probability and Confirmation

To approach the type of reflection that seems to characterize inductive reasoning as encountered in practical circumstances, we must widen the scheme and also consider partial exchangeability.
Bruno de Finetti, Probability, Statistics and Induction

One of the main criticisms of Carnap’s inductive logic that Hilary Putnam has raised – alluded to in the epigraph – is that it fails in situations where inductive inference ought to go beyond relative frequencies. It is a little ironic that Carnap and his collaborators could have immediately countered this criticism if they had been more familiar with the work of Bruno de Finetti, who had introduced a formal framework that could be used for solving Putnam’s problem already in the late 1930s. De Finetti’s central innovation was to use symmetries that generalize exchangeability to various notions of partial exchangeability for inductive inference of patterns. The goal of this chapter is to show how generalized symmetries can be used to overcome the inherent limitations of order invariant learning models, such as the Johnson–Carnap continuum of inductive methods or the basic model of reinforcement learning. What we shall see is that learning procedures can be modified so as to be able to recognize in principle any finite pattern.

3.1 Taking Turns


Order invariant learning rules collapse when confronted with the problem of learning how to take turns. Taking turns is important whenever a

learning environment is periodic. Suppose you find yourself in an environment in which rainy and sunny days alternate: rain, sunshine, rain, sunshine, and so on. Each day you have to choose among acts whose consequences depend on the weather. For instance, you have to choose between putting on sunscreen or carrying an umbrella. Now, if you are a fictitious player, your predictive probabilities for sunshine converge to 1/2, as do your predictive probabilities for rain. As a consequence, you will fail to learn to put on sunscreen on sunny days and to carry an umbrella on rainy days (depending on what the payoffs are, you eventually either always choose to put on sunscreen, or to take an umbrella, or you choose an act at random).

The reason for this result is that fictitious play, being based upon Carnapian inductive logic, assumes exchangeability. Exchangeability, however, is incompatible with an alternating sequence of observations. Consider the finite sequence of four states: rain, sunshine, rain, sunshine. According to our specifications, this sequence has probability one. The following reordering thus has probability zero: rain, rain, sunshine, sunshine. This is incompatible with exchangeability. Thus, whenever the ordering of outcomes seems to play a role, exchangeability is an inadequate inductive assumption. This deficiency of Carnapian inductive logic was originally pointed out by Hilary Putnam and Peter Achinstein.1 It is a deficiency that can be resolved quite easily, though.

Before getting into the more abstract details, let me sharpen the issues at stake with an especially important application due to Peter Vanderschraaf and Brian Skyrms, who have studied the problem of how to take turns in interactive decision situations.2 Consider the following situation. Watching the stars is both Galileo's and Maria's favorite hobby. Unfortunately, they only have one telescope. Galileo's most favored outcome is to look through the telescope himself. Maria's most favored outcome is to look through the telescope herself. Can this conflict be resolved in a fair way? The obvious solution is the one that we try to teach our children (in my case, with mixed success): please, take turns. By taking turns, both Galileo and Maria get their most preferred outcome half the time. But what if there are no parents around? Without a central authority telling them what to do, a precondition for learning to take turns is for Maria and Galileo to be able to recognize a simple pattern: who looked through the telescope last time.

1 Putnam (1963) and Achinstein (1963).
2 Vanderschraaf and Skyrms (2003).
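A small sketch of the point about alternating environments: fed a strictly alternating weather sequence, a Carnapian rule of succession drives the predictive probabilities of rain and sunshine to 1/2, so a fictitious player cannot track the alternation. The prior weights are illustrative assumptions.

```python
# Sketch: fictitious play's predictive probabilities on a strictly alternating
# weather sequence.  Prior weights are illustrative assumptions.
counts = {"rain": 1.0, "sunshine": 1.0}

for day in range(1000):
    state = "rain" if day % 2 == 0 else "sunshine"   # rain, sunshine, rain, ...
    counts[state] += 1

total = sum(counts.values())
print({s: counts[s] / total for s in counts})   # both predictive probabilities approach 1/2
```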

Let’s look at this situation in more detail. Galileo and Maria are involved in a game – that is, a strategic interaction in which the outcome of a given agent’s choice depends on the choices of other agents. The Taking Turns game involves two players, Galileo and Maria. Each player has two strategies: looking through the telescope, or abstaining and letting the other player look. A strategy profile specifies a choice for each player. A player’s payoff is determined by the strategy profile; it represents the value or desirability of the outcome for the player. There are four strategy profiles in the Taking Turns game. The one where both players choose to look through the telescope is clearly not desirable (let’s suppose Maria and Galileo end up fighting in this case). The strategy profile where both players abstain from looking through the telescope is about as bad since neither player gets their preferred outcome. The remaining two strategy profiles are advantageous for one, but less advantageous for the other player. The strategic situation can be summarized by the payoff table shown in Figure 3.1. Which strategy should Galileo choose? If Maria chooses to watch, abstaining confers a higher payoff, so he should choose to abstain. In case Maria chooses to abstain, he should choose to watch because abstaining yields a lower payoff. The same is true from Maria’s perspective. It follows that the off-diagonal strategy profiles – abstain–watch and watch–abstain – are Nash equilibria: both Galileo and Maria choose optimally given the other player’s choice of strategy. In fact, they are strict Nash equilibria: unilaterally deviating from one’s equilibrium strategy results in a strictly lower payoff. (The Taking Turns game has a third Nash equilibrium in mixed strategies, where each player chooses both strategies with positive probability, but this equilibrium is irrelevant for our purposes.) The notion of a Nash equilibrium is the central solution concept of game theory. Nash equilibria are the only strategy profiles compatible with the players’ incentives. This can be expressed in different ways. For instance, it is possible to show that no strategy profiles other than Nash equilibria are consistent with certain epistemic conditions, in particular rationality and common knowledge of rationality.3 Alternatively, Nash equilibria can be

                 watch     abstain
     watch        0,0        2,1
     abstain      1,2        0,0

Figure 3.1 Taking turns. The left entry is Galileo's payoff and the right entry Maria's.

3 Aumann and Brandenburger (1995).

shown to be the endpoints of adaptive learning processes. Fictitious play and the basic model of reinforcement learning are two prominent examples; they are both members of a class of adjustment methods that converge to Nash equilibria, assuming they converge at all. One reason is that they are payoff-driven: they adjust an agent's dispositions toward choosing strategies that seem better from her current point of view. If such a process is not at a Nash equilibrium, it cannot stay where it is because at least one player is able to improve her payoff by deviating.4 Another reason why fictitious play and similar processes can only converge to Nash equilibria is order invariance. As we saw above, order invariance implies that fictitious play cannot establish patterns. As a result, fictitious play cannot alternate systematically between more than one strategy profile even if it would be desirable to do so. The game between Maria and Galileo is a case in point. Fictitious play converges to a Nash equilibrium of that game for almost all initial beliefs.5 In numerical simulations one typically observes the emergence of a strict Nash equilibrium after some initial haggling. Thus, the fictitious play process gets a particular player – either Galileo or Maria – to look through the telescope all the time. No other configuration is a possible endpoint of the process. It follows that fictitious play cannot reach the plausible configuration that has Galileo and Maria taking turns. The same is true for other order invariant learning processes; order invariance rules out patterns. Being able to detect patterns, however, is essential for learning to take turns.
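To make the best-response reasoning above concrete, here is a minimal sketch (mine, not the book's) that enumerates the strategy profiles of the Taking Turns game from Figure 3.1 and checks which of them are strict Nash equilibria; the payoff entries are the ones in the figure, and the variable names are only illustrative.

```python
# Minimal sketch: locate the strict Nash equilibria of the Taking Turns game.
# Payoffs follow Figure 3.1: entries are (Galileo, Maria); rows are Galileo's
# strategies, columns Maria's.
payoffs = {
    ("watch", "watch"): (0, 0),
    ("watch", "abstain"): (2, 1),
    ("abstain", "watch"): (1, 2),
    ("abstain", "abstain"): (0, 0),
}
strategies = ["watch", "abstain"]

def is_strict_nash(g, m):
    """A profile is a strict Nash equilibrium if each player's strategy is the
    unique best reply to the other player's strategy."""
    g_pay, m_pay = payoffs[(g, m)]
    galileo_ok = all(payoffs[(alt, m)][0] < g_pay for alt in strategies if alt != g)
    maria_ok = all(payoffs[(g, alt)][1] < m_pay for alt in strategies if alt != m)
    return galileo_ok and maria_ok

print([p for p in payoffs if is_strict_nash(*p)])
# Expected: the two off-diagonal profiles,
# [('watch', 'abstain'), ('abstain', 'watch')]
```

The check only looks at pure strategy profiles, so the mixed equilibrium mentioned in the text does not show up here.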

3.2 Markov Fictitious Play

As a solution to this problem, Vanderschraaf and Skyrms have introduced Markov fictitious play.6 Markov fictitious play chooses an act that maximizes expected utility relative to predictive probabilities which are taken conditional on the immediately preceding state. This slightly more sophisticated way of learning from observations is capable of extrapolating simple patterns.

4 The learning rules I am referring to here have a qualitatively similar behavior to evolutionary
dynamics, in particular the replicator dynamics, which is the basic model of a selection process in evolutionary game theory (Taylor and Jonker, 1978). The basic model of reinforcement learning even has a version of the evolutionary replicator dynamics as its mean field limit (Beggs, 2005; Hopkins and Posch, 2005). That the replicator dynamics – and with it many qualitatively similar dynamics – can only converge to a Nash equilibrium is part of the folk theorem of evolutionary game theory (Hofbauer and Sigmund, 1998). 5 Fictitious play converges in two-player, two-strategy games (Fudenberg and Levine, 1998). 6 Vanderschraaf and Skyrms (2003).

The learning part of Markov fictitious play is Theo Kuipers's Markov inductive logic.7 Consider a sequence of random variables X1, X2, . . . taking values in some finite set, which can be states of nature, or the strategy profiles of a game. According to Kuipers's Markov inductive logic, the predictive probability of observing state i at time n + 1 given the immediately preceding state j at time n is given by

\[
P[X_{n+1} = i \mid X_1, \ldots, X_{n-1}, X_n = j] = \frac{n_{ij} + \alpha_{ij}}{n_j + \sum_i \alpha_{ij}}, \tag{3.1}
\]

where nj is the number of times state j has been observed in the first n trials, and nij is the number of times i has been observed immediately after j. The alpha parameters are positive and represent an agent’s initial beliefs. Markov fictitious play maximizes expected utility with respect to these predictive probabilities. Let me illustrate Markov fictitious play in the context of two examples. In the first, two states of nature, A and B, are chosen according to the transition probabilities pAA , pAB , pBA , pBB after a first state has been chosen at random; pij is the probability of going to state i from state j. This model is a Markov chain. Markov chains are the simplest kinds of models in which the order of observations is relevant. Let’s assume the chain is recurrent (both states occur infinitely often with probability one). It is then easy to see that the Johnson–Carnap continuum converges to the relative frequencies of A and B. It fails, however, at learning the transition probabilities. These can be learned by Markov inductive logic.8 The recurrence assumption and the strong law of large numbers imply that relative frequencies, nij /nj , converge to chances, pij , with probability one as n goes to infinity. Since the predictive probabilities of Kuipers’s Markov inductive logic (3.1) are equal to relative frequencies (up to prior parameters), they also converge to the transition probabilities with probability one. It follows that Markov fictitious play converges almost surely to choosing an optimal act conditional on the previous state whenever a repeated decision problem is considered on top of the Markov chain (as in the rain-or-sunshine example of the previous section). The second example is the Taking Turns game. Vanderschraaf and Skyrms apply Markov inductive logic to strategy profiles. In two-player games, strategy profiles are pairs of strategies (i, j), i being a strategy of the first player and j a strategy of the second player. Markov fictitious play 7 Cf. Kuipers (1988) and Skyrms (1991). 8 See Skyrms (1991).

keeps track of the number of transitions from (i, j) to (s, t). Let pn(i,j),(s,t) be the first player’s predictive probability at time n that strategy profile (s, t) is observed at time n + 1 provided that (i, j) is the immediately preceding profile (the second player can be treated analogously). The first player’s predictive probability that the other player chooses strategy t in period n + 1 is then given by the marginal probability pn(i,j),t =

\[
\sum_{s} p^{n}_{(i,j),(s,t)},
\]
where the sum ranges over all of the first player’s strategies. The probability pn(i,j),t is determined by the number of times strategy t is observed immediately after strategy profile (i, j) (up to prior parameters). If both players maximize expected utility relative to these predictive probabilities, the resulting process is a Markov fictitious play process. Vanderschraaf and Skyrms have analyzed the Markov fictitious play process for a number of games, showing that it gives rise to a wide range of qualitatively new types of learning behavior.9 Taking Turns is of particular interest. Vanderschraaf and Skyrms have a proof that the taking turns pattern – alternating between Galileo and Maria using the telescope – is stable: once the process has run through it one time, it will keep alternating forever. Simulation results demonstrate, in addition, that the taking turns pattern is reached by Markov fictitious play with positive probability (it is not reached with probability one because the process may still converge to a strict Nash equilibrium). Markov fictitious play thus can reach a fair configuration in the Taking Turns game. We can gain a deeper understanding of these results by observing that the Markov fictitious play process can be decomposed into fictitious play processes. Consider for each strategy profile of the Taking Turns game those periods which immediately follow a choice of that strategy profile. If it is chosen infinitely often, restricting the Markov fictitious play process to those periods defines a fictitious play subprocess since both players update conditional on the previous strategy profile. This makes it possible to apply results that hold for fictitious play to the Markov fictitious play process. In particular, since fictitious play converges to one of the strict Nash equilibria in the Taking Turns game, this indicates how Markov fictitious play can 9 See Vanderschraaf and Skyrms (2003), who note the relationship between the convergence

behavior of the Markov fictitious play process and correlated equilibria (Aumann, 1974, 1987).

find its way to an alternating equilibrium: by converging to different strict Nash equilibria along distinct subprocesses.10
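To see how the pieces of this section fit together, here is a rough simulation sketch of Markov fictitious play in the Taking Turns game. It is my own illustration rather than Vanderschraaf and Skyrms's code: transition counts over strategy profiles feed predictive probabilities in the spirit of (3.1), the prior parameters are drawn at random so that the players' initial beliefs are not perfectly symmetric, and both players best-respond to the induced marginal prediction about the opponent.

```python
import random
from collections import defaultdict

# Rough sketch of Markov fictitious play in the Taking Turns game.
# Profiles are (Galileo, Maria); payoffs follow Figure 3.1.
payoffs = {("watch", "watch"): (0, 0), ("watch", "abstain"): (2, 1),
           ("abstain", "watch"): (1, 2), ("abstain", "abstain"): (0, 0)}
strategies = ["watch", "abstain"]
profiles = list(payoffs)

# Generic prior parameters for the predictive rule; randomizing them stands in
# for "almost all initial beliefs."
alpha = {(p, q): random.uniform(0.5, 1.5) for p in profiles for q in profiles}
counts = defaultdict(lambda: defaultdict(int))  # counts[prev][next profile]

def predictive(prev, nxt):
    """Predictive probability of profile nxt given the previous profile prev,
    in the spirit of equation (3.1)."""
    denom = sum(counts[prev].values()) + sum(alpha[(prev, q)] for q in profiles)
    return (counts[prev][nxt] + alpha[(prev, nxt)]) / denom

def best_reply(prev, player):
    """Maximize expected payoff against the predicted marginal distribution of
    the opponent's next strategy, conditional on the previous profile."""
    def marginal(opp):
        return sum(predictive(prev, (s, opp) if player == 0 else (opp, s))
                   for s in strategies)
    def expected(own):
        return sum(marginal(opp) *
                   payoffs[(own, opp) if player == 0 else (opp, own)][player]
                   for opp in strategies)
    return max(strategies, key=expected)

profile = (random.choice(strategies), random.choice(strategies))
history = [profile]
for _ in range(300):
    nxt = (best_reply(profile, 0), best_reply(profile, 1))
    counts[profile][nxt] += 1
    profile = nxt
    history.append(profile)

# Inspect the tail: the text shows that such processes either settle into one
# of the strict Nash equilibria or into the alternating, turn-taking pattern.
print(history[-8:])
```

This bare-bones version makes no attempt to reproduce the original simulations exactly; it is only meant to make the updating and best-reply steps explicit.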

3.3 Markov Exchangeability

Let's try to identify the probabilistic symmetries underlying Markov inductive logic. Let X1, X2, . . . be an infinite sequence of states. A probability measure for the sequence is Markov exchangeable if any two finite initial sequences of states have the same probability whenever the following two conditions hold: they have (i) the same initial state and (ii) the same transition counts between states.11 As an example, suppose we have three states, A, B and C. For Markov exchangeable degrees of belief, the two sequences AAACBABABACCB and AABABACCBAACB have the same probability: both have A as initial state and the same number of transitions between states, which are shown in Figure 3.2.

         A   B   C
    A    2   2   2
    B    3   0   0
    C    0   2   1

Figure 3.2 The number of transitions from states to their successor states in the sequences AAACBABABACCB and AABABACCBAACB.

10 A proof of this requires more care. Let me give a sketch for a randomized version of fictitious play generally called stochastic fictitious play (Fudenberg and Levine, 1998; Hofbauer and Sandholm, 2002), which is more amenable to a probabilistic analysis than fictitious play. Predictive probabilities are the same in fictitious play and stochastic fictitious play. But while in fictitious play small changes in a player's beliefs can have large effects on her choices, in stochastic fictitious play, payoffs are slightly perturbed so as to smooth the players' choice behavior. Under quite generic conditions on perturbations, the structure of the perturbed Taking Turns game will be very similar to the unperturbed game. Stochastic Markov fictitious play can be defined in the same way as Markov fictitious play. It also can be decomposed into stochastic fictitious play processes. Each of the subprocesses converges to a perturbed strict Nash equilibrium with positive probability (Benaïm and Hirsch, 1999). This implies that, with positive probability, each strict Nash equilibrium is played infinitely often. Along the corresponding two subprocesses, there is convergence to the other perturbed Nash equilibrium, yielding an approximately alternating pattern.

11 On Markov exchangeability, see de Finetti (1959), Freedman (1962), Diaconis and Freedman (1980a,b), and Fortini et al. (2002).

Markov exchangeability is a type of partial exchangeability. We have already encountered another kind of partial exchangeability in our discussion of average reinforcement learning, which required states to be
exchangeable within certain types of exogenously given outcomes (see Chapter 2). Markov exchangeability is based on the same idea, but types are given endogenously in terms of the state observed in the previous period: all outcomes that immediately follow a particular state are exchangeable. Thus, Markov exchangeability is a generalization of exchangeability. Based upon Markov exchangeability one can develop a strikingly rich theory of inductive inference. To begin with, Persi Diaconis and David Freedman proved a de Finetti representation theorem for recurrent Markov exchangeable sequences of random variables12 taking on values in a countable set. Any such sequence can be represented by a unique mixture of Markov chains.13 This allows us to think of Markov exchangeable sequences in terms of chance setups. Chance parameters are chosen according to a chance prior that expresses a person’s uncertainty about the Markov chain generating the process. If chance priors are Dirichlet distributions, Kuipers’s Markov inductive logic (3.1) can be derived from the representation theorem.14 Markov inductive logic can, however, also be derived from first principles that do not appeal to chance setups.15 The derivation is based on a modification of Johnson’s sufficientness postulate. The modified postulate says that the predictive probability of a state depends only on itself and the previous state, the number of transitions from the latter to the former, and the previous state’s total number of observations: P[Xn+1 = i|X1 , . . . , Xn−1 , Xn = j] = fij (nij , nj ). Together with Markov exchangeability and a regularity condition, the modified sufficientness postulate implies Kuipers’s predictive probabilities. This shows that the foundation of Markov fictitious play is as well developed as the foundation of fictitious play. Markov exchangeability and the modified sufficientness postulate are its inductive presuppositions. Together they say that period-to-period order matters in a learning situation, but nothing else does. An agent whose beliefs conform to these 12 Recurrent here means that the sequence returns to the initial state infinitely often. 13 A Markov chain is determined by an initial state and transition probabilities p which ij

constitute a stochastic matrix. The representation theorem says that there exists a unique distribution μ on the set of all stochastic matrices such that for any n P[X1 = x1 , . . . , Xn = xn ] =

\[
\int \prod_{i=1}^{n-1} p_{x_i, x_{i+1}} \, d\mu(x_1, p)
\]

(Diaconis and Freedman, 1980b). 14 See Skyrms (1991). 15 Zabell (1995).

principles is bound to adopt a Markov inductive logic to incorporate new observations into her degrees of belief.
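The definition of Markov exchangeability can be checked mechanically. The short sketch below (my illustration, not code from the book) computes the initial state and the transition counts of the two example sequences from Figure 3.2 and confirms that they agree, which is exactly the information a Markov exchangeable probability measure is allowed to depend on.

```python
from collections import Counter

def markov_summary(sequence):
    """Initial state together with the transition counts: the data that a
    Markov exchangeable measure may depend on."""
    transitions = Counter(zip(sequence, sequence[1:]))
    return sequence[0], transitions

s1 = "AAACBABABACCB"
s2 = "AABABACCBAACB"

init1, counts1 = markov_summary(s1)
init2, counts2 = markov_summary(s2)

print(init1 == init2 and counts1 == counts2)  # True: same initial state and
                                              # same transition counts
print(sorted(counts1.items()))
# [(('A','A'), 2), (('A','B'), 2), (('A','C'), 2),
#  (('B','A'), 3), (('C','B'), 2), (('C','C'), 1)]
```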

3.4 Cycles

Markov inductive logic can be generalized from conditioning on the previous period to conditioning on any finite number of preceding periods. The corresponding symmetries are generalizations of Markov exchangeability that consider transitions from finite sequences of states. In this way, inductive logic can make valid inferences about arbitrary finite patterns. Are there situations that are too complex and unstable to be covered by these symmetries? There is a tradition in philosophy that studies questions like this one in terms of the worst case scenario of being deceived by a malevolent powerful being, a demon who determines what you observe and is allowed to deceive you in any conceivable way.16 But we don't need to resort to demons; strictly competitive strategic interactions between players may also give rise to highly unstable environments. The most famous game of this sort is due to Lloyd Shapley.17 Its payoffs are shown in Figure 3.3.

          A        B        C
    A    0, 0     1, −1    −1, 1
    B   −1, 1     0, 0      1, −1
    C    1, −1   −1, 1      0, 0

Figure 3.3 The Shapley game.

The Shapley game is a zero-sum game (the players' payoffs sum to zero no matter which strategy profile they choose) with a unique Nash equilibrium that has both players choose each strategy with equal probability. Because the Nash equilibrium is unique, it would seem plausible that a learning process should converge to it. This is not true for fictitious play, as Shapley has shown. The rate at which fictitious players switch strategies decreases in such a way that the relative frequencies of strategies approach a limit cycle around the Nash equilibrium.18 This means that relative frequencies never settle down into a stable pattern.

16 This approach is developed, e.g., in formal learning theory; see Kelly (1996). 17 Shapley (1964). 18 See Gaunersdorfer and Hofbauer (1995) for more information.

The inductive foundations of fictitious play can help us understand what happens in the Shapley game. The sequence of choices by an opponent cannot be exchangeable; for if it was, de Finetti's theorem would imply that relative frequencies converge with probability one. As we have just observed,
however, relative frequencies fail to converge. As a consequence, the fictitious play process is not exchangeable in the Shapley game. Exchangeability does not just fail in the short and medium term – symmetry assumptions of course usually don't hold before players have settled into a regular pattern; it even fails to hold approximately in the long run. Since its inductive assumptions are violated in this way, fictitious play is not an appropriate learning process for the Shapley game.

The Shapley game is a complex epistemic situation. Perhaps learning models based on more sophisticated symmetries will do better than fictitious play. Vanderschraaf and Skyrms's Markov fictitious play might seem like a promising candidate. Since it is based on Markov exchangeability, it can make inferences about simple patterns. Unfortunately, though, Markov fictitious play also converges to a limit cycle in the Shapley game. This can be confirmed by numerical simulations. It can also be seen by using the method of decomposing the Markov fictitious play process into fictitious play processes mentioned above. Suppose a strategy profile is chosen infinitely often. It follows from this that the corresponding embedded process is just like standard fictitious play. Thus, each embedded process converges to Shapley's limit cycle. So there is also convergence to a limit cycle in the full process.

This raises the question whether there is a learning model that converges in games like the Shapley game. An important result, due to Sergiu Hart and Andreu Mas-Colell, appears to suggest that the answer is no.19 Hart and Mas-Colell define a class of dynamic learning processes called uncoupled dynamics. A learning process is uncoupled if a player's update rule does not depend on other players' payoff functions. This is a very plausible informational requirement: information about other players' payoffs is usually unavailable even for very sophisticated agents.20 All learning processes we have considered so far are uncoupled. Hart and Mas-Colell prove that there are large classes of games for which no uncoupled dynamics converges to a Nash equilibrium. The family of games used in the proof is a three-player version of the Shapley game. It should be emphasized, however, that Hart and Mas-Colell's theorem makes another implicit, but substantive assumption.21 Besides being uncoupled, the learning dynamics are assumed to have only information

19 Hart and Mas-Colell (2003). 20 If dynamics don't need to be uncoupled, it is simple to give an example of a dynamic process
that always converges to the set of Nash equilibria of any game. Just let the direction of change point at the set of Nash equilibria. For details see Hart and Mas-Colell (2003). 21 This point was first observed by Shamma and Arslan (2005).

about frequencies of strategy choices. This assumption clearly applies to fictitious play and arguably also to other order invariant processes. Indirectly, it also applies to Vanderschraaf and Skyrms's Markov fictitious play because it can be decomposed into fictitious play processes. But Hart and Mas-Colell's assumption does not hold sway over all models of learning, as the following two examples show.

The first example is a modification of Vanderschraaf and Skyrms's Markov fictitious play. Instead of keeping track of transitions between strategy profiles, in the new model a player keeps track of the other player's transitions between strategies. In two-player games this works as follows. Let's call the row player Peter and the column player Brian. Peter observes how often Brian chooses strategy i after having chosen j in the previous trial, for all pairs of strategies i, j; let nij be that number, and nj the number of times Brian has chosen strategy j. Given that Brian has chosen j in the previous trial, Peter's predictive probability for him choosing i on the next trial is equal to

\[
\frac{n_{ij} + \alpha_{ij}}{n_j + \sum_i \alpha_{ij}}.
\]

Brian follows the same procedure when observing Peter's choices. Both players choose a strategy that maximizes expected payoff with respect to these predictive probabilities. In the Shapley game, numerical simulations of the resulting process invariably converge to the following cycle of strategy profiles:

(A, B) → (A, C) → (B, C) → (B, A) → (C, A) → (C, B) → (A, B).        (3.2)

This says that Peter and Brian move along the off-diagonal strategy profiles of the Shapley game. It follows that the limiting relative frequencies of strategies are equal to the Nash equilibrium.22 There is actually more going on than just convergence to the Nash equilibrium. The process spends about one-sixth of its time at each of the cycle's strategy profiles. Peter and Brian are thus taking turns with getting the good

22 I don't have an analytical proof of this result. What is fairly easy to see is that the process
doesn’t leave once it enters the cycle. After enough evidence has accumulated to overcome the prior alpha parameters, Peter and Brian follow the cycle if the number of transitions from a strategy to itself (nAA , nBB , nCC ) is higher than the other transition counts. In the sequence (3.2) there are no transitions from A to C, from B to A or from C to B; so their numbers are not increased along the cycle. In addition, a transition of the form AA always happens before a transition AB. So if nAA is larger than nAB to begin with, it will stay larger as soon as the players have entered the cycle. The same is true for nBB and nCC . Hence, once the counts nAA , nBB , nCC are the frontrunners, they stay in front.

payoff. This corresponds to a correlated equilibrium. Correlated equilibria are generalizations of Nash equilibria. The latter require players to choose strategies independently, whereas at a correlated equilibrium their choices may be probabilistically dependent. Correlations might arise, for example, via some kind of public signal; think of an umpire choosing one of the strategy profiles at random and announcing to each player her part of the profile.23 If no player has an incentive to deviate from that part, the configuration is a correlated equilibrium. In the Shapley game, choosing each of the six strategy profiles (3.2) with equal probability constitutes a correlated equilibrium. The second example of a learning dynamics that converges in the Shapley game is due to Jeff Shamma and Gürdal Arslan.24 The fictitious play process we have discussed in this and the preceding chapter has a continuoustime counterpart, which can be obtained by taking appropriate limits. Shamma and Arslan modify the continuous-time fictitious play process so that it involves information about the derivative of relative frequencies (the instantaneous rate at which relative frequencies change) in addition to information about observed relative frequencies. The derivative term is a short-term prediction of the direction in which opponent’s choices are going. It’s not immediately clear, though, how derivatives can be observed. For that reason, Shamma and Arslan modify their dynamics so that derivatives are estimated from how relative frequencies change over time. The original as well as the modified process converge to the Nash equilibrium of the Shapley game. Both Shamma and Arslan’s dynamics and the modified Markov fictitious play process are examples of uncoupled learning processes. They manage to escape Hart and Mas-Colell’s theorem because, on top of relative frequencies, they also predict how relative frequencies of opponent strategies change with respect to a current estimate of the opponent’s choice behavior. As a consequence, Hart and Mas-Colell’s theorem does not put in place fundamental limitations on learning in games. What the above considerations don’t show, however, is that our modification of Markov fictitious play or Shamma and Arslan’s dynamics converge in all games. Games give rise to highly complex learning situations. The way a player chooses based on what she has learned has a feedback effect on the strategic environment she will face in the future since her opponents also update on what they learn. Thus, if a learning environment exhibits 23 Aumann (1974). 24 Shamma and Arslan (2005).

stable symmetries over a period of time, those symmetries might be of indeterminate duration due to how other players respond to what they observe. As a result, besides creating cycles as in the Shapley game, strategic environments may even give rise to chaotic dynamical behavior.25 Let’s return to the question raised at the beginning of this section – namely, whether learning situations can be too complex for any Markov fictitious play process. The question is ambiguous: it is not clear what it means for a learning situation to be too “complex.” That’s why the question has more than one answer. As we have seen, there is no finite pattern that could not in principle be addressed by Markov fictitious play. So if we distinguish among learning environments only in terms of finite patterns, no learning environment is too complex for Markov fictitious play. But, as illustrated by game theoretic examples, the patterns in a learning environment may be transient, and it is not clear whether there is an overall, long-term regularity that could be detected. Hence no particular Markov fictitious play process may always be adequate.
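For readers who want to experiment, the following is a rough simulation sketch (mine, not the book's) of the modification of Markov fictitious play described earlier in this section, in which Peter and Brian each condition on the opponent's previous strategy. The prior parameters are drawn at random so that the two players' beliefs are not perfectly symmetric; the sketch makes no claim to reproduce the original simulations exactly, but its runs can be compared with the cycle (3.2).

```python
import random
from collections import defaultdict

# Rough sketch: modified Markov fictitious play in the Shapley game, with each
# player conditioning on the opponent's previous strategy.
strategies = ["A", "B", "C"]
# Row player's payoffs from Figure 3.3; the column player's are the negatives.
row_payoff = {("A", "A"): 0, ("A", "B"): 1, ("A", "C"): -1,
              ("B", "A"): -1, ("B", "B"): 0, ("B", "C"): 1,
              ("C", "A"): 1, ("C", "B"): -1, ("C", "C"): 0}

def make_player(payoff_of):
    priors = {(j, i): random.uniform(0.5, 1.5) for j in strategies for i in strategies}
    counts = defaultdict(lambda: defaultdict(int))

    def predict(prev):
        """Predictive distribution over the opponent's next strategy given the
        opponent's previous strategy, as in the rule displayed above."""
        denom = sum(counts[prev].values()) + sum(priors[(prev, i)] for i in strategies)
        return {i: (counts[prev][i] + priors[(prev, i)]) / denom for i in strategies}

    def choose(prev):
        probs = predict(prev)
        return max(strategies,
                   key=lambda own: sum(probs[opp] * payoff_of(own, opp)
                                       for opp in strategies))

    def observe(prev, new):
        counts[prev][new] += 1

    return choose, observe

peter_choose, peter_observe = make_player(lambda own, opp: row_payoff[(own, opp)])
brian_choose, brian_observe = make_player(lambda own, opp: -row_payoff[(opp, own)])

peter, brian = random.choice(strategies), random.choice(strategies)
history = []
for _ in range(1000):
    new_peter, new_brian = peter_choose(brian), brian_choose(peter)
    peter_observe(brian, new_brian)   # Peter tracks Brian's transitions
    brian_observe(peter, new_peter)   # Brian tracks Peter's transitions
    peter, brian = new_peter, new_brian
    history.append((peter, brian))

print(history[-12:])  # compare the tail with the cycle (3.2) reported in the text
```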

3.5 Markov Reinforcement Learning

As we saw in Chapter 2, the basic model of reinforcement learning is different from fictitious play in a number of ways; most notably, the basic model is payoff-based. But there are also similarities. In particular, both models assume order invariance. Just like fictitious play, then, the basic model is unable to extrapolate patterns. Let's illustrate this point in terms of a recurrent Markov chain with two states, A and B. Suppose there are two acts, a and b, and that a and b are the appropriate acts for A and B, respectively. This situation is summarized in Figure 3.4.

          A   B
     a    2   0
     b    0   1

Figure 3.4 Payoffs for two acts, a and b, and two states, A and B.

25 See, e.g., Sato et al. (2002) and Wagner (2012, 2013).

In the simplest case the Markov chain is periodic: ABABABAB . . . Despite the obvious pattern, the basic model only tracks the frequencies of how often a and b are chosen successfully. Because the limiting relative frequency of A is 1/2, and since a has a higher asymptotic expected payoff than
b, the basic model chooses a in the long run with probability one. Something similar happens if the Markov chain is not strictly periodic. Thus, irrespective of whether one state is more probable given a previous state, the basic model converges to choosing the act that maximizes payoffs with respect to the long-run relative frequencies of states.26

In this example, an agent who is just a bit more savvy could exploit the patterns of the Markov chain. In the periodic case, such an agent would aim at learning to choose a in state A and b in state B. In general, she would aim at learning to choose the act that maximizes expected payoffs conditional on the previous state. The question is how the basic model can be adjusted to yield this kind of behavior.

Modifying the basic model in the same way as fictitious play by conditioning choice probabilities on the previous state goes against the payoff-based spirit of reinforcement learning: recall that the main motivation for considering payoff-based learning rules is that they can be adopted by bounded agents who have limited information about their environment. The most obvious payoff-based modification of the basic model takes choice probabilities to be conditional on the agent's past choice. Let q_{ij}(n) be the propensity for choosing j on trial n given that i has been chosen at trial n − 1. Then the choice probability of choosing j, supposing i has been chosen previously, is given by

\[
p_{ij}(n) = \frac{q_{ij}(n)}{\sum_k q_{ik}(n)}. \tag{3.3}
\]

Propensities are updated according to the following scheme:

\[
q_{ij}(n+1) =
\begin{cases}
q_{ij}(n) + \pi(n) & \text{if } i \text{ was chosen at } n-1 \text{ and } j \text{ at } n,\\
q_{ij}(n) & \text{otherwise.}
\end{cases} \tag{3.4}
\]

(As in the basic model, the random variable π(n) is the payoff obtained at trial n.) As an extension of the basic model, this type of Markov reinforcement learning is very appealing. Unfortunately, however, the model cannot successfully track the periodic states of a recurrent Markov chain. Suppose, for the sake of argument, that the process given by (3.3) and (3.4) learns to correctly respond to a periodic pattern:

    . . . A B A B A B A B . . .
    . . . a b a b a b a b . . .

26 Proofs can be found in Rustichini (1999) and Beggs (2005).

As with the basic model of reinforcement learning, this type of Markov reinforcement learning chooses each act infinitely often immediately after a given act with probability one.27 It follows that whenever an agent has a successful streak of following the pattern, she will make a mistake with probability one. For example, she chooses b twice in a row: . . . ABABABAB . . . ...a b b a b a b a ... After two successive choices of b, the probability of continuing the earlier pattern is very high because of past reinforcements (pab and pba are higher than paa and pbb ). But this leads to an extended period where states and acts are mismatched. The mismatch continues until the agent again chooses the same act twice in a row (which of course happens with probability one). However, extended periods of mismatch persist in the long run. The reason is that in successful periods reinforcements build up in a way that supports mismatches after a mistake happens, and mistakes are bound to happen.28 We have to be cautious as to what this example does and does not show about the Markov reinforcement learning process defined by (3.3) and (3.4). What it does not show is that this process is generally inadequate. Just think of a situation in which the payoff at time n depends on the act chosen at time n − 1. Suppose, for instance, that whenever a was chosen the payoff for b is 1 and the payoff for a is zero, and vice versa if b was chosen. In this example, the Markov reinforcement learning process can be decomposed into those periods immediately following a choice of a and those immediately following a choice of b. Since each act is chosen infinitely often with probability one, both sequences are almost surely infinite. Along the first sequence the process chooses b in the limit, and along the second sequence it chooses a in the limit. Thus, the agent learns to converge to the alternating payoff structure. The problem is not that Markov reinforcement learning cannot detect patterns, but that it cannot detect the patterns of a recurrent Markov chain of states: the way payoff regularities track state regularities is not robust to small amounts of noise. The above argument – that Markov reinforcement learning makes mistakes which lead to mismatches – is therefore robust; it only requires that mistakes are made and that mismatches are supported 27 See Beggs (2005) on the basic model. The corresponding result for the Markov process follows

immediately since, given a, both acts are chosen infinitely often with probability one; the same is true given b. 28 In numerical simulations, after an initial period of exploration the process divides its time roughly equally between matching states and acts and mismatching them.

by past experience. As a result, the argument extends beyond the particular variant of Markov reinforcement learning considered here to many other models that don’t condition on states. What this indicates is that it is very difficult, if not impossible, to incorporate state regularities into payoff-based learning processes; regularities among states seem to require less limited models of learning. One solution that immediately suggests itself is to modify the basic model by conditioning choice probabilities and propensities on the previous period’s state. Formally, the new model is the same as the one given by (3.3) and (3.4), the only difference being that i now represents states instead of acts. The resulting model is a minimal departure from the philosophy of payoff-based learning: it does take information about states of the world as input, but not about the full history of states, and only insofar as they affect propensities. The new kind of Markov reinforcement learning gets the job done in our Markov chain example (Figure 3.4). This can again be shown by the method of decomposing the process. Since the Markov chain is recurrent, each state, A and B, occurs infinitely often with probability one. Thus, for each state there is a subprocess of acts chosen in the period immediately following an occurrence of A and B, respectively. By (3.3) and (3.4), each subprocess is a basic reinforcement learning process. Standard results show that each subprocess converges to choosing the optimal alternative with probability one.29 Thus the full process learns to follow the Markov chain pattern. The same method can be used to show that the new Markov reinforcement learning process converges to the alternating pattern in the Taking Turns game with positive probability. Consider the subprocesses that we obtain by restricting Markov reinforcement learning to those periods immediately following the choice of a particular strategy profile. A little reflection shows that each subprocess is formally equivalent to a process where both players use the basic model of reinforcement learning. Standard results then show that each subprocess converges to a strict Nash equilibrium with probability one, and to each one with positive probability.30 Thus, at least one strict Nash equilibrium is chosen infinitely often. The subprocess associated with that equilibrium converges to the other strict Nash equilibrium with positive probability, and vice versa. Hence, the event that every subprocess converges to a strict Nash equilibrium and 29 See Beggs (2005) and Hopkins and Posch (2005). 30 See Hopkins and Posch (2005).

that the subprocesses associated with the strict Nash equilibria converge to an alternating pattern has positive probability. Let me summarize this section. The basic model of reinforcement learning can be extended in various ways to integrate patterns that might be present in a learning situation. Payoff-based extensions don’t seem to have the resources to detect patterns about states of the world, such as the strategy profiles of a game. Relaxing the requirements imposed by payoff-based learning procedures leads to models of reinforcement learning that are able to recognize the same patterns as Markov fictitious play.
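Here is a minimal sketch (my own, not the book's) of the state-conditional variant just described: propensities are kept for each pair of previous state and act, only the chosen act's propensity is reinforced by the realized payoff as in (3.4), and choices follow the probabilities of (3.3). The environment is the periodic chain from Figure 3.4.

```python
import random

# Minimal sketch: Markov reinforcement learning with propensities conditioned
# on the previous period's state, run on the periodic chain ABAB... with the
# payoffs of Figure 3.4.
acts = ["a", "b"]
payoff = {("A", "a"): 2, ("A", "b"): 0, ("B", "a"): 0, ("B", "b"): 1}

# propensities[previous state][act], with small positive initial values
propensities = {s: {act: 1.0 for act in acts} for s in ("A", "B")}

def choose(prev_state):
    """Choice probabilities as in (3.3): propensities normalized per state."""
    q = propensities[prev_state]
    return random.choices(acts, weights=[q[a] for a in acts])[0]

states = ["A", "B"] * 2000  # the periodic environment
prev = states[0]
for state in states[1:]:
    act = choose(prev)
    # update as in (3.4): only the propensity of the act actually chosen,
    # conditional on the previous state, is reinforced by the realized payoff
    propensities[prev][act] += payoff[(state, act)]
    prev = state

for s in ("A", "B"):
    q = propensities[s]
    print(s, {a: round(q[a] / sum(q.values()), 3) for a in acts})
# After state A the next state is B, so conditional on A the propensity for b
# comes to dominate; conditional on B the propensity for a dominates.
```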

3.6 Markov Learning Operators

Earlier in the chapter we saw that the inductive foundation of classical inductive logic extends very smoothly to Markov inductive logic. In this section, I wish to indicate why going from the inductive foundation of the basic model to Markov reinforcement learning models also involves no serious bumps.

In order to capture the appropriate range of possibilities, it is useful to introduce the notion of a signal. In the preceding section, we considered three examples of signals: the act chosen in the previous period, the previous state of the world, or the last strategy profile. In general, a signal is a partition of events that are measurable at each period. Thus, a signal might provide information not just about the process's previous period, but also about other events of the past. For simplicity, we assume that the signal partition is finite. The reinforcement learning process then updates a matrix of choice probabilities, p_{si}(n), s being a signal and i an alternative. Since signals form a partition, conditional choice probabilities cover every eventuality.

Recycling the axiomatic foundation of the basic model is now quite straightforward. Recall its basic elements, Luce's choice axiom and commutative learning operators. If Luce's choice axiom holds at every period for each element of the signal partition, then there exist conditional propensities q_{si}(n) for all n and s such that

\[
p_{si}(n) = \frac{q_{si}(n)}{\sum_j q_{sj}(n)}.
\]

31 The relevant axiom is: for any signal s and any two subsets of alternatives R, S of the basic set of
alternatives T: if R ⊂ S ⊂ T, then ps (R, T) = ps (R, S)ps (S, T).

A necessary requirement for sequences of choice probabilities to give rise to a Markov reinforcement learning process is that conditional choice probabilities, psi (n), only change if their associated signal event, s, has occurred. This implies that each sequence of choice probabilities corresponding to a signal can be analyzed on its own. From this point on, the axiomatic theory is identical to the one presented for the basic model of reinforcement learning in the previous chapter and in Appendix C. That is to say, Markov reinforcement learning processes are grounded in a theory of conditional commutative operators for which Marley’s axioms hold together with the requirement that the conditional propensity of an act changes only if that act is chosen. As you might expect, the crucial inductive assumption is conditional commutativity – that is, the postulate that learning operators on propensities be commutative conditional on signals. Adopting a Markov reinforcement learning procedure thus involves a commitment to order invariance conditional on signals. This expresses the same idea at the level of propensities that Markov exchangeability expresses for sequences of observations.
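To illustrate the bookkeeping, here is a small sketch (an illustration of mine, not an implementation from the book): conditional propensities are stored per signal, Luce's choice rule turns them into choice probabilities, and the learning operator adds the realized payoff to the chosen alternative's propensity under the signal that occurred. Because the operators are additive, applying the same experiences in a different order yields the same propensities, which is the conditional commutativity at issue.

```python
from collections import defaultdict

# Signal-conditional propensities q[s][i] with small positive starting values.
propensities = defaultdict(lambda: defaultdict(lambda: 1.0))

def choice_probabilities(signal, alternatives):
    """Luce's choice axiom per signal: p_si = q_si / sum_j q_sj."""
    q = propensities[signal]
    total = sum(q[i] for i in alternatives)
    return {i: q[i] / total for i in alternatives}

def update(signal, chosen, payoff):
    """Commutative learning operator: add the payoff to the chosen
    alternative's propensity, conditional on the signal that occurred."""
    propensities[signal][chosen] += payoff

alternatives = ["x", "y"]
# Conditional order invariance: the two experience sequences below are
# permutations of one another and lead to the same propensities.
for s, i, r in [("s1", "x", 2.0), ("s1", "y", 1.0), ("s2", "x", 3.0)]:
    update(s, i, r)
first = {s: dict(propensities[s]) for s in ("s1", "s2")}

propensities.clear()
for s, i, r in [("s2", "x", 3.0), ("s1", "y", 1.0), ("s1", "x", 2.0)]:
    update(s, i, r)
second = {s: dict(propensities[s]) for s in ("s1", "s2")}

print(first == second)                            # True
print(choice_probabilities("s1", alternatives))   # {'x': 0.6, 'y': 0.4}
```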

3.7 The Complexity of Learning

Conditioning a learning process on past events is obviously not restricted to fictitious play and the basic model, but also pertains to average reinforcement learning, probe and adjust, and other learning procedures (such as the regret learning model reviewed in Appendix B). This results in a large variety of new learning models which have the resources for detecting different types of patterns in virtue of abandoning order invariance. What unifies the new and the order invariant models is the significance of symmetries as inductive assumptions. The symmetries of a learning model determine which patterns, if any, can be recognized by the process. What we also see at this level, then, are the deep connections between Bayesian learning models, like Markov fictitious play, and the learning models of bounded rationality that started to emerge in the previous chapter. These connections, as we have seen now, are not restricted to order invariant epistemic situations.

One point that I have already mentioned for Markov fictitious play also holds for other probabilistic models of learning: there are no finite bounds on the patterns that can be considered by a learning process. Not only
can they apply to different elements of a learning situation (states, acts, payoffs), they also might encompass larger segments of the past than just the previous trial. I have sketched this for generalizations of the basic model in terms of signals, but they apply to other models as well. Vanderschraaf and Skyrms discuss this point at length.32 They note that their Markov fictitious play process is able to approximate any correlated equilibrium if given enough memory. The same point applies to Markov reinforcement learning and other bounded learning procedures. Think again of the Taking Turns game. If both Maria and Galileo condition on the outcomes of the previous two trials, for example, the resulting process may converge to an alternating configuration where Maria uses the telescope two times in a row, and Galileo uses it only once. This corresponds to another correlated equilibrium of the Taking Turns game. Further correlated equilibria can be obtained in a similar way. This suggests that having more memory allows a learning model to capture all kinds of possible patterns that might be missed otherwise. As attractive as this idea might seem, there is a tradeoff involved. Increasing the memory of a learning model might expand its epistemic reach, but it also makes the updating protocol more complex. This steers us back into the realm of bounded rationality, with its emphasis on the procedural aspects of learning and decision making. According to Herbert Simon, there are two sources for procedural bounds: limited informational inputs and bounds on the computational capacities of an agent.33 This gives us two dimensions along which learning models can be distinguished. So far we have mostly focused on the first one, which classifies learning models in terms of informational inputs. Pattern learning confronts us with the task of considering computational complexity. Learning models clearly differ with regard to their computational requirements. By enlarging the conditioning events of Markov learning rules (Markov fictitious play, Markov reinforcement learning), the number of calculations that have to be performed at each step is increased. This suggests there is a hierarchy according to which learning models are partially ordered with respect to their computational complexity. The idea of ordering learning models according to computational effort can be discussed more precisely for a special class of update rules known as finite state automata. A finite state automaton consists of a finite number of states, inputs, and outputs, an initial state, a transition function that 32 Vanderschraaf and Skyrms (2003, pp. 324–327). 33 Simon (1955).

goes from states and inputs to states, and an output function that goes from states and inputs to outputs. The memory of finite state automata is, unlike the memory of Turing machines, restricted to be finite. Finite state automata thus provide us with simple examples of resource-bounded models of computation.34 Finite state automata have also been used as models of bounded rationality learning.35 In these applications, the states of an automaton represent information about the past; they serve as the automaton’s memory. Its inputs represent new pieces of information. Inputs and states determine a new state (representing how the automaton’s memory changes). They also determine an output, which in a repeated decision problem represents the choice the automaton makes in the next period. This corresponds roughly to the scheme of the learning models considered thus far, except for the fact that finite state automata are deterministic. There is a well-developed theory of computational complexity for finite state automata.36 The complexity of each automaton can be related to the complexity of a corresponding logic circuit. A logic circuit is a directed, acyclic graph whose vertices represent binary inputs or Boolean functions (logic gates). Each logic circuit computes a binary function (it maps binary inputs to a finite number of binary outputs). One measure for the computational complexity of a logic circuit is its size. The size of a logic circuit is its number of inputs and logic gates. A binary function can be computed by different logic circuits, some of which will be of smaller size than others. One reasonable measure of its complexity is the size of the smallest logic circuit that is capable of computing that function. There is no loss of generality in supposing that the finite state automaton is a binary machine for which states, inputs, and outputs are represented by finite binary sequences; this can always be achieved by a suitable coding. Since a finite state automaton computes the next state and the output, it is a binary function. The complexity of the automaton can then be associated with the size of the smallest circuit that computes the state transition function and the output function. According to this measure of computational complexity, finite state automata are more or less complex depending upon the size of the smallest circuits that represent them.37 34 A Turing machine can simulate any finite state automaton. 35 Cf. Binmore and Samuelson (1992) who study finite state automata in the context of

evolutionary game theory. See also Abreu and Rubinstein (1988). 36 See, e.g., Savage (1998). 37 There are other measures of complexity such as the depth of the circuit. The smallest size logic

circuit can be approximated from above by different constructions, such as the equivalent number of logic operations a finite state automaton performs, which is the size of a particular circuit. For details see Savage (1998, Chapter 3).

By using the size of the smallest logic circuit or some other measure of computational complexity, it is possible to partially order finite state automata. The order will in general be influenced by the number of states and inputs of the automaton, as well as by how involved its calculations are. The number of states and inputs is particularly interesting in relation to pattern learning, since the states of the automaton represent its memory. The number of states is thus related to the complexity of the patterns an automaton can recognize in a learning situation. Suppose that no state is redundant for how automata determine outputs (otherwise they could be replaced by an automaton with a smaller number of states). Then the number of states will exert a considerable influence on the computational complexity of automata that otherwise perform similar calculations. Expanding the set of states of an automaton will, other things being equal, increase the automaton's computational complexity. Thus, for finite state automata, there is a quite precise sense of why the detection of more complex patterns requires more computational effort.

What this shows is that agents face a tradeoff between, on the one hand, the range of potential applicability of a learning model and, on the other, its computational complexity. Having a large memory can help detect more complex patterns, but it comes at the cost of higher computational effort.
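To fix ideas, here is a toy example in code (mine, not the book's) of the kind of finite state automaton described above: the states serve as memory, the inputs are observations, and the transition and output functions map states and inputs to a new state and a choice. The particular machine below has just two states and implements turn taking for one player.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FiniteStateAutomaton:
    """A finite state automaton as described above: an initial state, a
    transition function (state, input) -> state, and an output function
    (state, input) -> output."""
    initial_state: str
    transition: Callable[[str, str], str]
    output: Callable[[str, str], str]

    def run(self, inputs: List[str]) -> List[str]:
        state, outputs = self.initial_state, []
        for inp in inputs:
            outputs.append(self.output(state, inp))
            state = self.transition(state, inp)
        return outputs

# A two-state turn-taking machine for one player of the Taking Turns game.
# The state remembers who looked through the telescope last period ("me" or
# "other"); the input reports who looked this period; the output is the
# player's next choice. The output here happens to depend only on the state,
# which is allowed: it is still a function of states and inputs.
turn_taker = FiniteStateAutomaton(
    initial_state="other",
    transition=lambda state, observed: observed,
    output=lambda state, observed: "abstain" if state == "me" else "watch",
)

print(turn_taker.run(["me", "other", "me", "other"]))
# ['watch', 'abstain', 'watch', 'abstain']
```

Two states suffice for this simple alternating pattern; tracking longer patterns requires more states, which is exactly the tradeoff between memory and computational effort discussed above.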

4

Large Worlds

Let us imagine to ourselves the case of a person just brought forth into this world, and left to collect from his observation of the order and course of events what powers and causes take place in it. The Sun would, probably, be the first object that would engage his attention: but after losing sight of it the first night he would be entirely ignorant whether he would ever see it again. He would therefore be in the condition of a person making a first experiment entirely unknown to him.

Richard Price, Appendix to Bayes' Essay

So far, we have considered learning models that operate within fixed conceptual frameworks. Fictitious play assumes that states, acts, and outcomes are known; reinforcement learning assumes the same for acts and outcomes. In many situations this is implausible, as Bayes’ friend and curator Richard Price has observed in his reflection on “a person just brought forth into this world.” One may not know the basic constituents of a new environment; accordingly, there may be no fixed conceptual framework for learning. While this observation is not a knockout criticism of the learning models studied in previous chapters, it does put the spotlight on one of their inherent limitations. In this chapter, we will investigate some ways to overcome that limitation. To set the stage, I will give some context to the question of how much knowledge a learning model presupposes by discussing Savage’s distinction between small worlds and large worlds in the context of learning. In a small world, the structure of an epistemic situation is fully known; in a large world, one does not know or anticipate every aspect of the epistemic situation that might be relevant. The distinction between large and small worlds will add yet another layer to our running discussion of bounded rationality. The second section of this chapter looks at models based on flexible conceptual frameworks that take the epistemic incompleteness of a learning situation into account. They do so by keeping the learning procedure open to conceptual changes. There is an inductive logic, based on what statisticians call the sampling of species problem, that can be used to develop a

conceptually open version of fictitious play. I’m going to introduce similar ideas for reinforcement learning. For both types of models, Luce’s choice axiom will turn out to be of crucial importance.

4.1 It's a Large World (After All)

Savage introduced the distinction between small and large worlds in order to assess the normative validity of his decision theory.1 Roughly speaking, a small world decision problem contains states, acts, and outcomes known to the agent. A large world, in contrast, does not have this feature. In a large world the decision maker must face the possibility that unknown states, acts, or outcomes could change the description of her decision situation in a way that affects her preferences or beliefs. Savage uses the slogan "look before you leap" for describing decision making in small worlds. In large worlds, he continues, "you can cross the bridge when you come to it."2 The first slogan emphasizes the role of planning in decision making, while the second holds that not everything is part of a plan. The look-before-you-leap slogan could lead us to think that we only have to make one decision in our life – namely, deciding on one strategy that determines which act to choose in every conceivable future decision situation. But, as Savage observes,

    This is utterly ridiculous, not – as some might think – because there might later be cause for regret, if things did not turn out as had been anticipated, but the task implied in making such a decision is not even resembled by human possibility. It is even utterly beyond our power to plan a picnic or to play a game of chess in accordance with the principle, even when the world of states and the set of available acts to be envisaged are artificially reduced to the narrowest reasonable limits.3

Notice that in this passage, Savage shows an acute awareness of the procedural bounds on decision makers emphasized by Herbert Simon. Our decisions are made in a small fragment of the large world. Example: If you contemplate acquiring a blue chip stock portfolio, there are several possible future events you take into consideration (mergers of companies, the development of overseas markets, etc.); but no one would claim to have the ability to anticipate all future events that might conceivably 1 See Savage (1954). Savage talks of “grand worlds” instead of “large worlds.” 2 Savage (1954, p. 16). 3 Savage (1954, p. 16).


influence the value of your portfolio (eruption of Icelandic volcano you’ve never heard of, Earth invaded by Klingons, etc.). Savage knew that large worlds created a problem for his theory. A choice could be rational in the small world of a decision problem without being rational in the large world – the world that ultimately counts. The view that there is no principled solution to this problem within Savage’s theory is put forward by Jim Joyce.4 I am going to provide an overview of Joyce’s argument since it turns out that several parts of his discussion are relevant for our discussion of rational learning. Although small and large worlds can be explained within Savage’s theory, their role becomes particularly clear in the framework of Jeffrey’s Logic of Decision.5 One of Jeffrey’s conceptual innovations was to think of states, acts, and outcomes as propositions and to apply expected utilities to all propositions. (Savage viewed states, acts, and outcomes as conceptually distinct.) If we view states, acts, and outcomes as propositions, then they determine partitions. A partition is a set of mutually exclusive and exhaustive descriptions of the world: one, and only one, proposition in a partition is true.6 Partitions may be thought of as knowledge structures: they express the information an agent might receive in terms of an exhaustive set of elements. Let’s look at some examples. If a coin is tossed twice, there are four states of the world: heads twice, tails twice, tails following heads, heads following tails. This is a partition (if we are prepared to ignore other possibilities of what might happen with the coin). In a similar way, a set of acts (e.g., buying n shares of a company’s stock, n = 0, . . . , 100) and a set of outcomes (getting $ m, m = 0, . . . , 100) usually constitute a partition. In a decision problem, the finest partition that can be expressed by states, acts, and outcomes is generated by combining the corresponding three partitions. This is the knowledge partition associated with the decision problem. Its basic constituents are conjunctions of one state, one act, and one outcome. Partitions can be partially ordered by the refinement relation. A partition P2 is a refinement of a partition P1 (equivalently, P1 is a coarsening of P2 ) if every element of P1 is the union of elements in P2 . The knowledge partition P2 is more fine-grained – it allows us to make more distinctions than the knowledge partition P1 . Joyce uses refinements of partitions for determining whether one world is larger than another. Consider the 4 Joyce (1999). 5 Jeffrey (1965). 6 In the case of infinitely many propositions, we might have a σ -algebra instead of a partition.


knowledge partition of a particular decision problem. Any coarsening of the knowledge partition gives rise to a smaller world, and any refinement may give rise to a larger world. The large world is the finest relevant version of the decision problem; refining acts, outcomes, or states any further would not make any difference to the agent’s preferences or beliefs. Thus, the large world is a fully considered description of the decision situation.7 A small world doesn’t have the conceptual resources to capture all states, acts, or outcomes of the large world. For that reason, large world concepts cannot be used for decision making in small worlds. With this in mind, Savage’s problem of large worlds can be put as follows. Suppose that you choose a “best” act in the small world (e.g., a “best” one in the sense of being optimal in Savage’s decision theory). Then there is prima facie no reason to think that your choice would also be the “best” in some larger world, or even in the large world. Choosing optimally in the small world confers a kind of local rationality on your choice. But in order to turn a locally rational choice into a globally rational choice, it needs to be the “best” choice in the fully considered large world decision problem, too. Joyce invites us to think of a decision maker as being involved in a process of deliberation regarding the elements of her decision problem prior to applying decision theory. You can think of this process as moving along increasingly refined small worlds; each small world spells out more details about the decision problem than its predecessors. Ideally, the deliberation process would halt at the large world. As an example, think again of the problem of acquiring a blue chip stock portfolio. Which states of the world should you take into consideration? There is virtually no limit to the distinctions among events you can make. Should you go all the way down to the most fine-grained physical descriptions of the world? Everyone will stop short of that in order not to introduce more conceptual resources than necessary. Still, in this process the location of the large world – the most fine-grained among the relevant knowledge partitions – seems to be elusive. For finite beings like us, the large world is unreachable in nearly all cases. So the process will halt, instead, at some small world where choices can only claim to be locally rational. The question is: are there circumstances which would allow us to think of them as globally rational, even if they are not made in the large world? One answer that suggests itself involves claiming that the process of refining small worlds is itself a decision problem. Stopping the process at some small world is an act in this higher order decision problem. If that act is 7 For a more precise definition of the large, or grand, world, see Joyce (1999, p. 73).


globally rational, the best act in the proper decision problem is, arguably, globally rational as well. Joyce points out, entirely correctly, that this won’t work because it leads to an infinite regress.8 If the deliberational process is itself part of a decision situation, that situation can also be specified in more or less considered ways, and so the large world problem reappears. The justification for why a small world decision also holds in the large world must come from a source other than decision theory, if it comes from anywhere at all. Joyce maintains that for a small world decision to be fully rational it has to meet two requirements. First, it should be rational in the small world; and second, the decision maker must be committed to the view that she would make the same decision in the large world.9 Based on this, Joyce argues that there is a fairly principled reason why such a justification is not forthcoming for Savage’s theory: the small world evaluations of a Savage decision maker need not be partition-invariant. Partition invariance means that a decision maker’s evaluations of elements of the small world remain the same if we view those elements as unions of elements of the large world. For example, partition invariance requires the expected utility of an act to stay the same under refinements of the set of states. There is nothing in Savage’s theory that would guarantee this. Even if an agent respects Savage’s axioms in both the small world and the large world, her evaluations in the different settings may not cohere with one another. Jeffrey’s decision theory, on the other hand, exhibits this sort of partition invariance. The main reason is that Jeffrey’s theory does not assume states and acts to be probabilistically independent – which is one of the core assumptions of Savage’s theory. In the logic of decision, the choice of an act might make states more or less probable. From this it follows quite straightforwardly that, in Jeffrey’s theory, the expected value of choosing an act is independent of how states, acts, and outcomes are partitioned.10 Partition invariance is arguably a reason in favor of Jeffrey’s theory; it suggests that a Jeffrey agent, unlike a Savage agent, has a principled reason to think that her evaluations of acts would be the same in the large and the small world. The small world problem is, essentially, a consistency problem. It raises the question whether a given small world is consistently embeddable in the large world. Partition invariance shows that any model using Jeffrey’s 8 Joyce (1999, pp. 73–74). 9 Joyce (1999, pp. 74–77). 10 Joyce (1999, p. 121).


decision theory is, in a certain sense, consistently embeddable in the large world, whatever the large world may be. There is, of course, no way we can be sure in the small world that we are going to make the right decision in the large world, as Joyce points out: The agent might well judge that A is optimal among her small-world options, be firmly convinced that she would retain this opinion even if she reflected on the matter more fully, and yet it still might be true that she would come to see her choice as misguided if she actually were to deliberate further. The most we can reasonably ask of an agent is that she reflect on the decision problem she faces until she has good reason to think that further reflection would not change her views (at least not change them enough to affect her decision).

Consistent embeddability of a small world model thus does not entail the correctness of small world evaluations. What it does say is that small world decisions are best estimates of their large world counterparts.
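To see why Jeffrey’s evaluations are partition-invariant, it may help to spell out the standard calculation in outline; the notation below is only a sketch and is not taken from the text. Write V for Jeffrey’s desirability, let {Ej} be a partition, and let {Ej ∧ Fk} be a refinement of it. Jeffrey’s averaging principle then gives, for an act A,

    V(A) = Σj P(Ej | A) V(A ∧ Ej)
         = Σj P(Ej | A) Σk P(Fk | A ∧ Ej) V(A ∧ Ej ∧ Fk)
         = Σj,k P(Ej ∧ Fk | A) V(A ∧ Ej ∧ Fk),

so evaluating A against the coarse partition and against its refinement yields the same number. This is the consistency property appealed to in the text; as noted above, nothing analogous is guaranteed by Savage’s axioms.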

4.2 Small World Rationality As mentioned above, Savage’s problem of large worlds is closely connected to Herbert Simon’s worries about the rather strict knowledge assumptions of standard economic theory, which effectively presupposes a fully informed agent with no limits on how to process that information.11 Considerations of bounded rationality are therefore already involved in classical decision theory. Since choices are always made in a small world, an agent’s decisions ideally are best estimates of large world decisions. This is the sense in which they can achieve full rationality. But often we will fall short of this ideal even if we follow classical decision theory, making a choice without claiming that it is a best estimate. In those cases, our choices may nonetheless be locally (that is, boundedly) rational. The main topic of this book is not the evaluation of choices, but the evaluation of learning processes. Nevertheless, considering small and large worlds can help in understanding the extent to which a learning procedure is rational in a given learning situation. In particular, as with choices, the rationality of a learning procedure in a small world does not in general entail its rationality in the large world. Let me explain this point by first revisiting the basic model of reinforcement learning.12 The basic model’s fundamentals consist of acts 11 The connection between large worlds and bounded rationality is not a new one. It is discussed

by Binmore (2009), Gigerenzer and Gaissmaier (2011), or Brighton and Gigerenzer (2012). 12 My discussion here draws on a similar discussion for average reinforcement learning in

Huttegger (2017).


and payoffs: in the language of the previous section, acts and payoffs constitute its knowledge partition. The knowledge partition defines a sequence of partitions for the learning process: for each period n, take all finite sequences of payoffs and acts that represent the possible ways of how the process might unfold in the first n trials. Suppose, for example, that there are two acts, A and B, and two payoffs, π1 and π2 . The basic partition is given by the four elements (A, π1 ), (A, π2 ), (B, π1 ), and (B, π2 ). After two trials there are 16 scenarios of what might happen, including (A, A, π1 , π2 ), (B, A, π2 , π2 ), (A, B, π2 , π1 ), . . . These represent the maximal information a reinforcement learner might have after two periods. Basic reinforcement learning regards some of them as equivalent, such as (A, A, π1 , π2 ) and (A, A, π2 , π1 ); we know from Chapter 2 that commutativity – the assumption that learning experiences are invariant under reordering – is responsible for these identifications. Thus, the partition representing the information of basic reinforcement learning at time n is, in fact, coarser than the one above; it does not take into account the order in which payoffs are obtained. These partitions are the “worlds” of the basic model. Are they small worlds or large worlds? The answer partly depends on the kind of agent we are modeling. If the conceptual system of the basic model fully exhausts the conceptual abilities of an agent, then the partitions can be thought of as large worlds. Let us say, in this case, that an agent is learning at full capacity. Looking at such an agent from the outside, there might be a lot of relevant information about the learning situation – for example, states of the world – that she ignores. We, the outside observers, might say that the agent could learn more successfully by exploiting this information. But this does not appear to be relevant for the internal rationality of the agent – she is learning at full capacity, after all. What is important for the internal rationality of such an agent, according to the approach I take in this book, is the consistency of the learning procedure. As we have seen, a reinforcement learner can update in a way that is consistent with her inductive assumptions about the process. And, as I shall explain more thoroughly in the next chapter, a reinforcement learner can also integrate new information in a dynamically consistent manner. Thus, a full capacity reinforcement learner can be fully rational in the sense of processing all available information consistently. But what if our agent is not learning at full capacity? Such an agent’s conceptual abilities may allow her to adopt more or less fine-grained descriptions of a learning situation. The conceptual system of her learning procedure may, therefore, be less than maximally fine-grained (that is, her


conceptual abilities would render a richer conceptual system feasible). This is important because a conceptual system partially determines the range of learning rules one can adopt. For instance, if the conceptual system of an agent in a decision situation includes payoffs and acts, but not states of the world, only payoff-based learning rules are within her reach. The large world problems this leads to can again be made more precise by considering knowledge partitions. Let’s return to the example of basic reinforcement learning. An act may be subdivided into alternative ways of performing it (subdivide “turning to the left” into turning to the left with or without wiggling one’s ears). Payoffs are based on outcomes; drawing finer distinctions between outcomes can yield a new payoff structure (subdivide “having a cup of tea” into having it with or without milk). Furthermore, we can always have a generic act that stands for “none of the other acts.” Subdividing such a generic act allows us to introduce new acts. The same is obviously possible for outcomes. In addition to refinements of already existing categories, the knowledge partition of a learning rule may become refined by adding categories of a different type. The salient example is adding states of the world to a knowledge partition that consists of acts and payoffs. This constitutes a refinement because, instead of conjunctions of acts and payoffs, the new knowledge partition has conjunctions of states, acts, and payoffs as basic elements. Consider now an agent who is not learning at full capacity. The learning process takes place, then, in what is from the agent’s perspective a small world. In contrast, the learning process of a full capacity learner takes place in what is, from her perspective, the large world. Both agents are bounded: they don’t exploit all the information one possibly could. But they are bounded in different ways. The small world learner, unlike the full capacity agent, could adopt a more fine-grained conceptual framework. What does this mean for rational learning? The crucial question in decision theory is, as we have seen, whether a decision theory’s evaluations of acts in the small world are best estimates of how they would be evaluated in the large world. Learning models make inferences about events, choices, or other objects in the light of new information and an agent’s old system of epistemic states. In the context of learning, the crucial question thus is whether these inferences are best estimates of the inferences the agent would make in the large world. If they are, the agent’s inferences are not necessarily correct in the large world; rather, the agent does expect that having more information about the learning situation would have no effect on her small world inferences.


In general, small world inferences will not be best estimates of the inferences the agent would make in the large world. The large world is a maximally considered description of the learning situation that leaves out nothing that could be of relevance. Saying that small world inferences are best estimates of their large world counterparts thus presupposes that we have a sense of what learning procedure an agent would adopt in the large world. In some cases this might be possible. For instance, if the large world of a reinforcement learner only consists of more acts and outcomes, she would plausibly also be a reinforcement learner in the large world. In analogy to how acts are evaluated in Jeffrey’s logic of decision, there might in this case be principles which guarantee that small world inferences can be consistently embedded into the large world. (At the end of this chapter, we will see that, indeed, Luce’s choice axiom is one such principle.) If, however, the large world of a reinforcement learner includes states of the world, it will in general be unclear what the agent’s large world learning procedure might be. As a consequence, there typically is nothing that would allow us to say that small world inferences hold up to large world standards. So this is what we have in general: if the large world is not too different from an agent’s small world, there might be a way to identify learning procedures with inferences that are best estimates of large world inferences. Otherwise, if the large world gets too large, this is not possible in general. In the latter kinds of cases a learning procedure cannot be said to be fully rational. This does not mean, however, that it is irrational. Simon’s bounded rationality puts things in the right context, I think. Even though a learning procedure might fail to be fully rational, it can be rational in a small world context as mentioned above. This is a kind of local rationality that does not look beyond the agent’s immediate circumstances. But there is a lot of room between this minimal type of bounded rationality and large world rationality. A learning procedure might be consistently embeddable in some worlds that are larger than the small world without being consistently embeddable in the large world. Boundedly rational learning comes in degrees. These degrees constitute another aspect of evaluating learning procedures in addition to the most basic one where we only consider dynamic consistency and symmetry.

4.3 Learning the Unknown The distinction between large and small worlds raises the following question: can models of learning be modified so as to allow for learning in


large worlds? In the rest of this chapter, I wish to indicate how learning models might be extendible to larger worlds. I start with fictitious play and continue with reinforcement learning in later sections. The inductive systems of Bayes, Laplace, de Finetti, Johnson, and Carnap assume that the categories of observations (states of the world) on the basis of which inductive inferences are made are known in advance and are held fixed throughout the investigation. Their theories have nothing to say about situations where unanticipated events may happen. Learning from experience is restricted to small worlds. But it seems obvious that we should expect the unexpected; the world is large, and we cannot hope to identify all relevant observational categories in advance. On the face of it, a Bayesian solution to this problem might seem impossible. Bayesian inductive inference is a theory of consistency between old probabilities and new probabilities. The observation of a completely unexpected category contradicts the prior setup of the Bayesian model. How, then, should an observation of such a category be included consistently into our old probabilities? The task sounds paradoxical. Thus, it appears we have to give up on consistency; observing the unexpected can only be dealt with by building a new probability space, one that does not need to cohere with the previous one. In statistics, this problem is known as the sampling of species problem.13 Imagine yourself to be a field biologist on the verge of studying a new ecosystem. Prior to making observations, you may not know all the species you could encounter; thus you are barred from using standard Bayesian inference. But there are solutions to the sampling of species problem that allow us to consistently include unanticipated observations into our old beliefs, and to form predictive probabilities based on evidence we cannot yet describe. In fact, as Sandy Zabell has shown in two important papers, a powerful theory of inductive inference can be developed for these types of situations.14 Our epistemic setting is one where categories (states, species, etc.) might be unknown prior to making observations. Augustus de Morgan already proposed a rule of succession for this scenario.15 Recall Laplace’s rule of succession, P[Xn+1 = i|X1 , . . . , Xn ] =

(ni + 1)/(n + t),

13 The sampling of species problem goes back at least to Fisher et al. (1943). 14 Zabell (1992, 1998). I follow Zabell’s development in what follows. 15 De Morgan (1838).


where n is the total number of observations, ni is the number of i’s observed so far, and t is the number of categories, which is fixed in advance. In the new learning situation, the total number of categories is unknown. If, now, t denotes the number of categories known initially, de Morgan’s rule of succession says that P[Xn+1 = i|X1 , . . . , Xn ] =

(ni + 1)/(n + t + 1),

where i can be any category that was observed up to and including the nth trial. Therefore, the probability of observing a new category is 1/(n + t + 1). Initially (n = 0), there is a uniform distribution over the t known categories and the “unknown” category. Thus, de Morgan’s rule, unlike Laplace’s rule, assigns a probability to an additional category, “type not yet observed.” This becomes especially clear when t = 0: initially, no type is known. In the Laplacian setting this results in incoherence; but for de Morgan’s rule it makes sense if we modify it to P[Xn+1 = i|X1 , . . . , Xn ] = ni /(n + 1).

The modification is necessary because no category is known a priori, so we cannot have prior weights of 1 for known types. As an example, suppose you are an exobiologist with the United Federation of Planets, on a mission to study life forms on an utterly unknown planet. Before arrival, you have no knowledge about the life forms you might encounter (t = 0). Hence, your degrees of belief cannot be represented by Laplace’s rule of succession. But they can be represented by de Morgan’s rule. You will observe a new species on the first trial (there are no old species). On the second trial, you either observe the first species or a new one, each with probability 1/2. If you observe the first species, you may observe it again on the third trial, or you may observe a new species. If you observe a new species on the second trial, you may do so again on the third trial, or observe one of the two known species. The de Morgan rule assigns conditional probabilities to all these events for any trial n and any number of species you may have observed before n. De Morgan’s rule shows that probabilistic models exist for certain large world situations in which categories are not fixed in advance. Since it is based on Bayesian conditioning, it is a consistent way of incorporating new information about unknown categories into one’s old beliefs.
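As a concrete illustration, here is a minimal Python sketch of the modified rule for the case t = 0, the case of the exobiologist; the function name and the example counts are mine, purely for illustration.

    def de_morgan_predictive(counts, n):
        """Modified de Morgan rule with no initially known categories (t = 0):
        P(next = known species i) = n_i / (n + 1),
        P(next = a not-yet-observed species) = 1 / (n + 1)."""
        probs = {i: n_i / (n + 1) for i, n_i in counts.items()}
        probs["new"] = 1 / (n + 1)
        return probs

    # Example: species 1 seen three times, species 2 once, so n = 4.
    print(de_morgan_predictive({1: 3, 2: 1}, n=4))
    # {1: 0.6, 2: 0.2, 'new': 0.2}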


4.4 Exchangeable Random Partitions Given our reservations about the possibility of a Bayesian model for learning the unexpected, de Morgan’s rule might seem like magic. We will now see that, quite to the contrary, it is based on principles analogous to the principles of Bayesian inductive logic we considered back in Chapter 1. Since in the present context we cannot put names to categories in advance, the observational process is not given by sequences of categories. The key idea for solving this problem is to refer to categories as they come along. In our example, the first species you observe is referred to as “the first species,” the second one as “the second species,” and so on. These names are not rigid designators, to use Kripkean parlance; that is, they are not invariant across possible worlds. In another run of the experiment, the new first species might be different from the actual one. This ambiguity doesn’t cause any problems because what we register are the periods in which we observe the first species, t_1^1 < t_2^1 < t_3^1 < · · · , the periods in which we observe the second species, t_1^2 < t_2^2 < t_3^2 < · · · , and likewise for any other species that might come along. It is over these sequences of time periods that we have beliefs. How can this be achieved? Suppose we make ten observations. Registering the times at which we observe a species creates a partition of the numbers 1, 2, . . . , 10. For example, if we observed four species in the following order: 1213321422, the partition is {1, 3, 7}, {2, 6, 9, 10}, {4, 5}, {8};

(4.1)

the elements of the first set are the times the first species is observed, and of the second set are the times the second species is observed, etc. If species are not known in advance, we cannot have probabilities over sequences of species, but we can have probabilities over these partitions. This idea can be extended to any number of finite or even infinitely many observations. If the process of observing new species is extended to infinity, registering the times at which distinct species are observed creates a partition of the natural numbers. Exchangeable random sequences are the starting point of classical inductive logic. The analogous concept for the present scenario is called an exchangeable random partition. Random partitions correspond to random


variables. A random partition Πn of the first n natural numbers takes on partitions of the set {1, . . . , n} as values. In the example above, where n = 10, the partition (4.1) is one possible realization of the random partition Π10 ; so are {1, 2, . . . , 10} (one species), {1}, {2}, . . . , {10} (ten species), {1, . . . , 5}, {6, . . . , 10} (two species), and many others. How many? The number of partitions of a set of n elements is given by the Bell number Bn . The Bell number B10 is 115975. So, in general, we would need to assign 115974 probabilities in order to express beliefs for observations of species in ten trials. As in the case of exchangeable sequences, a modified notion of exchangeability simplifies matters considerably. Recall that a random sequence is said to be exchangeable if any two of its realizations with the same frequencies of categories have the same probability. For partitions we obviously don’t have frequencies of categories; what we do have are frequencies of frequencies: for any sequence of n observations we can determine how many species are observed once, how many are observed twice, etc., up to how many species are observed n times. Let us denote the number of species that are observed m times by am . Then the frequencies of frequencies are summarized by the partition vector (a1 , . . . , an ). For example, the partition (4.1) has one species each that was observed four times, three times, two times, and one time; therefore, its partition vector is (1, 1, 1, 1, 0, 0, 0, 0, 0, 0).16 A random partition Πn is called exchangeable if any two partitions π1 and π2 have the same probability, P[Πn = π1 ] = P[Πn = π2 ], whenever they have the same partition vector. This means, basically, that the order of observations and the identity of species do not affect probabilities. Instead of partition (4.1) we might consider the following partition: {1, 2, 3, 4}, {5, 6, 7}, {8, 9}, {10}

(4.2)

Because both partitions have the same partition vector, they have the same probabilities (conditional on the assumption of partition exchangeability). The move from (4.1) to the new partition (4.2) corresponds to 16 Alan Turing appears to have been the first to recognize the importance of frequencies of

frequencies for statistical inference in his cryptoanalytic work during World War II (Zabell, 1992).


reordering and relabeling observations. We have seen that (4.1) is associated with the sequence of observations 1213321422. If order shouldn’t affect probabilities, we may convert this sequence into 2222111334. If the labeling of species is irrelevant, we may convert the latter sequence into 1111222334. This sequence corresponds to the partition (4.2). Hence, partition exchangeability is tantamount to saying that observations are invariant under reordering and relabeling. Exchangeable random partitions are comparable to finite exchangeable sequences. Infinite sequences of random partitions Π1 , Π2 , . . . correspond to infinite sequences of observations. Of course, we have to make sure that successive partitions are consistent. A random partition can extend the immediately preceding random partition in one of two ways: we can observe a species we’ve already encountered, or we can observe a new one. A consistent infinite sequence of random partitions is exchangeable, then, if every random partition Πn is exchangeable.
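The bookkeeping behind partition vectors and relabeling is easy to mechanize. The following Python sketch (the function names are mine) reproduces the example: it extracts the partition (4.1) from the sequence 1213321422, computes its partition vector, and produces the reordered and relabeled sequence corresponding to (4.2).

    from collections import defaultdict

    def blocks(seq):
        """Group the (1-based) trial indices by the species observed there."""
        b = defaultdict(list)
        for t, s in enumerate(seq, start=1):
            b[s].append(t)
        return list(b.values())

    def partition_vector(seq):
        """a_m = number of species observed exactly m times, for m = 1, ..., n."""
        sizes = [len(blk) for blk in blocks(seq)]
        return [sizes.count(m) for m in range(1, len(seq) + 1)]

    def canonical(seq):
        """Relabel species by decreasing abundance and group the observations."""
        sizes = sorted((len(blk) for blk in blocks(seq)), reverse=True)
        return "".join(str(i + 1) * s for i, s in enumerate(sizes))

    obs = "1213321422"
    print(blocks(obs))            # [[1, 3, 7], [2, 6, 9, 10], [4, 5], [8]]  -- partition (4.1)
    print(partition_vector(obs))  # [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    print(canonical(obs))         # 1111222334  -- corresponds to partition (4.2)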

4.5 Predicting the Unpredictable Independent trials are chance setups that generate exchangeable sequences. There also are chance setups that give rise to exchangeable random partitions. The two best known are the Chinese restaurant process and the Hoppe urn.17 In the Chinese restaurant process, we imagine a Chinese restaurant with infinitely many linearly arranged tables. In each period a new customer arrives, who is seated either at an occupied table or at the next new table. The first customer is seated at the first table. The second customer is seated at the first table or the second table, each with probability 1/2. The nth customer is seated at an occupied table with probability proportional to the number of customers m who are already at that table, m/n, and at the next new table with probability 1/n. A little thought shows that the Chinese restaurant process creates an infinite exchangeable sequence of random partitions. After the nth customer is seated at a table, we get a partition of 1, . . . , n by lumping together those patrons sitting at the same table. The probabilities with which customers are seated guarantee that all arrangements of customers with the same partition vector have the same probability.

Jim Pitman. The Hoppe urn was introduced in Hoppe (1984).
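A short simulation may help to fix ideas. The following Python sketch (the seed and names are illustrative) seats customers exactly according to the rule just described: the nth customer joins a table of m patrons with probability m/n and opens a new table with probability 1/n.

    import random

    def chinese_restaurant(n_customers, seed=None):
        """Return the tables as lists of (1-based) customer indices."""
        rng = random.Random(seed)
        tables = []
        for n in range(1, n_customers + 1):
            r = rng.randrange(n)          # uniform over n equally likely "slots"
            for table in tables:
                if r < len(table):
                    table.append(n)       # join this table: probability len(table)/n
                    break
                r -= len(table)
            else:
                tables.append([n])        # the remaining slot: a new table, probability 1/n
        return tables

    print(chinese_restaurant(10, seed=1))  # one random partition of {1, ..., 10}

Grouping the customers by table yields an exchangeable random partition, as noted above.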


The Hoppe urn, an important generalization of the Polya urn scheme in mathematical population genetics, is an equivalent representation of this process. An urn initially contains one black ball, the so-called “mutator.” It is chosen on the first trial and returned to the urn together with a ball of a different color. On each following trial a ball is chosen at random. Whenever the mutator is chosen, it is replaced with a black ball and a ball with a new color; if a colored ball is chosen, it is replaced with two balls of the same color. The Hoppe urn and the Chinese restaurant process generate predictive probabilities that are compatible with de Morgan’s rule. The probability of choosing a new category (new table, mutator) after the nth trial is equal to 1/(n + 1). The probability of observing a known category i (being seated at table i, or observing a known color i) is equal to ni /(n + 1). Thus, since the Hoppe urn and the Chinese restaurant process give rise to infinite exchangeable random partitions, there must be a connection between exchangeable random partitions and de Morgan’s rule. The connection is brought to light by the deep and beautiful representation theorem of the British mathematician J. F. C. Kingman.18 Recall that de Finetti’s theorem shows infinite exchangeable sequences to be mixtures of independent trials. Kingman’s representation theorem demonstrates that the general infinite exchangeable random partition is a mixture of yet another chance setup: paintbox processes. Paintbox processes are more general than the Chinese restaurant process or the Hoppe urn. Unlike the latter two, they are capable of generating all infinite exchangeable random partitions.19 Paintbox processes are constructed with the help of the following vectors: (p1 , p2 , . . .), p1 ≥ p2 ≥ p3 ≥ · · · ≥ 0, Σn pn ≤ 1. (4.3) 18 See Kingman (1975, 1978a,b). Kingman summarizes his work in the monograph The

Mathematics of Genetic Diversity (Kingman, 1980). Zabell (1992) is an elegant and accessible introduction to Kingman’s representation theorem and its connection to inductive inference. 19 Consider the probability space where the partition {1}, {2}, {3}, . . . has probability one. This probability is partition exchangeable, but it can neither arise from the Chinese restaurant process nor from the Hoppe urn.


Each such vector has infinitely many nonnegative elements that are ordered according to their values and sum to a value no greater than one. Now select points y1 , y2 , . . . from the unit interval (it doesn’t matter which). Given any vector (4.3), denote by μ the probability distribution on the unit interval that assigns probability pn to yn , and a uniform continuous probability p0 = 1 − Σn pn to the whole interval. The graph of μ has a line with height p0 over the unit interval with spikes of height pn at points yn . You can think of the unit interval as a paintbox with uncountably many colors. The color y1 is chosen most frequently, followed by y2 , and so on. Each color that has positive probability of being chosen is observed infinitely often; if p0 = 0, no other colors than those are observed (with probability one). If p0 > 0, other colors are chosen, but they are loners: they are observed at most once (again with probability one). We now let Y1 , Y2 , . . . be an infinite sequence of independent random variables taking values in the unit interval with common distribution μ. Based on this process, we can construct partitions by collecting, for each real number y in the unit interval, all trials n where Yn takes on the value y. This gives rise to a partition of the natural numbers by grouping together all trials at which the same color is observed. Each ordered vector (4.3) gives rise to a well-defined distribution over the sequence of random partitions generated by Y1 , Y2 , . . . The resulting process is a paintbox process. If we think of colors as species, the paintbox process associated with a particular ordered vector (4.3) can be viewed as the real process underlying our observations of partitions. While we, the exobiologists, enter a planet’s ecosystem without any knowledge of its species, mother nature knows her onions: she throws species our way according to an unknown distribution over an unknown number of species. The part of the underlying process that is accessible to us is given by the partitions. Thus, we can have beliefs over what the frequency of the most abundant species, the second most abundant species, and so on, is. More precisely, we can have beliefs over vectors of the form (4.3). That beliefs of this sort are equivalent to infinite exchangeable random partitions is the content of Kingman’s representation theorem. Since the sequence Y1 , Y2 , . . . is identically and independently distributed, any paintbox process is an exchangeable sequence of random partitions, and so is any mixture of paintbox processes. Kingman’s representation theorem asserts the converse: every sequence of exchangeable random partitions is a mixture of paintbox processes. Whenever our beliefs over infinite random partitions are exchangeable, they can be represented


by beliefs over the set of all paintbox processes, which are given by all vectors of the form (4.3). If we again think of a paintbox process as the real process underlying our investigation, partition exchangeable beliefs can be constructed from beliefs over all possible underlying processes. Kingman’s theorem is thus an expression of de Finetti’s idea that symmetric degrees of belief can be translated into the language of chance setups. In de Finetti’s case, chance setups are independent and identically distributed trials; in Kingman’s case, chance setups are paintbox processes, and the distribution over paintbox processes is a chance prior. There is another parallel between Kingman’s theorem and de Finetti’s theorem: in both cases we have a simple family of chance priors that gives rise to an equally simple family of inductive rules. In Kingman’s case the family of priors is known as the Poisson–Dirichlet distribution (also called “Dirichlet process”): the Poisson–Dirichlet distribution is the limiting case of a Dirichlet prior as the number of categories goes to infinity (provided that the limit is taken in a clever way). Let X1 , X2 , . . . denote our observations of species based on the observed partitions. At each time, there is a number t of species observed to date, each one with a positive frequency ni (i denotes the ith species in order of appearance). The Poisson–Dirichlet prior gives rise to the following predictive probabilities: P[Xn+1 = i|X1 , . . . , Xn ] =

ni /(n + θ)

(4.4)

The parameter θ is positive and related to the prior probability of observing a new species. If θ = 1, we have de Morgan’s rule, which can thus be seen to emerge from sequences of exchangeable random partitions with a Poisson–Dirichlet prior. In the Hoppe urn and the Chinese restaurant process, θ can be regarded as the weight attached to observing a new color or being seated at a new table, respectively. This is not the end of the story. Sandy Zabell has shown how to derive the predictive probabilities (4.4) without appealing to the Kingman representation theorem. His argument extends Carnapian inductive logic to random partitions.20 Besides partition exchangeability, there are three assumptions for random partitions: (1) Every finite partition has positive prior probability. (2) The predictive probability of observing the ith category (labeled in order of appearance) on the next trial depends only on the number ni of times i has been observed thus far, and the total sample size. 20 Zabell (1998).


(3) The probability of observing an entirely new category depends only on the number of species thus far observed and the total sample size. The first assumption, a regularity condition, requires that no scenario is ruled out prior to observations. The second assumption is an appropriately modified version of Johnson’s sufficientness postulate. The third assumption is a symmetry requirement for predictive probabilities of new species. Zabell’s three assumptions entail a somewhat more complex family of inductive rules than the one given in (4.4), which is included as a special case. There is not only one parameter, θ, which governs the prior likelihood of observing a new species, but also a parameter regulating the effect of subsequent observations on this likelihood. In addition, a third parameter influences predictive probabilities for as long as only one type is observed. As a final remark, it should be noted that Carnap recognized the importance of investigating the case of inductive reasoning when categories are not known in advance. His idea was to use the relation “being of the same type as” instead of fixing types.21 In a sense, this relation is what underlies the approach through random partitions. This approach presupposes the ability to categorize observed individuals according to whether they are like or unlike. It is understandable that Carnap didn’t work out this idea: the heavy artillery that this would have required was not available at the time he was working on inductive logic.
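To make the paintbox construction of the previous pages concrete, here is a small Python sketch of a single paintbox process. It assumes a vector (4.3) with only finitely many positive spikes, and it treats every draw from the continuous remainder p0 as a fresh “loner” color; both simplifications are mine.

    import random

    def paintbox_sample(p, n, seed=None):
        """Sample n colors from a paintbox with spike probabilities p
        (p[0] >= p[1] >= ... >= 0 and sum(p) <= 1); the leftover mass
        1 - sum(p) produces colors that are almost surely never repeated."""
        rng = random.Random(seed)
        colors = []
        for _ in range(n):
            u, acc, label = rng.random(), 0.0, None
            for k, pk in enumerate(p, start=1):
                acc += pk
                if u < acc:
                    label = k                     # the kth most frequent color
                    break
            if label is None:
                label = ("loner", rng.random())   # a draw from the continuous part
            colors.append(label)
        return colors

    def induced_partition(colors):
        """Group the trial indices at which the same color was observed."""
        blocks = {}
        for t, c in enumerate(colors, start=1):
            blocks.setdefault(c, []).append(t)
        return list(blocks.values())

    sample = paintbox_sample([0.5, 0.3], n=10, seed=2)   # p0 = 0.2 of loner mass
    print(induced_partition(sample))

Only the induced partition, not the underlying colors, is what the observer gets to see.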

4.6 Generalizing Fictitious Play We can now use the predictive probabilities (4.4) (or Zabell’s generalization) to design a version of fictitious play for applications in which states of the world are unknown. There are obviously many ways to model such a process. I’m aiming for a simple and salient procedure. Fictitious play has a procedural part – updating probabilities – and a choice part – maximizing expected utility. The procedural part is fixed by what was said in the previous section. If our agent’s degrees of belief meet certain assumptions, and if she is consistent, then (4.4) or a similar rule describes how she incorporates new information into her beliefs. What we need to specify is the choice part: how does an agent best respond to predictive probabilities? 21 This information, due to Richard Jeffrey, is reported by Zabell (1992, 1998).


Let’s assume there is a finite number, m, of acts. The agent is supposed to choose an act that maximizes expected payoffs with respect to her present beliefs. After choosing an act, a state is observed, which might either be a state that was observed before or an entirely new state. The state together with the act determines a payoff. For this procedure to work we need to specify payoffs. Whenever a new state, i, is observed, there are m possible payoffs πi1 , . . . , πim , one for each state–act pair, which are assumed to be bounded above and below. These payoffs are random variables, with values that are determined once the state i is known. Here it is assumed that the agent can engage in hypothetical reasoning as in the original fictitious play process: immediately after the ith state has been observed for the first time, she does not only know the actually experienced payoff; she can also figure out the payoffs she would have obtained if she had chosen a different act. The payoffs πi1 , . . . , πim are given by the underlying process. You may, for example, be in a game without knowing all of your opponents’ strategies. Instead, they are gradually revealed to you, together with the associated payoffs. Knowing the payoffs πi1 , . . . , πim , though, is not enough for calculating expected payoffs. The inductive logic introduced in the previous section also has, at each stage, a positive probability for observing an entirely new category (e.g., a new strategy chosen by the other player). If we wish to calculate expected payoffs relative to predictive probabilities, we need to have payoffs for the unknown category. These payoffs also are random variables, but we don’t know their values: by definition, we haven’t observed the new state yet. So, in order to carry out our calculations we need estimates of the unknown payoffs. Our estimates ought to have a rational basis. This calls for another learning process. As a first pass, we may consider average reinforcement learning, one of the learning processes discussed in Chapter 2. According to average reinforcement learning, the best estimate conditional on choosing an act, A, is the average payoff obtained whenever A was chosen, modulo prior estimates. In the present model we use average reinforcement learning in the following way. Each time A is chosen and a state is observed for the first time, the payoff is recorded. We assume there is only a finite number of payoffs. In order to estimate the payoff of A and the category “new state,” we take the predictive probability of the kth payoff to be equal to (nk + αk )/(n + Σj αj ), where nk is the number of times the kth payoff was obtained immediately after act A is chosen and a previously unknown state has occurred, and n is


the total number of times an unknown state has been observed. As usual, the α parameters determine initial probabilities. The estimated payoff for A and the category “new state” is, then, given by (Σj πj nj + Σj πj αj )/(n + Σj αj ). These estimates together with the payoffs for observed states can be used to calculate expected utilities of acts. What are the learning environments in which this learning dynamics is appropriate? By Kingman’s representation theorem, the process governing observations of states needs to be a paintbox process. For the learning process on payoffs, payoffs need to be independently distributed (see Chapter 2 and Appendix B). Assumptions about the distribution of payoffs are not relevant in the long run, though; the probability of observing a new state decreases to zero on the order of 1/n (n being the total number of observations). Thus, in the long run the model is going to essentially choose a best response to the probabilities of observed states. The basic learning scheme presented here is open to all kinds of modifications. For example, payoffs in the payoff learning dynamics may not be known in advance. This would require us to introduce a sampling of species learning procedure for predicting the probabilities of payoffs. Another limitation of the present learning model is the assumption of having a finite number of acts. We are not going to lift this restriction for fictitious play; instead, I will indicate in the next section how it can be relaxed for the basic model of reinforcement learning.
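The choice part of this procedure can be summarized in a few lines of Python. The sketch below is mine and only illustrates one step: predictive probabilities over observed states follow (4.4), the leftover probability θ/(n + θ) goes to the “new state” category, and the payoff of an act against that category is estimated by the average-reinforcement formula above. All names and numbers are illustrative.

    def new_state_payoff_estimate(payoff_counts, alphas):
        """(Σj πj nj + Σj πj αj) / (n + Σj αj): the estimated payoff of an act
        against the "new state" category, given counts of the payoffs that
        followed the act when an unknown state occurred, and prior weights."""
        n = sum(payoff_counts.values())
        num = sum(pi * (payoff_counts.get(pi, 0) + a) for pi, a in alphas.items())
        den = n + sum(alphas.values())
        return num / den

    def expected_payoffs(state_counts, payoffs, new_state_estimates, theta=1.0):
        """Expected payoff of each act: observed state i has probability
        n_i/(n + theta) as in (4.4); the remaining theta/(n + theta) is the
        "new state" category, valued by its estimated payoff."""
        n = sum(state_counts.values())
        values = {}
        for act, estimate in new_state_estimates.items():
            v = sum(n_i / (n + theta) * payoffs[(s, act)]
                    for s, n_i in state_counts.items())
            values[act] = v + theta / (n + theta) * estimate
        return values

    # Illustrative numbers: two observed states, two acts, alpha weights of 1.
    counts = {"s1": 3, "s2": 1}
    pay = {("s1", "A"): 2, ("s1", "B"): 0, ("s2", "A"): 1, ("s2", "B"): 3}
    est = {"A": new_state_payoff_estimate({2: 1, 0: 1}, {2: 1, 0: 1}), "B": 1.5}
    vals = expected_payoffs(counts, pay, est)
    print(vals, max(vals, key=vals.get))   # the act the agent would choose next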

4.7 Generalizing Reinforcement Learning Reinforcement learning models assume a fixed number of choice alternatives. This is a good assumption for some learning situations. But in general, unanticipated alternatives may present themselves during a learning process, transforming the situation into a large world. Consider, for example, the recurring choice of what to make for dinner. The list of alternatives is clearly open ended and includes genuinely new recipes. Is it possible to modify reinforcement learning accordingly? I am going to explain the leading idea in terms of the basic model of reinforcement learning. Consider the following modification of the Hoppe urn. Colored balls represent known alternatives, and the black ball represents a new alternative


that has never before been chosen. Initially, no alternatives are known; they are introduced along the way, whenever the black ball is chosen. After a ball is chosen from the urn, the alternative associated with its color is chosen, and a payoff is obtained. The payoff gets attached as a weight to a new ball of the same color, and both balls are put back into the urn. Balls are chosen with a probability proportional to their weight. Thus, the weight represents the intrinsic likelihood added to an alternative. Formally, this process is very similar to the basic model. The main difference is the presence of the category “new alternative.” In the modified basic model, this category has a fixed weight, q0 , throughout the learning process. The other alternatives, introduced whenever the new alternative category is chosen, are labeled in order of appearance. Each alternative, i, receives an initial weight, qi (n), after its introduction in period n, which is equal to the payoff, π(n), received in that period. Afterwards, the weight of i is updated as in the basic model:

qi (n + 1) = qi (n) + π(n) if i is chosen in period n, and qi (n + 1) = qi (n) otherwise.   (4.5)

Choice probabilities also have the same form as in the basic model, apart from the presence of the new alternative category:

p0 (n) = q0 / (q0 + q1 (n) + · · · + qt (n)),   pi (n) = qi (n) / (q0 + q1 (n) + · · · + qt (n)).   (4.6)

The left equation is the probability of choosing a new alternative, which is always proportional to q0 . The right equation is the probability of choosing a known alternative i. The axiomatic foundations of this model are nearly the same as the foundations of the basic model. Luce’s choice axiom continues to be a basic requirement for choice probabilities of any alternative, including the ever-present “new alternative” category. At each stage, there are t known alternatives in addition to the new alternative. For that stage, we say that Luce’s choice axiom holds if p(R, T) = p(R, S)p(S, T). Here T denotes the set of all t+1 options available at a time, and R, S denote two subsets of T with R ⊂ S. If Luce’s choice axiom holds at each stage, then there exists a ratio scale representing the intrinsic propensities for choosing alternatives at that stage, realizing the representation (4.6). Once an appropriate scale has been fixed, the additive update rule (4.5) can be developed in the same way as for the basic model. Commutativity remains the central


inductive assumption. The only additional postulate says that the propensity for choosing the new alternative is constant, reflecting the fact that it is a placeholder for newly discovered alternatives.
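A minimal Python sketch of this modified urn dynamics (class and parameter names are mine; payoffs are assumed to be positive so that weights remain positive) implements the update (4.5) and the choice probabilities (4.6). The final lines illustrate the point developed in the next section: introducing a new alternative leaves the weights of the old alternatives, and hence the ratios of their choice probabilities, untouched.

    import random

    class LargeWorldReinforcer:
        """Basic reinforcement learning with an ever-present "new alternative"
        category of fixed weight q0; known alternatives carry weights q_i."""

        def __init__(self, q0=1.0, seed=None):
            self.q0 = q0
            self.weights = []              # q_i, in order of appearance
            self.rng = random.Random(seed)

        def probabilities(self):
            total = self.q0 + sum(self.weights)
            return [self.q0 / total] + [q / total for q in self.weights]

        def choose(self):
            """Return None for "new alternative", otherwise the index of a known act."""
            r = self.rng.random() * (self.q0 + sum(self.weights))
            if r < self.q0:
                return None
            r -= self.q0
            for i, q in enumerate(self.weights):
                if r < q:
                    return i
                r -= q
            return len(self.weights) - 1   # guard against floating-point slack

        def update(self, choice, payoff):
            if choice is None:
                self.weights.append(payoff)     # a new act enters with weight = its payoff
            else:
                self.weights[choice] += payoff  # update (4.5)

    learner = LargeWorldReinforcer(q0=1.0, seed=3)
    learner.update(None, 2.0)   # a first act is discovered, payoff 2
    learner.update(0, 1.0)      # it is chosen again, payoff 1
    before = learner.weights[0]
    learner.update(None, 4.0)   # a second act appears ...
    print(learner.weights[0] == before, learner.probabilities())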

4.8 Learning in Large Worlds with Luce’s Choice Axiom Neither the large world fictitious play process nor the large world reinforcement process is well understood.22 Applying these processes to decision problems and games is of considerable interest. It would be particularly important to study asymptotic properties of these learning rules in decisions and games. Keeping to the spirit of this book, however, I would like to pursue a different question here: what allows a learning model to be extended to one that can operate in certain large worlds? An answer to this question will help us understand why fictitious play and reinforcement learning are extendible in this way; it will also help us see why other learning models might not be. In the general discussion at the beginning of this chapter, I followed Joyce’s approach in decision theory and portrayed the problem of learning in large worlds as a problem of consistent embeddability: the inductive inferences drawn in the small world are robust under being put in a larger context. Luce’s choice axiom gives rise to this kind of robustness since it implies that every alternative is associated with a propensity which is independent of other alternatives. Thus, extending the set of currently available alternatives has no influence on those propensities. This makes a probabilistic choice structure extendible to situations with a larger number of alternatives, provided that all choice probabilities are governed by Luce’s choice axiom. Let me explain this in more detail. In Chapter 2, we mentioned that Luce’s choice axiom entails a familiar consistency condition, independence from irrelevant alternatives. Independence assumptions basically say that comparisons between two objects do not depend on whether certain other objects are absent or present.23 A good candidate for a probabilistic analogue is the ratio of probabilities. If p(a, S) and p(b, S) are the probabilities of choosing alternatives a and b, respectively, from the set of available alternatives, S, then their ratio 22 An exception is the application of the reinforcement learning process outlined in the previous

section to signaling games; see Skyrms (2010) and Alexander et al. (2012). 23 As discussed by Luce and Raiffa (1957).

Downloaded from https://www.cambridge.org/core. University of Florida, on 03 Nov 2017 at 08:01:35, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.006

4.8 Learning in Large Worlds with Luce’s Choice Axiom

p(a, S) p(b, S)

99

(4.7)

measures how much more or less likely a is to be chosen over b. Luce’s choice axiom implies that this ratio is equal to the ratio of their propensities, q(a)/q(b). So it is invariant with respect to which other alternatives are present or what their choice probabilities are. Now consider a universal set U, which may be finite or countably infinite, representing the universe of possible alternatives. We assume for simplicity that for all pairs of alternatives in U, i, j, we have p(i, j) > 0: no alternative is chosen over another with probability one.24 If Luce’s choice axiom applies to all subsets of alternatives of U, then there is an overall scale of propensities which together fully determine choice probabilities. That Luce’s choice axiom applies to U in this way expresses the conviction that no matter which alternatives are being presented, one’s choice probabilities will observe the axiom. In this case, the ratio (4.7) is invariant over all subsets S of U because it is given by the ratio of propensities of a and b. As a consequence, it will be the same whatever the currently available set of alternatives is and no matter how it might be augmented in the future. Luce’s choice axiom allows reinforcement learning to be consistently embeddable into a large world of arbitrarily many acts given by a universal set U. This is the reason why we can generalize reinforcement learning to a large world process. In the basic model of reinforcement learning, propensities encode past experiences with choosing acts and thus represent what is being learned. Another way of representing the information acquired by the process is given by the current ratios of choice probabilities, which tell us how acts trade off against one another. Now, in the generalized model every newly introduced act has an associated intrinsic propensity of being chosen. Thus, no matter which alternatives are known at a time, the propensities and ratios of choice probabilities remain the same whenever a new alternative is introduced. In this way, generalized reinforcement learning can consistently incorporate new acts without upsetting the information the process has obtained so far. Luce’s choice axiom plays the same role for the generalized fictitious play process. In the associated continuum of inductive methods, alternatives are states of the world. The choice probability of a state at time n is its predictive probability given the past n trials. The propensity of a state, i, is the number of times, ni , it has been observed in the first n trials. Predictive probabilities 24 Dropping this assumption complicates matters, but it is still possible to prove that an overall

scale of propensities exists; see Luce (1959, pp. 24–27).
are proportional to propensities, as required by Luce’s choice axiom. Furthermore, the ratio of predictive probabilities for state i and state j is equal to ni /nj regardless of which other states are present. The Johnson–Carnap continuum of inductive methods is thus consistently embeddable in a large world that contains a countable number of unknown states. As a result, we can extend it to Zabell’s inductive logic for large worlds. Luce’s choice axiom entails that certain quantities in learning processes are partition-invariant. Recall that partitions may be thought of as representing the basic knowledge an agent has about a learning situation. Refining a partition is tantamount to moving from a smaller world to a larger world, where more distinctions among events can be expressed. As we have seen, Joyce has argued that one advantage of Jeffrey’s logic of decision over Savage’s theory is its partition invariance: the evaluations provided by the logic of decision do not depend on which partition is used. This makes it possible to consistently embed these evaluations into larger worlds. We will now see that Luce’s choice axiom does something similar for models of learning. A sampling of species process can be understood as a sequence of increasingly refined partitions. Initially, we may not know anything about the learning situation, which is properly reflected by the trivial partition where no distinctions are being made. After having observed the first species, we have a more refined partition with two elements, “first species” and “species not yet observed.” The partition given by “first species,” “second species,” and “species not yet observed” is a further refinement, where the element “second species” is included in the “species not yet observed” category of the two-element partition. Partitions are refined in this way as we encounter more species by splitting the “species not yet observed” category in two. Observing new alternatives is typically going to change the probabilities of alternatives simply because probabilities add up to one. So we cannot expect probabilities of alternatives to be partition-invariant. But what can be partition-invariant are the propensities of alternatives. In the Johnson–Carnap continuum the intrinsic propensity of a state is the number of times it has been observed (plus an initial propensity). The propensity does not depend on the underlying partition of world states. Similarly, the propensity of choosing an act in the basic model does not depend on the partition of acts. This kind of partition invariance is what allows us to extend fictitious play and the basic model to large worlds. Requiring that Luce’s choice axiom hold with respect to a universal set U is therefore analogous to evaluating acts in accordance with the logic
of decision. It guarantees that the relevant small world inferences won’t change just because they are placed in larger worlds. This suggests that Luce’s choice axiom, or a suitable generalization, will also apply to other large world models of learning. It also indicates that some models might not be extendable to large worlds. Consider, for example, the Bush–Mosteller model of reinforcement learning (a forerunner of the Rescorla–Wagner model, which is well known in psychological learning theory).25 In the Bush–Mosteller model, learning operators are linear functions acting on choice probabilities, not propensities. Since alternatives don’t need to be associated with intrinsic likelihoods, there is nothing in their model that would allow it to be consistently extended to situations with a larger number of alternatives. To what extent does Luce’s choice axiom provide a solution to the problem of learning in large worlds? At the beginning of this chapter, we discussed levels of bounded rationality that lie between small world and large world rationality. Learning processes that may be consistently extended to increasingly refined knowledge partitions can be thought of as providing valid evaluations outside their immediate circumstances. This does not mean that such processes can be extended to the large world. The large world of a reinforcement learner might go beyond the conceptual resources of reinforcement learning processes. We have seen that propensities of acts may be invariant to introducing new alternatives; but propensities might be affected by information about, for example, states in ways that are impossible to specify unless we know more about what the reinforcement learner would do with this qualitatively distinct kind of information. Thus, Luce’s choice axiom, or similar principles, are not magic wands that guarantee the invariance of our evaluations in all large worlds. They do, however, provide us with a key to rational learning in some large worlds.

25 See Bush and Mosteller (1955) and Rescorla and Wagner (1972). For an application in game theory, see Börgers and Sarin (1997).
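Returning to the ratio invariance expressed in (4.7): here is a minimal numerical sketch, not taken from the text and with made-up propensity values, of how choice probabilities computed from fixed propensities keep their ratios when a newly discovered alternative is added.

```python
# A minimal sketch of Luce's choice axiom: each alternative has a fixed
# propensity q(i), and the probability of choosing i from an available set S
# is q(i) divided by the sum of the propensities of the members of S.
# The propensity values are invented for illustration.

def choice_probabilities(propensities, available):
    """Return the Luce choice probabilities over the available alternatives."""
    total = sum(propensities[a] for a in available)
    return {a: propensities[a] / total for a in available}

q = {"a": 2.0, "b": 1.0, "c": 4.0}            # small world: three known alternatives
small = choice_probabilities(q, ["a", "b", "c"])

q["d"] = 3.0                                   # a newly discovered alternative
large = choice_probabilities(q, ["a", "b", "c", "d"])

# The individual choice probabilities change (they must still sum to one), but
# the ratio p(a, S)/p(b, S) equals q(a)/q(b) in both worlds, as in (4.7).
print(small["a"] / small["b"], large["a"] / large["b"], q["a"] / q["b"])  # all 2.0
```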


5 Radical Probabilism

Radical Probabilism doesn't insist that probabilities be based on certainties; it can be probabilities all the way down, to the roots.
Richard Jeffrey, Radical Probabilism


In Chapter 1, we introduced the two main aspects of Bayesian rational learning: dynamic consistency and symmetry. The preceding chapters focused on symmetries. In particular, we have seen that learning models other than Bayesian conditioning agree that updating on new information should be consistent with one’s overall inductive assumptions about the learning situation. In this chapter we return to the issue of dynamic consistency. Recall that Bayesian conditioning is the only dynamically consistent rule for updating probabilities in a special, and particularly important, class of learning situations in which an agent learns the truth of a factual proposition. The basic rationale for dynamic consistency is that an agent’s probabilities are best estimates prior and posterior to the learning experience only if they cohere with one another. This chapter explores dynamic consistency in the context of learning models of bounded rationality. Since these models depart rather sharply from Bayesian conditioning, it is not immediately clear how, or even whether, they can be dynamically consistent. The relevant insights come from Richard Jeffrey’s epistemological program of radical probabilism, which holds that Bayesian conditioning is just one among many legitimate forms of learning. After introducing Jeffrey’s main ideas, we will see that his epistemology provides a large enough umbrella to include the probabilistic models of learning we have encountered in the preceding chapters, and many more. Two principles of radical probabilism, in particular, will assume a decisive role: Bas van Fraassen’s reflection principle and its generalization, the martingale principle, which is due to Brian Skyrms. The two principles extend dynamic consistency to generalized learning processes and thereby allow us to say when such a process updates consistently on new information, even if the content of the information cannot be expressed as an observational proposition.


5.1 Prior Probabilities

Recall from Chapter 1 that probabilism holds degrees of belief to the standard of consistency. This standard entails that they are probabilities whenever they are sufficiently sharp. Probabilism maintains in addition that learning from experience should proceed, at least in some situations, by conditioning on observational propositions. Some narrow varieties of probabilism significantly restrict the range of permissible probability measures, or they require that Bayesian conditioning be the only legitimate learning procedure. Proponents of the logical view of probability, such as John Maynard Keynes and Rudolf Carnap, restrict both probabilities and rules for updating probabilities. The logical view's cookbook for rational learning has only one recipe: start with a rational prior, which is either a unique probability distribution or at least a severely restricted set of probability distributions; then, after having made observations, update the logical prior by conditioning on the total evidence.1

1 See, e.g., Keynes (1921) or Carnap (1950).

The idea of an "objective prior" reaches back to the work of Bayes and Laplace. Such a prior would go a long way toward solving Hume's problem of induction. For if there were a unique (or almost unique) rational epistemic starting point, opinions would agree (or almost agree) in the distinguished group of rational individuals whenever they update on the same observations; deviants could be branded as simply being irrational. What seems to make objective priors attractive is the fact that they appear to bring fairly unique standards of rational opinion within our reach.

Even though objective priors enjoy some support among philosophers and economists, it is very difficult to see how they could ever be fully justified.2 There are plenty of rather strong arguments against objective priors that I won't repeat here because they have already been put forward very convincingly by others.3 The fundamental problem is the following. In the presence of very strong evidence, such as having observed a large number of coin flips, it is plausible that many of us would largely agree in our opinions about the coin (I will discuss this topic further in Chapter 8). But in the absence of such evidence, there is typically a large number of admissible priors – that is, priors that are fundamentally compatible with the evidence.

2 Besides Keynes and Carnap, proponents of objective views include, e.g., Harsanyi (1967), Jaynes (2003), Paris and Vencovská (2015), Williamson (2002), Williamson (2010), White (2010).
3 See Levi (1980) or Seidenfeld (1979). Howson and Urbach (1993) provide an overview. See also van Fraassen (1989) and Zabell (1988, 1989).


If one has no evidence that a coin is not biased in a certain way, for instance, it seems that the class of legitimate priors includes at least all probability distributions that don't rule out any bias.4 Now, there simply seem to be no generally valid principles for picking out a unique prior from among such a typically large class of legitimate priors. The most famous principle of this sort, the principle of indifference, displays the shortcomings of objectivist approaches quite lucidly. Roughly speaking, the principle of indifference says that in the absence of any evidence the basic constituents of a learning situation ought to have equal prior probabilities. For example, prior to flipping a coin about which you know nothing, you should assign equal probabilities to all chance hypotheses; in other words, your prior should be the uniform distribution over the unit interval (which represents all possible biases). The rationale for the principle of indifference is an essentially negative one: any other distribution, it is claimed, expresses some kind of bias; but that bias is not backed up by any evidence (since one has no evidence). There are many well-known problems with this type of reasoning. It depends, for instance, on how exactly the basic constituents of a learning situation are described: equivalent descriptions can yield incompatible recommendations. But the most obvious point against the principle of indifference is, I think, decisive. The absence of evidence for any bias of the coin is taken as an argument against nonuniform prior probabilities. But in the absence of evidence, there is also no evidence for a uniform bias. So the principle's basic idea only seems to gain traction by privileging equal distributions right from the start. Being the last prior standing is hardly an achievement if that prior also fails the standards to which other distributions are held. In other words, we could justify any prior with that kind of reasoning – that is, by observing that there is no evidence for other priors. It follows that the principle of indifference fails to single out uniform distributions as the most plausible default priors.

4 Such distributions are absolutely continuous with respect to Lebesgue measure on the unit interval.
5 See the essays in Jeffrey (1992).

Approaches that try to privilege certain prior probability assignments are opposed by less dogmatic views, of which Richard Jeffrey's radical probabilism is one of the most prominent examples.5 Radical probabilism is a thoroughly probabilistic epistemology that is committed to some basic principles of probability theory, but avoids being overly restrictive. It thus rejects logical or objective priors. Instead, it allows for a range of epistemic
attitudes in situations of full or partial ignorance. Any prior in the class of legitimate priors is admissible, in line with Jeffrey’s dictum that “it can be probabilities all the way down, to the roots.” This means that a rational agent’s beliefs need not be grounded on some bedrock of certainties and unquestionable principles. This concerns, in particular, what we have called inductive assumptions – the agent’s opinions about the general structure of a learning situation. Inductive assumptions (exchangeability, commutativity, Markov exchangeability, etc.) may be backed up by strong evidence, but radical probabilism does not insist upon this kind of epistemic warrant. Prior beliefs and inductive assumptions may be what you currently happen to believe, all things considered. They can be based on strong or weak evidence, or on intuitions, or just on the pragmatic necessity that in order to get a process of learning off the ground we have to start somewhere, even if that starting point does not rest on an infallible foundation. Radical probabilism is thus no foundationalist epistemology. But it does share some features of what Gilbert Harman has called a general foundations theory.6 According to general foundations theories, at least some of an agent’s beliefs at a time are taken to be justified by default or until proven incorrect. Belief revision changes these initial beliefs only in the face of sufficiently strong evidence. The inductive assumptions of an agent can be taken as beliefs of this sort whenever they represent her fully considered opinions. Being justified by default means that as long as the evidence does not speak against them, probabilities are revised by new observations so as to be consistent with those inductive assumptions. In the face of countervailing observations, however, inductive assumptions are subject to revision.7

6 See Harman (2002) and Harman and Kulkarni (2007). Harman traces general foundations epistemology back to Goodman (1955) and Rawls (1971).
7 When exactly evidence does speak decisively against a set of inductive assumptions is a surprisingly subtle question. Suppose my beliefs are exchangeable over sequences of coin flips. This implies that my conditioned beliefs will also be exchangeable, regardless of the observations I make. Therefore I never need to revise my judgment. If I observe a long alternating sequence of heads and tails, though, I would give up exchangeability and perhaps use Markov exchangeability instead. Is that switch from one set of inductive assumptions to another one compatible with Bayesian principles? Not on the face of it, since the old inductive assumptions need not be consistent with the new ones. We can, however, think of the first model as representing only a fragment of my real system of beliefs, which is more thoroughly represented by a higher-order model that includes other inductive assumptions. I think that's quite typical for our beliefs. I. J. Good referred to the behavior of excluding certain possibilities even though one thinks they are live possibilities as "provisional dogmatism"; see Good (1983, Chapter 4).


5.2 Probability Kinematics

Demoting prior beliefs to a more modest position is of course not all there is to radical probabilism. Radical probabilism emphasizes updating beliefs on new information; it thereby shifts the focus away from seeking a privileged set of initial beliefs to improving one's beliefs. This is what radical probabilism has in common with the probabilism of Savage and de Finetti. Jeffrey extends probabilism even further, though, in that he does not think of Bayesian conditioning as the only legitimate way to update one's degrees of belief. As a counterpoint to this aspect of Jeffrey's epistemology, consider David Lewis's view. Like Jeffrey, de Finetti, and Savage, Lewis is a liberal with regard to prior probabilities. He is, however, a champion of Bayesian conditioning as the only rational form of belief change. The following quote of Lewis brings to the fore a number of issues that will help in motivating Jeffrey's view of updating:

For what it is worth, I would insist that the ideally rational agent does conditionalize on his total evidence, and thereby falls into irregularity. He never does mistake the evidence, wherefore he may and he must dismiss the possibility that he has mistaken it. Else there is a surefire way to drain his pockets: sell him insurance against the mistakes he never makes, collect the premium, never pay any claims.8

Here, Lewis restricts things to perfectly rational agents, who never make mistakes. He also raises two other issues. The first one – regularity – is mentioned explicitly. The second one lurks in the background: not only is it assumed that an ideally rational agent makes no mistakes, but she is also situated in an ideal world where evidence can always be expressed by a statement in one's language.9 Let's take a more detailed look at these issues.

8 Lewis (1986, p. 586).
9 Lewis discusses this more explicitly in Lewis (1999, Chapter 23), where he says that the ideally rational agent already has made all conceptual discoveries.

A probability measure is regular if it assigns zero probability only to the impossible event. Thus, regularity is a sign of open mindedness: each possible event gets some, perhaps very small, positive probability. In the absence of evidence to the contrary, regularity certainly is a desirable feature for probability measures. However, in probability theory regularity has not been accepted generally because it cannot in general be implemented. Just think of the unit interval (the set of all real numbers between zero and one), and an experiment where every point in the unit interval
is a possible outcome (an infinitesimally fine pointer specifies a number in the unit interval). A standard result of probability theory tells us that at most denumerably many points can have positive probability. It follows that uncountably many points must have probability zero, contradicting regularity.

Regularity plays an important role in Carnap's inductive logic, where it was originally introduced. In the 1950s Abner Shimony proved that regularity follows from strict coherence, a more severe kind of coherence than Dutch book vulnerability.10 The Dutch book arguments discussed in Chapter 1 are based on the idea that bets which together ensure a net loss are not admissible; a set of degrees of belief is coherent if this cannot happen, provided that we identify degrees of belief with fair betting odds. A set of degrees of belief is said to be strictly coherent if, when taking the degrees of belief as fair odds, there is no family of bets where the agent cannot win and where she loses in at least one eventuality. So it is harder to be strictly coherent than to be coherent; if you think that an event has probability zero, your fair betting odds for that event are equal to zero, so you cannot win. Thus, in contrast to coherence, strict coherence implies that every event other than the impossible event has positive probability.11

10 Shimony (1955). See Hájek (2010) for more information on regularity.
11 For recent developments based on using weak dominance instead of strict dominance in the context of probability, see Pedersen (2014).

Given the problems regularity faces in general probability spaces, one might wonder whether it is wise to insist on strict coherence. Be that as it may, in the setting of Carnapian inductive logic, which has only finitely many categories of outcomes, it was taken to be a rationality postulate. But there is still a problem. In his dissertation, Richard Jeffrey pointed out that strict coherence is not compatible with Bayesian conditioning.12 The argument is very simple. Bayesian conditioning on evidence E implies that the posterior probability of E is equal to one.13 Hence, even if the prior probability is regular, the posterior probability is necessarily irregular (unless E is a tautology). In the light of strict coherence it would seem that Bayesian conditioning is an irrational way to update beliefs. As can be seen in the passage quoted above, Lewis's response to this result is that an ideally rational agent is allowed to be irregular because she never makes mistakes. Jeffrey's response is very different: unlike ideally rational agents, our evidence is strictly speaking never certain:

12 See Jeffrey (1957). This point has also been widely discussed in philosophy of science; see, e.g., Earman (1992).
13 The posterior probability of E is equal to the conditional probability P(E|E) = P(E)/P(E) = 1.


But from a certain strict point of view, it is rarely or never the case that there is a proposition for which the direct effect of an observation is to change the observer’s degree of belief to 1 . . . For if we care seriously to distinguish between 0.999 999 and 1.000 000 as degrees of belief, we may find that, after looking out the window, the observer’s degree of belief in the proposition that the sun is shining is not quite 1, perhaps because he thinks there is one chance in a million that he is deluded or deceived in some way.14

According to Jeffrey, claiming that we have observed something for certain is only a figure of speech; if asked to be more specific, more often than not we would reply that we are only close to being certain. What distinguishes Jeffrey from Lewis, then, is the degree of idealization he is comfortable with. Modeling humans or other organisms in terms of probabilities always involves idealizations. Jeffrey maintains that idealized models should reflect certain basic features of our epistemic limitations, whereas Lewis is willing to leave behind genuine restrictions in investigating ideal rationality. This allows Lewis to give up on regularity and retain the conditioning model of learning. Jeffrey may hold on to regularity, but at the price of giving up conditioning. While Jeffrey’s view is attractive since it is not restricted to an extremely demanding type of epistemic agent, it is not clear how to replace the conditioning model of learning. I will return to Jeffrey’s solution in a moment. Let me indicate first a further motivation for radical probabilism that is more important than regularity. Lewis’s implicit assumption in the earlier quotation is that the ideally rational agent has an incredibly rich language at her disposal. The conditioning model requires this language to have a Protokollsatz for each possible piece of evidence. A Lewis agent, in other words, is always in the enviable situation where experiences are delivered in the form of propositions. This epistemic situation, as Jeffrey observes, is far from being universal: However, there are cases in which a change in the probability assignment is clearly called for, but where the device of conditionalization cannot be applied because the change is not occasioned simply by learning of the truth of some proposition E. In particular, the change might be occasioned by an observation, but there might be no proposition E [. . . ] of which it can correctly be said that what the agents learned from this observation is that E is true.15

There are “ineffable” learning situations in which you cannot express what you have learned by a statement in your language. Suppose you go for a run 14 Jeffrey (1992, p. 35). 15 Jeffrey (1965, p. 165).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
Diaconis and Zabell (1982).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
¯ ¯ = Q[A|E]. and P[A|E]

(5.1)

¯ It is applicable This is probability kinematics on the two propositions E, E. only if observations change the probability of E and leave everything else unchanged. Although Jeffrey always emphasized this point, it is sometimes forgotten that probability kinematics, like conditioning, should be adopted only in a particular class of learning situations, namely those given by (5.1). 17 These cases are discussed in Jeffrey (1965, 1968). 18 Jeffrey (1957, 1965).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007

5.3 Radical Probabilism

111

It is easy to show that the two equations in (5.1) are equivalent to ¯ ¯ Q[A] = P[A|E]Q[E] + P[A|E]Q[ E].

(5.2)

In this way probability kinematics on E, E¯ allows us to calculate the new probability of any proposition A as being equal to a weighted average of the old conditional probabilities of A. Presumably, the conditional probability of being attacked by a big cat looking for breakfast (A) is higher given E (the cat is a mountain lion) than the conditional probability given E¯ (it’s a bobcat). If your observation raises P[E] to Q[E], then probability kinematics also raises the probability for A, which is entirely correct. The kinematical formula (5.2) makes it clear that conditionalization is a special case of probability kinematics. If you simply learn that E is true, ¯ = 0 (you are certain that what you have observed then Q[E] = 1 and Q[E] is a mountain lion). In this case, the new probability Q[A] of A is equal to the conditional probability P[A|E], as required by conditioning. There are no special obstacles to extending Jeffrey conditioning to more than two exhaustive propositions. A slightly more general case involves a finite partition of propositions, exactly one of which must be true. If the learning experience is one that affects the probabilities for elements of the partition and leaves everything else unchanged, then the new conditional probabilities given the elements of the partition are the same as the old conditional probabilities, as in (5.1). The new probability of any proposition, then, is a weighted average of the old conditional probabilities, with the new probabilities of members of the partition as weights, as in (5.2). Probability kinematics can be generalized to other settings along the same lines.19

5.3 Radical Probabilism Jeffrey traces the ideas underlying probability kinematics back to Ramsey’s criticism of Keynes’s view of knowledge and probability.20 According to Keynes, probabilistic reasoning is based on certainties – more precisely, on those propositions that are known to be true by direct experience. Probability theory can be used to transform known truths into inductive inferences. For Keynes, these inferences have an objective status; first, because we start with certainties, and second because Keynes thinks of probability as 19 The extension of Jeffrey conditioning to denumerable partitions is equally straightforward. It

is also possible to generalize Jeffrey conditioning to σ -algebras; see Diaconis and Zabell (1982).

20 Jeffrey (1985).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
In Keynes’s theory, as in other logical or objective ones, probable belief is derived from certainties by following the rules of the probability calculus. Ramsey, on the other hand, allows probable belief to come from direct inspection. What prompts the probable belief may be an ineffable experience. Ramsey’s example are auditory experiences, for which uncertain evidence is clearly possible (philosophy lecture: did I just hear “blue” or was it “grue”?). In addition to the examples mentioned above, there are further relevant situations. Instead of being due to observations, a belief change can be caused by thinking and deliberating. I. J. Good has called attention to such learning situations, subsuming them under the term of dynamic probability.23 Your probabilities are dynamic whenever you think about a problem and change probabilities according to the new insights this process generates. Good’s main example of dynamic probability is the analysis of a chess position and its probabilistic evaluation. The ideas of Jeffrey, Ramsey, and Good constitute the core of radical probabilism: learning may proceed in many ways other than by conditioning on evidential propositions. Probability kinematics is one alternative model. But once we allow learning experiences to be captured by how they affect probabilities, we are led to a variety of generalized learning processes. In the most extreme case, a learning process can be thought of as a black box.24 The black box takes as inputs the agent’s present probabilities, which undergo changes as the agent makes observations or ponders 21 Ramsey (1931). 22 Ramsey (1931, p. 190). 23 See Good (1983, Chapter 10). 24 Black box learning is discussed by Good (1983, Chapter 10) and Skyrms (1990).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
evidence. He thinks he can deal with it by denying that we always have higher-order knowledge, as in “I know that I know.” We may know something without knowing that we know it. I’m not sure whether this allows Williamson to come to terms with the cases of uncertain evidence that motivate probability kinematics. In the mountain lion example, for instance, it seems to me that we have a “I know that I don’t know” rather than a “I don’t know that I know.” 27 See, e.g., Jeffrey (1992, p. 11).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
5.4 Dynamically Consistent Models What I wish to argue now is that there are many more legitimate models of learning which, like probability kinematics, fall somewhere between 28 See Joyce (2004). 29 Compare the insightful remarks of Good on the irreducibility of probability judgments in

artificial intelligence, which he summarizes by saying that “if we knew how we made judgments, we would not call them judgments” (see 1983, Chapter 3 and Chapter 10).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
(5.3)

In words, the probability of A conditional on your anticipated future probability of A should coincide with the latter. To see what this means, it is instructive to observe that conditionalization satisfies (5.3). Let’s focus on the simplest case in which we have a finite observational partition P = {E1 , . . . , Em } (think of the elements of P as the outcomes of an experiment). The value of the conditional probability of a proposition A given the outcome of the experiment depends on which element of the partition is observed in the experiment; thus it can be thought of as a random variable. Let P[A|P] be that random variable – that is, the conditional probability of A given P: P[A|P] is equal to P[A|E] 30 See Goldstein (1983) and van Fraassen (1984, 1995).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007

116

Radical Probabilism

if E is the true member of the partition. The reflection principle (5.3) now takes on the following form: P[A|P[A|P] = r] = r. To see why this is true, suppose first that P[A|E1 ]  = P[A|E2 ] for any pair of members E1 , E2 of P. Then the proposition that P[A|P] = P[A|E] is equivalent to E, since P[A|P] takes on that value if and only if E is the case. Hence, P[A|P[A|P] = r] = P[A|E] = r. Now suppose that E1 , . . . , Em are members of P with P[A|Ei ] = r, 1 ≤ i ≤ m, and that the conditional probability takes on a different value on all other members of P. Then P[A|E1 ∪ · · · ∪ Em ] = r.31 In this case, P[A|P] = r is equivalent to E1 ∪· · ·∪Em . It again follows that P[A|P[A|P] = r] = P[A|E1 ∪· · ·∪Em ] = r. The upshot is that identifying the anticipated future degree of belief in A with A’s conditional probability given some observational partition yields a special case of the reflection principle.32 One of the important messages in Chapter 1 was that conditioning is rational because it is dynamically consistent in certain learning situations. That the reflection principle is a generalization of conditioning indicates that it also expresses dynamic consistency, but in a more general way. A careful discussion of this crucial point requires more space and will be given in the next chapter. In the meantime, let’s take for granted that the reflection principle indeed expresses dynamic consistency. This means that the information acquired by going through the black box is consistently included into the agent’s old system of beliefs. Something we noted for conditioning in Chapter 1, then, also applies here. The agent’s prior and posterior cannot both be best estimates if they are dynamically inconsistent. Thus, the reflection principle specifies when an agent’s probabilities are best judgments before and after the learning event. The reflection principle is very general; it applies whenever anticipated future degrees of beliefs are given by measurable random variables. As a result, instances of the reflection principle will govern any rational learning process – any learning process that assimilates new information consistently – regardless of its structure. We have observed this for conditioning. The same is true for probability kinematics, where the new probabilities of the elements of the observational partition are the result of a black box 31 See Lemma 1.2 of Zabell (1995). 32 On this, see also van Fraassen (1995). For a discussion of implicit assumptions see Weisberg

(2007).

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007

5.4 Dynamically Consistent Models

117

learning event. But the reflection principle is also applicable to the learning models we discussed in previous chapters. That conditioning satisfies the reflection principle (5.3) immediately implies that the principle holds for the Johnson–Carnap continuum of inductive logic, too. Let X1 , X2 , . . . be an observational process. The event in question, A, is that Xn+1 = i, where i is a state. The predictive probability after the nth trial is given by the random variable Pi (n) = P[Xn+1 = i|X1 , . . . , Xn ] =

ni + αi  . n + j αj

The reflection principle, then, reads as follows: P[Xn+1 = i|Pi (n) = r] = r. This clearly is true by what we have observed for conditioning. The equation says, in effect, that the predictive probabilities, Pi (n), are sufficient for determining conditional probabilities; they represent all of the relevant information about what has been observed so far. The reflection principle also pertains to the basic model of reinforcement learning. The events of interest here are the choices made by the agent, which we again denote by X1 , X2 , . . .. The agent chooses an alternative, i, in period n + 1, Xn+1 = i, in accordance with the following choice probabilities: Qi (n) Pi (n) =  . j Qj (n) More specifically, the basic model requires the conditional probability of the event Xn+1 = i to be equal to Pi (n): P[Xn+1 = i|Pi (n) = r] = r. This is again an instance of the reflection principle (5.3). In order to see what it says in the present context, let’s think of each choice probability, Pi (n), as a judgmental probability that represents an agent’s best estimate in period n for the event that she chooses alternative i in the next period. That is, having fully considered the effect of what she has learned so far, she assigns probability Pi (n) to the event that she will choose i. Now, if the choice probability Pi (n) is a best estimate, it should be reflected coherently in the agent’s prior conditional probability assignment. Otherwise, i’s choice probability and the probability for choosing i would diverge. This may indicate a number of things. The agent may not consistently include the new information generated by the process into her prior epistemic

Downloaded from https://www.cambridge.org/core. Teachers College Library - Columbia University, on 03 Nov 2017 at 08:02:39, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316335789.007
5.5 Martingales

We can push dynamic consistency further by zooming into the sequential process of acquiring information. Consider the Johnson–Carnap continuum of inductive methods. Given the information generated by the process up to and including period n, the conditional expected value of Pi (n + 1) (the new predictive probability of i after the n + 1st learning event) is given by
\[ P_i(n)\,\frac{n_i + 1 + \alpha_i}{n + 1 + \sum_j \alpha_j} + (1 - P_i(n))\,\frac{n_i + \alpha_i}{n + 1 + \sum_j \alpha_j}, \]
since Pi (n + 1) is equal to
\[ \frac{n_i + 1 + \alpha_i}{n + 1 + \sum_j \alpha_j} \]
if i is observed, and it is equal to
\[ \frac{n_i + \alpha_i}{n + 1 + \sum_j \alpha_j} \]
if an outcome other than i is observed. We know that
\[ P_i(n) = \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}. \]
Hence, the conditional expected value of Pi (n + 1) is equal to
\[ \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}\,\frac{n_i + 1 + \alpha_i}{n + 1 + \sum_j \alpha_j} + \frac{n + \sum_j \alpha_j - n_i - \alpha_i}{n + \sum_j \alpha_j}\,\frac{n_i + \alpha_i}{n + 1 + \sum_j \alpha_j}, \]
which reduces to
\[ \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}.^{33} \]
It follows that the conditional expected value of Pi (n + 1) given the history of the process is equal to Pi (n). Thus, a Carnapian agent does not expect to change her predictive probabilities from one period to the next. This demonstrates that the sequence of predictive probabilities Pi (1), Pi (2), etc. is an instance of a martingale.
33 Since
\[ \frac{(n_i + \alpha_i)(n_i + 1 + \alpha_i) + (n + \sum_j \alpha_j - n_i - \alpha_i)(n_i + \alpha_i)}{(n + \sum_j \alpha_j)(n + 1 + \sum_j \alpha_j)} = \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}. \]
The theory of martingales is one of the most important areas of modern probability theory. A martingale is a sequence of random variables X1 , X2 , . . . that can be thought of as a sequence of fair gambles, assuming that Xn represents the total fortune of a gambler at time n. The series of gambles is fair if the total fortune does not change on average: conditional on past gambles, the gambler does not

expect to win or lose in the next round. That is, the sequence X1 , X2 , . . . is a martingale if, for all n, E[Xn+1 |X1 = x1 , . . . , Xn = xn ] = xn .

(5.4)

We assume that each Xn can take on only a finite number of values, and that each event of the form Xi = xi has a positive probability; otherwise, this equation only holds with probability one. The significance of martingales derives from the martingale convergence theorem: martingales converge under fairly general conditions. Nearly every strong convergence result about random variables can be derived from the martingale convergence theorem.34 Importantly, though, the martingale condition (5.4) is compatible with the random variables X1 , X2 , . . . being probabilistically dependent, so that the martingale convergence theorem is considerably more general than the classical strong law of large numbers. I shall return to this topic in Chapter 8, where we will see that the martingale convergence theorem provides the basis for an analysis of when the prior beliefs of agents have no effect on their long-run beliefs. Brian Skyrms has shown that the martingale condition (5.4) is a generalization of the reflection principle.35 In the setting considered by Skyrms, an agent undergoes an infinite sequence of black box learning situations regarding a proposition A, which is represented by a sequence Y1 , Y2 , . . . of degrees of belief. Because of the black box nature of the process, the information at time n is given by the history Y1 , . . . , Yn of the first n learning experiences. Think of Yn as the best estimate of the truth of A given the information available at time n. In this situation, the martingale principle (5.4) is a requirement of dynamic consistency. That is to say, the martingale principle guarantees that the agent’s new probability for A after the nth learning event consistently extends what has been learned previously. Because the predictive probabilities of the Johnson–Carnap continuum form a martingale, each predictive probability Pi (n) is not just an estimate of observing state i in the next period, as pointed out in the preceding section. Rather, each Pi (n) is also an estimate of an idealized event, that does not depend on time. This idealized event can be understood in terms of the de Finetti representation for exchangeable sequences of states, according to which each state has an unknown probability of occurring that is independent of other states and the same for all times n. The probabilistic setup of 34 See Williams (1991). 35 Skyrms (1990, 1997). See also Zabell (2002).

the Johnson–Carnap continuum presumes that there is such an underlying idealized event, and that predictive probabilities are dynamically consistent estimates of its truth. Can we find martingales in other learning models? This will depend on whether, in a concrete application of the model, the agent's estimates are only about events in the immediate future, or if each estimate in a sequence also refers to an idealized time-independent event. For instance, in the basic model of reinforcement learning choice probabilities are about choosing alternatives in the next period. Suppose that, in addition, there is an idealized event of how good a certain alternative is because payoffs are determined in a time-homogeneous manner (e.g., nature chooses payoffs according to an independent multinomial distribution). In this case, the choice probabilities of an alternative ought to constitute a martingale. Whenever an agent's best estimates are a martingale, the martingale convergence theorem implies that they will converge almost surely. So the agent is sure that the learning process will be successful in the sense that it will tend to a maximally informed opinion.36 The predictive probabilities of the Johnson–Carnap continuum converge to the limiting relative frequencies of states; these maximally informed opinions can thus be viewed as the chances underlying the observational process. This fairly strong result doesn't apply to all learning processes, though. Returning to the example of reinforcement learning, there is absolutely nothing wrong with estimating the probability of choosing an alternative on the next trial without it also being the estimate of an underlying time-homogeneous idealized choice event. In this case, a sequence of choice probabilities will typically not be a martingale. But this does not preclude the agent from being dynamically consistent regarding next period's event, which is the more basic requirement for the learning models considered so far.
36 See Skyrms (1997).
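For a concrete check of the martingale property derived above, here is a small simulation sketch of the Johnson–Carnap predictive probabilities. The number of states, the α values, and the helper function names are illustrative assumptions; only the predictive rule itself comes from the text.

    import random

    def predictive_probs(counts, alphas):
        """Johnson-Carnap predictive probabilities: P_i(n) = (n_i + alpha_i) / (n + sum_j alpha_j)."""
        n = sum(counts)
        total = n + sum(alphas)
        return [(counts[i] + alphas[i]) / total for i in range(len(alphas))]

    # Illustrative setup: three states with hypothetical prior weights alpha.
    alphas = [1.0, 2.0, 0.5]
    counts = [0, 0, 0]

    random.seed(1)
    for _ in range(1000):
        probs = predictive_probs(counts, alphas)

        # Martingale check: the expected value of P_0(n+1), computed from the current
        # predictive probabilities, equals P_0(n), as in the derivation above.
        n = sum(counts)
        denom_next = n + 1 + sum(alphas)
        expected_next = (probs[0] * (counts[0] + 1 + alphas[0]) / denom_next
                         + (1 - probs[0]) * (counts[0] + alphas[0]) / denom_next)
        assert abs(expected_next - probs[0]) < 1e-12

        # Sample the next observation from the predictive probabilities and update.
        i = random.choices(range(3), weights=probs)[0]
        counts[i] += 1

    print("final predictive probabilities:", predictive_probs(counts, alphas))
    print("relative frequencies:", [c / sum(counts) for c in counts])

Over a long run the predictive probabilities settle near the realized relative frequencies, in line with the convergence to limiting relative frequencies noted above.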

5.6 Conditional Probability and Conditional Expectation

What we are going to see in the next chapter is that reflection and martingale principles are based on one of the cornerstones of modern probability theory, the mathematical theory of conditional expectations and conditional probabilities, which was introduced by Kolmogorov in his seminal

monograph Grundbegriffe der Wahrscheinlichkeitsrechnung.37 Conditional probabilities and conditional expectations can be treated more generally than in terms of the finite partitions we have assumed so far. Partitions may be denumerable, of course, but even this is often too restrictive: in general, an experiment may not lend itself to be described by a partition. Consider, for instance, the experiment of choosing a point at random from the unit interval. No countable partition can capture the outcomes of this experiment. In this case one commonly considers σ -algebras instead of partitions, that is, a class of propositions that is closed under complementation and countable unions. A countable partition can always be thought of as a σ -algebra, simply by considering all its countable unions and intersections. A real valued random variable X also gives rise to a σ -algebra. The Borel sets are the sets that we can create by taking countable unions and intersections of intervals of real numbers. The class of all Borel sets is a σ -algebra, the Borel σ -algebra. The random variable X is a function from Ω to the real numbers. We now form a system of subsets of Ω by lumping together, for each Borel set B, all elements of Ω that X maps into B. The resulting system of subsets of Ω is a mirror image of the Borel σ -algebra, and it is also a σ -algebra.38 If X takes on only countably many values, its σ -algebra derives from a countable partition, with elements being given by events of the form X = r. Conditional expectation and conditional probability can be generalized to σ -algebras. The conditional expectation of X given the σ -algebra F, E[X|F], is a random variable that takes on values depending on which elements of F obtain. The conditional expectation of X given the random variable Y, E[X|Y], is the expectation of X conditional on the σ -algebra generated by Y; informally, E[X|Y] is the expected value of X given that we know the value of Y. The conditional probability P[A|F] is a special case of conditional expectation. Recall the indicator IA of A (IA = 1 if A is true, and IA = 0 if A is false). The conditional probability P[A|F] is equal to E[IA |F]. Similarly, the conditional probability of A given Y, P[A|Y], is the probability assigned to A given the σ -algebra generated by Y. If the σ -algebra F comes from a countable partition, then P[A|F] = P[A|E] with probability one if E is the true element of the partition. The same relationship holds for conditional expectations. Thus, conditional

37 See Kolmogorov (1933). An introduction to conditional expectation and conditional probability is part of every good textbook on probability theory, e.g., Ash and Doléans-Dade (2000).
38 It is the smallest σ -algebra that makes X measurable.

expectation and conditional probability given a σ -algebra coincide with their standard counterparts in the relevant special cases. I’ve introduced the general concepts of conditional probability and conditional expectation because they are needed for models of learning, such as the basic model of reinforcement learning, in which variables can take on a continuum of values. The new concepts allow us to state the reflection and martingale principles more generally. Suppose the random variable X denotes your anticipated degree of belief for A after a learning experience. Then the reflection principle says that P[A|X] = X

with probability one.

(5.5)

This reflection principle is effectively the same as (5.3) whenever X takes on only a finite number of values. The principle (5.5) can be modified to cover learning situations in which an agent changes her opinions concerning a quantity Y. Let X now be the anticipated new estimate of the random variable Y. As an example, suppose that you have two apple trees in your garden, which appear to bear about the same number of apples. After having picked all apples from the first tree, you know the value of X. What you don’t know is the number of apples Y on the second tree. But given X, your conditional expectation of Y is plausibly equal to X. It is, in fact, a requirement of dynamic consistency that E[Y|X] = X

with probability one;

(5.6)

that is, the expectation of Y given knowledge of X is almost surely equal to X.39 By setting Y = IA we recover (5.5). The condition (5.6) is a rationality requirement for learning models which do not use probability judgments, unlike most of the models considered in this book. The general reflection principle (5.6) is closely related to the martingale condition, which for a sequence of random variables, X1 , X2 , . . ., says the following: E[Xn+1 |X1 , . . . , Xn ] = Xn

for all n with probability one.

(5.7)

Clearly, this also reduces to the more restricted principle (5.4) if random variables are finitely valued. All three conditions – (5.5), (5.6), and (5.7) – convey certain forms of dynamic consistency in the beliefs of an agent. I have tried to motivate this rather informally so far, saying that reflection or martingale conditions are 39 See Goldstein (1983).

necessary for assimilating new information rationally. In the next chapter, I will make these claims more precise.
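The following toy computation illustrates the generalized reflection condition (5.6) on a finite space; the apple counts and the particular joint distribution are made up for the example. The estimate X is constructed so that, given X, the conditional expectation of Y equals X, and averaging the conditional expectations then recovers E[Y].

    # Hypothetical joint distribution: X is the count on the first apple tree, Y the
    # (still unknown) count on the second. Given X = x, Y is x - 2 or x + 2 with equal
    # probability, so the best estimate of Y given X is X itself.
    joint = {}
    for x in (10, 20):
        for y in (x - 2, x + 2):
            joint[(x, y)] = 0.5 * 0.5   # P[X = x] = 0.5 and P[Y = y | X = x] = 0.5

    def cond_exp_Y_given_X(x):
        """E[Y | X = x], computed directly from the joint distribution."""
        p_x = sum(p for (xx, _), p in joint.items() if xx == x)
        return sum(y * p for (xx, y), p in joint.items() if xx == x) / p_x

    # Reflection condition (5.6): E[Y | X] = X with probability one.
    for x in (10, 20):
        assert abs(cond_exp_Y_given_X(x) - x) < 1e-12

    # Averaging the conditional expectations over X recovers E[Y] (tower property).
    e_y = sum(y * p for (_, y), p in joint.items())
    e_tower = sum(cond_exp_Y_given_X(x) * 0.5 for x in (10, 20))
    print("E[Y] =", e_y, " E[E[Y|X]] =", e_tower)
    assert abs(e_y - e_tower) < 1e-12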

5.7 Predicting Choices

Before moving on, I'd like to briefly address an issue that I have avoided up until this point. Radical probabilism allows an agent to assign probabilities to her own choices. Wolfgang Spohn and Isaac Levi have called the coherence of such probabilities into question.40 The basic difficulty is that assigning probabilities to one's own choices appears to be incompatible with a betting approach to probabilities. For suppose you are offered fair odds on the proposition that you choose a certain act. This will typically have an influence on the probability with which you choose the act. There does not seem to be a stable disposition to bet in this case. Hence, if we identify degrees of belief with dispositions to bet, there are no degrees of belief for one's own choices. Choice probabilities are essential for many learning models of bounded rationality and for a number of applications of radical probabilism. So Spohn's and Levi's objections threaten to blow up much of the approach I have developed in this book. But things are not as bad as they might seem at first sight. Wlodek Rabinowicz has raised a number of convincing points against the arguments of Spohn and Levi.41 In particular, Rabinowicz has noted that even though degrees of belief might not be measurable by betting odds, it does not follow that they don't exist, unless we endorse a very restrictive pragmatic approach to subjective probabilities. The catholic view I recommended in Chapter 1 certainly has room for nonpragmatic approaches to probabilities. For these approaches, assigning probabilities to one's own choices is largely unproblematic. There is no reason why from the point of view of epistemic accuracy, say, choice probabilities would cause any complication. This is not to say that there is no problem. Applications of radical probabilist models of learning require that choice probabilities and similar mathematical objects do represent aspects of actual epistemic agents. This raises the question of how these quantities should be measured. Spohn's and Levi's arguments show that an approach through dispositions to act might not always work. As an alternative, choice probabilities may be
40 See Spohn (1977, 1978) and Levi (1997).
41 Rabinowicz (2002).

estimated by relative frequencies (as is common in applications of Luce’s choice theory). I will leave the large and complex discussion of measurability issues at that, because (lucky me) it goes beyond the main topic of this book – rational probabilistic learning – which presupposes that we are already given a model of the agent’s learning process.


6

Reflection

I have beliefs. I may not be able to describe "the precise reasons" why I hold these beliefs. Even so, the rules implied by coherence provide logical guidance for my expression of these beliefs. My beliefs will change. I may not now be able to describe the precise reasons how or why these changes will occur. Even so, just as with my beliefs, the rules implied by coherence provide logical guidance for my expression of belief as to how these beliefs will change.
Michael Goldstein, The Prevision of a Prevision

In the preceding chapter, we saw how central reflection and martingale principles are to a generalized theory of learning. The purpose of this chapter is to examine those principles in more detail. Reflection requires one’s future beliefs to cohere with one’s prior beliefs. There are many examples of cases where this would not seem advisable. What these examples have in common is that factors other than learning new information have an influence on how beliefs change. If we restrict belief revisions to learning events, the reflection principle can be derived as a requirement of rational belief change in three different ways. The first one, based on dynamic Dutch book arguments, is well known. The other two approaches are new. According to one, reflection principles can be derived within an accuracy framework, in which degrees of belief are regarded as best estimates of truth values. The third approach starts with the value of knowledge theorem of decision theory. The value of knowledge theorem says that you never expect to make worse decisions after having conditioned your prior probabilities on some new information: typically, new information is valuable. The same should be true for learning in general, not just for Bayesian conditioning. Reflection principles are an immediate consequence of requiring that the value of knowledge property holds for generalized learning.

6.1 Probabilities of Future Probabilities


Recall the black box learning situation introduced in the previous chapter. You are about to change your degree of belief for some proposition A

in response to an unspecified learning experience. You cannot now know what the new value of your belief will be; but if you contemplate what might happen, your anticipated future degree of belief for A is given by a random variable, X, concerning which you have degrees of belief, meaning that any proposition expressible in terms of X, such as X = r or r < X < s, has a definite probability. For the sake of mathematical simplicity, we assume that you only consider a finite number of propositions like A, and that the anticipated future degrees of belief for those propositions take on only finitely many values. These assumptions allow us to set aside certain thorny issues which revolve around the question of whether subjective probabilities should be countably additive. I will say a bit more about those issues at the end of this chapter, after the groundwork has been laid for the less controversial finite case. How should you think about your anticipated future degrees of belief in the black box learning situation? The answer I proposed in the preceding chapter is that anticipated beliefs ought to be governed by a reflection principle. There are a number of related principles that go by this name. One was introduced in the previous chapter. It says that, whenever the event X = r has positive probability, then P[A|X = r] = r.

(6.1)

Other reflection principles are immediate consequences of (6.1). The first one follows from (6.1) and the theorem of total probability:1
\[ P[A] = \sum_r P[A|X = r]\,P[X = r] = \sum_r r\,P[X = r]. \tag{6.2} \]

The rightmost term is just the expected value of X. Hence, this formulation of the reflection principle says that your present degree of belief for A is equal to the expectation of your future degree of belief for A. A consequence of (6.2) is that P[A] is a weighted average of the probabilities P[X = r]. In other words, P[A] is a point in the simplex spanned by the probabilities for one’s possible future degrees of belief. Suppose, for example, there are only three possible values, r1 , r2 , r3 , for one’s future degrees of belief. Then we have p1 = P[X = r1 ], p2 = P[X = r2 ], p3 = P[X = r3 ] and p1 + p2 + p3 = 1. Now think of an equilateral triangle which is spanned by the three points (p1 , 0, 0), (0, p2 , 0), (0, 0, p3 ). This triangle is a simplex. It follows from the second reflection principle (6.2) that P[A] is inside that triangle. This is what the third reflection principle asserts in general. It sometimes also takes on a weaker form, requiring that your current 1 The converse is also true; see van Fraassen (1995).

beliefs lie between the values spanned by your future beliefs. In other words, P[A] must lie between the maximum and the minimum of the values of the anticipated future degrees of belief, P[X = r]. All three kinds of reflection principles relate your current beliefs to your anticipated future beliefs. The constraints they impose are quite mild, however. As an illustration, consider the second reflection principle (6.2). Whatever the exact value of your current belief, P[A], might be, your probabilities for X = r can generally take on a wide range of different values as long as they are in balance. Despite the fact that they are in general rather unrestrictive, there is no shortage of examples featuring violations of reflection principles.2 One well-known example involves extreme probabilities: you are certain that X has a specific value, P[X = r] = 1 for some r. In this case all three reflection principles imply that P[A] = r. Thus, your present belief needs to be equal to your anticipated future belief. It is easy to see that this can lead into problems. Consider the story of Ulysses and the Sirens.3 Ulysses’ ship is drawing near the island of the Sirens. He knows that once it is sufficiently close, his beliefs and the beliefs of his crew are going to change in an irrational manner. For instance, let A be the proposition that his ship is in no danger when steered toward the cliffs. Ulysses now believes that A is false, P[A] = 0. However, his anticipated future degree of belief when being under the influence of the Siren’s song, X, is equal to some r > 0 with certainty, P[X = r] = 1. By reflection, P[A] = r, which contradicts Ulysses’ present belief. Violations of reflection are not restricted to situations which involve extreme probabilities. Another plausible scenario is one in which you anticipate to learn something but forget part of it before you update your beliefs. This may result in a mismatch between new and old probabilities. Similarly, if you plan to go to a bar and expect to have a few drinks too many, your anticipated degrees of belief at two o’clock in the morning will perhaps not cohere with your prior probabilities. Many other examples violate reflection principles. However, the question is whether they are bona fide counterexamples to reflection. A counterexample requires more than just coming up with situations for which reflection principles are implausible; it also has to be demonstrated that reflection principles are supposed to apply to these situations in the first place. If we 2 See Levi (1987), Bacchus et al. (1990), Christensen (1991), Talbott (1991), Maher (1992),

Bovens (1995), Arntzenius (2003), and Briggs (2009). 3 See Skyrms (1990) and van Fraassen (1995).

think of reflection principles as principles of epistemic rationality that regulate rational learning, then the examples brought up against it are not, in fact, counterexamples. Each example involves elements that have nothing to do with learning, such as being manipulated by the Siren’s song, coming under the influence of a thought-altering substance, or forgetting. Reflection principles, being principles of rational learning, don’t need to apply to these situations.4

6.2 Dynamic Consistency

This point – that reflection is a principle that guides rational learning but not arbitrary types of belief change – follows naturally from dynamic coherence arguments. The dynamic Dutch book argument for the reflection principle (6.1) is due to Bas van Fraassen and has the same basic structure as David Lewis's argument for conditioning. Michael Goldstein has presented a more general argument for conditional expectations.5 In both van Fraassen's and Goldstein's settings there is a black box learning situation where your future beliefs are determined in response to some unspecified information. A dynamic coherence argument then evaluates policies for updating your current belief, P[A], to your future belief, X, for A. These quantities specify your present fair betting odds and the fair betting odds you are prepared to adopt in the future. It is not difficult to see that an inconsistency results if, given that X = r, you change your fair conditional betting odds to a value other than r: P[A|X = r] ≠ r. This inconsistency drives the dynamic Dutch book argument for reflection. The conclusion of the Dutch book argument is that for degrees of belief to be dynamically consistent they need to satisfy reflection. In order to interpret this result correctly, recall our discussion of dynamic consistency in Chapter 1. The crucial distinction that we have drawn there is between ex ante and ex post evaluations. Before a learning event, rules for updating can only be evaluated as rational if they are consistent with one's prior degrees of belief. If the learning situation is given by learning the truth about an observational partition, we have seen that conditioning
4 Moss (2015), in criticizing Titelbaum's (2012) view, takes this to be a disadvantage because it

restricts principles of belief change to pure learning situations. There is a restriction, but in contrast to Moss I don’t view it as a problematic one. It arises from restricting attention to epistemic rationality, where we abstract away from many features of real life learning situations such as forgetting. 5 See van Fraassen (1984) and Goldstein (1983).

is the only dynamically consistent learning rule. The learning situation is a very different one now: there is no observational partition, and so learning proceeds on less certain grounds. In all other respects, though, our earlier discussion of dynamic consistency can be transferred almost word-for-word. If prior to going through the black box you consider the situation to be an authentic learning event, X represents your anticipated future degree of belief after having assimilated the new information into your prior beliefs. A violation of reflection is then indeed irrational. The ensuing Dutch book indicates that you are planning to incorporate the new evidence inconsistently into your old system of beliefs. You thus cannot be epistemically rational in the very basic sense that your degrees of belief are best judgments both before and after the learning experience.6 Let’s take another look at the examples of the preceding section in the light of this reading of dynamic Dutch book arguments. Suppose that Ulysses is offered a series of bets prior to being near the Siren’s island. Because his beliefs violate reflection, betting in accordance with his present and anticipated degrees of belief as to the proposition A would lead to a sure loss. Is it irrational, then, for him to hold those beliefs? Of course it’s not. There is nothing wrong with them because Ulysses does not expect to have a learning experience where he updates his beliefs on new information. In fact, he anticipates undergoing a belief change that has nothing to do with learning at all. Thus, he would not commit ex ante to the future belief X since he thinks that X comes about in an epistemically illegitimate way. Analogous remarks apply to other violations of reflection. If you are about to have a few drinks too many, you don’t expect to have a black box learning experience (you might go through a different kind of black box, though). The same is true for belief change by forgetting. While in this case you might learn something, you also forget something, which together does not constitute a genuine learning event. So the epistemic state you end up in is not one where you expect to make best judgments.7 In all these examples, the agent’s new degree of belief does not arise in a way that is fully epistemically legitimate. Thus, the agent has ex ante 6 Jeffrey (1988), Skyrms (1990), and van Fraassen (1995) have expressed similar views. For a

more detailed discussion see Huttegger (2013). Mahtani (2012) also discusses dynamic Dutch book arguments for conditionalization and reflection, but not from the perspective of epistemic rationality that I use here. 7 In an interesting paper, Briggs (2009) thinks of these situations as involving “self doubt.” Self doubts may arise because of forgetting, being manipulated, or being intoxicated. It seems to me that saying that the situations are not genuine learning events is tantamount to saying that they involve self doubt.

every reason not to endorse the new degree of belief: her anticipated future degrees of belief do not hold up to her ex ante standards. This shows that, in fact, no genuine dynamic Dutch book is possible. The dynamic Dutch book arguments in the above examples presume that the agent is willing to accept bets that are fair according to her prior beliefs and her anticipated future beliefs. But since the latter are epistemically defective, why should she accept the associated bets as fair? Unless she is forced to accept bets at prices she considers unfair ex ante, the agent will not enter into all of the bets, nullifying the Dutch book. To put the same point in a different way, the sure loss results from the agent foreseeing an irrational belief change. So the analysis yielded by the betting approach correctly identifies an underlying irrationality. If the agent’s future beliefs are not under the control of a policy she can implement, she can at least guard herself against acting on her irrational future beliefs. That is what Ulysses did, and that is what we are doing if we leave our car at home before going to a bar. Dynamic Dutch book arguments, then, strongly suggest that reflection principles are requirements of epistemic rationality. Does this mean that it is irrational to drink alcohol, or to forget? When raising such questions we need to be careful not to mistake epistemic rationality for all-thingsconsidered rationality. All-things-considered, it may be rational to have a drink or to forget something, for instance if you enjoy drinking, or if you think that remembering a certain piece of information is not worth the cost. But this is entirely compatible with insisting that the new opinions associated with drinking or forgetting are not epistemically rational. Epistemic rationality is more restrictive than all-things-considered rationality. It only claims that an agent’s opinions are best judgments given all the information available to her. It might be worth trading off some epistemic value for something else.

6.3 Expected Accuracy

Dynamic Dutch book arguments are not everyone's cup of tea. Fortunately, the conclusion that reflection is a necessary condition for epistemic rationality can also be arrived at from other starting points. In this and the next couple of sections we look at an approach based on one of the alternatives to the Dutch book approach: epistemic accuracy. We saw in Chapter 1 that epistemic accuracy takes degrees of belief as best estimates of truth values. The truth value of a proposition A can be

identified with its indicator IA . We may also consider quantities other than truth values; think of the height of a person or the numerical outcome of an experiment. Such a quantity can be represented by a random variable, X. Bruno de Finetti has called estimates of random variables previsions, and has built his mature theory of coherence on that notion.8 Coherence arguments show that previsions of random variables are additive. It follows that the coherent previsions of a random variable X can be viewed as its expected value E[X]. Coherent degrees of belief are a special case: they are coherent previsions of indicator variables. Since P[A] = E[IA ], the coherent degree of belief of A is the same as its probability. De Finetti has also considered an alternative to coherence based on measuring the accuracy of estimates. Suppose that the true value of a quantity X is unknown, and that your best estimate of X is Y. The squared distance (X − Y)2 of X and Y, known as the Brier score or the quadratic loss function, is one way to quantify how far away your estimate is from the truth. If the difference between your estimate Y and the true value of X turns out to be large, then you incur a correspondingly large quadratic loss. The quadratic loss function can be used to derive the basic principles of probabilism. One was mentioned in Chapter 1: epistemic accuracy arguments show that degrees of belief should be probabilities if accuracy is measured by the quadratic loss function. De Finetti has proved along the same lines that previsions of random variables should be additive.9 As shown by Joyce, the quadratic loss function can be replaced with other functions that measure closeness to indicators.10 The epistemic significance of these results derives from the basic evaluative scheme that degrees of belief and estimates of random variables are rational if they are as accurate – that is, as close to the truth – as possible.11 If we can show that by violating a probabilistic principle one is guaranteed to be less accurate than by following it, we have a good reason to think that the principle in question is a requirement of epistemic rationality. The epistemic accuracy approach has also been applied to conditionalization in papers by Hilary Greaves and David Wallace, Hannes Leitgeb and Richard Pettigrew, and Kenny Easwaran.12 What these authors have

8 De Finetti (1974). 9 De Finetti (1974). See also Savage (1971). 10 Joyce (1998, 2009). 11 See Pettigrew (2016). 12 Greaves and Wallace (2006), Leitgeb and Pettigrew (2010b), and Easwaran (2013). Easwaran’s

development, in contrast to the other two papers, does not rely on the quadratic loss function. For further developments, see Schoenfield (forthcoming).

shown, in various degrees of generality, is that conditional probabilities are an agent’s best estimate of a proposition’s truth value after learning that a particular proposition is true. Thus, the only rational future degrees of belief are given by conditional probabilities. I won’t consider the arguments of Leitgeb and Pettigrew, Greaves and Wallace, and Easwaran in detail here, since their conclusions follow from the more general results I am going to present in the next section. What I wish to mention, though, is that their framework is not exactly the same as the one used by de Finetti or Joyce. It relies on measuring what Leitgeb and Pettigrew call “expected inaccuracy.” In terms of the quadratic loss function, the expected inaccuracy of taking Y as your estimate of X is given by E[(X − Y)2 ].

(6.3)

The expectation is taken with respect to your prior probability measure. Instead of evaluating (X − Y)2 at each state of the world separately, as de Finetti and Joyce do, the squared differences are weighed by your probabilities for states of the world and added (in the discrete case). The resulting quantity E[(X−Y)2 ] is your expectation of the overall difference between X and Y. Using expected inaccuracies is reasonable if we assume that the agent is statically coherent (i.e., if her degrees of belief are given by a probability measure).
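A quick numerical sketch of expected inaccuracy with the quadratic loss function (the value of P[A] and the candidate estimates below are illustrative): since E[(IA − c)2 ] = P[A](1 − P[A]) + (c − P[A])2 , the constant estimate with the smallest expected inaccuracy is c = P[A].

    # Expected inaccuracy E[(I_A - c)^2] of a constant estimate c for the indicator
    # of A, under a hypothetical prior with P[A] = 0.3.
    p_a = 0.3

    def expected_inaccuracy(c):
        # Two cases: A true (indicator 1) with probability p_a, A false (indicator 0) otherwise.
        return p_a * (1 - c) ** 2 + (1 - p_a) * (0 - c) ** 2

    candidates = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0]
    for c in candidates:
        print(f"c = {c:.1f}  expected inaccuracy = {expected_inaccuracy(c):.3f}")

    # The minimum over the candidates is attained at c = P[A] = 0.3, in line with
    # the identity E[(I_A - c)^2] = P[A](1 - P[A]) + (c - P[A])^2.
    best = min(candidates, key=expected_inaccuracy)
    assert best == 0.3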

6.4 Best Estimates

Working with expected inaccuracy allows us to determine the best estimate of a random variable, which we identify with an estimate that is least inaccurate in expectation.13 Suppose C is the class of random variables that are presently available to you as estimates of a particular random variable, X. The best estimate of X in C is the random variable that minimizes the expected inaccuracy E[(X − Y)2 ] over all random variables Y in C. The best estimate clearly depends on which estimates are in C. If X is itself an element of C, then it clearly is its own best estimate (E[(X − X)2 ] = 0). But in most situations of interest, X won't be available as an estimate. Quadratic loss functions give rise to a rich geometrical structure within which it is quite straightforward to derive the following two reflection principles, which were mentioned at the end of the last chapter: P[A|Z] = Z

and E[X|Y] = Y

with probability one.

(6.4)

13 This and the next section are based on Huttegger (2013).

(The arguments for (6.4) go through for general probability spaces as long as we require probability measures to be countably additive.) In (6.4), Z is taken to be the best estimate of the truth of A, and Y the best estimate of the true value of the random variable X. The equation E[X|Y] = Y says that your conditional expectation of X given your best estimate Y of X has to be equal to Y. The principle P[A|Z] is a special case, since P[A|Z] = E[IA |Z] (Z is your best estimate of the indicator IA ). Hence, in order to establish that reflection principles can be derived within an accuracy framework, it is enough to show that E[X|Y] = Y with probability one if Y is a best estimate of X (i.e., minimizes inaccuracy). The geometry induced by quadratic loss functions is essentially the same as the geometry of Euclidean space. In three-dimensional Euclidean space we have points, called “vectors,” the standard Euclidean distance function between vectors, and a dot product that can be used to define the angle between vectors. In the vector space associated with quadratic loss functions, vectors are random variables having two distinct properties. First, they are measurable with respect to a given Boolean σ -algebra F. Second, they are square integrable (if X is a vector, then E[X 2 ] < ∞). Square integrability is appropriate in the present context, for otherwise the expected inaccuracy (6.3) could be infinite or undefined. The set of all F-measurable, square integrable random variables is a vector space, which we denote by L2 .14 The indicator IA is clearly F-measurable whenever A is an element of F. Note also that IA is square integrable, since E[IA2 ] = E[IA ] = P[A] ≤ 1. Thus, all indicator functions IA of propositions in F are vectors in L2 . The expectation of the product E[XY] of two random variables X, Y in L2 defines an inner product, which generalizes the dot product of standard Euclidean spaces. This gives rise to a notion of angle between random variables. The random variables X and Y are orthogonal if E[XY] = 0 (X and Y are uncorrelated). More generally, just as the dot product of two vectors in three-dimensional Euclidean space is the product of their norms times the cosine of the angle between them, one can think of the inner product of two random variables as the product of their norms times the cosine of the angle between them, the cosine being the correlation between the random variables. Finally, the norm associated with the inner product gives rise to a distance between vectors in L2 . The norm of X, X , is given by the square 14 If X and Y are F-measurable and square integrable, then so is λX + μY for all real numbers

λ, μ. Hence, the set of all square integrable, F-measurable random variables is a vector space over the real numbers.

root of E[X 2 ]. The norm defines a distance between X and Y if we let it be equal to X − Y , i.e., the square root of E[(X − Y)2 ]. Up to a monotonic function (the square root), the distance in L2 is the expectation of the quadratic loss function. Thus, the vector space L2 comes equipped with an inner product and a distance function. It is also complete (there are no “holes” in L2 ).15 So it is safe to think of the random variables in L2 in the same way as we think of points in ordinary Euclidean space.16 The vector space L2 is relevant for us because it allows us to say precisely when a random variable Y is a best estimate of another random variable X. Since distance in L2 is given by E[(X − Y)2 ] up to a monotonic transformation, the random variable closest to X in terms of the quadratic loss function coincides with the random variable closest to X in the geometry of L2 . To see how we can put this to work, consider vector subspaces of L2 . A vector subspace is a subset of random variables in L2 that is itself a vector space. Suppose that the vector subspace K represents the class of your available estimates of X. Your best estimate is the random variable in K closest to X. In L2 this is the random variable in K that minimizes the distance X − Y . Now, it is generally true that for any X in L2 there is a Y in K that minimizes X − Y , provided that K is complete (has no holes). The minimizer Y is almost surely unique; any Y  that agrees with Y except on a set of probability zero also minimizes X − Y , but no other vector in K does. One can think of Y as the orthogonal projection of X on K: we obtain Y by taking X and projecting it onto K (think of drawing a line from X to K that is perpendicular to K).17 Since Y minimizes quadratic loss, it is the best estimate of X. This type of argument can be used to establish conditioning as a principle of rational learning. Let G be a σ -algebra, and let X be a random variable. Then the conditional expectation E[X|G] is the orthogonal projection of X on the vector subspace of all G-measurable random variables.18 Thus, X − E[X|G] minimizes distance between X and the set of all Gmeasurable random variables. In other words, a G-measurable random 15 If we have a sequence of square integrable random variables such that the distance between all

but finitely many of them becomes arbitrarily small, the sequence converges to an element of L2 . More technically, with respect to the norm · , all Cauchy sequences of elements of L2 converge to an element in L2 . The limit is not unique, but almost surely unique. If Y is a limit of the sequence, then any Y  with Y − Y  = 0 is also a limit, which means that Y and Y  are the same except on a set of probability zero. 16 The deeper reason for this is that L2 can be viewed as a Hilbert space. Strictly speaking, L2 is a Hilbert space up to random variables that agree almost surely (Williams, 1991, p. 65). 17 Mathematically, X − Y is orthogonal to all random variables Z in K (Williams, 1991, p. 67). 18 See Williams (1991, p. 85).

variable that does not essentially coincide with E[X|G] is a suboptimal estimate of X. If G represents the information available to you after having made an observation, then the class of G-measurable random variables is available to you in the sense that any such random variable could in principle be your updated estimate of X after being informed which elements in G are true.19 To put it differently, any such random variable can be implemented as a policy for updating beliefs in the learning situation given by G. Hence, the conditional expectation E[X|G] is your best estimate of X after the observational event represented by G. Bayesian conditioning is a special case of this result. Take X = IA for some proposition A, and suppose that the observational σ -algebra G is generated by a finite observational partition P = {E1 , . . . En }. Then P[A|P] = E[IA |G] with probability one. If E ∈ P has positive probability, we know that P[A|P] = P[A|E] with probability one. Hence, the above result implies that P[A|E] is the best estimate of A after being informed that E is the true member of the partition. As a result, if you wish to update to your most accurate estimate of the truth, you should plan to use conditional probabilities. The same type of argument also establishes the two reflection principles given in (6.4). Suppose Y is your anticipated future estimate of a quantity X after a black box learning event. The set of all Y-measurable random variables is the set of all random variables measurable relative to the σ -algebra generated by Y. This is a complete vector subspace of L2 . Let’s assume that this vector subspace contains all and only those estimates of X that are available to you after the learning experience. This is a very plausible assumption. The propositions in the σ -field generated by Y capture the conceptual resources of Y (that is, all propositions that can be expressed with Y). They represent the information provided by the learning event if Y is a best estimate. All Y-measurable random variables share the same conceptual resources. So they all are in principle available as estimates after the learning event whenever Y is available. Because the set of all Y-measurable random variables is a complete vector subspace of L2 , the orthogonal projection of X on that subspace is given by the conditional expectation E[X|Y] (this means that X − E[X|Y] minimizes the distance between X and the space of Y-measurable random variables). It follows that the conditional expectation E[X|Y] is the best estimate of X after the black box learning event. Thus, if your anticipated future estimate Y of X is as accurate as possible, it needs to be equal to the conditional expectation E[X|Y]: 19 See Greaves and Wallace (2006) and Easwaran (2013) for discussions of availability.

E[X|Y] = Y. The standard reflection principle for probabilities follows by specializing to indicator variables. Another application of this type of argument leads to the martingale principle of Chapter 5. Suppose that X1 , X2 , . . . is a sequence of estimates of the indicator IA . This sequence is a martingale if E[Xn+1 |X1 , . . . , Xn ] = Xn . We assume that the estimates improve over time: P[A|X1 , . . . , Xn ] = Xn

(6.5)

for all n. This is the case if the most recent estimate, Xn , is the most informed one and thus overrides all previous estimates. From the arguments above it follows that the conditional expectation E[Xn+1 |X1 , . . . , Xn ] is the best estimate of Xn+1 given the information generated by the process so far, X1 , . . . , Xn . By using our assumption (6.5) twice, we have E[Xn+1 |X1 , . . . , Xn ] = E[P[A|X1 , . . . , Xn+1 ]|X1 , . . . Xn ] = P[A|X1 , . . . , Xn ] = Xn , where the second equality follows from the useful “tower property” of conditional expectations.20 Hence, the martingale condition is a consequence of the assumption that each estimate is better at gauging the truth of A than its predecessors, and the fact that conditional expectations are best estimates.
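Here is a minimal sketch of the projection idea on a four-point space; the probabilities, the values of X, the two-cell partition playing the role of G, and the grid of candidate estimates are all illustrative assumptions. Among G-measurable estimates, none has smaller expected quadratic loss than the conditional expectation E[X|G].

    import itertools

    # A four-point probability space; after the observation we only learn which of the
    # two blocks {0, 1} or {2, 3} we are in. All numbers are hypothetical.
    probs = [0.1, 0.2, 0.3, 0.4]
    x_vals = [1.0, 3.0, 2.0, 6.0]
    blocks = [(0, 1), (2, 3)]

    def expected_loss(estimate_per_block):
        """Expected quadratic loss of a G-measurable estimate (one value per block)."""
        total = 0.0
        for value, block in zip(estimate_per_block, blocks):
            for omega in block:
                total += probs[omega] * (x_vals[omega] - value) ** 2
        return total

    # Conditional expectation E[X|G]: the probability-weighted average of X on each block.
    cond_exp = []
    for block in blocks:
        p_block = sum(probs[omega] for omega in block)
        cond_exp.append(sum(probs[omega] * x_vals[omega] for omega in block) / p_block)

    # Grid search over G-measurable candidates: none beats E[X|G].
    grid = [i * 0.1 for i in range(0, 81)]
    best = min(itertools.product(grid, grid), key=expected_loss)
    print("E[X|G] =", cond_exp, " best grid estimate =", best)
    assert expected_loss(cond_exp) <= expected_loss(best) + 1e-12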

6.5 General Distance Measures

The geometry of L2 is tied to the quadratic loss function. While some would argue that quadratic loss functions are the correct measure of epistemic accuracy, their arguments are not so obviously watertight that everyone would agree.21 It is thus important to extend the arguments of the preceding section to a more general class of epistemic accuracy functions.
20 The tower property implies the second equality in the following expression: E[P[A|X1 , . . . , Xn+1 ]|X1 , . . . Xn ] = E[E[IA |X1 , . . . , Xn+1 ]|X1 , . . . Xn ] = E[IA |X1 , . . . Xn ]. This says, informally, that the smaller σ -algebra always wins when conditioned upon.
21 Selten (1998), Leitgeb and Pettigrew (2010a), and Joyce (2009) discuss the Brier score extensively.

The loss functions I wish to consider here are known as Bregman distance functions.22 A Bregman distance function assigns real numbers to pairs of random variables in the following way. Suppose that φ : D → R is a strictly convex differentiable function, where D is an interval in R. Then the Bregman distance function Bφ : D × D → R is defined by Bφ (x, y) = φ(x) − φ(y) − (x − y)φ′(y) (φ′ denotes the derivative of φ). If we take φ(x) = x2 , we get the quadratic loss function (x − y)2 . What is important about Bregman distance functions is their connection to conditional expectation. Suppose that F is a Bregman distance function. Then it can be shown that the conditional expectation E[X|Y] has the smallest expected F-distance to X among all Y-measurable functions. Thus, by following the same line of argument as in the previous section, Y is your best estimate of X among all Y-measurable functions if and only if Y = E[X|Y] with probability one. As a consequence, the arguments for conditioning and for the martingale principle also go through whenever we use Bregman distance functions. Bregman distance functions have a nice epistemological motivation. Let F be a distance function between random variables which is not necessarily of Bregman type. Suppose that for all measurable random variables X and for all constant random variables Z such that E[X] ≠ Z, E[F(X, E[X])] < E[F(X, Z)].

(6.6)

This says that according to the distance F, the expectation of X is on average the best estimate of X. If X = IA , then for any measurable proposition A and for any constant random variable Z such that P[A] ≠ Z, E[F(IA , P[A])] < E[F(IA , Z)].

(6.7)

Thus (6.7) says that in terms of F your degree of belief P[A] is on average the best constant estimate of the indicator IA . Under some rather mild technical assumptions, it follows from (6.6) that F is a Bregman distance function.23 Conditions such as (6.6) and (6.7) are known as propriety conditions and have been much discussed in the literature.24 Degrees of belief that obey (6.7) are also sometimes called immodest. 22 Banerjee et al. (2005). 23 If (6.6) holds for a continuous nonnegative function F with F(x, x) = 0, and if F is

continuously differentiable in its first argument, then F is the Bregman distance function associated with some strictly convex function φ. 24 See Greaves and Wallace (2006), Joyce (2009), and Easwaran (2013).

This is a little misleading because it suggests there is something wrong with them. But from the point of view of epistemic rationality, the kind of modesty considered here is false modesty: degrees of beliefs that are modest are self-undermining, for they recommend degrees of belief other than themselves as being superior. A function F that violates (6.7) is inconsistent with regarding one’s opinions as the best possible given one’s present information. The same holds for random variables in (6.6). Self-undermining opinions are exactly as bad as they sound. If one regards other opinions as superior, then epistemic rationality requires one to reconsider matters and adopt another set of opinions. Let me sum up our results on epistemic accuracy. An approach based on expected inaccuracies supports all the principles of rational change of opinions I am advocating in this book, from conditioning, to reflection principles, to the martingale condition. And it does so for a very broad class of measures of epistemic accuracy.
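To see the propriety condition (6.6) numerically, here is a small sketch; the two generating functions φ, the distribution of X, and the rival constant estimates are illustrative assumptions. For each Bregman distance, the expectation E[X] beats every other constant estimate on average.

    import math

    def bregman(phi, dphi):
        """Bregman distance B_phi(x, y) = phi(x) - phi(y) - (x - y) * phi'(y)."""
        return lambda x, y: phi(x) - phi(y) - (x - y) * dphi(y)

    # Two convex generators: phi(x) = x^2 gives the quadratic loss function, and
    # phi(x) = x log x gives a Kullback-Leibler-style distance on positive values.
    quadratic = bregman(lambda x: x * x, lambda x: 2 * x)
    kl_like = bregman(lambda x: x * math.log(x), lambda x: math.log(x) + 1)

    # Hypothetical distribution of X over a few positive values.
    values = [0.5, 1.0, 2.0, 4.0]
    probs = [0.1, 0.4, 0.3, 0.2]
    mean = sum(v * p for v, p in zip(values, probs))

    def expected_distance(dist, z):
        return sum(p * dist(v, z) for v, p in zip(values, probs))

    for name, dist in [("quadratic", quadratic), ("x log x", kl_like)]:
        at_mean = expected_distance(dist, mean)
        for z in [0.8, 1.5, 3.0]:
            # Propriety (6.6): the mean is the best constant estimate of X.
            assert at_mean < expected_distance(dist, z)
        print(name, "expected distance at E[X] =", round(at_mean, 4))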

6.6 The Value of Knowledge

Our third way of justifying reflection is based on the value of knowledge theorem.25 This result says that the expected utility of an uninformed decision cannot be greater than the prior expectation of an informed decision. In certain idealized decision situations, more information should not lead you to expect to make worse decisions. The proof of the value of knowledge theorem for Savage's decision theory is due to I. J. Good.26 Good's value of knowledge theorem can be looked at in terms of a second-order decision problem. The first-order decision problem consists of n acts A1 , . . . , An , m states of the world S1 , . . . , Sm , and a utility function u for conjunctions of acts and states. The second-order problem is that you can either make a choice now, or defer until after the outcome of an experiment is revealed to you. The experiment is represented by a finite partition P = {E1 , . . . , Ek } of observational propositions. Should you decide now or wait until you get more information? Suppose that you follow Savage's decision theory and choose the act A that maximizes the expected utility
\[ E[u(A)] = \sum_i P(S_i)\,u(A \& S_i), \]
25 This and the next section are based on Huttegger (2014).
26 Cf. Good (1967). See also Savage (1954). There is a note by Ramsey that anticipates the theorem, published in Ramsey (1990); see Skyrms (1990) on this and on Good's argument.

where P(S_i) is your present probability for state S_i and u(A&S_i) is your utility if the state of the world is S_i and you choose A. The value of making a choice now is given by

max_j E[u(A_j)].

If you wait until after the experiment is performed, you may condition your probabilities on the new information. For simplicity, we assume that P(E_k) is positive for every member of P. The posterior expected value for act A, after being informed that E is true, is equal to

E[u(A)|P] = Σ_i P(S_i|E) u(A&S_i).

If you plan to choose the act that maximizes your future expected utility, then your prior expectation of your future expected utility is

E[max_j E[u(A_j)|P]] = Σ_k P(E_k) max_j Σ_i P(S_i|E_k) u(A_j&S_i).

It is not difficult to show that

max_j E[u(A_j)] ≤ E[max_j E[u(A_j)|P]].

The inequality is strict if not all acts have the same expected utility on every member of the partition.27 Hence, the value of choosing now is typically less than the value of choosing after having obtained additional information. I will refer to this relation between choosing now and deferring the decision as the value of knowledge relation. Good’s value of knowledge theorem relies on a number of implicit assumptions. The experiment is costless; otherwise you may not wish to wait for more information even if it would be useful. Moreover, you know that you are an expected utility maximizer and that you will be one after learning the true member of the partition. The states, acts, and utilities are the same before and after the learning experience. Also, having the learning experience does not by itself alter your probabilities for states of the world (although the outcomes of the experience usually do); the learning experience and the states of the world are probabilistically independent. Furthermore, you know that you will update by conditioning. Finally, by working within Savage’s decision theory we import several additional assumptions into the value of knowledge theorem, most notably the probabilistic independence of states and acts. 27 See, e.g., Skyrms (1990, pp. 88–89).
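The inequality can be checked directly in a toy case. The following sketch uses made-up utilities and probabilities (two acts, two states, an experiment with two outcomes) and is only meant to make Good's comparison between deciding now and deciding after the experiment tangible.

```python
# A small numerical check of Good's inequality; all utilities and probabilities
# are hypothetical, chosen only for illustration.  Two acts, two states, and an
# experiment with two possible outcomes E1 and E2.

u = [[10.0, 0.0],    # u(A1 & S1), u(A1 & S2)
     [4.0,  5.0]]    # u(A2 & S1), u(A2 & S2)

p_states = [0.5, 0.5]                   # prior P(S1), P(S2)
p_outcomes = [0.5, 0.5]                 # P(E1), P(E2)
p_states_given_E = [[0.8, 0.2],         # P(S1|E1), P(S2|E1)
                    [0.2, 0.8]]         # P(S1|E2), P(S2|E2)

def expected_utility(act, dist):
    return sum(pr * util for pr, util in zip(dist, act))

value_now = max(expected_utility(act, p_states) for act in u)            # max_j E[u(A_j)]
value_after = sum(p_E * max(expected_utility(act, cond) for act in u)    # E[max_j E[u(A_j)|P]]
                  for p_E, cond in zip(p_outcomes, p_states_given_E))

print(value_now, value_after)   # 5.0 and 6.4: deciding after the experiment is worth more
```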


Thus, the value of knowledge theorem operates under fairly special conditions. Jay Kadane, Mark Schervish, and Teddy Seidenfeld have examined a number of situations where it fails; in these situations ignorance might be bliss.28 The question whether the value of knowledge theorem should be a theorem of every rational decision theory is complex and nuanced. Part of the problem is that there is a lot more going on in examples of blissful ignorance aside from obtaining more information. A case in point is getting to know something one might rather not know. In Mikhail Bulgakov’s famous novel The Master and Margarita, on more than one occasion Moscow citizens are foretold their fate by the devil, who goes by the name Voland (a self-proclaimed German) and is visiting their city to celebrate Walpurgisnacht. After becoming a victim of Voland’s activities, one of them, a canteen manager, is given a piece of free information: in nine months he will die from liver cancer. He then tries to act prudently on that information – to no avail. Should he have been willing to pay to not be informed about the time and cause of his death? Maybe so. Not wanting to know something like that could be rational. Does this contradict the value of knowledge theorem? Not necessarily. Obtaining information about the circumstances of his death may change more than just the informational state of the canteen manager. For instance, it might also transform his preferences, which violates one of the assumptions of Good’s theorem, namely that the decision problem is the same before and after the learning event. Even though the question of whether any rational decision theory should satisfy the value of knowledge relation is difficult to answer, Good’s theorem applies to a fairly well circumscribed class of decision situations. The close connection between the value of knowledge and learning by conditioning in these situations provides the basic insight for my third justification of reflection.

28 See Kadane et al. (2008), which contains a thorough discussion of costless information.

6.7 Genuine Learning

Recall the basic problem of this chapter: if learning takes place in a black box, how can we say that a belief change comes from learning? Besides dynamic coherence and accuracy, the value of knowledge relation is another way to evaluate what's going on inside the black box without opening it. This can be done by making precise how learning events should affect decision situations. Taking conditioning as a paradigm of learning, the idea is


this: if I think that an impending belief change is a genuine learning event, then I expect the new beliefs to help me make better decisions in all relevant decision problems. As we saw in the last section, the value of knowledge relation holds in all Savage-style decision situations whenever an agent updates by conditioning. Since we think of black box learning as a generalization of conditioning, it is plausible to require that generalized learning also satisfies the value of knowledge relation. The requirement is, of course, restricted to the decision situations of the preceding section (the underlying decision problem is in the style of Savage, the utilities are the same before and after the learning experience, you are certain to act as a Bayesian maximizer now and in the future, and so on). We ignore other decision situations because they might allow the value of knowledge theorem to fail in the basic case of conditioning. Thus, the value of knowledge relation fails to reliably indicate learning events in these situations.

Let's again assume a fixed set of acts A_1, . . . , A_n and a fixed set of states S_1, . . . , S_m. Consider the class C of all decision problems based on these two sets. Each decision problem in C is defined by fixing the utilities u(A_i&S_j), i = 1, . . . , n, j = 1, . . . , m; thus, C is parametrized by utility functions u. The agent contemplates a shift from P, the prior probability over states, to Q, where Q is a random probability measure over states.29 We assume, of course, that the random probability measure Q is in the domain of the prior probability P. The present expected value of act A is given by

E_P[u(A)] = Σ_i P(S_i) u(A&S_i),

and the posterior expected value of A is given by

E_Q[u(A)] = Σ_i Q(S_i) u(A&S_i).

The prior expectation of the posterior expected utility of A is equal to EP [EQ [u(A)]]. The main postulate can now be expressed as follows:

29 That is, Q is given by (Q(S_1), . . . , Q(S_m)). The random variables Q(S_i) can take on finitely many values that sum up to one.


Postulate. If a belief change from P to Q constitutes a genuine learning event, then

E_P[max_j E_Q[u(A_j)]] ≥ max_j E_P[u(A_j)]

(6.8)

for all decision problems in C, where the inequality is strict unless the same act maximizes expected utility regardless of which random probability Q is realized. The left side of (6.8) is the prior expectation of the value of your posterior choice after the learning event; the right side is the value of choosing an act now. The postulate makes precise the idea that black box learning should respect the value of knowledge relation. That the value of knowledge relation should hold for all relevant Savage-style decision problems is a robustness requirement. Obtaining new information should be helpful, or at least not harmful, in all such situations. In contrast to learning, other sources of belief change (taking drugs, falling in love, having a sunstroke, etc.) might be helpful in some, but not in all decision situations. It is possible to derive the reflection principle (6.1) from the value of knowledge postulate.30 The derivation involves an additional assumption. We suppose that the set of states is sufficiently fine-grained so as to capture the future random probability measure Q; that is to say, if S is a state of the world, then it determines a particular future probability measure over states. The additional assumption says that the future probability of a state S is zero whenever that probability is not the one realized in S. In other words, the agent is certain that she will know the posterior after the learning event. For the kinds of learning events we are considering here, this assumption is plausible. The learning event is given by a shift in probabilities; so anything that makes you forget the shift must be something that disrupts the learning process, turning it into something that fails to be a pure learning experience.
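One direction of the connection between the postulate and reflection is easy to exhibit numerically: a shift from P to a random posterior Q that satisfies reflection (the prior expectation of Q is P) also respects the value of knowledge relation, which is essentially Good's argument run once more. The following sketch uses hypothetical numbers and spot-checks (6.8) for two arbitrary utility assignments from the class C.

```python
# Sketch with hypothetical numbers: a shift from the prior P to a random
# posterior Q satisfying reflection, E_P[Q] = P.  For such a shift the
# inequality (6.8) holds in every decision problem in C; we spot-check two.

p = [0.5, 0.5]                          # prior over states S1, S2
q_realizations = [[0.8, 0.2],           # the possible posteriors ...
                  [0.2, 0.8]]
q_probs = [0.5, 0.5]                    # ... and their prior probabilities
# Reflection: the prior expectation of the random posterior equals the prior.
assert all(abs(sum(w * q[i] for w, q in zip(q_probs, q_realizations)) - p[i]) < 1e-12
           for i in range(2))

problems = [[[10.0, 0.0], [4.0, 5.0]],     # one utility assignment in C
            [[1.0, 3.0], [2.0, 2.5]]]      # another one

def expected_utility(act, dist):
    return sum(pr * util for pr, util in zip(dist, act))

for u in problems:
    value_now = max(expected_utility(act, p) for act in u)
    value_later = sum(w * max(expected_utility(act, q) for act in u)
                      for w, q in zip(q_probs, q_realizations))
    print(value_now <= value_later)        # True: (6.8) holds in both problems
```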

30 For a proof, see Huttegger (2014, Section 4). The class of decision problems C is actually larger than it needs to be. Only a set of particular utility functions is required in the proof.
31 De Finetti (1974).

6.8 Massaging Degrees of Belief

Our three ways of justifying the reflection principle – dynamic coherence, accuracy, and the value of knowledge – are variations of one theme. That probabilistic principles can be approached from different directions is something clearly recognized by de Finetti and his successors.31 Distinct


interpretations of beliefs – as betting odds, as best estimates, and as beliefs in general decision problems – converge on the same formal structure. In our case, they converge on reflection and related principles, establishing those principles as robust laws of rational learning. One worry about what has been said in this and earlier chapters is that our agents seem to have a lot of insight into their own belief states. Considering random variables that represent an agent’s present or future opinions requires that she has beliefs over these opinions, and so we implicitly assume that she has epistemic access to her own opinions.32 This assumption may seem particularly problematic for agents with limited cognition. Probabilistic models of learning make use of idealizations, as do all models in the natural and social sciences and in philosophy. Higher-order probabilities are one type of idealizing assumption that we put to use here. What is the status of such idealizations in investigations that want to say something about rationality? Because of the close connection to the topics of this book, it is helpful to start by considering the role of idealizations in decision theory from a point of view that goes back to Ramsey.33 The theories of Ramsey and Savage assume that a rational agent chooses consistently, even though our preferences are never fully consistent. Consistency is an idealization that allows one to study the logic that underlies rational decision making. Accordingly, consistency is used in a primarily evaluative way. Suppose that after carefully studying my preferences, a decision theorist points out some inconsistencies. It follows that my preferences are irrational. The normative consequence of this is that I should change my preferences, provided that I’m able to and that the costs of rethinking the decision situation are not too high. This results in what Ken Binmore has aptly called a massaging process.34 Insofar as I’m able to reconsider my preferences, the ideal of consistency helps me knead my preferences until all bumps and hiccups are gone. A system of consistent preferences can thus be viewed as an idealized model of a rational agent who has taken the process of massaging to the limit of a reflective equilibrium. The important point though is that consistency remains the evaluative guide throughout this process. The fact that agents fail to arrive at a reflective equilibrium due to constraints does not diminish the value of consistency as a guiding principle. 32 Such worries are articulated in, e.g., Williamson (2002) and Weisberg (2007). 33 See Ramsey (1931) and Savage (1954). 34 Binmore (2009).


Consistency plays the same role for partial beliefs. It has an evaluative role that helps avoid having partial beliefs that contradict one another. Some slight inconsistencies may not be important for an agent in certain situations. But in order to have a standard of full rationality in which the evaluative role of consistency can be fully appreciated, it is useful to consider the case in which a massaging process has carried an agent's partial beliefs to a reflective equilibrium under the logical guidance of consistency. The massaging process does not need to lead to a unique probability measure. Sometimes, qualitative probability judgments or interval-valued probabilities are all that can be had. Sharp probabilities are just a good compromise between tractability and having a model of a rational agent.

Higher-order probabilities fit right into this picture. An agent who consistently assigns probabilities to her own current or future probabilities has massaged her beliefs to a fully rational state. This does not mean that before and during the massaging process she always has full insight into what her lower-order partial beliefs are. As these beliefs are evolving, however, consistency requires higher- and lower-order beliefs to become aligned with each other, which in equilibrium will mean that they are consistent.

To put the same point somewhat differently, an idealized model can often be thought of as approximating an agent who hasn't carried the massaging process to the limit. The agent is then nearly rational if principles that hold in the idealized model hold approximately for her. A good example, which is due to Rachael Briggs, is relevant for reflection. Briggs has proven that in some cases where reflection fails, a weaker kind of "distorted reflection" that is close to reflection may hold.35

35 See Briggs (2009).

As I have occasionally pointed out, cognitively limited agents usually cannot be regarded as reflecting on their own learning procedures. As a result, they do not in general have the kinds of higher-order beliefs described above – the limit of a conscious massaging process. That does not mean higher-order beliefs play no role in how such an agent learns from experience. The basic model of reinforcement learning is a helpful example. In applications, choice probabilities or propensities describe an agent's choice behavior at least to some extent. But that the agent may not have cognitive access to these quantities need not keep us from trying to evaluate the learning process in terms of a probabilistic model, provided that the agent behaves as if she has choice probabilities and propensities. Since the evaluation we are interested in proceeds from the agent's perspective, the model will just look like the endpoint of a massaging process; the


agent is not doing the massaging (some other evaluator is), but this does not affect the analysis of the model.

6.9 Countable Additivity

Our arguments for reflection have been mostly restricted to finite probability spaces. Their extension to general probability spaces raises a number of subtle issues. In a sense, we could avoid them by maintaining that any probabilistic situation can be approximated, for practical purposes, with a sufficiently large finite probability space. While this may be correct, the importance of infinite probability spaces calls for some remarks on when reflection principles hold in general probability spaces. In infinite probability spaces one can derive reflection principles in terms of the approaches discussed in this chapter, provided the probability space is countably additive.36 Countable additivity is an integral part of standard probability theory. De Finetti, however, has argued that countable additivity should not be adopted as a general axiom. This position is well known from his later works, but his opposition reaches back to some of his earliest papers. De Finetti's point of view is important because it is directly relevant to whether reflection principles hold in general probability spaces.37 Kadane, Schervish, and Seidenfeld provide a thorough discussion of the relationship between countable additivity and the generalization of the reflection principle (6.2) to conditioning given a σ-algebra,

E[X] = E[E[X|F]],

(6.9)

where E[X|F] is the conditional expectation of the random variable X given some σ -algebra F.38 The special case of indicator variables provides a reflection principle for conditional probabilities: P[A] = E[P[A|F]].

(6.10)

36 A probability measure is countably additive if the probability of the union of any countable set of pairwise mutually exclusive events is equal to the sum of their probabilities. A probability measure is finitely additive if the probability of the union of any finite set of pairwise mutually exclusive events is equal to the sum of their probabilities. Clearly, every countably additive probability measure is also finitely additive, but the converse is not true.
37 His arguments are summarized in de Finetti (1974); see also de Finetti (1972). Eugenio Regazzini (2013) describes the origins of de Finetti's views on countable additivity and provides many helpful references.
38 Kadane et al. (1996).

If the expectations are taken with respect to a countably additive probability measure, then (6.9) and (6.10) hold. But they may fail if the underlying


probability measure is merely finitely additive (that is, finitely but not countably additive).39 There are a number of restrictions on finitely additive probabilities that restore reflection principles.40 Countable additivity nevertheless seems to be salient. It is well known that if one allows a countable number of bets, the same coherence arguments as in the finite case entail countable additivity.41 One could reject the admissibility of a countably infinite sequence of bets, as de Finetti does, but it is not easy to see why in the idealized world of probability spaces, which are populated by all kinds of infinite objects, this kind of infinite object should not be admissible. If we don't allow it, there will be a certain mismatch between the space of events on the one hand and admissible probability measures on the other. This is because de Finetti does not constrain events at all; he thinks, for example, that any subset of the unit interval may be assigned a probability, including very complex objects like the Cantor set. What's more important, de Finetti, like everyone else, identifies a set A with the union of a countable number of mutually exclusive sets that partition A. It seems to me that this structure of events should be mirrored in the structure of probabilities, which would entail countable additivity. In other words, if there is a problem with adding infinitely many probabilities, there is also a problem with taking the union of infinitely many events.

All of this is not to say that countable additivity is not an issue. The debate between finite and countable additivity is essentially a debate about how to deal with the infinite. Countable additivity is tantamount to treating countably infinite operations in the same way as their finite counterparts, while finite additivity allows the infinite to be completely different from any finite approximation. There does not seem to be a universally correct way to choose one of the two options. Every choice involves tradeoffs. In many applications, however, countable additivity is a very plausible assumption. The reason is that we usually only care about a large but finite number of elements, such as observing an unbounded but finite number of coin flips. Countable additivity allows us to approximate the large finite with the infinite limit.

39 The root of this result is that merely finitely additive probability measures may not be conglomerable. Conglomerability basically requires that your unconditional probability for an event must be in an interval, whenever your conditional probabilities given a partition are in that interval. Dubins (1975) gives an example of a finitely additive probability measure that is not conglomerable. Countable additivity implies conglomerability in countable partitions, but not in uncountable partitions (Seidenfeld et al., 2014).
40 Such as strategic measures; see Purves and Sudderth (1976).
41 Adams (1962).

Thus, results we derive for the infinite limit hold for large finite


cases. We will have occasion to discuss this issue further in Chapter 8. For now, let me just note that countable additivity is often plausible because it enables us to explore the case we really care about (finite and large) without being committed to saying anything about what really happens in the infinite limit. Because of their relation to countable additivity, reflection principles also find some support in such cases.
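In the finite case, where additivity issues do not arise, the reflection principle (6.10) is simply the law of total probability over the partition that generates F. The following toy computation (my own invented numbers) spells that out.

```python
# Sketch (hypothetical numbers): in a finite probability space, where countable
# additivity is automatic, the reflection principle (6.10) amounts to the law
# of total probability over the partition generating the sigma-algebra F.

p_E = [0.2, 0.5, 0.3]            # P(E1), P(E2), P(E3)
p_A_and_E = [0.18, 0.20, 0.03]   # P(A & E1), P(A & E2), P(A & E3)

p_A = sum(p_A_and_E)                                            # P(A) = 0.41
p_A_given_E = [joint / pe for joint, pe in zip(p_A_and_E, p_E)]  # P(A|E_k)
e_of_conditional = sum(pe * cond for pe, cond in zip(p_E, p_A_given_E))

print(abs(p_A - e_of_conditional) < 1e-12)    # True: P[A] = E[P[A|F]]
```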


7

Disagreement

The criteria incorporated in the personalistic view do not guarantee agreement on all questions among all honest and freely communicating people, even in principle.
Leonard J. Savage, The Foundations of Statistics

Learning models usually feature an individual agent who responds in some way to new information. They are monological, in the sense that they ignore that learning often takes place in a social context. The learning models discussed in previous chapters are no exception. In the last two chapters of this book I wish to pursue two issues that are relevant for extending my approach to social epistemology. Both topics have to do with epistemic disagreement. Disagreement is ubiquitous in many areas of our lives. It is common in cutting-edge science, economics, business, politics, religion, and philosophy, not to mention the many ordinary disagreements we all have to deal with day to day. Many of our disagreements have epistemic aspects. For instance, divergent economic policies may be based on different assessments of data and models. What is the relationship between epistemic disagreements and rational learning? To what extent are divergent opinions compatible with all agents being epistemically rational? Finding answers to these questions is of crucial importance. This chapter focuses on learning from others. The opinions of other agents who might disagree with you are taken as evidence that can cause you to change your beliefs. We will see that this learning situation requires no substantially new solution. Radical probabilism provides us with all the resources to model updating on the opinions of other agents. The most important part of this chapter is a “rational reconstruction” of a rule for merging the opinions of agents which has been much discussed in the epistemological literature on peer disagreement. This rule, which is known as straight averaging, combines the opinions of a group of agents additively by assigning each of them equal weights. I will show that something


close to straight averaging emerges from combining a principle of Carnapian inductive logic with the theory of higher-order probabilities.

7.1 Agreeing to Disagree

John Harsanyi's common prior assumption is of fundamental importance to modern economic theory.1 It requires agents to have the same prior probability over certain aspects of a model. Thus, while it allows them to have some private information, it nevertheless entails that they have essentially the same model of the world. According to Harsanyi, differences in subjective probabilities should only come from differences in information; whenever two agents have the same information, they should also have the same subjective probabilities. We discussed in Chapter 5 a similar line of reasoning in epistemology, regarding the question of whether beliefs are uniquely determined by evidence. Harsanyi's common prior doctrine is an instance of this view. For this reason, it faces the same problems when it is justified by the principle of indifference or something similar.

The standard defense of Harsanyi's doctrine in economics proceeds along a different trajectory, though. Through their own observations and by communicating with each other, agents accumulate enough of the same evidence for their beliefs to largely agree.2 This defense requires priors to be mature. I will explore the processes which may guide an agent to a mature prior in the next chapter, where we shall see that the conclusion which favors Harsanyi's doctrine and similar positions is not universally correct. Yet, even if agreement holds for mature priors, it should not be expected to hold for juvenile priors. If juvenile priors don't need to agree, then the posteriors derived from them also don't need to agree.

The tight connection between having different priors and posteriors is obvious in simple examples like flipping a coin. If our opinions about the bias of the coin are not the same initially, we will usually disagree after having updated our priors based on, say, five coin flips. There seems to be nothing wrong with that. I know that you conditionalize on the same information, but since you had a different prior, you don't need to have the same posterior. Disagreement is entirely compatible with each of us being rational.

1 Harsanyi (1967).
2 Acemoglu et al. (2016).

The close connection between having the same posterior and having the same prior has been demonstrated more generally in Robert Aumann's


famous result on the impossibility of agreeing to disagree.3 Aumann invites us to think about two agents who have the same prior probability. Each agent gets some private information about the true state of the world. The two pieces of information need not be the same. Both agents update their prior by conditioning on their information. Both posteriors are then assumed to be common knowledge: this means the agents know both posteriors, they know the other one knows both posteriors, they know that the other one knows that they know, and so on. Common knowledge may be the result of each agent telling the other one about her posteriors. Based on the common prior assumption and the common knowledge assumption, Aumann proves that the agents’ posteriors are guaranteed to be the same.4 Despite the common prior assumption, this result may be surprising. The agents, after all, need not condition on the same evidence. If they condition on distinct propositions, how can they end up with the same posteriors? This is where the common knowledge assumption comes in, for it implies that each agent fully shares her information with the other agent. So the agents don’t just update on their own evidence, but on shared evidence. This together with the common prior assumption explains why the agents’ posteriors agree. The common knowledge assumption is thus a very general expression of agents having the same evidence. Aumann’s result accordingly says that posteriors disagree only if priors disagree whenever agents have the same evidence. Since disagreement seems to be no problem at least for juvenile priors, there is no inconsistency in having different posteriors which are based on the same information. Any argument against this conclusion has to live up to the formidable task of showing that there is always a unique rational prior regardless of the specific features of an epistemic situation.
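The contrast with the coin-flip case can be made concrete. The following toy computation (my own numbers, assuming Beta priors purely for illustration) shows two agents conditioning on exactly the same evidence and nevertheless ending up with different posteriors, simply because their priors differ.

```python
# Toy illustration (hypothetical numbers): two agents with different Beta priors
# over the bias of a coin both condition on the same evidence, namely 4 heads in
# 5 flips, and end up with different probabilities for heads on the next flip.

def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of a Beta(alpha, beta) prior after observing the flips."""
    return (alpha + heads) / (alpha + beta + heads + tails)

agent_1 = posterior_mean(1.0, 1.0, heads=4, tails=1)   # uniform prior, ~0.714
agent_2 = posterior_mean(5.0, 5.0, heads=4, tails=1)   # prior centered on 1/2, 0.6
print(agent_1, agent_2)   # same evidence, different priors, different posteriors
```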

3 Aumann (1976).
4 The agents agree regardless of whether they share their evidence or their posteriors. But the posterior they come to agree on depends on what they are sharing.

7.2 Diverging Opinions

So rational Bayesians may enjoy the delights of disagreement. But if they take disagreement as new evidence about the issues at stake, is there a rational way to respond to disagreement? In the course of examining this issue, I am mostly going to consider, as in Aumann's model and in accordance with the main themes of this book, opinions that take the form of numerical probability assignments. Therefore, our basic situation of interest is a group


of agents expressing their subjective probabilities for at least one proposition. For such situations it is, I think, important to distinguish between two distinct perspectives on disagreement: the individual perspective and the group perspective. From a group's perspective, we are interested in how a collective opinion should be determined based on individual opinions. A well-known example is the field of social choice. In social choice, opinions are preferences, and the goal is to aggregate the preferences of agents in a way that represents the group's preference. This is closely related to the literature on judgment aggregation.5 As examples of judgment aggregation, consider a panel of judges who must deliver a verdict on whether a defendant is guilty, or a company's executive board which decides whether to renew the contract of a CEO. In both examples, opinions are categorical judgments. But the aggregation literature is not restricted to the categorical setting. If the group members express their subjective probabilities regarding a set of issues, we may be interested in pooling their beliefs into a group belief.6 A good illustration for this situation is a statistician who is in the process of assigning prior probabilities. The probability judgments of a group of trustworthy statisticians are something she might exploit. What she would like to arrive at is a balanced aggregation of all the evidence given by the group members' probabilities. Aggregation procedures specify how to respond to disagreement (or agreement) from the group's perspective. The goal is to find a balanced representation of the different opinions that meets certain desirable properties, such as the Pareto principle.

By contrast, the individual perspective seeks to answer the question of how an individual should change her beliefs after learning what the degrees of belief of other individuals are. Suppose, for example, that two agents disagree as to whether Slytherin or Gryffindor is more likely to win the next Quidditch game. Learning about their disagreement might prompt each agent to revise her beliefs in light of the other agent's probability. There is a point of connection between what we have called the individual perspective and Aumann's treatment of agreeing to disagree. In Aumann's setting, agents don't just update on their private information, they also learn the posterior of the other agent.7

5 See Zabell (1988) on the history of probabilistic accounts of judgment aggregation. For an overview of judgment aggregation see List (2012).
6 See Dietrich and List (2015) for an overview. Genest and Zidek (1986) provide a survey of the classical literature.
7 See Geanakoplos and Polemarchakis (1982) for a dynamic interpretation of this process.

There are, of course, a lot


of specific assumptions behind Aumann's result which the individual perspective seeks to relax in order to find more general accounts of how we should learn from others. The main difference between the individual perspective and the group perspective is that from the group's perspective there is no individual learner who incorporates the opinions of others into her already existing system of opinions. For this reason, the individual perspective is much closer to my approach to learning in this book, and I'm going to focus on it in the remainder of this chapter. I don't wish to leave the impression, though, that the two perspectives are wholly unrelated. The question of whether, or when, the two perspectives are compatible is indeed of crucial importance. It is especially relevant if aggregation aims at summarizing the epistemic contributions of group members (e.g., the prior-kneading statistician); in this case, we would expect that rational aggregation procedures correspond closely to how a rational agent learns from others.8

In epistemology, the debate of how to respond to diverging opinions is often framed in terms of how to resolve disagreements among epistemic peers.9 Roughly speaking, epistemic peers are people with a similar education, intelligence, expertise, and evidence. Various positions on how to respond to peer disagreement have been staked out. The two positions which have dominated earlier discussions are the so-called steadfast view and the equal weight view.10 There are two rivaling intuitions behind these views. On the one hand, you may wish to stick to your beliefs if you have fully considered the issues at hand, even if a peer disagrees with you. On the other, you might wish to take peer disagreement as evidence in addition to the information you already have. It seems quite clear that neither the steadfast nor the equal weight view can be the correct response to peer disagreement in all situations. For this reason many epistemologists have developed moderate views that do not propose one rule which ought to be used in all cases of peer disagreement.11

8 See Bradley (2006, 2007) or Steele (2012) for discussions of compatibility.
9 See Christensen (2009) for an overview. Gutting (1982) is usually credited with introducing the term "epistemic peer."
10 See, e.g., Kelly (2005, 2008), Christensen (2007), Elga (2007), Jehle and Fitelson (2009), and Lam (2011) for discussions of these views.
11 Kelly (2010) and Elga (2007, 2010).

This points in the right direction. An agent's response to disagreement should depend on the epistemic context. The subjective inductive logic developed earlier in this book is especially well suited for making this more precise, since it allows us to identify updating protocols that are consistent


with the basic structure of a learning situation. Approaching the problem of disagreement with the tools of subjective inductive logic has, as we will see below, a number of additional advantages. First of all, it guarantees that learning from others is consistent, and that it is not fundamentally different from learning other kinds of evidence. Furthermore, the special case of peer disagreement is treated in the same way as more general cases of learning from others, in which you learn from agents who don’t all have the same epistemic status. Finally, using inductive logic allows us to exploit those probabilistic symmetries which are known to be invaluable for the analysis of inductive inference in many other epistemic situations. Making use of symmetries is especially important because of the by now well-known drawback of the subjective Bayesian approach: its complexity. It is easy to maintain that conditioning should govern the problem for learning from others in principle because it is the gold standard of rational updating. This viewpoint ignores that the assignment of probabilities can be a forbidding task even in moderately complex situations in which one plans to condition on the probabilities of others.12 What we would like to have instead are simple heuristic rules that allow us to calculate posteriors from the opinions of others in a way that is compatible with Bayesian conditioning. The derivation of the Johnson–Carnap continuum of inductive methods is a useful template for how this might be achieved with the help of symmetries.

12 This is pointed out, e.g., by Steele (2012) and Easwaran et al. (2015).

7.3 Learning from Others

The opinions of other people can be used in many different ways. A fairly straightforward case is deference to the opinion of an expert.13 We sometimes defer to the beliefs of another person because that person has much more information than we do, or because she has superior reasoning skills. If one of your friends is a real Quidditch aficionado and you're not even remotely interested in it, then you will probably defer to his opinions about who is going to win the next Quidditch match. In probabilistic terms, deference to an expert's opinions is closely related to the reflection principles discussed in the previous chapter. If you fully defer to an expert's degree of belief Z concerning a proposition A, then your probability for A conditional on being told the value of Z is equal to that value.

13 See for instance Joyce (2007) or Elga (2007).
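Full deference admits a simple numerical gloss. The following sketch (all numbers hypothetical) assumes that P[A | Z = z] = z for every value z the expert might announce; your unconditional probability for A is then just your expectation of the expert's opinion.

```python
# Sketch (hypothetical numbers): under full deference, P[A | Z = z] = z for each
# announcement z you think the expert might make, so P[A] = E[Z].

z_values = [0.2, 0.6, 0.9]     # announcements the expert might make
p_z = [0.3, 0.5, 0.2]          # your probabilities for those announcements

p_A = sum(p * z for p, z in zip(p_z, z_values))   # P[A] = E[Z] under full deference
print(p_A)                                        # 0.54
```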


The case of full deference is a limiting case, though. Even if you regard someone very highly, you might only partially defer to her opinions. This is particularly clear when there is more than one expert. Experts always disagree (the proverbial n + 1 opinions of n experts). When they disagree, we cannot consistently defer to each of them. Is there another way to consistently learn from their opinions? I am going to concentrate on a simple special case, which can however be generalized quite easily. Let X and Y be two random variables representing the probabilities of two distinct agents, Professor X and Doctor Y, regarding a proposition A. If P is your probability measure, then we’d like to determine the probability of A conditional on learning that Professor X claims that p is the probability of A and Doctor Y maintains that q is its probability, P[A|X = p, Y = q]. The two agents may be thought of as different from you. But X might as well represent your own degree of belief, in which case P[A|X] = X = P[A]. Our treatment of learning from others will not depend on whether you update on the opinions of two other agents, or on your own opinion and that of another agent. One of the most popular rules of updating beliefs upon learning that X = p and Y = q is the method of linear averaging.14 Averaging requires you to set your posterior equal to a weighted average of X and Y, wp + (1 − w)q,

(7.1)

where w can be any real number between zero and one. An important special case is when both experts are weighed equally, yielding a posterior of (p + q)/2. In epistemology, this special case is known as splitting the difference or straight averaging. The intuitive rationale behind straight averaging comes from thinking of X and Y as epistemic peers, in the following sense. The chance that one of them is wrong and the other one is right is supposed to be roughly equal to 1/2; thus the equally weighted average is, at least roughly, your best estimate of the probability of A.15 Linear averaging can be regarded as a generalization that makes room for weighing agents differently.

14 Averaging is applied to the problem of updating by DeGroot (1974), Lehrer and Wagner (1981), or Genest and Schervish (1985).
15 See Elga (2007) and Jehle and Fitelson (2009).

Straight averaging and linear averaging have been criticized for a number of reasons. Carl Wagner has derived linear averaging from a small set


of plausible axioms, chief among them an independence from irrelevant alternatives requirement, but these axioms are not universally accepted.16 Apart from that issue, Wagner's axiomatic theory does not proceed within a Bayesian framework. Thus, it is unclear whether it can be fit into the context of consistent Bayesian learning. Other criticisms apply more specifically to averaging as Bayesian updating.17 Straight averaging as Bayesian updating asserts that

P[A|X = p, Y = q] = (p + q)/2;

(7.2)

more generally, P[A|X, Y] may be equal to a linear average with unequal weights, as in (7.1). The most serious criticisms have to do with the extent to which (7.2) or its generalization are compatible with a Bayesian approach. As it turns out, conditioning often is almost incompatible, or only trivially compatible, with linear averaging.18 The main reason, in my view, is the rigidity imposed on updates by the weights, which depend only on the agents, X and Y, but not on their opinions, p and q.19 In the next section we shall see how this can be overcome. In an appropriately modified version of straight averaging, weights are adjusted so as to include terms for the opinions expressed by the agents, and not just terms weighing the agents. This will allow us to judge when updating by straight averaging is an adequate heuristic. Our modification will also allow a more steadfast response in other situations. Thus, our reconstruction of straight averaging will lead to a deeper understanding of when a conciliatory response is called for and why an agent should sometimes react more steadfastly in the face of disagreement.

16 See Wagner (1985). For criticisms of Wagner's axioms and alternatives that are especially relevant for an epistemic context, see Dietrich and List (2015).
17 See, e.g., Jehle and Fitelson (2009) or Easwaran et al. (2015).
18 But see Genest and Schervish (1985) and Romeijn (2015) on how linear averaging can be derived within Bayesian models.
19 On this point, see Bradley (2015).

7.4 Averaging and Inductive Logic

Our reconstruction of straight averaging draws from two sources: Carnapian inductive logic and the theory of conditional expectations. Along with the two experts' degrees of belief, X and Y, we assume there is another, auxiliary, degree of belief, Z, for the proposition A. We suppose that X, Y, and Z take values in a denumerable set of numbers between zero and


one (e.g., finitely many real numbers, all rational numbers, all computable numbers). This allows us to apply Carnapian inductive logic; for a continuum of values, it would be necessary to use a more general inductive logic.20 The auxiliary random variable Z stands for what is often referred to as the correct opinion in the literature on peer disagreement. For instance, Elga characterizes epistemic peers by saying that they are equally likely to be correct.21 One way to convey in probabilistic terms that Z is the correct opinion is to require that P[A|X, Y, Z] = Z.

(7.3)

For simplicity, we assume that all combinations of values of X, Y, and Z have positive probability. Equation (7.3) says that, in the presence of X, Y, Z, you defer to Z. Thus, Z is regarded as superior to X and Y. This might be because Z is the opinion of an agent with more information or with more skillful reasoning abilities. In the present setting, Z does not have to be thought of as being the correct opinion in an absolute sense; it just needs to be better, or at least not worse, than X and Y. However, if you wish to walk that path, Z may also be thought of as an idealized object, for example an idealized information source such as the chance of A, which assimilates all humanly accessible types of information. We would like to calculate P[A|X, Y] – that is, the agent’s conditional degree of belief after being informed about the two experts’ opinions, X and Y. This can be done by using the tower property, which was mentioned in the previous chapter, and our assumption (7.3). Observe that, by the tower property, E[P[A|X, Y, Z]|X, Y] = P[A|X, Y].

(7.4)

This says that the coarse-grained estimate (relative to X and Y) of the fine-grained estimate P[A|X, Y, Z] of A is equal to the coarse-grained estimate P[A|X, Y] of A.22 Furthermore, (7.3) says that P[A|X, Y, Z] = Z. Hence E[P[A|X, Y, Z]|X, Y] = E[Z|X, Y].

(7.5)

Combining (7.4) and (7.5) shows that P[A|X, Y] = E[Z|X, Y].

(7.6)

20 Such as the one in Skyrms (1993). 21 Elga (2007). 22 The equality holds since

E[P[A|X, Y, Z]|X, Y] = E[E[IA |X, Y, Z]|X, Y] = E[IA |X, Y] = P[A|X, Y].
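A minimal sketch (my own toy numbers) of how (7.6) gets used in practice: once the conditional distribution of the superior opinion Z given the two reports is fixed, the updated probability of A is just the conditional expectation of Z.

```python
# Sketch (hypothetical numbers): by (7.6), the updated probability of A given the
# two reports is the conditional expectation of Z given those reports.

def prob_A_given_reports(dist_Z_given_reports):
    """dist_Z_given_reports maps each value z of Z to P[Z = z | X = p, Y = q]."""
    return sum(z * prob for z, prob in dist_Z_given_reports.items())

# A hypothetical conditional distribution of Z after hearing X = 0.7 and Y = 0.5:
dist = {0.5: 0.3, 0.6: 0.4, 0.7: 0.3}
print(prob_A_given_reports(dist))   # 0.6 = E[Z | X, Y] = P[A | X, Y]
```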


Thus, the evidential relevance of X and Y on A is the same as the evidential relevance of X and Y on the idealized random variable Z. As a result, in order to calculate P[A|X, Y] it is enough to calculate E[Z|X, Y]. Since Z takes on values in a denumerable set, the conditional expectation E[Z|X, Y] is a weighted sum that ranges over all possible values p of Z: Σ_p p P[Z = p|X, Y]. The values of the conditional probabilities P[Z = p|X, Y] can be determined by explicating the idea that the opinions X and Y are on a par as estimates of the truth value A, namely by requiring that X and Y be evidentially relevant for Z in an entirely symmetric way. This can be made precise by using one of the basic principles of Carnapian inductive logic, Johnson's sufficientness postulate. Let np be the number of times X and Y assume the value p; np represents how many experts express the probabilistic opinion p. (In the case of two experts, np is equal to zero, one, or two.) Recall that Johnson's sufficientness postulate says that the conditional probability of Z = p only depends on p and np: P[Z = p|X, Y] = f_p(n_p).

(7.7)

Hence, the conditional probability of Z = p is symmetric in X and Y. It is irrelevant whether X or Y reports p; what counts is how often p is being reported. Thus, X and Y are on a par when it comes to Z, and no opinion is regarded more or less highly just because it was reported by a particular agent. This is one reasonable sense in which Professor X and Doctor Y can be thought of as epistemic peers.23 Note, furthermore, that P[Z = p|X, Y] is allowed to vary with p, which means that updating may proceed not just by weighing agents, but also by weighing their opinions. The sufficientness postulate (7.7), together with the assumption that all combinations of values of X and Y have positive probability, implies that there exists a number αp for each p such that P[Z = p|X, Y] =

(n_p + α_p) / (2 + Σ_s α_s).

(7.8)

(See Appendix A; strictly speaking, we assume that Z can take on at least three values.) Because we are interested in cases where X and Y are positively relevant for Z, we will focus on the case where no αp is negative (otherwise Professor X and Doctor Y would be anti-experts).

23 I don't claim that it is the only reasonable understanding of epistemic peers, nor that it is the most prevalent one.

The parameters αp express that part of your conditional degree of belief in Z = p over


and above the evidence provided by X or Y. If neither X nor Y is equal to p, the conditional probability of Z = p is given by

α_p / (2 + Σ_s α_s).

If αp is positive, this conditional probability could be non-negligible. This fact will play an important role in how to interpret our agent's predictive probabilities. The conditional probability (7.8) can now be used to calculate the conditional expectation of Z given X and Y:

E[Z|X, Y] = Σ_p p P[Z = p|X, Y] = Σ_p p (n_p + α_p) / (2 + Σ_s α_s).

In particular, if X = p and Y = q, the conditional expectation is given by

E[Z|X = p, Y = q] = p (1 + α_p) / (2 + Σ_s α_s) + q (1 + α_q) / (2 + Σ_s α_s) + Σ_{t≠p,q} t α_t / (2 + Σ_s α_s).

Let R = Σ_{t≠p,q} t α_t. Then

E[Z|X = p, Y = q] = (p + q + p α_p + q α_q + R) / (2 + Σ_s α_s).

(7.9)

Since in (7.6) we have shown that P[A|X, Y] = E[Z|X, Y], our main result follows:

P[A|X = p, Y = q] = (p + q + p α_p + q α_q + R) / (2 + Σ_s α_s).

(7.10)

Let me note some immediate observations. The argument leading up to this result does not treat learning from others in a substantially new way, but in accordance with the generalized theory of probabilistic learning developed in this book. The opinions of other agents are treated as evidentially relevant in the same way as factual observations. This fits right into a radical probabilist framework, in which sources of information other than factual statements are permissible. The important twist in the argument is the auxiliary expert variable Z, which establishes the crucial link between the proposition of interest, A, and the two experts, X and Y. The experts process their information before announcing their probability for A. This can be thought of as each agent getting a noisy signal from an underlying process, which is probabilistically relevant for the superior estimate Z. The sufficientness postulate is a relatively weak expression of agents being considered as peers. This is, I think, an advantage, because it allows the influence of an expert's opinion to also depend on the opinion, not just


the expert. If, for instance, αp is much larger than αq , X has a stronger influence on the probability of A than Y does conditional on X = p and Y = q. This is compatible with considering X and Y to be peers, since the opinion weights, αp , do not favor one agent over the other: if it were the case that X = q and Y = p, then Y would have a stronger influence than X on the probability of A. How does the update rule (7.10) work in practice? Depending on the opinion weights, it sometimes resembles straight averaging, sometimes a steadfast update, and sometimes something in-between. Let’s look at straight averaging first. Suppose the parameters αp are all close to zero. This implies that R is very small, and thus we have P[A|X = p, Y = q] ≈

(p + q)/2.

Equality is obtained if all parameters αp are equal to zero.24 This helps us identify when straight averaging is a good heuristic for updating. Besides the fact that both agents are considered to be epistemic peers, the opinion parameters αp are crucial. Approximate straight averaging requires them to be close to zero. This says that while the agent can have uneven prior beliefs with regard to Z, those beliefs are not resilient: they can be overcome easily by new evidence, which in our case consists of the opinions of X and Y. As a result, approximate straight averaging is an appropriate response to disagreement if (i) informants are considered peers and if (ii) one does not have strong opinions about Z.

Straight averaging is however not always a good heuristic according to our model. Sometimes we are led to something closer to a steadfast view, which maintains that one should stick to one's belief in the face of disagreement. Suppose that X = P[A] = p is your degree of belief in A, and that your prior opinions are quite strong. Specifically, let αp be sufficiently larger than zero and also large with respect to the other alpha parameters. In this case, your revised belief (7.10) conditional on Y = q will be much closer to p than to q whenever p ≠ q. Even though you consider Y to be a peer (sufficientness postulate), your degrees of belief are already quite settled, and one piece of additional evidence doesn't make much of a difference.

So there is no unique answer as to whether your response to peer disagreement ought to be conciliatory or steadfast. The correct response depends on the details of the epistemic situation.

24 This would be Reichenbach's straight rule. Although this case is a logical possibility, it is as problematic here as it is for inductive logic. If all parameters are zero, you essentially have no prior opinions about Z. Hence, whenever both X and Y are equal to p, you would assign the event Z = p probability one, thereby completely disregarding any other possibility.

Suppose your beliefs are


resilient because they are based on a large body of evidence; in this case, the fact that a peer disagrees with you will not change your opinions by much. If, however, your beliefs are based on only weak evidence, they might still be quite malleable, and so disagreement among peers may lead you to approximately average opinions. Depending on your background beliefs, updating might also lead your conditional beliefs to lie somewhere between averaging and being steadfast.
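The two regimes can be exhibited directly by implementing (7.10). In the sketch below the set of possible opinions and all alpha parameters are made up for illustration: near-zero alphas yield approximate straight averaging, while a large alpha attached to your own opinion yields a nearly steadfast response.

```python
# A small implementation of the update rule (7.10); the alpha parameters below
# are hypothetical.  Tiny alphas give (approximate) straight averaging; a large
# alpha on your own opinion p gives a steadfast response.

def updated_probability(p, q, alpha):
    """P[A | X = p, Y = q] as in (7.10); alpha maps each possible opinion to its weight."""
    R = sum(t * a for t, a in alpha.items() if t not in (p, q))
    return (p + q + p * alpha[p] + q * alpha[q] + R) / (2 + sum(alpha.values()))

opinions = [0.1, 0.3, 0.5, 0.7, 0.9]

# (i) Non-resilient prior: all alphas tiny, so the update is close to (p + q)/2.
weak = {t: 0.01 for t in opinions}
print(updated_probability(0.3, 0.7, weak))      # 0.5, i.e. roughly (0.3 + 0.7) / 2

# (ii) Resilient prior: a large alpha on your own opinion p = 0.3, so steadfast.
strong = {t: 0.01 for t in opinions}
strong[0.3] = 20.0
print(updated_probability(0.3, 0.7, strong))    # ~0.32, much closer to 0.3 than to 0.7
```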

7.5 Generalizations

Our treatment of learning from others can be generalized in a number of ways. First, it is extendible to the case of more than two experts, simply by imposing the sufficientness postulate on a group of experts. Second, it can also be generalized to opinions that are not probabilities. The random variables X and Y may be estimates of any kind of quantity (such as the height of a tree). Here, the same approach as above leads to approximate straight averaging of the two estimates X and Y whenever prior beliefs are not resilient.

The more complete account of linear averaging in (7.1) allows experts to be assigned different weights according to how well they are regarded. I am not going to develop the corresponding model here, because it is fairly complex. But I do have a roadmap for such a model. We saw in Chapter 3 that more complex epistemic situations require us to move from exchangeability to partial exchangeability. Weighing experts differently requires a similar move: we need to give up the sufficientness postulate (7.7) and arrange experts in groups, each group having a distinct influence on the conditional probability of the supreme expert, Z. A modified sufficientness postulate would assume that any expert within a given group of experts has the same evidential influence. Spelling this out in detail allows one to develop an account of linear averaging (7.1) along the same lines as the above account of straight averaging.25

Since our approach to averaging is based on a conditioning model, it meets some worries about linear averaging, mainly those that are based on incompatibilities between linear averaging and conditionalization. A case in point is that linear averaging is not associative and commutative.26

25 Some more details of the underlying principles are provided in my development of an analogical inductive logic (Huttegger, 2016).
26 Easwaran et al. (2015).

In our model, the experts' reports commute with other potential conditioning


events, since approximate straight averaging itself comes from Bayesian conditioning and Bayesian conditioning is commutative. As to associativity, straight averaging need not be associative. Associativity means that if an agent updates on the opinions of two other agents, the three following procedures lead to the same final belief: one where she first averages her opinion with that of one agent, averaging the resulting update with the second agent in the next step; one where this order is reversed; and one where she averages her opinion with both agents simultaneously. Suppose, for example, that there are three agents X, Y, W who report X = p, Y = q, W = r. If I encounter X and Y first, my probability of A may be (approximately) given by

x = (p + q)/2.

After meeting W, I might contemplate revising my belief to (approximately)

(x + r)/2.

If I meet Y first and update on X later, I might get a different result. Both these results may be different from taking the straight average of all three agents, (p + q + r)/3. The belief revision policy described here is one where the specific values of the first two experts may be forgotten before the opinion of the third expert is incorporated; only their average value is thought to be significant. This is not what the updating procedure I have advocated in this chapter does. Both P[A|X, Y] and P[A|X, Y, W] are, under the appropriate conditions, approximately given by averaging X and Y in the first case and X, Y, and W in the second case. So it seems that our model falls prey to a failure of associativity. In my view, this is not a problem. There is no reason why updating on the opinions of peers requires one to treat only the (approximate) average of X and Y, rather than their individual values, as relevant for further updates. The sufficientness postulate allows the values of X and Y to be relevant for P[A|X, Y, W], and not just their (approximate) average. And it does so for a good reason: averages can be obtained in many different ways; so once we average, we lose information that can be relevant for further updating. This is ignored by associativity. (A small numerical sketch of this order dependence appears at the end of this section.) So far we have considered how learning from others should affect a rational individual's degrees of belief. This does not answer the question of how disagreement could be resolved by learning from others. Within the


present model, a consensus emerges if all agents involved have sufficiently similar prior inductive assumptions. Suppose, for example, that P and Q are the probability measures of agents X and Y, respectively. This implies P[A|X] = P[A] and Q[A|Y] = Q[A]. If the two inductive assumptions (7.3) and (7.7) hold for both P and Q, then the conditional probabilities P[A|X, Y] and Q[A|X, Y] are of the form (7.10).27 Thus, under the right circumstances both conditional probabilities will be close to the average (P[A] + Q[A])/2. The crucial assumptions are that neither agent has a resilient prior opinion, and that they view each other as peers. If one of these assumptions fails, they might disagree substantially after updating on each other's opinions.
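To make the associativity failure discussed above concrete, here is a minimal arithmetic sketch; the three reported values are chosen purely for illustration.

```python
# Sequential pairwise averaging depends on the order in which reports are
# encountered, and both orders can differ from the straight average of all
# three reports; the values below are illustrative.

p, q, r = 0.9, 0.5, 0.1   # reports of agents X, Y, W

xy_then_w = ((p + q) / 2 + r) / 2   # average with X and Y first, then with W
yw_then_x = ((q + r) / 2 + p) / 2   # average with Y and W first, then with X
all_three = (p + q + r) / 3         # straight average of all three reports

print(xy_then_w, yw_then_x, all_three)  # roughly 0.4, 0.6, and 0.5: three different values
```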

7.6 Global Updates

The model in the previous section only tells us how to update the probability of A and its negation Ā. It says nothing about how the probabilities of other propositions should be affected. Jeffrey conditioning is a straightforward solution whenever the learning event is restricted to receiving information about A.28 As we saw in Chapter 5, under this condition the conditional probabilities of any proposition B given A or Ā are the same before and after the learning experience, which implies that the new probability of B is equal to

P[B|A]P[A|X, Y] + P[B|Ā]P[Ā|X, Y].

If two agents follow this recipe, there is no guarantee that they will agree on the posterior probability of B, even if their posterior probabilities for A are very close. If the learning event involves more than just one proposition and its negation, the same approach as in the previous section can be put to work. As an example, consider a partition of three events, A1, A2, A3. Let X1, X2 and Y1, Y2 be two agents' reported probabilities of, respectively, A1 and A2. One reasonable assumption is that the conditional probability of A1 only depends on X1 and Y1, that is,

27 In order to apply the sufficientness postulate one usually has to assume that all possible values of X and Y have positive probabilities. This is not the case here, since one value of X, P[A], has probability one for P (likewise for Y and Q). We can resolve this problem either by assuming that P[A] is the only possible value of X from the perspective of that agent, or by saying that the sufficientness postulate only needs to hold with probability one.
28 The usefulness of Jeffrey conditioning for learning from others has been noted several times in the literature; see, e.g., Steele (2012).


P[A1 |X1 , X2 , Y1 , Y2 ] = P[A1 |X1 , Y1 ]. Similarly, the conditional probability of A2 only depends on X2 and Y2 : P[A2 |X1 , X2 , Y1 , Y2 ] = P[A2 |X2 , Y2 ]. To be sure, this is only the simplest case; there is nothing illegitimate about abandoning these assumptions. In more complex settings, the sufficientness postulate would have to be modified in order to account for all evidential influences. But in this – the simplest – case we can proceed along exactly the same lines as in the preceding sections. If there are auxiliary variables Z1 and Z2 such that P[A1 |X1 , Y1 , Z1 ] = Z1

and P[A2 |X2 , Y2 , Z2 ] = Z2 ,

and if the sufficientness postulate (7.7) holds for X1, Y1, Z1 as well as X2, Y2, Z2, then it can be shown that the conditional probabilities P[A1 |X1 , X2 , Y1 , Y2 ] and P[A2 |X1 , X2 , Y1 , Y2 ] are given by expressions similar to (7.10).29 Once these probabilities have been determined, the probabilities of other propositions can be updated by Jeffrey conditioning. In this example, the two experts are considered to be equally qualified regarding all propositions of the partition. This assumption can be modified as suggested in the previous section, by using a more general kind of sufficientness postulate that weighs agents differently. Updating according to the above scheme is compatible with, say, the agents being weighed equally regarding A1 but unequally regarding A2.
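To fix ideas, here is a minimal sketch of the Jeffrey conditioning step just mentioned, with made-up numbers for the revised partition probabilities and for the conditional probabilities of some further proposition B.

```python
# Jeffrey conditioning over a three-element partition: the conditional
# probabilities of B given the cells stay fixed, and B's new probability is the
# weighted average of these with the revised cell probabilities. All numbers
# below are illustrative.

def jeffrey_update(p_b_given_cell, new_cell_probs):
    return sum(p_b_given_cell[e] * new_cell_probs[e] for e in new_cell_probs)

p_b_given_cell = {"A1": 0.9, "A2": 0.4, "A3": 0.1}   # old P[B|Ai], unchanged by the update
new_cell_probs = {"A1": 0.5, "A2": 0.3, "A3": 0.2}   # revised P[Ai] after learning from the experts

print(jeffrey_update(p_b_given_cell, new_cell_probs))  # approximately 0.59
```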

7.7 Alternatives

At this point in the book it goes without saying that I don't think of approximate straight averaging as the only admissible response to disagreement. What this chapter has achieved is putting together a model within which one can consistently update in this way. But the model does not apply in every situation, and so approximate straight averaging and its variants should not be regarded as a universal response to disagreement.30 An interesting alternative update procedure has recently been proposed by Kenny Easwaran, Luke Fenton-Glynn, Christopher Hitchcock, and Joel

29 There is the additional constraint that the conditional probabilities must add up to less than

one, which constrains the alpha parameters. 30 See Lasonen-Aarnio (2013) for a different critique of blanket responses to peer disagreement.


Velasco.31 According to their rule, the probability of A conditional on the reports of two agents, X = p and Y = q, is equal to

pq / (pq + (1 − p)(1 − q)).

This multiplicative rule has a number of interesting properties, among them one, called synergy, that sets it apart from averaging. Synergy means that the posterior probability of A does not need to lie between p and q. If p is your degree of belief, then an announcement of Y = q may boost your degree of belief beyond p. As an example, think of two doctors, both of whom make the same diagnosis with high probability. Conditioning on the fact that both probabilities are high may result in a larger posterior probability than any of the doctors' estimates. This is not consistent with averaging, which requires posterior probabilities to lie between the two experts' opinions.32 In order to understand when it is appropriate to adopt the multiplicative rule, we need to understand its inductive assumptions. Easwaran, Fenton-Glynn, Hitchcock, and Velasco have shown that the multiplicative rule is a straightforward consequence of likelihoods being linear in the reported degrees of belief of the other agent Y; that is to say, the probability of Y = q given that A is true is linear in q, and the probability of Y = q given that Ā is true is linear in 1 − q.33 This is one way to capture the idea that Y is thought to be, to some extent, reliable. Conditional on A being true, your probability of A increases with Y reporting a high degree of belief, and it decreases conditional on A being false. The multiplicative rule is a proper way of updating in this particular type of learning situation.
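A quick sketch of the multiplicative rule next to straight averaging; the reports of 0.8 are illustrative stand-ins for the two doctors' diagnoses.

```python
# Multiplicative pooling of two reports p and q versus straight averaging.
# When both reports are high, the pooled value exceeds either report (synergy),
# something averaging can never produce.

def multiplicative_rule(p, q):
    return p * q / (p * q + (1 - p) * (1 - q))

p, q = 0.8, 0.8
print(multiplicative_rule(p, q))  # about 0.94, above both reports
print((p + q) / 2)                # 0.8, averaging stays between the reports
```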

7.8 Conclusion

It is not difficult to think of inductive assumptions, modeled on the sufficientness postulate or some other template, that go beyond the epistemic situations we have examined in this chapter – for example, by considering the possibility of dependencies among agents or between agents and other events. The upshot is that responses to disagreement, like all learning situations, require thinking and judgment. There is no one-size-fits-all rule

31 Easwaran et al. (2015).
32 The example is essentially the same as the corresponding example discussed in Christensen

(2011). 33 The class of admissible likelihoods is actually larger; see Easwaran et al. (2015).


for revising one's beliefs based on the opinions of others. According to a broadly Bayesian approach, the analysis of social learning situations from an individual's perspective is effectively the same as the analysis of any other learning situation. Whether, or to what extent, learning from others resolves disagreements depends on the inductive assumptions of all the agents involved. If their prior points of view about the situation are too far apart, rational agents may continue to disagree after having learned from one another.


8 Consensus

Different minds may set out with the most antagonistic views, but the progress of investigation carries them by a force outside of themselves to one and the same conclusion.
Charles S. Peirce, How to Make Our Ideas Clear

The kind of probabilism I advocate in this book is very undogmatic in that it typically allows for a range of admissible beliefs in response to the same information. The charge of excessive subjectivity is often brought against such a view. This charge is particularly pressing when the rationality of science is at stake: after all, the emergence of a consensus seems to be one consequence of what it means to respond rationally to scientific evidence. Does radical probabilism have anything to offer that would account for this feature of scientific rationality? The previous chapter explored learning from other agents. As we have seen, this leads to a consensus only under special conditions, which express an initial agreement among the agents about the structure of the epistemic situation. That agreement is required at some levels in order to get agreement at others is a theme we are going to encounter again in this chapter, in which we set aside learning from others and focus instead on situations where agents revise their opinions based on the same information. In the epigraph to this chapter, Peirce expresses an optimistic view about this process: learning from the same evidence shall overcome any initial disagreement. There is more than a grain of truth in this statement, as is shown by a number of theorems that I will discuss in this chapter. These theorems go a long way toward showing that probabilistic learning can lead to a rational consensus. However, there are also limits to this. The question of when Jeffrey conditioning leads to a consensus is especially interesting in this regard; we will see that considering Jeffrey conditioning helps reveal a dependence between long-run agreement and whether evidence is solid or soft. On the whole, then, this chapter leads to a nuanced picture of how rational learning, disagreement, and consensus are related. By updating on


increasing evidence a consensus emerges under carefully controlled conditions – those that correspond to some qualitative features of scientific investigations. Consensus is, however, not something we should always expect.

8.1 Convergence to the Truth

To start with, we look at a few results that are quite general. To simplify the discussion, I'll mostly focus on a simple and representative special case: flipping a coin. Suppose that we are about to flip a coin, and that there is no upper bound on the number of our observations (observations are effectively cost-free). This setting may be modeled by infinite sequences of heads and tails, or, equivalently, by the set of all infinite binary sequences (1 is heads, 0 is tails). We update our consistent prior degrees of belief over infinite binary sequences by conditioning on what we have observed. If there were exactly one rational prior probability measure, as Laplace and the gang would have us think, then conditioning on observations would never lead to a disagreement as long as we observe the same sequence. But what happens if our priors are not identical? There are some results in mathematical statistics that provide us with answers to this question. In this section, I discuss a theorem known as convergence to the truth. A brief look at tossing a coin finitely often helps in explaining the basic idea. Suppose you observe only ten coin flips, and let A be a proposition about these coin flips (e.g., "no heads is observed," or "a head is observed at the second and the seventh trial"). If your prior assigns positive probability to each sequence of ten flips, then your conditional probability of A after ten flips of the coin is either zero or one – one if the observed sequence is compatible with A and zero otherwise. If your prior assigns zero probability to some sequences, then your conditional probability of A after ten flips of the coin is zero or one with probability one. This is finite convergence to the truth. The theorem on convergence to the truth says that this result extends to infinitely many observations. The proposition A may now be about infinite binary sequences. This includes propositions about finite binary sequences, but also propositions about the infinite such as "the limiting relative frequency of heads is 1/3" or "there are infinitely many heads."1 Now consider your conditional probabilities of A given an increasing sequence

1 The proposition in question needs to be measurable.


of observations. According to the theorem of convergence to the truth, the sequence of conditional probabilities of A converges to one or zero with prior probability one, depending on whether the infinite sequence is compatible with A. You are certain that in the limit you will know the truth about A. Hence, your juvenile priors turn out to have no influence on your mature beliefs about propositions like A. The theorem on convergence to the truth is a special case of the martingale convergence theorem, which I mentioned briefly in Chapter 5. Recall that a martingale is a sequence of random variables X1 , X2 , . . . that can be thought of as a sequence of fair gambles. If Fn denotes the information available at time n, then the defining characteristic of martingales is that, for all n = 1, 2, . . ., E[Xn+1 |Fn ] = Xn

with probability one;    (8.1)

given your evidence at time n, your conditional expectation of Xn+1 is just Xn .2 Thus, if Xn represents your total fortune at time n, then you don’t expect to win anything on the next bet conditional on your present evidence. The martingale convergence theorem says that a martingale converges almost surely to a random limit, provided that it satisfies some technical requirements.3 The sequence of conditional probabilities P[A|F1 ], P[A|F2 ], . . . is a martingale that meets those conditions. Therefore, it converges with probability one. Convergence to the truth is a consequence of this result whenever A is an element in the smallest σ algebra generated by the Fn .4 Thus, learning the truth in the limit is a matter of consistency. This result is subject to certain qualifications. To begin with, conditional probabilities only converge to the truth if the proposition A does not go beyond the information provided by coin flips. If A also says something about, say, the rings of Saturn, then no amount of evidence about coin flips is able to settle everything about A. In such a case the martingale convergence theorem does not imply convergence to the truth. What it does imply is convergence to a maximally informed opinion, which could be any number between zero and one.5 2 This definition of a martingale is a bit more general than the one given in Chapter 5. The values

of the random variables X1 , . . . , Xn are assumed to be part of the history up to time n. 3 For instance, if it is nonnegative. For a more general sufficient condition, see Ash and Doléans-Dade (2000). 4 Discussions of convergence to the truth can also be found in Schervish and Seidenfeld (1990)

and Earman (1992). There are other theorems on convergence to the truth; see, in particular, Diaconis and Freedman (1986). 5 Skyrms (1997).
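A toy simulation may help fix ideas; it is only an illustration of the theorem, not part of the formal argument. Here A is the proposition that the limiting relative frequency of heads is 0.7, the prior spreads its mass over two chance hypotheses, and conditioning on simulated flips drives the posterior of A toward the truth. All numbers are assumptions made for the sketch.

```python
# Toy illustration of convergence to the truth: the sequence of posterior
# probabilities P[A | first n flips] is a martingale and here converges to 1,
# since nature in fact uses the 0.7 coin. Hypotheses and chances are illustrative.

import random

random.seed(1)

chances = {"fair": 0.5, "biased": 0.7}     # two rival chance hypotheses
posterior = {"fair": 0.5, "biased": 0.5}   # prior over the hypotheses
true_chance = chances["biased"]            # the truth: limiting frequency 0.7

for _ in range(1000):
    flip = 1 if random.random() < true_chance else 0
    # Bayes: multiply by the likelihood of the observed flip, then renormalize.
    for h, c in chances.items():
        posterior[h] *= c if flip == 1 else 1 - c
    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}

print(posterior["biased"])  # close to 1 after many flips
```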


The difference between convergence to the truth and convergence to a maximally informed opinion has consequences that reach beyond the present discussion. Think of a probability space which includes theoretical concepts that are only very loosely connected to verifiable observations, as is the case for highly theoretical scientific theories. As we go to the limit, observations may not capture everything about the states of the probability space. Under these conditions, convergence to a maximally informed opinion applies, whereas convergence to the truth might fail. Another qualification is that the martingale convergence theorem and, a fortiori, convergence to the truth depend on countable additivity. Martingales need not converge if a prior probability on the space of all infinite binary sequences is merely finitely additive. The main reason is that countable additivity guarantees a kind of uniform convergence: the infinite limit is uniformly approachable by finite approximations (compare the remarks at the end of Chapter 6). What this means is that there are no surprises at infinity; a property which is exhibited eventually by all finite approximations cannot get lost in the limit. Thus, countable additivity restricts the set of all finitely additive probability measures in such a way that a martingale X1 , X2 , . . . which eventually gets and stays close to a value does not jump discontinuously to some other value at infinity. Certain merely finitely additive probability measures, on the other hand, allow convergence to be nonuniform, in which case the limit may be very different from any of its finite approximations.6 Champions of finite additivity could appeal to results showing that there are classes of merely finitely additive measures for which the martingale convergence theorem holds.7 My own view is that uniform convergence is actually a quite desirable property, which supports the assumption of countable additivity. At infinity all kinds of things can happen. But our observations – of coin flips or any other events – are always finite. So the relevant question is how our degrees of belief behave in response to a large, but finite amount of evidence. Uniform convergence is the proper concept for this enterprise since it enables us to transfer limit results to finite approximations of the limit. It is a bit ironic that finitist considerations would support countable additivity. It is not altogether surprising, though, once you realize that the difference between finite and countable additivity manifests itself only at infinity; merely finitely additive measures 6 The issue of nonuniform convergence is discussed at length by de Finetti (1974) in terms of

distribution functions. 7 See Purves and Sudderth (1976), Chen (1977), and Zabell (2002).


may treat any finite collection of an infinite class of mutually exclusive and exhaustive events differently than the full class. Therefore, results that are restricted to merely finitely additive measures rely in a substantial way on actual infinity.8 As a final consideration I’d like to emphasize that convergence to the truth does not mean that conditionalizers think they will actually converge to the truth, come what may. Convergence to the truth conveys a kind of internal consistency property of the model. If the model does not actually capture the learning situation, because the learner is confronted with something unanticipated, convergence to the truth is off the table.

8.2 Merging of Opinions

The core message of convergence to the truth is that our prior beliefs often cease to be important when our posterior beliefs are based on a sufficient amount of evidence. But the theorem on convergence to the truth is limited in important ways. As recently highlighted by Gordon Belot, conditional probabilities do not necessarily converge to the truth; although the event of non-convergence has probability zero, it is nonempty.9 This raises the question whether we can say anything interesting about conditional probabilities when they don't converge to the truth. An answer can be found in a seminal paper by David Blackwell and Lester Dubins from 1962.10 The theorem of Blackwell and Dubins tells us when the conditional probabilities of two individuals come and stay close (which might even happen in cases where they don't converge individually). As an illustration, consider again the space of all infinite binary sequences. Suppose that P and Q are two prior probability measures over that space, representing the initial opinions of two distinct agents. What we wish to know is when their conditional probability measures merge, where two sequences of probability measures merge if they eventually stay within any small preassigned value of each other, uniformly in all measurable events.11

8 The brief discussion here is related to the larger issue of how to interpret convergence results of

Bayesian probability theory; see Huttegger (2015a). This paper is a response to Belot (2013), whose arguments depend in an essential way on actual infinity. 9 Belot (2013). 10 Blackwell and Dubins (1962). A similar result was proven by Gaifman and Snir (1982). 11 Blackwell and Dubins make this precise by using the supremum norm as a measure of the distance between two probability measures. An alternative is the distance measure proposed by Kalai and Lehrer (1994), although the results of Kalai and Lehrer are in an important sense equivalent to Blackwell and Dubins’s theorem.


The crucial assumption for merging of conditional probabilities is absolute continuity. The prior Q is absolutely continuous with respect to P if, for all A, Q[A] > 0 implies P[A] > 0; equivalently, if P assigns probability zero to a proposition, then Q needs to do so as well. Blackwell and Dubins’s theorem says that if Q is absolutely continuous with respect to P, then their conditional probabilities merge – that is, they eventually stay within any small preassigned value ε of each other uniformly in all measurable events with Q probability one. From the perspective of Q, then, beliefs will not differ very much from the beliefs of the other agent in the long run. The same is true from the perspective of P if, in addition, P is absolutely continuous with respect to Q. In this case, the two priors are mutually absolutely continuous, which means that the agents agree on the propositions that have probability zero. The theorem of Blackwell and Dubins does not assume that conditional probabilities converge to the truth – as a matter of fact, they need not converge at all. Blackwell and Dubins’s theorem is therefore a more general expression of the empiricist idea that prior opinions won’t matter given enough evidence; even if posteriors don’t converge to the truth, they will be increasingly similar. In addition to the qualifications I have mentioned in the context of convergence to the truth, we now have to consider the assumption of absolute continuity. Absolute continuity expresses a certain kind of agreement over which propositions have zero probability. This is an instance of the common theme in the literature on probabilism and disagreement mentioned earlier: that an agreement result always presupposes agreement at some other levels. The assumption that agents get the same evidence is another instance of this theme. Without these assumptions, merging of beliefs may fail.12 I will hold off further discussion of absolute continuity until the next section, where we will consider it in a more concrete context. There is only one point I’d like to emphasize right away: absolute continuity is a substantive assumption. On the space of all infinite binary sequences there is no probability measure that is absolutely continuous with respect to every other probability measure; in addition, there is no probability measure with respect to which every other measure is absolutely continuous. (Pop quiz: why? Hint: no probability measure can assign positive probability to every infinite binary sequence.) Hence there are no measures that would guarantee merging of opinions regardless of what others 12 For an example where absolute continuity doesn’t hold, see Huttegger (2015b).


believe. Moreover, it seems that absolute continuity cannot be derived from assumptions that are, prima facie, more plausible as general postulates, such as open-mindedness, which would require both agents to assign positive probability to any finite binary sequence.13 As a consequence, absolute continuity should be viewed as a special assumption that may, but need not, hold in a particular learning situation. This reveals a visible dent in the optimistic Peircean picture of scientific progress. From a classical Bayesian perspective, the setup of Blackwell and Dubins is a good model for some types of controlled scientific experiments. A failure of absolute continuity reveals a radical initial disagreement among rational agents, which might lead to a sustained long-run disagreement. On the bright side, though, the theorem of Blackwell and Dubins shows that agents will overcome initial disagreements that may be quite pronounced, as long as they are not too radical.
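For a concrete, if much weaker, illustration of merging, consider two agents with different but mutually absolutely continuous Beta priors over the coin's chance who condition on the same flips; their predictive probabilities for heads come and stay close. The priors and the true chance below are assumptions for the sketch, and the theorem of Blackwell and Dubins is far stronger, asserting uniform closeness over all measurable events.

```python
# Two agents with Beta(a, b) priors over the chance of heads update on the same
# simulated flips. The predictive probability of heads is (a + heads)/(a + b + n),
# and the two predictions approach each other as n grows. Priors and the true
# chance are illustrative.

import random

random.seed(2)

a1, b1 = 1.0, 1.0     # agent P: uniform prior over the chance
a2, b2 = 8.0, 2.0     # agent Q: initially expects mostly heads

true_chance = 0.3
n = 2000
heads = sum(1 for _ in range(n) if random.random() < true_chance)

p_pred = (a1 + heads) / (a1 + b1 + n)
q_pred = (a2 + heads) / (a2 + b2 + n)
print(abs(p_pred - q_pred))  # small after many observations
```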

8.3 Nash Equilibrium

The theorem of Blackwell and Dubins has important consequences for learning in games. This application is interesting in its own right, since, as we saw in Chapter 3, the theory of games provides an excellent training ground for abstract models of learning. However, it also serves to illustrate some important points about merging of beliefs. Let's consider Bayesian agents who maximize payoffs in a forward-looking manner. This is achieved by properly discounting future payoffs, allowing agents to evaluate the effect of strategy choices on the future. In Chapter 2 we saw that such a sophisticated Bayesian agent need not converge to playing the optimal action in a bandit problem. Recall the reason for this result: in order to obtain information about the payoffs of the bandit problem, the corresponding acts have to be chosen. Yet if, given the observations, a suboptimal act seems best because it just had a lucky streak, a Bayesian agent might stop choosing other acts and thus forego the opportunity to get more information about payoffs. As a result, posterior probabilities need not converge to nature's probabilities. The failure of Bayesian agents to converge to nature's true probability distribution over payoffs can be translated into the language of merging. In the previous section, we have considered the merging of two subjective priors, P and Q. If one probability, Q, instead represents the chances with

13 An example can be found in Huttegger (2015b).


which nature chooses states, then absolute continuity of Q with respect to P implies that subjective conditional probabilities merge with chance one to conditional chances. Now, in a bandit problem there is a non-negligible chance that the agent’s conditional probabilities do not merge to nature’s conditional chances. But the reason need not be a failure of absolute continuity – absolute continuity might hold while probabilities don’t merge. Non-merging is made possible by the structure of the bandit problem. Since nature’s probabilities are unknown, the agent does not know her payoff function.14 The lack of payoff information can get in the way of merging even when absolute continuity holds, thus constituting another factor that may prevent a long-run consensus. In contrast to bandit problems, applications of merging to game theory assume that players know their own utility functions. The applications concern one of the central questions in game theory, which we have already mentioned in Chapter 3, namely whether players converge to a Nash equilibrium as they repeatedly interact with each other. There are various senses of converging to a Nash equilibrium. In applying the theorem of Blackwell and Dubins to the problem of convergence to Nash equilibrium, Ehud Kalai and Ehud Lehrer invoke a kind of approximate convergence of players’ beliefs to something closely resembling a Nash equilibrium.15 It is worth examining Kalai and Lehrer’s result in more detail. They suppose that in an infinitely repeated game each player has a prior probability measure over infinite sequences of opponents’ choices. In addition, each player has a complete contingency plan – a strategy – of how to act after each possible finite history of play. The players’ choices are best responses to their beliefs about opponent play. Best responses are given by calculating expected utilities of discounted future payoffs with respect to probabilities conditional on the observed history of opponent choices. Together, these assumptions imply that the players’ strategies determine a unique probability measure over the infinitely repeated game. This measure is the true probability which controls the actual sequence of choices, assuming each player implements her strategy. In this setting the Blackwell and Dubins theorem entails the following. Suppose that the true probability measure is absolutely continuous with respect to a player’s prior; then that player’s conditional probabilities and the true conditional probabilities merge with chance one.16 Let us assume in addition that 14 As observed by Kalai and Lehrer (1993). 15 See Young (2004) for a thorough discussion of different notions of convergence to Nash

equilibrium. Kalai and Lehrer develop the results discussed here in Kalai and Lehrer (1993, 1994). 16 That is, they merge with respect to the true probability measure.


absolute continuity holds for all players. Then eventually, since each player’s sequence of choices is a best response to her subjective beliefs, their choices will resemble a Nash equilibrium of the game, again with chance one.17 What Kalai and Lehrer’s theorem says, in brief, is that rational Bayesian agents converge to a Nash equilibrium with probability one whenever the absolute continuity condition holds. Most criticisms of this result are accordingly directed against the assumption of absolute continuity. Absolute continuity requires beliefs to contain a “grain of truth” about the opponents’ strategic intentions, which begs the question in several important ways. We mentioned in the preceding section that there is no probability measure with respect to which every probability measure is absolutely continuous in an uncountable probability space. It follows that there are no subjective beliefs that would merge with the opponents’ true strategic intentions regardless of what those strategies are. The choice of a prior always opens up the possibility of failure to merge with a class of strategies. This is especially problematic in the context of game theory, where strategic considerations may make it hard to actually achieve absolute continuity.18 I do not wish to enter all the subtleties of the debate surrounding Kalai and Lehrer’s theorem. What is important about this debate for the larger topic of this chapter is that it highlights just how difficult it might be to achieve absolute continuity, the central assumption for results about long-run consensus. Do the strictures imposed by absolute continuity thus somehow undermine Bayesian rationality? A closer look at Kalai and Lehrer’s theorem makes one point clear: as it stands, the theorem says something about convergence from an external perspective where both the players’ beliefs and their strategies are known; that’s why it asserts merging with chance one. A player, however, does in general not know the other players’ strategies. Hence, from her perspective it is not clear whether the players’ beliefs and strategies will in fact merge even if all relevant absolute continuity requirements are satisfied. If the player contemplates the question of merging from her own epistemic standpoint, whether she thinks she’s going to learn the opponents’ strategic intentions depends on the degree to which she thinks the absolute continuity assumption holds. This is a type of higher-order uncertainty where players entertain beliefs over the other players’ strategies, in particular as to whether they are absolutely 17 For a precise statement and proof, see Kalai and Lehrer (1993). 18 See Nachbar (1997), Miller and Sanchirico (1999), Foster and Young (2001), and Young

(2004).


continuous relative to their own beliefs. The important point is that a failure of absolute continuity does not imply anything about the internal rationality of a player. A player can have fully considered and internally consistent beliefs while, in a higher-order model, assigning positive probability to the failure of absolute continuity and thus to a failure of merging. In this way, Kalai and Lehrer’s theorem illustrates a basic fact about inductive inference which we have encountered on several occasions in this book: that strong inductive conclusions require particular inductive assumptions at some other level. The strong inductive conclusion of merging follows from a strong inductive assumption, absolute continuity, which may or may not hold. It is up to the agent to make up her mind as to whether absolute continuity holds, and to draw the correct conclusions about merging. Thus, while results about merging depend on absolute continuity, the latter is only a special kind of inductive assumption and by no means affects the individual rationality of an agent. Another criticism of the Kalai and Lehrer result, due to Dean Foster and Peyton Young, is directed against the connection between Bayesian learning and expected utility maximization.19 Foster and Young consider a special, though significantly large, class of games, which they call uncertain almost-zero-sum games. Such a game is obtained by taking a zero-sum game and perturbing its payoffs with small terms chosen at random.20 Such games have no Nash equilibrium in pure strategies; that is to say, every Nash equilibrium is completely mixed, meaning that in equilibrium each strategy is chosen with positive probability. Assume now that an uncertain almost-zero-sum game is repeated infinitely often, and that each player is a Bayesian maximizer of discounted future payoffs, just as in Kalai and Lehrer’s setting. Foster and Young prove that, for almost all realizations of payoffs, at least one player is not a good predictor of the opponents’ strategies. Being a good predictor is not the same as the merging of beliefs; it says that the mean square error of the player’s prediction goes to zero. Moreover, the players’ choices are far from any Nash equilibrium (in a sense different from the one used by Kalai and Lehrer), even in the long run.21 This result tells us that, regardless of whether absolute continuity holds, perfectly rational Bayesian learners fail to converge to the set of Nash equilibria in a significant class of games. Examining the proof of this result 19 See Foster and Young (2001) and Young (2004). 20 The distribution may not be known to players. Thus, uncertain games are not the same as

games of incomplete information. 21 Foster and Young’s result holds not just for Bayesian learning, but for any predictive learning

rule that is perfectly rational.


reveals the crucial role being played by the assumption of perfect rationality. Perfect rationality requires players to choose a strict best response to their conditional beliefs. As a consequence, players only mix between strategies (choose more than one strategy with positive probability) if they are exactly indifferent between those strategies, which is almost never the case. Since all Nash equilibria in uncertain almost-zero-sum games are fully mixed, they cannot be robustly attained by perfectly rational agents. However, they could be attained if players are only almost perfectly rational. Almost perfect rationality requires that players choose strategies which are not best replies with some small but positive probability. This modification is very plausible if the players’ payoffs are somewhat uncertain in each period. An appropriate randomization of best responses could block Foster and Young’s result and might restore convergence to Nash equilibrium for Bayesian conditioning in the relevant situations where absolute continuity holds.

8.4 Merging and Probability Kinematics

So far we have seen that Bayesian rational learning leads to merging of opinions, assuming certain conditions hold. Something we have taken as given is that agents update by Bayesian conditioning. Radical probabilism maintains, however, that conditioning is too restrictive since it does not cover learning from uncertain evidence. This raises the question of whether convergence to the truth, merging of opinions, or something similar is possible when evidence is uncertain.22 In Chapter 5, we saw the radical probabilist's analogue to convergence to the truth, which is due to Brian Skyrms.23 Evidence is uncertain, and we model the learning situation in terms of an infinite sequence of black box learning experiences, X1, X2, . . ., representing an agent's sequence of degrees of belief regarding a proposition A. Dynamic consistency requires that X1, X2, . . . is a martingale; this can be shown by a dynamic Dutch book argument or by an argument from expected inaccuracy (see Chapters 5 and 6). Hence, the sequence of degrees of belief converges to a random limit, X, with probability one. The limit X is a maximally informed opinion, but it does not need to be the same as the indicator of A. This is to be expected, since uncertain evidence cannot in general conclusively establish whether or not a proposition is true. In this regard, the situation here is similar to the

22 See Huttegger (2015b), from where this material is drawn.
23 Skyrms (1997).


one mentioned above, in which the truth value of A is not fully captured by the σ-algebra of observations. Convergence to a maximally informed opinion is the most we can ask for in these situations. Given the radically amorphous nature of the black box learning model, it is quite remarkable that every rational learning process for A converges to a maximally informed opinion instead of being permanently unstable. Analogues to merging of beliefs for uncertain evidence are more complex than analogues to convergence to the truth, since we have to consider more than one agent. In order to make the problem more tractable, I am going to focus on probability kinematics. Recall from Chapter 5 that probability kinematics, or Jeffrey conditioning, is a generalization of Bayesian conditioning to updating on uncertain information about an observational partition. If the partition only has two elements, E and its negation Ē, probability kinematics assumes that learning only modifies the probabilities of E and Ē, leaving unaltered the conditional probabilities given the two observational propositions. As a consequence, the new probability of any proposition, A, in our probability space is given by the weighted average of its probabilities conditional on E and Ē, weighted by their new probabilities. Probability kinematics is thus sufficiently close to Bayesian conditioning that a sharp result on merging seems feasible. Let's first note how probability kinematics can be applied to our model of observing infinitely many coin flips. We start by observing whether the first coin flip turns up heads or tails. This generates a two-element partition of the set of all infinite coin flips, which describes our information after the first coin flip: the first element contains all sequences that start with heads, and the second element contains all sequences that start with tails. The information we have after the second flip of the coin is a four-element partition, consisting of those sequences that start with two heads, those that start with heads and tails, those that start with tails and heads, and those that start with two tails. The second partition clearly is a refinement of the first partition. For each subsequent trial n, there is a partition of 2^n elements describing the information we have after the first n flips of the coin, each element corresponding to a particular length-n sequence of coin flips. This observational process generates an infinite sequence of increasingly fine partitions. Suppose now that uncertain learning experiences only change the probabilities of elements of each trial's partition. Then probability kinematics is the proper learning model. Thus, for each proposition A about coin flips, the new probability of A at n is given by


Pn[A] = Σ_i P[A|Ei] Pn[Ei],    (8.2)

where i ranges over all elements of the nth partition, and Pn [Ei ] is the new probability of Ei . Assume now that we have two agents with priors P and Q who both update by probability kinematics (8.2). What we would like to know is whether, or when, the updated probabilities, Pn and Qn , come and stay close to each other in the sense of Blackwell and Dubins. The biggest obstacle for solving this problem is that uncertain evidence is conceptually much more complex than certain evidence. That two agents have the same certain evidence just means that they have observed the same proposition to be true. There is no similarly unequivocal sense of having the same uncertain evidence. In fact, there are at least two ways of capturing the notion of uncertain evidence within a probabilistic framework. Jim Joyce calls them hard Jeffrey shifts and soft Jeffrey shifts. They correspond to, roughly speaking, solid and fluid kinds of uncertain evidence.24 As we shall see, the distinction between soft and hard Jeffrey shifts has a real impact on merging of opinions. Let’s start with hard Jeffrey shifts. A hard Jeffrey shift sets values for the new probabilities of the elements of an observational partition independently of their old probabilities; that is to say, it overrides the information about the partition expressed by the prior probabilities. In our setting, the new probability assignments for the partition, Pn (Ei ), are assumed to come from a publicly observable information source. This allows different agents to update on the same probability assignments Pn (Ei ). An example of this is a mechanical measurement instrument that makes noisy observations of an underlying physical process. After each observation, it shows a probability distribution over increasingly refined partitions that are used by both our agents for revising their beliefs. Another example is IBM’s Watson software system, which can be used as an expert advisor in various contexts. A system like Watson returns answers to queries that are weighed by probabilities. These probability assignments might be used by agents as input for updating schemes like probability kinematics. In order to have a general picture in mind, we can think of hard Jeffrey shifts in terms of deferring to the probabilities announced by an expert. For agents to get the same uncertain evidence means, then, to defer to the same expert. Assume now that at each trial both agents update in terms of hard Jeffrey shifts, and denote the new probabilities for elements of the nth partition by 24 Joyce (2010).


Pn and Qn . If they get the same uncertain evidence, then Pn = Qn for each n. Provided that hard Jeffrey shifts are a good model of some kinds of uncertain evidence, this clearly is a plausible requirement for merging of opinions since agents who don’t have the same evidence (certain or uncertain) cannot in general hope to achieve a consensus by updating. Updating on hard Jeffrey shifts can lead to merging of opinions, but only under additional assumptions. The first says that degrees of belief for each fixed member of a partition form a martingale. That is to say, for each element E of the nth partition, the sequence of new probability assignments Pm (E), m ≥ n, is a martingale. A simple special case is when Pm (E) is constant for m ≥ n. This would require that information about the elements of a partition won’t change later in the process, whereas the martingale condition requires the information to not change on average. By what we have said about dynamic consistency, this is necessary whenever the sequence of hard Jeffrey shifts constitutes a genuine learning process. The second assumption also restricts the kind of future probability distributions over partitions that we may contemplate ex ante. It requires that, for each n, the new probabilities Pn over the nth partition are absolutely continuous with respect to the prior P. A violation of this condition would have an unwelcome epistemic consequence, namely that you think you will later assign positive probability to an event which has zero probability for you now. If you foresee this prior to making any observations, you effectively contemplate an event that has both zero and positive probability. Requiring absolute continuity of posteriors with respect to priors avoids this type of confusion. If you think that an event might have positive probability in the future, it already deserves some positive probability now. In fact, this assumption needs to be somewhat strengthened in order to prove our merging result for probability kinematics. A reasonable strengthening requires absolute continuity to be uniform over all contemplated posteriors. What this guarantees is that absolute continuity holds as we go to the limit. The rationale for this move is the same as the one for countable additivity mentioned earlier: if we think of the infinite limit as an approximation to large but finite sequences, limiting properties should hold uniformly.25 Together with the requirement that Q is absolutely continuous with respect to P, the foregoing assumptions provide the basis for proving an 25 The assumption of uniform absolute continuity is actually stronger than necessary for the

merging result, but the weaker sufficient condition is less intuitive. For a more thorough discussion of this assumption, see Huttegger (2015b, Section 4).


analogue of Blackwell and Dubins's theorem. The new theorem asserts that Pn and Qn come and stay close for large n uniformly in all events A with Q-probability one.26 This demonstrates that we can extend merging of opinions to hard Jeffrey shifts in cases that are relevantly similar to the classic merging result: those where priors are absolutely continuous and agents update rationally in response to the same uncertain evidence.

8.5 Divergence and Probability Kinematics

To a conditionalizer experience speaks with what Bas van Fraassen has called "the voice of an angel," rendering what is learned a clear and distinct truth.27 Both conditionalization and hard Jeffrey shifts are the kind of experiences Peirce refers to as "a force outside of" the investigators. For hard Jeffrey shifts are like the voice of an angel received through a noisy channel. They cause everyone who listens to shift probabilities on a partition in exactly the same manner. There is no room for interpreting the evidence differently. This is the sense in which both kinds of updating policies are based on solid evidence, even though in the case of hard Jeffrey shifts the solid evidence might be inconclusive. Soft Jeffrey shifts capture a more fluid kind of evidence. They allow people to interpret the same evidential inputs differently in the light of their prior. Consider the following example of a learning experience for a four-element partition E1, . . . , E4.28 Let the learning experience arise from probability kinematics on the partition with new probabilities given by

P1[E1] = 1/5, P1[E2] = 3/10, P1[E3] = 1/2, P1[E4] = 0.

This learning experience is a hard Jeffrey shift. Now examine a learning experience that is given by the following shift, which specifies the new probabilities for the elements of the partition as a function of their prior probabilities:

P1[E1] = 2·P[E1], P1[E2] = (1/2)·P[E2], P1[E3] = 5·P[E3], P1[E4] = 0·P[E4].

26 For details, see Huttegger (2015b, Theorems 5.1 and 9.2). There is a slight mistake in the

statement of Theorem 9.2, as well as Theorem 9.1, which both assume that the martingale condition holds almost surely, although it of course holds simpliciter. I am indebted to Michael Nielsen and Rush Stewart for pointing this out to me. 27 van Fraassen (1980). 28 The example is taken from Joyce (2010).


If the prior probabilities are P[E1] = 1/10, P[E2] = 3/5, P[E3] = 1/10, P[E4] = 1/5, then probability kinematics on the partition leads to the same numerical results as in the example of the hard Jeffrey shift. But there is a profound difference: the second approach only fixes the new-to-old probability ratio for each element of the partition. When combined with probability kinematics, this is called a soft Jeffrey shift. Soft Jeffrey shifts are a Bayesian model for how the interpretation of an evidential input may be influenced by one's prior. Since merging is only to be expected when agents have the same evidence, the most basic question is whether agents can have the same uncertain evidence in terms of soft Jeffrey shifts. Finding an answer requires us to take a little detour. Soft Jeffrey shifts play an important role in studying the so-called problem of the noncommutativity of probability kinematics.29 In the present context, commutativity means that an agent ends up with the same posterior probability regardless of the order of successive updates. Bayesian conditioning is commutative, but in general probability kinematics is not. This has caused some concerns as to whether probability kinematics is a rational way to update beliefs. I agree with Carl Wagner and Jim Joyce that these concerns are misguided.30 As Joyce points out, probability kinematics is noncommutative exactly when it should be – namely, when a belief revision overrides previous information. For hard Jeffrey shifts, this idea has been developed by Persi Diaconis and Sandy Zabell; for soft Jeffrey shifts, Wagner, extending results by Hartry Field and Richard Jeffrey, has shown that with an appropriate understanding of what it is to get the same uncertain evidence, probability kinematics is commutative.31 Wagner's argument is relevant for us because it gives rise to a notion of having the same soft uncertain evidence. The new notion is based on a numerical measure of what is being learned in a learning event which isolates the new information you get by factoring out your prior. For conditioning, a suitable measure is the likelihood ratio.32 More generally, one can use Bayes factors. If E and F are two events, the Bayes factor of E and F is given by the ratio of their new odds at time n and their old odds from time n − 1:

(Pn[E] / Pn[F]) / (Pn−1[E] / Pn−1[F]).

29 See Diaconis and Zabell (1982) and Wagner (2002). 30 See Döring (1999), Lange (2000), Wagner (2002), Kelly (2008) and Joyce (2010). 31 See Diaconis and Zabell (1982), Wagner (2002), Field (1978), and Jeffrey (1988). 32 Good (1950, 1983).


Being ratios of new-to-old odds, Bayes factors can be formulated in terms of relevance quotients. The relevance quotient for E is

$$\frac{P_n[E]}{P_{n-1}[E]}.$$

Thus, the Bayes factor of E and F is equal to the ratio of their relevance quotients. A soft Jeffrey shift on a partition specifies the relevance quotient for each element E of that partition up to multiplication by a positive constant. This means that the new probability of E depends on its old probability:

$$P_n[E] = c_n\,\beta(E)\,P_{n-1}[E].$$

Hartry Field has used soft Jeffrey shifts for his reformulation of probability kinematics.33 The numbers β(E) specify a Field shift. Any Field shift is accompanied by a soft Jeffrey shift.34 Under Field shifts, successive belief revisions are commutative. For probability kinematics, this leads to criteria of commutativity in terms of certain Bayes factor identities.35 Suppose that an agent successively revises her beliefs by probability kinematics on two partitions, and consider the Bayes factors for the first shift in a given order of revisions. In order to have commutativity, these Bayes factors must be identical to the Bayes factors for the second shift in the reversed order. This says that probability kinematics is commutative if these two learning events are the same up to the agent's probability prior to each learning event. We can therefore think of Bayes factors as representing the uncertain evidence that is not already contained in the prior.

Bayes factors thus provide us with another notion of getting the same uncertain evidence, this time a partial characterization of a soft kind of uncertain evidence. In accordance with this notion, let us stipulate that whenever two agents get the same soft uncertain evidence with respect to a partition, their Bayes factors for all pairs of events of that partition are the same. This is equivalent to saying that all their relevance quotients for a partition are equal up to multiplication by a positive constant.

33 Field (1978).
34 Given by
$$P_n[E] = \frac{\beta(E)\,P_{n-1}[E]}{\sum_{E \in \mathcal{E}_n} \beta(E)\,P_{n-1}[E]}.$$
35 Wagner (2002).


Does merging of opinions hold for soft uncertain evidence? In general, the answer is no. In the present setting, two agents whose beliefs satisfy our earlier requirements (in particular, absolute continuity and the martingale condition) can have divergent posterior probabilities with increasing information.36 In the example of flipping a coin infinitely often, the story is this: two prior beliefs are being updated by probability kinematics on increasingly refined partitions. The evidence is soft – the Bayes factors of the agents' new probabilities are the same. In this situation, prior beliefs may be such that Jeffrey conditioning leads to updated probabilities of propositions about coin flips that are far apart with positive probability, and remain so even in the long run.

If you think about it, this result is rather intuitive. The Bayes factor criterion expresses a fluid kind of evidence, since the constraints it puts on the new probability distribution are quite loose and allow uncertain evidence to be interpreted in more than just one way; this is what sets it apart from hard Jeffrey shifts. Soft evidence leads to divergent beliefs that have a lasting effect in the agents' belief dynamics because differences in interpretation may never be resolved. The important lesson is that sustained long-run disagreement and individually rational learning from the same uncertain evidence are compatible in this case, even though our benign assumptions (absolute continuity, same increasing evidence) are in place.

8.6 Alternative Approaches

The result of the previous section is not an accident. Two independent approaches to updating on uncertain information lead to the same conclusion, providing us with additional reasons to think that rational learning and divergence of opinions can go hand in hand.

Mark Schervish and Teddy Seidenfeld have proved some results that are relevant for our discussion.37 Schervish and Seidenfeld don't assume that agents update by Bayesian conditioning; instead, posteriors are conditional probabilities chosen from a given set of probability measures. How large this set is depends on what the admissible responses to the same evidence are. If the criteria of admissibility are rather restrictive – corresponding to solid evidence – then posterior probabilities merge with increasing information. However, consensus is not guaranteed for any set of admissible probability measures. Loose criteria of admissibility correspond to updating on fluid evidence, where opinions may not merge.38

36 See Huttegger (2015b, Section 6).
37 Schervish and Seidenfeld (1990).
38 A sufficient condition for merging is that the set of probability measures is closed, convex, generated by finitely many extreme measures such that all measures in that set are mutually absolutely continuous. If the extreme points of the set are only weak-star compact, then there is no assurance of consensus.


The economists Daron Acemoglu, Victor Chernozhukov, and Muhamet Yildiz have studied a different model of learning from uncertain evidence.39 In their model, underlying states cannot be observed directly but only through noisy signals. The reliability of a signal is expressed by the likelihood that the signal is observed, supposing the state it indicates has in fact occurred. Besides having potentially unreliable signals, there is an additional layer of uncertainty: agents have subjective probability distributions over reliabilities (likelihoods). This higher-order uncertainty is what allows agents to interpret evidence in different ways. For example, signals that are observed with high frequency, but are thought to be unreliable, may indicate that the corresponding state is not really very frequent. On the other hand, a reliable but infrequent signal may point to a rather frequent state. Acemoglu, Chernozhukov, and Yildiz prove several results that are relevant for us. Suppose signals are generally reliable. Then each agent converges to the truth, and their opinions merge. However, if signal reliability is sufficiently uncertain, then both convergence to the truth and merging of opinions fail; indeed, the prior probability of these events is equal to zero. Once again, the source of these results is that evidence, though shared among agents, is too soft and uncertain to serve as a foundation for establishing long-run consensus.

39 Acemoglu et al. (2016).

8.7 Rational Disagreement

Convergence to the truth and merging of opinions are both aspects of the problem of induction. They demonstrate that consistency sometimes requires us to think that differences in prior probabilities will be insignificant in the limit. This robustness with respect to prior probabilities indicates the belief in an underlying reality that will be unraveled by a sufficiently thorough inductive investigation. Convergence and merging are also aspects of the debate on objective priors. One road to objective priors proceeds from the view that there is a uniquely rational response to any piece of evidence (recall the Harsanyi doctrine, which requires differences in prior probabilities to come from differences in prior evidence). While this seems hardly plausible when there is little prior evidence, one could try to argue that probabilities are almost unique when the amount of shared evidence is very large.


Merging of opinions might, on this view, serve as a basis for common priors. In other words, agents can be assumed to have a common prior if their probability assignments are based on a large system of shared evidence. Although several models confirm this Peircean story, we have just seen that it comes with significant qualifications. First of all, in the setting of Blackwell and Dubins, merging might require a lot of evidence; just how much depends on the specifics of the epistemic situation. In the standard case of having exchangeable probabilities over infinitely many coin flips, conditional probabilities usually merge rather quickly. In the absence of special assumptions, merging might take awfully long. In fact, it might take so long that the merging result is moot for practical purposes. Absolute continuity is of course another restrictive assumption.

Besides absolute continuity, the nature of evidence and information has turned out to have a profound impact on the dynamics of beliefs. Uncertain evidence may yield merging of opinions as long as it is of a particular kind, which we have called "solid." What this means is that there is effectively no room for interpreting the evidence. Otherwise, uncertain evidence is "fluid" and puts only rather loose constraints on the set of admissible posteriors. With the latter kind of evidence, rational learning may not lead to merging of opinions.

The bottom line is that in a radical probabilist framework merging and convergence obtain only under fairly regulated circumstances. Why, then, shouldn't our response be: so much the worse for the radical probabilist? If probabilistic methods cannot guarantee that we learn the truth about any domain of the real world, not even in an idealized limit, why not abandon these methods and adopt a different approach? The epistemology underlying this critique of radical probabilism takes reliability – in the sense of employing methods that guarantee convergence to the truth – as a necessary condition for rational inquiry.40 Radical probabilism, on the other hand, is based on rational degrees of belief. As repeatedly emphasized in this book, there are two general aspects about the rationality of degrees of belief: first, they are an agent's internally consistent best estimates of truth values; and second, new information is incorporated consistently into one's body of beliefs in order to have better estimates of truth values. So the notion of rationality at work here is not "teleological," as with reliability; what is emphasized is the here and now and how to take the next step.

40 One example is Kevin Kelly's formal learning theory (Kelly, 1996).


So reliability looks at rational inquiry from a future perspective, whereas probabilists approach it from a present perspective. This does not mean that the two perspectives are completely divorced. In fact, we have seen in this chapter that probabilistic methods often are reliable as a matter of consistency. The take home message is, in my view, that probabilistic learning from experience is reliable in precisely those cases where it should be reliable. The assumptions of convergence and merging theorems capture features of circumstances where any reasonable learning method should be reliable because the information about the process captures the reality under investigation, at least in the long run. This is the kind of situation we presumably strive to attain in the sciences, where a lot of energy goes into obtaining solid and robust evidence for controlled observational setups. Here, Bayesian theorems on merging and convergence are at home. But evidence comes in different guises and degrees. In particular, it can be uncertain and open to different interpretations. In those situations probabilistic learning can fail to lead to a consensus, as would any other method for learning from experience that is rich enough to encompass such epistemic situations. There is no reason why we should expect rational learning to be capable of perfectly reflecting an underlying reality under these circumstances. Thus, although it is critically important in many specific situations, reliability is too strong to serve as a rationality requirement in general. The failure of convergence and merging does not show there is anything inherently wrong with our probabilistic foundations of rational learning. What it does show is that learning, despite being rational, cannot lead to ideal results if the epistemic circumstances don't allow it.

Let me sum up, then. The point of view I have been developing in this book leads to models of rational learning that can be truly open-ended. At no point in a learning process do we necessarily have agreement among rational learners. Interpersonal agreement is reserved for special occasions, such as particularly well-designed scientific investigations. However, even under otherwise favorable circumstances a soft kind of information allows individual rationality to be consistent with sustained disagreement. With this in mind, we might approach areas where disagreements among well-informed and qualified people abound with an attitude of epistemic tolerance.41 Given the background of the admittedly simple but useful model of soft Jeffrey shifts, this kind of disagreement is to be expected.

41 In a well-known essay, van Inwagen (1996) discusses three fields – philosophy, politics, and religion: a list which could certainly be extended.


Jeffrey’s model at least partly resolves what is puzzling about many situations where individuals acknowledge each others’ rationality in the face of disagreement while maintaining their own beliefs. For the radical probabilist, there is nothing intrinsically wrong with this, except under the special conditions where consensus results apply.


Appendix A

Inductive Logic

This appendix summarizes how the Johnson–Carnap continuum of inductive methods can be derived from first principles. In addition, we are going to apply it to bandit problems.

A.1 The Johnson–Carnap Continuum of Inductive Methods

Suppose that X1, X2, . . . , Xn is a finite sequence of random variables. Each Xi takes values in a set {1, . . . , t} of observable events, where t is assumed to be countable. The number of random variables n may be finite or infinite. The finite sequence X1, . . . , Xn is called exchangeable if

$$P[X_1 = x_1, \ldots, X_n = x_n] = P[X_1 = x_{\sigma(1)}, \ldots, X_n = x_{\sigma(n)}] \qquad \text{(A.1)}$$

for each permutation σ of the integers from 1 to n. An infinite sequence X1, X2, . . . is said to be exchangeable if (A.1) holds for all of its finite initial segments.1

Let X1, . . . , Xm+1 be a finite sequence of random variables as above, and let mi denote the number of times i ∈ {1, . . . , t} has been observed in the first m trials X1, . . . , Xm. Then Johnson's sufficientness postulate holds if, for all i and mi,

$$P[X_{m+1} = i \mid X_1, \ldots, X_m] = f_i(m_i, m). \qquad \text{(A.2)}$$

The function f assigns a probability to each triplet i, mi, m. It is also common to assume that every sequence of observations has positive probability, a condition which is known as regularity:

$$P[X_1 = x_1, \ldots, X_m = x_m] > 0 \quad \text{for all combinations of } x_1, \ldots, x_m. \qquad \text{(A.3)}$$

The Johnson–Carnap continuum of inductive methods follows from exchangeability, Johnson's sufficientness postulate, and regularity.

1 Strictly speaking, exchangeability refers to the probability measure P and not to the random variables.


More precisely, suppose that X1, . . . , Xn, n ≥ 3 (the sequence may be infinite), is an exchangeable sequence of random variables such that t ≥ 3. Suppose also that for every m < n both (A.2) and (A.3) hold. If the Xi are not independent, then there exist nonzero constants αi, which are either all negative or all positive, such that m + Σj αj is nonzero and

$$P[X_{m+1} = i \mid X_1, \ldots, X_m] = \frac{m_i + \alpha_i}{m + \sum_j \alpha_j} \qquad \text{(A.4)}$$

for each m < n and i ∈ {1, . . . , t}. Johnson's posthumously published proof of this result contained some quirks, which were fixed by Sandy Zabell.2 Carnap's approach, though slightly different, is essentially equivalent.3 The observations are assumed to be probabilistically dependent since otherwise one cannot learn from experience. If one wants to include the case of independent observations, the resulting rule of succession ignores observed frequencies.

Here is a brief outline of the structure of the elegant Johnson–Zabell proof. There are two key lemmata. The first follows without assuming exchangeability. It says that predictive probabilities are given by a linear function of the ni: If t ≥ 3 and (A.2) as well as (A.3) hold, then there exist constants ai and b such that for all i ∈ {1, . . . , t},

$$f_i(n_i, n) = a_i + b n_i. \qquad \text{(A.5)}$$

Johnson's sufficientness postulate is empty if t = 2. In this case, linearity could be assumed right away. We referred to this lemma in Chapter 7 in the context of deriving approximate straight averaging. Notice that if b = 0, then Xn+1 is independent of X1, . . . , Xn. In this case no learning from experience is possible. Assuming that trials are not independent (as in the main result) implies that b ≠ 0. By setting b = (1 − Σj aj)/n and αi = ai/b it can be shown that

$$f_i(n_i, n) = a_i + b n_i = \frac{n_i + \alpha_i}{n + \sum_j \alpha_j}.$$

This already has the form of a rule of succession. But it still needs to be established that the parameters αi do not depend on n (n plays a crucial role in the proof of the first lemma, where ai and b are both functions of n). This is achieved by the second lemma: Suppose that X1, . . . , Xn+1, Xn+2, n ≥ 1 is an exchangeable sequence such that both (A.3) and (A.5) hold for n and n + 1.

2 Zabell (1982).
3 See Kuipers (1978) for an excellent account of Carnap's proof.


(i) If bn · bn+1 = 0, then bn = bn+1 = 0.
(ii) If bn · bn+1 ≠ 0, then bn · bn+1 > 0 and αin = αi,n+1 for all i.

Hence, trials are either independent, or the α parameters do not depend on the number of trials. The key fact used in the proof of the second lemma is a consequence of exchangeability:

$$P[X_{n+1} = i, X_{n+2} = j \mid X_1, \ldots, X_n] = P[X_{n+1} = j, X_{n+2} = i \mid X_1, \ldots, X_n].$$

In words, given past observations, the order of the next two observations is probabilistically irrelevant.4 The two lemmata together immediately imply the main result.
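As a small numerical illustration of the continuum of inductive methods, the following Python sketch (not from the book; the outcome counts and parameter values are arbitrary) computes the predictive probabilities (A.4) for a three-outcome sequence.

```python
# Illustrative sketch of the Johnson–Carnap rule of succession (A.4):
# P[X_{m+1} = i | data] = (m_i + alpha_i) / (m + sum_j alpha_j).

def predictive_probability(counts, alphas, i):
    """Predictive probability that outcome i occurs next, given the observed
    counts and the alpha parameters of the continuum."""
    m = sum(counts)
    return (counts[i] + alphas[i]) / (m + sum(alphas))

alphas = [1.0, 1.0, 1.0]                 # symmetric parameters alpha_i = 1
observations = [0, 0, 2, 0, 1]           # an arbitrary sequence of outcomes
counts = [observations.count(i) for i in range(3)]   # m_i = [3, 1, 1]

for i in range(3):
    print(i, predictive_probability(counts, alphas, i))
# With m = 5 and sum(alphas) = 3 this prints 4/8, 2/8, and 2/8.
```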

A.2 De Finetti Representation

The more traditional Bayesian approach to generalized rules of succession is based on de Finetti's representation theorem. If X1, X2, . . . is an infinite exchangeable sequence and the number of values t is finite, then de Finetti's theorem asserts that there exists a unique chance prior μ such that for any n,

$$P[X_1 = x_1, \ldots, X_n = x_n] = \int p_1^{n_1} \cdots p_t^{n_t}\, d\mu(p_1, \ldots, p_t).$$

The integral ranges over the set of possible chances p1, . . . , pt, and ni denotes the number of times outcome i appears in the sequence x1, . . . , xn. That the chance prior is a Dirichlet distribution means that dμ is equal to

$$\frac{\Gamma\!\left(\sum_{j=1}^{t} \alpha_j\right)}{\prod_{j=1}^{t} \Gamma(\alpha_j)}\, p_1^{\alpha_1 - 1} \cdots p_t^{\alpha_t - 1}\, dp_1 \cdots dp_{t-1}.$$

(Here Γ is the gamma function; i.e. if k is a positive integer, then Γ(k + 1) = k!.) In this case, the conditional probability P[Xn+1 = i | X1, . . . , Xn] is given by (A.4).

If X1, . . . , Xn is a finite exchangeable sequence taking on a finite number of values, t, there is an important result known as de Finetti's representation theorem for finite sequences of random variables.5

4 This property together with Johnson's sufficientness postulate implies exchangeability (Kuipers, 1978).
5 See Diaconis (1977), Diaconis and Freedman (1980a), Aldous (1985) and Zabell (1989).


Given n1, . . . , nt, the sequence X1, . . . , Xn has the same probability as drawing at random without replacement from an urn that contains balls labelled 1, . . . , t with frequencies n1, . . . , nt, respectively. These draws have a multivariate hypergeometric distribution Hn1,...,nt. Moreover,

$$P[X_1 = x_1, \ldots, X_n = x_n] = \sum_{n_1, \ldots, n_t} P[X_1 = x_1, \ldots, X_n = x_n \mid n_1, \ldots, n_t]\, P[n_1, \ldots, n_t] = \sum_{n_1, \ldots, n_t} P[n_1, \ldots, n_t]\, H_{n_1, \ldots, n_t}.$$

Hence, the probability P can be viewed as a mixture of multivariate hypergeometric probabilities, the mixing weights being the prior joint probabilities P[n1, . . . , nt] of outcome frequencies after n draws. In particular, if we take

$$P[n_1, \ldots, n_t] = \binom{n}{n_1, \ldots, n_t} \int p_1^{n_1} \cdots p_t^{n_t}\, \frac{\Gamma\!\left(\sum_{j=1}^{t} \alpha_j\right)}{\prod_{j=1}^{t} \Gamma(\alpha_j)}\, p_1^{\alpha_1 - 1} \cdots p_t^{\alpha_t - 1}\, dp_1 \cdots dp_{t-1},$$

then a generalized rule of succession can be derived for finite exchangeable sequences. This follows because the finite exchangeable sequence has the same probabilistic structure as the corresponding initial segment of an infinite exchangeable sequence. For other values of P[n1 , . . . , nt ], the resulting rules may be different. Suppose, for instance, that P[n1 , . . . , nt ] = 1 for a certain set of observed frequencies n1 , . . . , nt . Then the sequence of random variables corresponds to choosing from an urn without replacement. The corresponding rule of succession has negative alpha parameters, a case which is covered by the Johnson-Zabell result. Negative alpha parameters are interesting because they allow for a kind of inductive inference that is contrary to classical enumerative induction: it allows, say, choosing red balls from a finite urn to make the observation of more red balls less probable.
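The connection between the Dirichlet chance prior and the rule of succession can also be checked numerically. The sketch below (an illustration under my own assumptions: numpy is available, and the parameters and data are arbitrary) compares the closed-form predictive probabilities (A.4) with a Monte Carlo estimate obtained directly from the de Finetti representation.

```python
# Illustrative check (assumes numpy; parameters and data are arbitrary) that the
# Dirichlet mixture of i.i.d. chances reproduces the predictive rule (A.4).
import numpy as np

rng = np.random.default_rng(0)
alphas = np.array([2.0, 1.0, 3.0])
data = [0, 2, 2, 1, 2]                              # an arbitrary observed sequence
counts = np.bincount(data, minlength=3)

# Closed form (A.4): (n_i + alpha_i) / (n + sum_j alpha_j).
closed_form = (counts + alphas) / (len(data) + alphas.sum())

# Monte Carlo over the chance prior: draw p ~ Dirichlet(alphas), weight each draw
# by the likelihood of the data, and take the weighted average of p.
p = rng.dirichlet(alphas, size=200_000)
likelihood = np.prod(p ** counts, axis=1)
monte_carlo = (p * likelihood[:, None]).sum(axis=0) / likelihood.sum()

print(closed_form)     # [0.2727..., 0.1818..., 0.5454...]
print(monte_carlo)     # approximately the same values
```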

A.3 Bandit Problems

We consider the two-armed slot machine introduced in Chapter 2. Let the left arm have success probability p, and let the right arm have success probability q. We assume that p > q. Let si be the number of successes of arm i and ni the total number of times i has been chosen thus far. For arm i, average reinforcement learning uses the following update rule:


$$\frac{s_i + \alpha_i^s}{n_i + \alpha_i^s + \alpha_i^f}. \qquad \text{(A.6)}$$

This is the probability of obtaining a success with arm i given all past observations. The parameters α_i^s and α_i^f are positive and determine the agent's prior probabilities for having a success or a failure when choosing i. Let the agent's choices be given by a perturbed best response rule. More precisely, at trial n + 1 the agent chooses an arm that maximizes expected payoffs relative to the predictive probabilities after having made n observations with probability 1 − ε, and the other arm with probability ε > 0. Call this an ε-perturbed best response rule.

We prove two propositions that are very simple consequences of Lévy's extension of the second Borel–Cantelli lemma.6 Let F1, F2, . . . be an increasing sequence of σ-algebras on some underlying space of possibilities Ω. Let A1, A2, . . . be a sequence of events such that An ∈ Fn for all n. Then, P[Ak | Fk−1] denotes the conditional probability that Ak occurs given the σ-algebra Fk−1. This probability is a random variable since its value generally depends on which element in Ω is true. The generalized Borel–Cantelli lemma tells us when infinitely many An occur:

Lemma 1 Let F1, F2, . . . be an increasing sequence of σ-algebras and A1, A2, . . . a sequence of events, where An ∈ Fn for all n. Then almost surely infinitely many An occur iff

$$\sum_{k=1}^{\infty} P[A_k \mid \mathcal{F}_{k-1}] = \infty.$$

Lemma 1 can be paraphrased as follows. Take some ω ∈ Ω. Then ω is, or is not, a member of the set {ω | ω ∈ An infinitely often} and of the set

$$\left\{ \omega \;\middle|\; \sum_{k=1}^{\infty} P[A_k \mid \mathcal{F}_{k-1}](\omega) = \infty \right\}.$$

6 E.g., Durrett (1996, p. 240).


Lemma 1 asserts that the set of those ω that are an element of one but not the other set has probability zero.

In the bandit problem, let Ω be the set of all infinite histories that record (i) the agent's choice and (ii) whether it resulted in a success or a failure. Let Fn be the algebra of events describing the first n trials of this process. Let Ln be the event that the left arm is chosen on trial n, and Rn the event that the right arm is chosen. Let pn be the predictive success probabilities (A.6) for the left arm, and qn the corresponding success probabilities for the right arm. Notice that the fictitious play process for bandit problems is the same as what we called "average reinforcement learning" in Chapter 2.

Proposition 1 If an average reinforcement learner uses an ε-perturbed best response rule, then pn → p and qn → q with probability one.

Proof It is obvious that P(Ln | Fn−1)(ω) ≥ min{ε, 1 − ε} for all n and for all ω. This implies

$$\sum_{k=1}^{\infty} P(L_k \mid \mathcal{F}_{k-1})(\omega) = \infty$$

for all ω. Hence, by Lemma 1, Ln occurs infinitely often almost surely. Let t1, t2, . . . be the random times where Ln occurs. Consider the sequence of successes and failures along this sequence. That is, set Yn equal to the outcome at time tn. Then Y1, Y2, . . . is almost surely an infinite sequence. It is well known that inductive logic converges to p with chance 1.7 An analogous argument shows that Rn occurs infinitely often with probability one and that qn → q almost surely.

7 E.g., Skyrms (1991).

Proposition 1 asserts that the perturbed average reinforcement learner is, with probability one, epistemically successful in the limit. If ε is small, she will also choose the left arm most, but not all of the time; the right arm is chosen with nonvanishing probability. One can improve on this result by introducing decreasing ε-perturbed best response rules. A decreasing ε-perturbed best response rule is one where the agent chooses the arm that is not a best response at time n with probability εn, where the sequence of perturbation probabilities satisfies the following two requirements:


$$\sum_{n=1}^{\infty} \varepsilon_n = \infty \quad \text{and} \quad \varepsilon_n \to 0.$$

An example of such a sequence is εn = 1/n.

Proposition 2 If an average reinforcement learner uses a decreasing ε-perturbed best response rule, then pn → p and qn → q almost surely. Furthermore, the left arm is chosen almost surely in the limit.

Proof If the average reinforcement learner uses a decreasing ε-perturbed best response rule, then she chooses the arm with higher expected payoff in period n with probability 1 − εn and the other arm with probability εn. Without loss of generality we can suppose that εn < 1 − εn for all n, since this happens eventually. It follows that P(Ln | Fn−1)(ω) ≥ εn for all n and for all ω. Thus,

$$\sum_{k=1}^{\infty} P(L_k \mid \mathcal{F}_{k-1})(\omega) \ge \sum_{k=1}^{\infty} \varepsilon_k = \infty$$

for all ω. Hence, by Lemma 1, the left arm is chosen infinitely often almost surely. The same argument works for Rn. That the predictive probabilities converge now follows as in the proof of Proposition 1. Since εn goes to zero, this implies that the left arm will be chosen almost surely in the limit.

The requirement that the sum of the perturbation probabilities diverges is crucial. If, for example, εn = 1/n², then

$$\sum_{k=1}^{\infty} \varepsilon_k < \infty.$$

Here, the left arm might be chosen only finitely often with positive probability, in which case the conclusions of Proposition 2 do not hold.
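The following Python simulation (not from the book; the success probabilities, prior parameters, and number of trials are arbitrary choices) illustrates the mechanics behind Propositions 1 and 2: with a constant ε both predictive probabilities settle near p and q, and with εn = 1/n the left arm is in addition chosen on almost every trial.

```python
# Illustrative simulation (not from the book) of Propositions 1 and 2: average
# reinforcement learning with an epsilon-perturbed best response rule in a
# two-armed bandit. Success probabilities and priors are arbitrary choices.
import random

def simulate(trials=50_000, p=0.7, q=0.4, decreasing=False, seed=1):
    random.seed(seed)
    succ = [1.0, 1.0]    # alpha_i^s plus observed successes, arms 0 (left), 1 (right)
    fail = [1.0, 1.0]    # alpha_i^f plus observed failures
    chance = [p, q]
    left_choices = 0
    for n in range(1, trials + 1):
        eps = 1.0 / n if decreasing else 0.05
        estimates = [succ[i] / (succ[i] + fail[i]) for i in (0, 1)]   # rule (A.6)
        best = 0 if estimates[0] >= estimates[1] else 1
        arm = best if random.random() > eps else 1 - best
        left_choices += (arm == 0)
        if random.random() < chance[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    estimates = [succ[i] / (succ[i] + fail[i]) for i in (0, 1)]
    return estimates, left_choices / trials

print(simulate(decreasing=False))  # estimates near (0.7, 0.4); left arm on roughly 95% of trials
print(simulate(decreasing=True))   # left arm on almost every trial; the rarely sampled right
                                   # arm's estimate converges only slowly, as the proof suggests
```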


Appendix B

Partial Exchangeability

In this appendix, I review a notion of partial exchangeability due to de Finetti,1 with special attention being paid to how it can be applied to learning procedures.

B.1 Partial Exchangeability

De Finetti introduced the concept of partial exchangeability in order to make the Bayesian theory of inductive inference applicable to situations in which order invariance fails. In Chapter 3, we saw one example, Markov exchangeability. In Chapter 2, I briefly mentioned another kind of example and simply referred to it as "partial exchangeability." According to this kind of probabilistic symmetry, outcomes can be of different types, and the predictive probabilities of outcomes can be affected by analogical influences between types.2

Mathematically, let Xni be the nth outcome of type i and N = Σi Ni the total number of observations, where Nj is the total number of type j outcomes observed thus far. We assume there are t types and s outcomes, both t and s being finite. Observations are represented by an array of t rows:

$$\begin{array}{ccc} X_{11} & \cdots & X_{N_1,1} \\ \vdots & \ddots & \vdots \\ X_{1t} & \cdots & X_{N_t,t} \end{array} \qquad \text{(B.1)}$$

Such an array (more precisely, the probability distribution defined over the array) is partially exchangeable if it is exchangeable within each type. That is to say, if nij is the number of times an outcome j of type i has been observed in the first N trials, then the array (B.1) is partially exchangeable if every array with the same counts nij has the same probability.

1 de Finetti (1938, 1959).
2 In Huttegger (2016), I study an analogical inductive logic based on the idea of partial exchangeability.


The numbers nij are a sufficient statistic. An infinite array, for which Ni → ∞, 1 ≤ i ≤ t, is partially exchangeable if every finite initial array is partially exchangeable. As an example, consider two people flipping a coin. There are two outcomes (heads, tails) and two types (one person flips the coin, the other person flips the coin). If you think that one person has the ability to effectively determine the outcome of the coin flip and the other one lacks this ability, coin flips are not exchangeable, but partially exchangeable.
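A quick way to see what within-type exchangeability amounts to is the two-flipper example itself. The Python sketch below (illustrative only; the names and biases are made up) assigns a chance of heads to each person and checks that permuting one person's outcomes leaves the probability of the whole array unchanged.

```python
# Illustrative sketch (not from the book) of the two-flipper example: with a
# separate chance of heads for each person, the probability of an array of
# outcomes depends only on the within-type counts.
from itertools import permutations

bias = {"Ann": 0.9, "Bob": 0.5}        # hypothetical chances of heads

def array_probability(flips):
    """flips: list of (person, outcome) pairs with outcome 1 = heads, 0 = tails."""
    prob = 1.0
    for person, outcome in flips:
        prob *= bias[person] if outcome == 1 else 1 - bias[person]
    return prob

ann, bob = [1, 1, 0], [0, 1]
# Permuting Ann's outcomes while holding Bob's fixed never changes the probability.
probs = {round(array_probability([("Ann", x) for x in perm] +
                                 [("Bob", x) for x in bob]), 12)
         for perm in permutations(ann)}
print(probs)    # a single value: the array is exchangeable within each type
```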

B.2 Representations of Partially Exchangeable Arrays

There is a representation theorem for partially exchangeable arrays due to de Finetti that proceeds along the same lines as his theorem for exchangeable sequences. The underlying chance setup also involves independent trials, but instead of being identically distributed they have a common distribution only within a type. De Finetti's representation theorem says that every partially exchangeable probability measure is a mixture of this chance setup.3 More formally, suppose each type occurs infinitely often (Ni → ∞, 1 ≤ i ≤ t), and let s be the number of outcomes. Then there exists a unique measure μ such that for all 0 ≤ nij ≤ Ni, i = 1, . . . , t, j = 1, . . . , s,

$$P[X_{11} = x_{11}, \ldots, X_{N_1,1} = x_{N_1,1}; \ldots; X_{1t} = x_{1t}, \ldots, X_{N_t,t} = x_{N_t,t}] = \int_{\Delta^t} \prod_{i=1}^{t} p_{i1}^{n_{i1}} \cdots p_{is}^{n_{is}}\, d\mu(p_1, \ldots, p_t). \qquad \text{(B.2)}$$

The integral ranges over the t-fold product, Δt, of the (s − 1)-dimensional unit simplex, Δ, and μ is the mixing measure on the probability vectors pi = (pi1, . . . , pis) ∈ Δ, 1 ≤ i ≤ t.

Here is one way to think about a partially exchangeable process. First, choose chances for each outcome of each type according to the prior probability μ. Since μ is not required to make chances independent, considerations of similarity may enter at this stage. Second, following a fixed sequence of types in which each type occurs infinitely often, choose outcomes independently for each type according to the common distribution that was determined in the first stage. By (B.2), the resulting array is partially exchangeable.

3 See de Finetti (1938, 1959), Link (1980), and Diaconis and Freedman (1980a).


Returning to our example, we could choose a given bias for heads for one person who is flipping the coin and another bias for the other person. We then flip the coin independently, the bias depending on the person who flips the coin at a given trial.

The most important case for us is where chance parameters of different types are independent of each other, that is, the mixing measure dμ is a product measure dμ1 × · · · × dμt. In this case the representation (B.2) simplifies to

$$\prod_{i=1}^{t} \int_{\Delta} p_{i1}^{n_{i1}} \cdots p_{is}^{n_{is}}\, d\mu_i(p_i). \qquad \text{(B.3)}$$

B.3 Average Reinforcement Learning

Recall that in Chapter 2 we briefly mentioned the model of average reinforcement learning, which uses the average payoff obtained by an act, Ai, as an estimate of that act's value:

$$\sum_j \pi_j\, \frac{n_{ij} + \alpha_{ij}}{n_i + \sum_j \alpha_{ij}}.$$

Here, the sum ranges over a finite number of possible payoffs, πj. This expression is the expected value of payoffs with respect to the following predictive probabilities:

$$\frac{n_{ij} + \alpha_{ij}}{n_i + \sum_j \alpha_{ij}}. \qquad \text{(B.4)}$$

The predictive probabilities (B.4) can be derived within de Finetti's framework of partial exchangeability. Suppose there are t acts and s payoffs. Think of acts as types and of payoffs as outcomes. This gives rise to an array of payoffs where each row is associated with a particular act. If every act is chosen infinitely often, and if the array is partially exchangeable, then de Finetti's theorem for partially exchangeable arrays applies. If, in addition, types are independent of one another, (B.3) implies that the conditional probability of getting a particular payoff j given the next time act i is chosen is

$$\frac{\int p_{ij}\, p_{i1}^{n_{i1}} \cdots p_{is}^{n_{is}}\, d\mu_i(p)}{\int p_{i1}^{n_{i1}} \cdots p_{is}^{n_{is}}\, d\mu_i(p)}.$$

If each measure μi is given by a Dirichlet distribution,

$$\prod_{i=1}^{t} \frac{\Gamma\!\left(\sum_j \alpha_{ij}\right)}{\prod_j \Gamma(\alpha_{ij})}\, p_{i1}^{\alpha_{i1} - 1} \cdots p_{is}^{\alpha_{is} - 1}\, dp_{i1} \cdots dp_{i,s-1},$$

then, by a result mentioned in Appendix A, predictive probabilities are given by (B.4).
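For concreteness, here is a minimal Python sketch (not the book's; the payoff values, counts, and α parameters are arbitrary) of the predictive probabilities (B.4) and the resulting estimate of an act's value.

```python
# Illustrative sketch (not from the book) of average reinforcement learning:
# predictive payoff probabilities (B.4) for a single act and the resulting
# estimate of the act's value. Payoffs, counts, and alphas are arbitrary.

payoffs = [0.0, 1.0, 2.0]                          # the possible payoffs pi_j

def predictive_probs(counts, alphas):
    """(B.4): (n_ij + alpha_ij) / (n_i + sum_j alpha_ij) for one act i."""
    total = sum(counts) + sum(alphas)
    return [(c + a) / total for c, a in zip(counts, alphas)]

def estimated_value(counts, alphas):
    """Expected payoff of the act with respect to the predictive probabilities."""
    return sum(p * pi for p, pi in zip(predictive_probs(counts, alphas), payoffs))

# The act has been tried 6 times, yielding payoff counts [1, 2, 3]; flat prior.
print(predictive_probs([1, 2, 3], [1.0, 1.0, 1.0]))   # [2/9, 3/9, 4/9]
print(estimated_value([1, 2, 3], [1.0, 1.0, 1.0]))    # 3/9 * 1 + 4/9 * 2 = 11/9
```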


The following slight modification of Johnson's sufficientness postulate can be used as an alternative to Dirichlet priors for deriving (B.4):

$$P[X_{i, n_i + 1} = j \mid X_{11}, \ldots, X_{n_1,1}; \ldots; X_{1t}, \ldots, X_{n_t,t}] = f_{ij}(n_{ij}, n_i).$$

The proof is effectively the same as the one given by Zabell for inductive logic on Markov exchangeable sequences.4

B.4 Regret Learning

Another widely considered alternative to Bayesian models is regret learning.5 The notion of regret is rather intuitive, and so regret learning might capture some aspects of our own learning behavior. (If you never regret anything, please skip this section.) Suppose there is a decision problem with finitely many states of the world and finitely many acts. If at time n you choose act A′ and nature chooses state S, then your regret for not having chosen act A is equal to u(A, S) − u(A′, S). If the difference is positive, you regret having chosen A′ and not A. Otherwise you don't regret your choice. Notice that regrets are counterfactual quantities. In order to calculate these differences, we need to be able to determine what our utilities would be had we chosen otherwise. However, there are payoff-based alternatives that only estimate regrets.6

The central quantity in regret learning is the average regret of A. Let Sk be the state at trial k and Ak the act chosen at trial k. Immediately after the nth trial, your average regret of A is given by

$$\frac{1}{n} \sum_{k=1}^{n} \big( u(A, S_k) - u(A_k, S_k) \big). \qquad \text{(B.5)}$$

The average regret of A quantifies how much you regret not having chosen A at each trial. In this way it provides some information about how well you have done. Average regrets can be used by agents in various ways. A well-known proposal is to choose an act with maximum average regret; another one is to choose an act with probability proportional to its average regret.

4 Zabell (1995). For details, see the appendix of Huttegger (2017).
5 On regret learning, see Hart and Mas-Colell (2000), Young (2004) and Cesa-Bianchi and Lugosi (2006).
6 Young (2004).


In both cases, the average regret of A is taken to be an estimate for the next period's regret of A. Acts are thus evaluated in terms of estimated regrets instead of expected utilities.

One possible rationale for average regrets is based on ideas similar to exchangeability and partial exchangeability. To see how this would work, first observe that, since there are finitely many states and acts, there are only finitely many per period regrets for each act, A. Let's denote them by rA,1, . . . , rA,s. Consider now a sequence of random variables XA,1, XA,2, . . . taking rA,1, . . . , rA,s as values, and let nA,i be the number of times rA,i occurs in the first n trials XA,1, . . . , XA,n. Applying the Johnson–Carnap continuum of inductive methods to this setting provides the following predictive probabilities:

$$P[X_{A,n+1} = r_{A,i} \mid X_{A,1}, \ldots, X_{A,n}] = \frac{n_{A,i} + \theta_{A,i}}{n + \sum_j \theta_{A,j}}.$$

The expected regret of A is then equal to

$$\sum_k r_{A,k}\, \frac{n_{A,k} + \theta_{A,k}}{n + \sum_j \theta_{A,j}}.$$

This reduces to (B.5) if the θ parameters are all equal to zero. If some θk are positive, then prior to any observations the expected regret of A is

$$\sum_k r_{A,k}\, \frac{\theta_{A,k}}{\sum_j \theta_{A,j}}.$$

What this shows is that average regrets are based on the Johnson–Carnap continuum of inductive methods. The corresponding predictive probabilities can be derived in the usual way, assuming that the regrets of acts are independent of each other.
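A small Python sketch may help fix ideas (it is illustrative only; the utility table and history are made up): it computes the average regret (B.5) of an act and a θ-smoothed expected regret, which agrees with (B.5) when all θ parameters are zero.

```python
# Illustrative sketch (not from the book): the average regret (B.5) of an act
# and its theta-smoothed version. The utility table and history are made up.

utility = {("A", "S1"): 2.0, ("A", "S2"): 0.0,
           ("B", "S1"): 1.0, ("B", "S2"): 1.0}

history = [("B", "S1"), ("B", "S2"), ("A", "S2"), ("B", "S1")]   # (act, state) per trial

def average_regret(act, history):
    """(B.5): the mean over trials of u(act, S_k) - u(A_k, S_k)."""
    diffs = [utility[(act, s)] - utility[(a, s)] for a, s in history]
    return sum(diffs) / len(diffs)

def smoothed_expected_regret(act, history, thetas):
    """Johnson–Carnap smoothing over the finitely many per-period regret values;
    reduces to (B.5) when all theta parameters are zero."""
    diffs = [utility[(act, s)] - utility[(a, s)] for a, s in history]
    values = sorted(set(diffs))                     # the observed regret values
    counts = [diffs.count(v) for v in values]
    denom = len(diffs) + sum(thetas)
    return sum(v * (c + t) / denom for v, c, t in zip(values, counts, thetas))

print(average_regret("A", history))                              # (1 - 1 + 0 + 1) / 4 = 0.25
print(smoothed_expected_regret("A", history, [0.0, 0.0, 0.0]))   # 0.25 again
```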


Appendix C

Marley’s Axioms

This appendix shows how to apply the work of A. A. J. Marley on commutative learning operators to reinforcement learning.1 Marley works with a slightly more general set of axioms than the one I use here, which is tailored toward the basic model of reinforcement learning.

C.1 Abstract Families

Let's consider the sequences of propensities Qi(1), Qi(2), . . ., one for each act Ai, 1 ≤ i ≤ m. They arise from sequences of choice probabilities that at every stage satisfy Luce's choice axiom. Let Xi be the range of values the random variables Qi(n), n ≥ 1, can take on, and let X be ∪i Xi. In many applications, X will just be the set of nonnegative real numbers. Let O be the set of outcomes, which can often be identified with a subset of the reals.2

A learning operator L maps pairs of propensities and outcomes in X × O to X. If x is an alternative's present propensity, then L(x, a) is its new propensity if choosing that alternative has led to outcome a. This gives rise to a family of learning operators {La}; for each a in O, La can be viewed as an operator from X to X. We assume that there is a unit element e ∈ O with L(x, e) = x for all x in X. The triple (X, O, L) is called an abstract family.3

An abstract family (X, O, L) is quasi-additive if there exists a function h : X → R and a function k : O → R such that for each x, y in X and each a in O (i) x ≤ y if and only if h(x) ≤ h(y) and (ii) h[L(x, a)] = h(x) + k(a).

1 Marley (1967).
2 Marley's theory allows both X and O to be abstract sets.
3 In the general case where X does not need to be a subset of the reals, an abstract family also includes a weak ordering relation on X.


We say that the process given by the sequences of propensities and the sequence of choices X1, X2, . . . of acts is a reinforcement learning process if there exists an abstract family (X, O, L) such that for all n whenever Xn = i (Ai is chosen) there is an a ∈ O with Qi(n + 1) = La(Qi(n)), and, if Xn ≠ i, Qi(n + 1) = Le(Qi(n)), where e is the unit element of O. If such a family is quasi-additive, then

$$Q_i(n+1) = \begin{cases} h^{-1}\big(h[Q_i(n)] + k(a)\big) & \text{if } X_n = i, \\ Q_i(n) & \text{otherwise.} \end{cases} \qquad \text{(C.1)}$$
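As an illustration of (C.1), the following Python sketch (not from the book; the payoff values and number of acts are arbitrary) runs a reinforcement learning process in which h is the identity – the basic model discussed in Section C.3 – and choices at each stage are made according to Luce's choice axiom.

```python
# Illustrative sketch (not from the book) of the update (C.1) with h equal to
# the identity (the basic model), with choices governed by Luce's choice axiom.
# The payoff values and number of acts are arbitrary.
import random

propensities = [1.0, 1.0, 1.0]      # Q_i(1) for three acts
reinforcement = [0.2, 1.0, 0.5]     # k(a): the payoff each act earns when chosen

def luce_choice(q):
    """Choose act i with probability q_i / sum_j q_j (Luce's choice axiom)."""
    r = random.random() * sum(q)
    cumulative = 0.0
    for i, qi in enumerate(q):
        cumulative += qi
        if r <= cumulative:
            return i
    return len(q) - 1

random.seed(0)
for n in range(10_000):
    i = luce_choice(propensities)
    propensities[i] += reinforcement[i]   # (C.1): only the chosen act's propensity changes

print([round(q, 1) for q in propensities])   # act 1 accumulates by far the most propensity
```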

C.2 Marley's Theorem

Some of Marley's principles tell us when an abstract family fits the learning process we are interested in. Let's start by introducing the relevant concepts.

Definition 1 An abstract family (X, O, L, ≤) is strictly monotonic if for all x, y in X and each a in O

$$x \le y \quad \text{if and only if} \quad L_a(x) \le L_a(y).$$

Strict monotonicity says that learning is stable; an outcome has the same effect on how propensities are ordered across all propensity levels. Let $L_a^n(x)$ denote the n-fold application of La to x.

Definition 2 Suppose that (X, O, L) is an abstract family:
(i) (X, O, L, ≤) is positive if there exists an element u in X such that u < x for all x ≠ u in X, and u < La(u) for some a in O.
(ii) (X, O, L, ≤) is Archimedean if for all x in X and a in O, if La(u) > u, then there is a positive integer n such that $L_a^n(u) \ge x$.

Positivity is not necessary for Marley's theory; dropping it results in a class of models that extends learning procedures such as the basic model of reinforcement learning in interesting ways. The element u corresponds to the lowest propensity in a reinforcement learning model. Archimedean conditions are often used in measurement theory because they guarantee a real-valued representation.


Definition 3 An abstract family (X, O, L) is solvable if, whenever x, y in X and a in O are such that $L_a^n(y) < x \le L_a^{n+1}(y)$, there is an element b in O with $x = L_b(L_a^n(y))$.

Solvability guarantees that every propensity value can be reached by the family of learning operators; the set of acts has to be sufficiently rich so that the set of propensities matches the structure given by the family of learning operators. The central requirement for reinforcement learning is that learning operators be commutative.

Definition 4 An abstract family (X, O, L) is commutative if for each x in X and all a, b in O, Lb(La(x)) = La(Lb(x)).

The proof of the following theorem goes back to Marley:4

Theorem 2 (Marley, 1967) Let (X, O, L, ≤) be an abstract family that is commutative, strictly monotonic, positive, Archimedean, and solvable. Then (X, O, L, ≤) is quasi-additive, and the function k is nonnegative for all a in O and positive for at least one outcome a.

Proof The proof of Theorem 2 requires only some minor modifications of Marley's proof. Our Archimedean axiom and the solvability axiom depart slightly from the corresponding axioms of Marley's. Marley's axioms don't require positivity (Definition 2). They also require the existence of a b ∈ O with Lb(u) < u, which does not hold for the basic model of reinforcement learning. We have to make sure that this omission does not cause any problems for the proof of Theorem 2. Marley's intermediate result, his Theorem 5, asserts an equivalence between quasi-additivity and strict monotonicity, on the one hand, and "normality" on the other; normality refers to a property of the real numbers (having no anomalous pairs). This result carries over verbatim to our setting since it is not related to our positivity assumption. The same is true for Marley's Theorem 4. Marley's Lemma 2 treats the two cases Lb(u) < u and La(u) > u separately, and its conclusion holds if we only assume the latter. Marley's Lemma 3 can then be established with the help of Lemma 2.

4 See Marley (1967, Theorem 6).


Marley's Lemma 4 again treats the two cases separately, so we can use its conclusion under the assumption of positivity. Finally, Marley's main result (Theorem 6) makes use of the aforementioned theorems and lemmata and again treats the case of positivity on its own. It follows that its conclusion – namely that (X, O, L) is quasi-additive – also holds in our case.

It remains to show that the function k in the quasi-additive representation is nonnegative and positive for at least one a ∈ O. By positivity, La(u) > u for at least one a ∈ O. Since h is strictly increasing it follows that h(u) + k(a) = h[La(u)] > h(u), and so k(a) > 0. Since u < x for all x ∈ X, it must be the case that La(u) ≥ u for all a ∈ O, which again implies h(u) + k(a) = h(La(u)) ≥ h(u). Thus k(a) ≥ 0 for all a ∈ O.

Our main result now follows immediately from Marley's theorem.

Corollary Suppose the array of nonnegative propensities Qi(1), Qi(2), . . ., where i ranges over all alternatives Ai, is a reinforcement learning process represented by an abstract family that is strictly monotonic, positive, Archimedean, solvable and commutative. Then the reinforcement learning process updates quasi-additively as in (C.1).

C.3 The Basic Model

The basic model of reinforcement learning is a special instance of Marley's reinforcement learning scheme. It requires that L(x, a) = x + k(a). In other words, the function h of the quasi-additive representation must be the identity function. Marley defines h to be the composition of two functions, φ and k′.5 Let's consider them in turn.

The function φ maps X into the set of all finite sequences over A. It gives rise to the following equation for each x in X:

$$L_{\varphi(x)}(u) = x.$$

Thus, φ defines a sequence of outcomes that describe a learning process that leads from u to the propensity x. The existence of φ follows from strict monotonicity, positivity, the Archimedean property and solvability (see Marley's Lemma 2).

5 Marley (1967, p. 420).


The function k′ is an extension of the function k in the quasi-additive representation; it takes finite sequences of outcomes to the nonnegative reals in such a way that the value of the sequence is the sum of the values assigned to outcomes by k. The function h is the composition of k′ and φ: h(x) = k′(φ(x)) for all x ∈ X. It follows that h is the identity function if φ is the inverse of k′ (provided that the inverse exists).6 This is tantamount to saying that k′ maps sequences of outcomes to propensities X by virtue of k assigning propensities to outcomes instead of just real numbers that do not need to be members of X. Hence, if the payoff function k maps outcomes to propensities, then h is the identity function and the quasi-additive representation reduces to an additive representation. That k assigns propensities to outcomes is a plausible assumption in many applications.

6 This is a rather innocuous assumption. Marley's postulates define a weak order on A and on finite sequences of elements of A. The inverse exists if the weak order is a linear order.


Bibliography

Abreu, D., and Rubinstein, A. 1988. The Structure of Nash Equilibrium in Repeated Games with Finite Automata. Econometrica, 1259–1281. Acemoglu, D., Chernozhukov, V., and Yildiz, M. 2016. Fragility of Asymptotic Agreement Under Bayesian Learning. Theoretical Economics, 11, 187–225. Achinstein, P. 1963. Variety and Analogy in Confirmation Theory. Philosophy of Science, 30, 207–221. Adams, E. W. 1962. On Rational Betting Systems. Archiv für mathematische Logik und Grundlagenforschung, 6, 7–29, 112–128. Aldous, D. J. 1985. Exchangeability and Related Topics. École d’Été de Probabailités de Saint-Fleur XIII – 1983. Lecture Notes in Mathematics, 1117, 1–198. Alexander, J. M., Skyrms, B., and Zabell, S. L. 2012. Inventing New Signals. Dynamic Games and Applications, 2, 129–145. Armendt, B. 1993. Dutch Books, Additivity, and Utility Theory. Philosophical Topics, 21, 1–20. Arntzenius, F. 2003. Some Problems for Conditionalization and Reflection. Journal of Philosophy, 100, 356–370. Ash, R. B., and Doléans-Dade, C. 2000. Probability and Measure Theory. San Diego: Academic Press. Aumann, R. J. 1974. Subjectivity and Correlation in Randomized Strategies. Journal of Mathematical Economics, 1, 67–96. Aumann, R. J. 1976. Agreeing to Disagree. The Annals of Statistics, 4, 1236–1239. Aumann, R. J. 1987. Correlated Equilibrium as an Expression of Bayesian Rationality. Econometrica, 55, 1–18. Aumann, R. J. 1997. Rationality an Bounded Rationality. Games and Economic Behavior, 21, 2–14. Aumann, R. J., and Brandenburger, A. 1995. Epistemic Conditions for Nash Equilibrium. Econometrica, 63, 1161–1180. Bacchus, F., Kyburg, H. E., and Thalos, M. 1990. Against Conditionalization. Synthese, 85, 475–506. Banerjee, A., Guo, X., and Wang, H. 2005. On the Optimality of Conditional Expectation as a Bregman Predictor. IEEE Transactions on Information Theory, 51, 2664–2669. Bayes, T. 1763. An Essay Towards Solving a Problem in the Doctrine of Chances. Philosophical Transactions of the Royal Society, 53, 370–418.


Beggs, A. W. 2005. On the Convergence of Reinforcement Learning. Journal of Economic Theory, 122, 1–36. Belot, G. 2013. Bayesian Orgulity. Philosophy of Science, 80, 483–503. Benaïm, M., and Hirsch, M. 1999. Mixed Equilibria and Dynamical Systems Arising from Fictitious Play. Games and Economic Behavior, 29, 36–72. Bernstein, S. N. 1917. An Essay on the Axiomatic Foundations of Probability Theory (in Russian). Communications de la Société mathémathique de Kharkov, 15, 209–274. Berry, D. A., and Fristedt, B. 1985. Bandit Problems: Sequential Allocation of Experiments. London: Chapman & Hall. Binmore, K. 2009. Rational Decisions. Princeton: Princeton University Press. Binmore, K., and Samuelson, L. 1992. Evolutionary Stability in Repeated Games Played by Finite Automata. Journal of Economic Theory, 57, 278–305. Blackwell, D., and Dubins, L. 1962. Merging of Opinions with Increasing Information. The Annals of Mathematical Statistics, 33, 882–886. Block, H. D., and Marschak, J. 1960. Random Orderings and Stochastic Theories of Responses. Pages 97–132 of: Olkin, I., Ghurye, S., Hoeffding, W., Madow, W., and Mann, H. (eds.), Contributions to Probability and Statistics. Stanford: Stanford University Press. Börgers, T., and Sarin, R. 1997. Learning Through Reinforcement and the Replicator Dynamics. Journal of Economic Theory, 74, 235–265. Bovens, L. 1995. “P and I Will Believe that Not-P”: Diachronic Constraints on Rational Belief. Mind, 104, 737–760. Bradley, R. 2006. Taking Advantage of Difference in Opinion. Episteme, 3, 141–155. Bradley, R. 2007. Reaching a Consensus. Social Choice and Welfare, 29, 609–632. Bradley, R. 2015. Learning from Others. Working Paper. Briggs, R. 2009. Distorted Reflection. Philosophical Review, 118, 59–85. Brighton, H., and Gigerenzer, G. 2012. Are Rational Actor Models “Rational” Outside Small Worlds. Pages 84–109 of: Okasha, S., and Binmore, K. (eds.), Evolution and Rationality. Cambridge: Cambridge University Press. Brown, G. W. 1951. Iterative Solutions of Games by Fictitious Play. Pages 374–376 of: Koopmans, T. C. (ed.), Activity Analysis of Production and Allocation. New York: Wiley. Buchak, L. 2013. Risk and Rationality. Oxford: Oxford University Press. Bush, R., and Mosteller, F. 1955. Stochastic Models for Learning. New York: John Wiley & Sons. Camerer, C., and Ho, T. 1999. Experience-Weighted Attraction Learning in Normal Form Games. Econometrica, 67, 827–874. Carnap, R. 1950. Logical Foundations of Probability. Chicago: University of Chicago Press. Carnap, R. 1952. The Continuum of Inductive Methods. Chicago: University of Chicago Press.


Carnap, R. 1971. A Basic System of Inductive Logic, Part 1. Pages 33–165 of: Carnap, R., and Jeffrey, R. C. (eds.), Studies in Inductive Logic and Probability I. Los Angeles: University of California Press. Carnap, R. 1980. A Basic System of Inductive Logic, Part 2. Pages 7–155 of: Jeffrey, R. C. (ed.), Studies in Inductive Logic and Probability II. Los Angeles: University of California Press. Cesa-Bianchi, N., and Lugosi, G. 2006. Prediction, Learning, and Games. Cambridge: Cambridge University Press. Chen, R. 1977. On Almost Sure Convergence in a Finitely Additive Setting. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 37, 341–356. Christensen, D. 1991. Clever Bookies and Coherent Beliefs. Philosophical Review, 100, 229–247. Christensen, D. 1996. Dutch Book Arguments Depragmatized: Epistemic Consistency for Partial Believers. The Journal of Philosophy, 93, 450–479. Christensen, D. 2004. Putting Logic in Its Place: Formal Constraints on Rational Belief. Oxford: Oxford University Press. Christensen, D. 2007. Epistemology of Disagreement: The Good News. Philosophical Review, 116, 187–217. Christensen, D. 2009. Disagreement as Evidence: The Epistemology of Controversy. Philosophy Compass, 4/5, 756–767. Christensen, D. 2011. Disagreement, Question-Begging and Epistemic SelfCriticism. Philosopher’s Imprint, 11, 1–22. de Finetti, B. 1931. Sul significato soggettivo della probabilià. Fundamenta Mathematicae, 17, 298–329. de Finetti, B. 1937. La prevision: ses lois logiques ses sources subjectives. Annales d l’institut Henri Poincaré, 7, 1–68. Translated in Kyburg, H. E., and Smokler, H. E. (eds.), Studies in Subjective Probability, pages 93–158, Wiley, New York, 1964. de Finetti, B. 1938. Sur la condition d’equivalence partielle. Pages 5–18 of: Actualités Scientifiques et Industrielles No. 739: Colloques consacré à la théorie des probabilités, VIième partie. Translated in Jeffrey, R. C. (ed.), Studies in Inductive Logic and Probability II, pages 193–205, University of California Press, Los Angeles, 1980. de Finetti, B. 1959. La probabilita e la statistica nei raporti con l’induzione, secondo i dwersi punti di vista. In: Corso C.I.M.E su Induzione e Statistica. Rome: Cremones. Translated in de Finetti, B., Probability, Induction and Statistics, chapter 9, Wiley, New York, 1974. de Finetti, B. 1972. Probability, Induction, and Statistics. New York: John Wiley & Sons. de Finetti, B. 1974. Theory of Probability. Vol. 1. London: John Wiley & Sons. de Morgan, A. 1838. An Essay on Probabilities, and on Their Application to Life Contingencies and Insurance Offices. London: Longman et al. Debreu, G. 1960. Review of “Individual Choice Behavior: A Theoretical Analysis.” American Economic Review, 50, 186–188.

DeGroot, M. 1974. Reaching a Consensus. Journal of the American Statistical Association, 69, 118–121.
Diaconis, P. 1977. Finite Forms of de Finetti’s Theorem on Exchangeability. Synthese, 36, 271–281.
Diaconis, P., and Freedman, D. 1980a. De Finetti’s Generalizations of Exchangeability. Pages 233–249 of: Jeffrey, R. C. (ed.), Studies in Inductive Logic and Probability II. Los Angeles: University of California Press.
Diaconis, P., and Freedman, D. 1980b. De Finetti’s Theorem for Markov Chains. Annals of Probability, 8, 115–130.
Diaconis, P., and Freedman, D. 1986. On the Consistency of Bayes Estimates. Annals of Statistics, 14, 1–26.
Diaconis, P., and Zabell, S. L. 1982. Updating Subjective Probability. Journal of the American Statistical Association, 77, 822–830.
Dietrich, F., and List, C. 2015. Probabilistic Opinion Pooling. Forthcoming in Oxford Handbook of Probability and Philosophy.
Döring, F. 1999. Why Bayesian Psychology Is Incomplete. Philosophy of Science, 66, 379–389.
Dubins, L. 1975. Finitely Additive Conditional Probabilities, Conglomerability, and Disintegration. Annals of Probability, 3, 89–99.
Durrett, R. 1996. Probability: Theory and Examples. Belmont, CA: Duxbury Press.
Earman, J. 1992. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.
Easwaran, K. 2013. Expected Accuracy Supports Conditionalization—and Conglomerability and Reflection. Philosophy of Science, 80, 119–142.
Easwaran, K., Fenton-Glynn, L., Hitchcock, C., and Velasco, J. 2015. Updating on the Credences of Others: Disagreement, Agreement, and Synergy. Philosopher’s Imprint, 16, 1–39.
Elga, A. 2007. Reflection and Disagreement. Noûs, 41, 478–502.
Elga, A. 2010. How to Disagree About How to Disagree. Pages 175–186 of: Feldman, R., and Warfield, T. (eds.), Disagreement. Oxford: Oxford University Press.
Erev, I., and Roth, A. E. 1998. Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria. American Economic Review, 88, 848–880.
Field, H. 1978. A Note on Jeffrey Conditionalization. Philosophy of Science, 45, 361–367.
Fisher, R. A., Corbet, A. S., and Williams, C. B. 1943. The Relation Between the Number of Species and the Number of Individuals in a Random Sample of an Animal Population. Journal of Animal Ecology, 12, 237–264.
Fortini, S., Ladelli, L., Petris, G., and Regazzini, E. 2002. On Mixtures of Distributions of Markov Chains. Stochastic Processes and Their Applications, 100, 147–165.

Foster, D. P., and Young, H. P. 2001. On the Impossibility of Predicting the Behavior of Rational Agents. Proceedings of the National Academy of Sciences of the USA, 98, 12848–12853.
Freedman, D. 1962. Mixtures of Markov Processes. Annals of Mathematical Statistics, 33, 114–118.
Fudenberg, D., and Levine, D. K. 1998. The Theory of Learning in Games. Cambridge, MA: MIT Press.
Gaifman, H., and Snir, M. 1982. Probabilities over Rich Languages, Testing and Randomness. The Journal of Symbolic Logic, 47, 495–548.
Gaunersdorfer, A., and Hofbauer, J. 1995. Fictitious Play, Shapley Polygons, and the Replicator Equation. Games and Economic Behavior, 11, 279–303.
Geanakoplos, J. D., and Polemarchakis, H. M. 1982. We Can’t Disagree Forever. Journal of Economic Theory, 28, 192–200.
Genest, C., and Schervish, M. J. 1985. Modeling Expert Judgments for Bayesian Updating. The Annals of Statistics, 13, 1198–1212.
Genest, C., and Zidek, J. V. 1986. Combining Probability Distributions: A Critique and an Annotated Bibliography. Statistical Science, 114–135.
Gigerenzer, G., and Gaissmaier, W. 2011. Heuristic Decision Making. Annual Review of Psychology, 62, 451–482.
Gilboa, I., and Schmeidler, D. 2001. A Theory of Case-Based Decisions. Cambridge: Cambridge University Press.
Gilboa, I., and Schmeidler, D. 2003. Inductive Inference: An Axiomatic Approach. Econometrica, 71, 1–26.
Gilboa, I., Samuelson, L., and Schmeidler, D. 2013. Dynamics of Inductive Inference in a Unified Framework. Journal of Economic Theory, 148, 1399–1432.
Gittins, J. C. 1979. Bandit Processes and Dynamic Allocation Indices. Journal of the Royal Statistical Society, 148–177.
Goldstein, M. 1983. The Prevision of a Prevision. Journal of the American Statistical Association, 78, 817–819.
Good, I. J. 1950. Probability and the Weighing of Evidence. London: Charles Griffin.
Good, I. J. 1967. On the Principle of Total Evidence. British Journal for the Philosophy of Science, 17, 319–321.
Good, I. J. 1983. Good Thinking. The Foundations of Probability and Its Applications. Minneapolis: University of Minnesota Press.
Goodman, N. 1955. Fact, Fiction and Forecast. Cambridge, MA: Harvard University Press.
Greaves, H., and Wallace, D. 2006. Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility. Mind, 115, 607–632.
Gutting, G. 1982. Religious Belief and Religious Scepticism. Notre Dame: University of Notre Dame Press.
Hacking, I. 1967. Slightly More Realistic Personal Probability. Philosophy of Science, 34, 311–325.
Hájek, A. 2010. Staying Regular. Unpublished manuscript.

Hammond, P. J. 1988. Consequentialist Foundations for Expected Utility. Theory and Decision, 25, 25–78.
Harman, G. 2002. Reflections on Knowledge and Its Limits. Philosophical Review, 417–428.
Harman, G., and Kulkarni, S. 2007. Reliable Reasoning: Induction and Statistical Learning Theory. Cambridge, MA: MIT Press.
Harsanyi, J. C. 1967. Games with Incomplete Information Played by Bayesian Players. Parts 1–3. Management Science, 14, 159–183, 320–334, 486–502.
Hart, S., and Mas-Colell, A. 2000. A Simple Adaptive Procedure Leading to Correlated Equilibrium. Econometrica, 68, 1127–1150.
Hart, S., and Mas-Colell, A. 2003. Uncoupled Dynamics Do Not Lead to Nash Equilibrium. American Economic Review, 93, 1830–1836.
Herrnstein, R. J. 1970. On the Law of Effect. Journal of the Experimental Analysis of Behavior, 13, 243–266.
Hofbauer, J., and Sandholm, W. H. 2002. On the Global Convergence of Stochastic Fictitious Play. Econometrica, 70, 2265–2294.
Hofbauer, J., and Sigmund, K. 1998. Evolutionary Games and Population Dynamics. Cambridge: Cambridge University Press.
Hopkins, E. 2002. Two Competing Models of How People Learn in Games. Econometrica, 70, 2141–2166.
Hopkins, E., and Posch, M. 2005. Attainability of Boundary Points Under Reinforcement Learning. Games and Economic Behavior, 53, 110–125.
Hoppe, F. 1984. Polya-Like Urns and the Ewens Sampling Formula. Journal of Mathematical Biology, 20, 91–94.
Howson, C. 2000. Hume’s Problem. Induction and the Justification of Belief. Oxford: Clarendon Press.
Howson, C., and Urbach, P. 1993. Scientific Reasoning. The Bayesian Approach. Second edn. La Salle, Illinois: Open Court.
Hume, D. 1739. A Treatise of Human Nature. Oxford: Oxford University Press. Edited by D. F. Norton and M. J. Norton, 2000.
Hume, D. 1748. An Enquiry Concerning Human Understanding. Oxford: Clarendon Press. Edited by T. L. Beauchamp, 2006.
Huttegger, S. M. 2013. In Defense of Reflection. Philosophy of Science, 80, 413–433.
Huttegger, S. M. 2014. Learning Experiences and the Value of Knowledge. Philosophical Studies, 171, 279–288.
Huttegger, S. M. 2015a. Bayesian Convergence to the Truth and the Metaphysics of Possible Worlds. Philosophy of Science, 82, 587–601.
Huttegger, S. M. 2015b. Merging of Opinions and Probability Kinematics. The Review of Symbolic Logic, 8, 611–648.
Huttegger, S. M. 2016. Analogical Predictive Probabilities. Forthcoming in Mind.
Huttegger, S. M. 2017. Inductive Learning in Small and Large Worlds. Philosophy and Phenomenological Research, 95, 90–116.

Huttegger, S. M., Skyrms, B., and Zollman, K. J. S. 2014. Probe and Adjust in Information Transfer Games. Erkenntnis, 79, 835–853.
Jaynes, E. T. 2003. Probability Theory. The Logic of Science. Cambridge: Cambridge University Press.
Jeffrey, R. C. 1957. Contributions to the Theory of Inductive Probability. PhD dissertation, Princeton University.
Jeffrey, R. C. 1965. The Logic of Decision. New York: McGraw-Hill. Third revised edn. Chicago: University of Chicago Press, 1983.
Jeffrey, R. C. 1968. Probable Knowledge. Pages 166–180 of: Lakatos, I. (ed.), The Problem of Inductive Logic. Amsterdam: North-Holland.
Jeffrey, R. C. 1975. Carnap’s Empiricism. Minnesota Studies in Philosophy of Science, 6, 37–49.
Jeffrey, R. C. 1985. Probability and the Art of Judgment. Pages 95–126 of: Achinstein, P., and Hannaway, O. (eds.), Observation, Experiment, and Hypothesis in Modern Physical Science. Cambridge, MA: Bradford-MIT Press.
Jeffrey, R. C. 1988. Conditioning, Kinematics, and Exchangeability. Pages 221–255 of: Skyrms, B., and Harper, W. L. (eds.), Causation, Chance, and Credence. Vol. 1. Dordrecht: Kluwer.
Jeffrey, R. C. 1992. Probability and the Art of Judgement. Cambridge: Cambridge University Press.
Jehle, D., and Fitelson, B. 2009. What Is the “Equal Weight View”? Episteme, 6, 280–293.
Johnson, W. E. 1924. Logic, Part III: The Logical Foundations of Science. Cambridge, UK: Cambridge University Press.
Johnson, W. E. 1932. Probability: The Deductive and Inductive Problems. Mind, 41, 409–423.
Joyce, J. M. 1998. A Nonpragmatic Vindication of Probabilism. Philosophy of Science, 65, 575–603.
Joyce, J. M. 1999. The Foundations of Causal Decision Theory. Cambridge: Cambridge University Press.
Joyce, J. M. 2004. Williamson on Evidence and Knowledge. Philosophical Books, 45, 296–305.
Joyce, J. M. 2007. Epistemic Deference: The Case of Chance. Proceedings of the Aristotelian Society, 107, 187–206.
Joyce, J. M. 2009. Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief. Pages 263–297 of: Huber, F., and Schmidt-Petri, C. (eds.), Degrees of Belief. Springer.
Joyce, J. M. 2010. The Development of Subjective Bayesianism. Pages 415–476 of: Gabbay, D. M., Hartmann, S., and Woods, J. (eds.), Handbook of the History of Logic, Vol. 10: Inductive Logic. Elsevier.
Kadane, J. B., Schervish, M. J., and Seidenfeld, T. 1996. Reasoning to a Foregone Conclusion. Journal of the American Statistical Association, 91, 1228–1236.

Kadane, J. B., Schervish, M. J., and Seidenfeld, T. J. 2008. Is Ignorance Bliss? The Journal of Philosophy, 105, 5–36.
Kalai, E., and Lehrer, E. 1993. Rational Learning Leads to Nash Equilibrium. Econometrica, 1019–1045.
Kalai, E., and Lehrer, E. 1994. Weak and Strong Merging of Opinions. Journal of Mathematical Economics, 23, 73–86.
Kelly, K. T. 1996. The Logic of Reliable Inquiry. Oxford University Press.
Kelly, T. 2005. The Epistemic Significance of Disagreement. Oxford Studies in Epistemology, 1, 167–196.
Kelly, T. 2008. Disagreement, Dogmatism, and Belief Polarization. Journal of Philosophy, 105, 611–633.
Kelly, T. 2010. Peer Disagreement and Higher-Order Evidence. Pages 111–174 of: Feldman, R., and Warfield, T. (eds.), Disagreement. Oxford: Oxford University Press.
Kemeny, J. G. 1955. Fair Bets and Inductive Probabilities. The Journal of Symbolic Logic, 20, 263–273.
Kennedy, R., and Chihara, C. S. 1979. The Dutch Book Argument: Its Logical Flaws, Its Subjective Sources. Philosophical Studies, 36, 19–33.
Keynes, J. M. 1921. A Treatise on Probability. London: Macmillan.
Kingman, J. F. C. 1975. Random Discrete Distributions. Journal of the Royal Statistical Society B, 37, 1–22.
Kingman, J. F. C. 1978a. The Representation of Partition Structures. Journal of the London Mathematical Society, 18, 374–380.
Kingman, J. F. C. 1978b. Random Partitions in Population Genetics. Proceedings of the Royal Society of London A, 361, 1–20.
Kingman, J. F. C. 1980. The Mathematics of Genetic Diversity. Vol. 34. SIAM.
Kolmogorov, A. N. 1933. Grundbegriffe der Wahrscheinlichkeitsrechnung. Berlin: Springer.
Krantz, D. H., Luce, R. D., Suppes, P., and Tversky, A. 1971. Foundations of Measurement, Vol. I. Additive and Polynomial Representations. San Diego: Academic Press. Reprinted by Dover 2007.
Kuipers, T. A. F. 1978. Studies in Inductive Probability and Rational Expectation. Dordrecht: D. Reidel.
Kuipers, T. A. F. 1988. Inductive Analogy by Similarity and Proximity. In: Helman, D. H. (ed.), Analogical Reasoning. Dordrecht: Kluwer.
Lam, B. 2011. On the Rationality of Belief-Invariance in Light of Peer Disagreement. Philosophical Review, 120, 207–245.
Lane, D. A., and Sudderth, W. D. 1984. Coherent Predictive Inference. Sankhyā: The Indian Journal of Statistics, Series A, 166–185.
Lange, M. 2000. Is Jeffrey Conditionalization Defective in Virtue of Being Non-Commutative? Remarks on the Sameness of Sensory Experiences. Synthese, 93, 393–403.

Laplace, P. S. 1774. Mémoire sur la probabilité des causes par les évènemens. Mémoires de Mathématique et Physique, Presentés à l’Académie Royale des Sciences, par divers Savans & lûs dans ses Assemblées, Tome Sixième, 66, 621–656. Translated with commentary by Stephen M. Stigler (1986) Statistical Science 1: 359–378.
Laslier, J.-F., Topol, R., and Walliser, B. 2001. A Behavioral Learning Process in Games. Games and Economic Behavior, 37, 340–366.
Lasonen-Aarnio, M. 2013. Disagreement and Evidential Attenuation. Noûs, 47, 767–794.
Lehrer, K., and Wagner, C. G. 1981. Rational Consensus in Science and Society: A Philosophical and Mathematical Study. Dordrecht: D. Reidel.
Leitgeb, H. 2017. The Stability of Belief. Oxford: Oxford University Press.
Leitgeb, H., and Pettigrew, R. 2010a. An Objective Justification of Bayesianism I: Measuring Inaccuracy. Philosophy of Science, 77, 201–235.
Leitgeb, H., and Pettigrew, R. 2010b. An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy. Philosophy of Science, 77, 236–272.
Levi, I. 1980. The Enterprise of Knowledge. Cambridge, MA: MIT Press.
Levi, I. 1987. The Demons of Decision. The Monist, 70, 193–211.
Levi, I. 1997. The Covenant of Reason: Rationality and the Commitments of Thought. Cambridge: Cambridge University Press.
Lewis, D. K. 1986. Probabilities of Conditionals and Conditional Probabilities. The Philosophical Review, 95, 581–589.
Lewis, D. K. 1999. Papers in Metaphysics and Epistemology. Cambridge: Cambridge University Press.
Link, G. 1980. Representation Theorems of the de Finetti Type for (Partially) Symmetric Probability Measures. Pages 207–231 of: Jeffrey, R. C. (ed.), Studies in Inductive Logic and Probability II. Los Angeles: University of California Press.
List, C. 2012. The Theory of Judgment Aggregation: An Introductory Review. Synthese, 187, 179–207.
Luce, R. D. 1959. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley.
Luce, R. D. 1964. Some One-Parameter Families of Commutative Learning Operators. Pages 380–398 of: Atkinson, R. C. (ed.), Studies in Mathematical Psychology. Stanford: Stanford University Press.
Luce, R. D. 1977. The Choice Axiom After Twenty Years. Journal of Mathematical Psychology, 15, 215–233.
Luce, R. D., and Raiffa, H. 1957. Games and Decisions. New York: John Wiley & Sons.
Luce, R. D., and Suppes, P. 1965. Preference, Utility, and Subjective Probability. Pages 249–410 of: Luce, R. D., Bush, R. R., and Galanter, E. (eds.), Handbook of Mathematical Psychology. Vol. 3. New York: Wiley.
Maher, P. 1992. Diachronic Rationality. Philosophy of Science, 59, 120–141.

Mahtani, A. 2012. Diachronic Dutch Book Arguments. Philosophical Review, 121, 443–450.
Marden, J. R., Young, H. P., Arslan, G., and Shamma, J. S. 2009. Payoff-Based Dynamics for Multiplayer Weakly Acyclic Games. SIAM Journal on Control and Optimization, 48, 373–396.
Marley, A. A. J. 1967. Abstract One-Parameter Families of Commutative Learning Operators. Journal of Mathematical Psychology, 4, 414–429.
McFadden, D. 1974. Conditional Logit Analysis of Qualitative Choice Behavior. Pages 105–142 of: Zarembka, P. (ed.), Frontiers in Econometrics. New York: Academic Press.
Miller, R. I., and Sanchirico, C. W. 1999. The Role of Absolute Continuity in “Merging of Opinions” and “Rational Learning.” Games and Economic Behavior, 29, 170–190.
Moss, S. 2015. Time-Slice Epistemology and Action Under Indeterminacy. Oxford Studies in Epistemology, 5, 172–194.
Nachbar, J. H. 1997. Prediction, Optimization, and Learning in Repeated Games. Econometrica, 275–309.
Narens, L. 2003. A Theory of Belief. Journal of Mathematical Psychology, 47, 1–31.
Narens, L. 2007. Theories of Probability: An Examination of Logical and Qualitative Foundations. Singapore: World Scientific Publishing.
Paris, J., and Vencovská, A. 2015. Pure Inductive Logic. Cambridge: Cambridge University Press.
Paul, L. A. 2014. Transformative Experience. Oxford: Oxford University Press.
Pedersen, A. P. 2014. Comparative Expectations. Studia Logica, 102, 811–848.
Pettigrew, R. 2016. Accuracy and the Laws of Credence. Oxford: Oxford University Press.
Press, W. H. 2009. Bandit Solutions Provide Unified Ethical Models for Randomized Clinical Trials and Comparative Effectiveness Research. Proceedings of the National Academy of Sciences, 22387–22392.
Purves, R. A., and Sudderth, W. D. 1976. Some Finitely Additive Probability. The Annals of Probability, 4, 259–276.
Putnam, H. 1963. Degree of Confirmation and Inductive Logic. Pages 761–783 of: Schilpp, P. A. (ed.), The Philosophy of Rudolf Carnap. La Salle, IL: Open Court.
Rabinowicz, W. 2002. Does Practical Deliberation Crowd Out Self-Prediction? Erkenntnis, 57, 91–122.
Ramsey, F. P. 1931. Truth and Probability. Pages 156–198 of: Braithwaite, R. B. (ed.), Foundations of Mathematics and Other Essays. New York: Harcourt Brace. Also in Philosophical Papers, ed. D. H. Mellor (Cambridge University Press, 1990).
Ramsey, F. P. 1990. Weight or the Value of Knowledge. British Journal for the Philosophy of Science, 41, 1–4.
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Regazzini, E. 2013. The Origins of de Finetti’s Critique of Countable Additivity. Pages 63–82 of: Jones, G., and Shen, X. (eds.), Advances in Modern Statistical Theory and Applications: A Festschrift in Honor of Morris L. Eaton. Institute of Mathematical Statistics.

Rescorla, R. A., and Wagner, A. R. 1972. A Theory of Pavlovian Conditioning: Variations in the Effectiveness of Reinforcement and Nonreinforcement. Pages 64–99 of: Black, A. H., and Prokasy, W. F. (eds.), Classical Conditioning II: Current Research and Theory. New York.
Robbins, H. E. 1952. Some Aspects of the Sequential Design of Experiments. Bulletin of the American Mathematical Society, 527–535.
Romeijn, J.-W. 2004. Hypotheses and Inductive Predictions. Synthese, 141(3), 333–364.
Romeijn, J.-W. 2015. Opinion Pooling as a Bayesian Update. Working Paper.
Rosenkrantz, R. 1981. Foundations and Applications of Inductive Probability. Atascadero, CA: Ridgeview Press.
Roth, A., and Erev, I. 1995. Learning in Extensive Form Games: Experimental Data and Simple Dynamic Models in the Intermediate Term. Games and Economic Behavior, 8, 164–212.
Rothschild, M. 1974. A Two-Armed Bandit Theory of Market Pricing. Journal of Economic Theory, 9, 185–202.
Rubinstein, A. 1998. Modeling Bounded Rationality. Cambridge, MA: MIT Press.
Rustichini, A. 1999. Optimal Properties of Stimulus Response Learning Models. Games and Economic Behavior, 29, 244–273.
Saari, D. G. 2005. The Profile Structure of Luce’s Choice Axiom. Journal of Mathematical Psychology, 49, 226–253.
Samuelson, L. 1997. Evolutionary Games and Equilibrium Selection. Cambridge, MA: MIT Press.
Sato, Y., Akiyama, E., and Farmer, J. D. 2002. Chaos in Learning a Simple Two-Person Game. Proceedings of the National Academy of Sciences, 99, 4748–4751.
Savage, J. E. 1998. Models of Computation. Boston: Addison Wesley.
Savage, L. J. 1954. The Foundations of Statistics. New York: Dover Publications.
Savage, L. J. 1967. Implications of Personal Probability for Induction. The Journal of Philosophy, 64, 593–607.
Savage, L. J. 1971. Elicitation of Personal Probabilities and Expectations. Journal of the American Statistical Association, 66, 783–801.
Schervish, M. J., and Seidenfeld, T. 1990. An Approach to Consensus and Certainty with Increasing Information. Journal of Statistical Planning and Inference, 25, 401–414.
Schoenfield, M. (forthcoming). Conditionalization Does Not (in General) Maximize Expected Accuracy. Forthcoming in Mind.
Seidenfeld, T. 1979. Why I Am Not an Objective Bayesian: Some Reflections Prompted by Rosenkrantz. Theory and Decision, 11, 413–440.
Seidenfeld, T. 1988. Decision Theory Without “Independence” or Without “Ordering.” Economics and Philosophy, 4, 267–290.

Seidenfeld, T., Schervish, M. J., and Kadane, J. B. 1990. When Fair Betting Odds Are Not Degrees of Belief. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1, 517–524.
Seidenfeld, T., Schervish, M. J., and Kadane, J. B. 2014. Non-Conglomerability for Countably Additive Measures that Are Not κ-Additive. Manuscript, CMU.
Selten, R. 1998. Axiomatic Characterization of the Quadratic Scoring Rule. Experimental Economics, 1, 43–61.
Selten, R. 2001. What Is Bounded Rationality? Bounded Rationality: The Adaptive Toolbox, 13–36.
Shamma, J. S., and Arslan, G. 2005. Dynamic Fictitious Play, Dynamic Gradient Play, and Distributed Convergence to Nash Equilibria. IEEE Transactions on Automatic Control, 50, 312–327.
Shapley, L. S. 1964. Some Topics in Two-Person Games. Pages 1–28 of: Dresher, M., Shapley, L. S., and Tucker, A. W. (eds.), Advances in Game Theory. Princeton: Princeton University Press.
Shimony, A. 1955. Coherence and the Axioms of Confirmation. The Journal of Symbolic Logic, 20, 1–28.
Simon, H. A. 1955. A Behavioral Model of Rational Choice. Quarterly Journal of Economics, 69, 99–118.
Simon, H. A. 1956. Rational Choice and the Structure of the Environment. Psychological Review, 63, 129–138.
Simon, H. A. 1957. Models of Man. New York: Wiley.
Simon, H. A. 1976. From Substantial to Procedural Rationality. Pages 65–86 of: Kastelein, T. J., Kuipers, S. K., Nijenhueis, W. A., and Wagenaar, G. R. (eds.), 25 Years of Economic Theory. Springer.
Simon, H. A. 1986. Rationality in Economics and Psychology. The Journal of Business, 59, 209–224.
Skyrms, B. 1986. Choice and Chance. An Introduction to Inductive Logic. Third edn. Belmont, CA: Wadsworth Publishing Company.
Skyrms, B. 1987a. Dynamic Coherence and Probability Kinematics. Philosophy of Science, 54, 1–20.
Skyrms, B. 1987b. Coherence. Pages 225–342 of: Rescher, N. (ed.), Scientific Inquiry in Philosophical Perspective. Pittsburgh: University Press of America.
Skyrms, B. 1990. The Dynamics of Rational Deliberation. Cambridge, MA: Harvard University Press.
Skyrms, B. 1991. Inductive Logic for Markov Chains. Erkenntnis, 35, 439–460.
Skyrms, B. 1993. Carnapian Inductive Logic for a Value Continuum. Midwest Studies in Philosophy, 18, 78–89.
Skyrms, B. 1997. The Structure of Radical Probabilism. Erkenntnis, 45, 285–297.
Skyrms, B. 2010. Signals: Evolution, Learning, and Information. Oxford: Oxford University Press.
Skyrms, B. 2012. Learning to Signal with Probe and Adjust. Episteme, 9, 139–150.

Smith, D. E. 1984. A Source Book in Mathematics. New York: Dover Publications.
Spohn, W. 1977. Where Luce and Krantz Do Really Generalize Savage’s Decision Model. Erkenntnis, 11, 113–134.
Spohn, W. 1978. Grundlagen der Entscheidungstheorie. Kronberg/Ts.: Scriptor Verlag.
Steele, K. 2012. Testimony as Evidence: More Problems for Linear Pooling. Journal of Philosophical Logic, 41, 983–999.
Talbott, W. 1991. Two Principles of Bayesian Epistemology. Philosophical Studies, 62, 135–150.
Taylor, P. D., and Jonker, L. 1978. Evolutionarily Stable Strategies and Game Dynamics. Mathematical Biosciences, 40, 145–156.
Teller, P. 1973. Conditionalization and Observation. Synthese, 26, 218–258.
Thorndike, E. L. 1911. Animal Intelligence. New York: Macmillan.
Thorndike, E. L. 1927. The Law of Effect. American Journal of Psychology, 39, 212–222.
Thurstone, L. L. 1927. A Law of Comparative Judgment. Psychological Review, 34, 266–270.
Titelbaum, M. G. 2012. Quitting Certainties: A Bayesian Framework Modeling Degrees of Belief. Oxford: Oxford University Press.
Tversky, A. 1972. Choice by Elimination. Journal of Mathematical Psychology, 9, 341–367.
van Fraassen, B. C. 1980. Rational Belief and Probability Kinematics. Philosophy of Science, 47, 165–187.
van Fraassen, B. C. 1984. Belief and the Will. Journal of Philosophy, 81, 235–256.
van Fraassen, B. C. 1989. Laws and Symmetry. Oxford: Oxford University Press.
van Fraassen, B. C. 1995. Belief and the Problem of Ulysses and the Sirens. Philosophical Studies, 77, 7–37.
van Inwagen, P. 1996. Is It Wrong, Everywhere, Always, and for Anyone, to Believe Anything upon Insufficient Evidence? Pages 137–153 of: Jordan, J., and Howard-Snyder, D. (eds.), Faith, Freedom, and Rationality. Rowman & Littlefield Publishers.
Vanderschraaf, P., and Skyrms, B. 2003. Learning to Take Turns. Erkenntnis, 59, 311–348.
Venn, J. 1866. The Logic of Chance. London: Macmillan.
von Neumann, J., and Morgenstern, O. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
Wagner, C. G. 1985. On the Formal Properties of Weighted Averaging as a Method of Aggregation. Synthese, 62, 97–108.
Wagner, C. G. 2002. Probability Kinematics and Commutativity. Philosophy of Science, 69, 266–278.
Wagner, E. O. 2012. Deterministic Chaos and the Evolution of Meaning. The British Journal for the Philosophy of Science, 63, 547–575.

Wagner, E. O. 2013. The Explanatory Relevance of Nash Equilibrium: One-Dimensional Chaos in Boundedly Rational Learning. Philosophy of Science, 80, 783–795.
Weatherson, B. 2003. From Classical to Intuitionistic Probability. Notre Dame Journal of Formal Logic, 44, 111–123.
Weisberg, J. 2007. Conditionalization, Reflection, and Self-Knowledge. Philosophical Studies, 135, 179–197.
Weyl, H. 1952. Symmetry. Princeton: Princeton University Press.
White, R. 2010. Evidential Symmetry and Mushy Credence. Oxford Studies in Epistemology, 3, 161–186.
Williams, D. 1991. Probability with Martingales. Cambridge: Cambridge University Press.
Williams, J. R. G. 2012. Generalized Probabilism: Dutch Books and Accuracy Domination. Journal of Philosophical Logic, 41, 811–840.
Williamson, J. 2010. In Defense of Objective Bayesianism. Oxford: Oxford University Press.
Williamson, T. 2002. Knowledge and Its Limits. Oxford: Oxford University Press.
Yellott, J. I. 1977. The Relationship Between Luce’s Choice Axiom, Thurstone’s Theory of Comparative Judgment, and the Double Exponential Distribution. Journal of Mathematical Psychology, 15, 109–144.
Young, H. P. 2004. Strategic Learning and Its Limits. Oxford: Oxford University Press.
Young, H. P. 2009. Learning by Trial and Error. Games and Economic Behavior, 65, 626–643.
Zabell, S. L. 1982. W. E. Johnson’s “Sufficientness” Postulate. The Annals of Statistics, 10, 1091–1099.
Zabell, S. L. 1988. Symmetry and Its Discontents. Pages 155–190 of: Skyrms, B., and Harper, W. L. (eds.), Causation, Chance, and Credence. Dordrecht: Kluwer Academic Publishers.
Zabell, S. L. 1989. The Rule of Succession. Erkenntnis, 31, 283–321.
Zabell, S. L. 1992. Predicting the Unpredictable. Synthese, 90(2), 205–232.
Zabell, S. L. 1995. Characterizing Markov Exchangeable Sequences. Journal of Theoretical Probability, 8, 175–178.
Zabell, S. L. 1998. The Continuum of Inductive Methods Revisited. Pages 151–385 of: Earman, J., and Norton, J. (eds.), The Cosmos of Science. Essays of Exploration. Pittsburgh: University of Pittsburgh Press/Universitätsverlag Konstanz.
Zabell, S. L. 2002. It All Adds Up: The Dynamic Coherence of Radical Probabilism. Philosophy of Science, 69, 98–103.
Zollman, K. J. S. 2007. The Communication Structure of Epistemic Communities. Philosophy of Science, 74, 574–587.
Zollman, K. J. S. 2010. The Epistemic Benefit of Transient Diversity. Erkenntnis, 72, 17–35.

Index

Abreu, D., 75 absolute continuity, 172–177, 180 abstract family, 201 Archimedean, 202 commutative, 203 positive, 202 solvable, 203 strictly monotonic, 202 abstract models of learning, 1 accuracy epistemology, 16–19, 131–133 and measuring beliefs, 18 and pragmatic approaches, 17–19 Acemoglu, D., 150, 185 Achinstein, P., 57 Adams, E. W., 147 agreeing to disagree, 151 Akiyama, E., 68 Aldous, D. J., 27, 90, 191 Alexander, J. M., 98 all-things-considered rationality, 131 analogical inductive logic, 161 Armendt, B., 13 Arntzenius, F., 128 Arslan, G., 50, 65, 67 Ash, R. B., 169 Aumann, R., 38, 58, 61, 67, 150

Börgers, T., 42, 101 Bacchus, F., 128 bandit problems, 34–37, 41, 52, 174 and inductive logic, 192–195 Banerjee, A., 138 Bayes, T., 13, 25, 77, 86 Bayesian conditioning, 19, 106, 110, 177, see also conditionalization Beggs, A. W., 41, 59, 69–71 Belot, G., 171 Benaïm, M., 62 Bernstein, S. N., 16 Berry, D. A., 35 best estimate, 132 and conditional expectation, 133–135, 139 and conditioning, 135–136 and general distance measures, 137–139 and reflection, 136–137 and the martingale principle, 137

Binmore, K., 19, 22, 75, 82, 144 black box learning, 6, 112, 115, 120, 126, 136, 141, see also generalized learning Blackwell, D., 171 Block, H. D., 47 Borel–Cantelli lemma, 193 bounded rationality, 30, 37–38, 74, 82–85, 101, 102 Bovens, L., 128 Bradley, R., 153, 156 Brandenburger, A., 58 Bregman distance function, 138 Brier score, 132, see also quadratic loss function Briggs, R., 128, 130, 145 Brighton, H., 82 Brown, G. W., 33 Buchak, L., 46 Bush, R., 101 Camerer, C., 39 Carnap, R., 3, 28, 56, 86, 94, 103, 113, 190 case-based reasoning, 52 Cesa-Bianchi, N., 199 chance, 26–28 chaotic behavior, 68 Chen, R., 170 Chernozhukov, V., 150, 185 Chihara, C. S., 17 Chinese restaurant process, 90 choice probability, 5, 41, 43, 48 conditional, 69, 72 Christensen, D., 13, 14, 21, 128, 153, 165 common knowledge, 151 commutativity, 83, 97, see also learning operator, commutative complexity and learning, 73–76 conditional expectation, 121 conditional probability, 122 conditionalization, 19, 106, 111, 114, 181, see also Bayesian conditioning and the reflection principle, 115 conglomerability, 147 consistency, 3, 4, 9, 13–16, 44, 103, see also dynamic consistency consistent embeddability, 6, 81, 85, 98

convergence to the truth, 168–171 correlated equilibrium, 67, 74 countable additivity, 127, 134, 146–148, 170 Döring, F., 182 de Finetti’s representation theorem, 27, 191 de Finetti, B., 3, 5, 11, 16, 22–23, 27–28, 40, 56, 62, 86, 106, 132, 143, 146, 170, 196, 197 de Groot, M., 155 de Morgan, A., 86 deference, 154, 179 Diaconis, P., 63, 109, 111, 169, 182, 191 Dietrich, F., 156 Dirichlet distribution, 28, 191 Dirichlet process, 93 disagreement, 149, 160 and synergy, 165 individual and group perspective, 152 rational, 185–188 divergence of opinions, 181–184 Doléans-Dade, C., 122, 169 Dubins, L., 90, 147, 171 Durrett, R., 193 Dutch book argument, 12–15 and consistency, 13–14 dynamic, 20–21, 129–131 Dutch book theorem, 13 dynamic consistency, 3, 7, 19–24, 54, 83, 102, 116, 118, 120, 123 dynamic probability, 112 Earman, J., 169 Easwaran, K., 132, 136, 138, 154, 156, 161, 165 Elga, A., 153–155, 157 epistemic accuracy, 131 epistemic disagreement, 149 epistemic rationality, 7, 23, 29, 53, 129–132, 139 Erev, I., 41 ex ante and ex post evaluation, 21, 130 exchangeability, 9, 27–28, 52, 64, 189 and periodic learning environments, 57 exchangeable random partition, 88–90 and inductive logic, 93 expected accuracy, 131–133 exploitation–exploration tradeoff, 36 Farmer, J. D., 68 Fenton-Glynn, L., 154, 156, 161, 165 fictitious play, 33–34, 39 in large worlds, 94–96 in periodic learning environments, 57–59 Field shift, 183 Field, H., 182 finite de Finetti representation theorem, 191

finite patterns, 73 finite state automata, 74 Fisher, R. A., 86 Fitelson, B., 153, 155, 156 Fortini, S., 62 Foster, D. P., 175, 176 foundationalist epistemology, 105 Freedman, D., 63, 169, 191 Fristedt, B., 35 Fudenberg, D., 33, 37, 59, 62 Gaifman, H., 171 Gaissmaier, W., 82 Gaunersdorfer, A., 64 Geanakoplos, J. D., 152 general foundations theory, 105 generalized learning, 7, 55, 102, 112, 126, 142, see also black box learning Genest, C., 155, 156 Gigerenzer, G., 82 Gilboa, I., 52 Gittins index, 36 Gittins, J. C., 36 Goldstein, M., 6, 115, 129 Good, I. J., 22, 28, 38, 105, 112, 114, 139, 182 Goodman, N., 11, 105 Greaves, H., 132, 136, 138 Guo, X., 138 Hacking, I., 22 Hájek, A., 107 Hammond, P. J., 46 hard Jeffrey shift, 179–181 Harman, G., 105 Harsanyi doctrine, 185 Harsanyi, J. C., 103, 150, 185 Hart, S., 65, 67, 199 Herrnstein, R. J., 41 Hirsch, M., 62 Hitchcock, C., 154, 156, 161, 165 Ho, T., 39 Hofbauer, J., 40, 54, 59, 62, 64 Hopkins, E., 40, 41, 59, 71 Hoppe urn, 91, 96 Howson, C., 13, 14, 19, 21, 30, 103 hypergeometric distribution, 192 inductive assumptions, 3, 34, 50, 52, 53, 55, 73, 83, 96, 98, 105, 118, 165 Jaynes, E. T., 103 Jeffrey conditioning, 6, 7, 30, 110, 163, 167 and merging of opinions, 178–184 Jeffrey, R. C., 4, 6, 12, 14, 30, 43, 79, 94, 102, 104, 106–114, 130, 182

Jehle, D., 153, 155, 156 Johnson’s sufficientness postulate, 28, 40, 52, 94, 189 for Markov exchangeability, 63 Johnson, W. E., 28, 86, 190 Johnson–Carnap continuum of inductive methods, 29, 40, 42, 52, 107, 189–191 and martingales, 119 and the reflection principle, 117 Jonker, L., 59 Joyce, J. M., 6, 16, 17, 79–81, 100, 114, 132, 137, 138, 154, 179, 181, 182 judgment aggregation, 152 judgmental probability, 113–114 Kadane, J. B., 22, 141, 146 Kalai, E., 171, 174 Kelly, K. T., 64, 186 Kelly, T., 153, 182 Kemeny, J. G., 13 Kennedy, R., 17 Keynes, J. M., 103, 111 Kingman’s representation theorem, 91–93 Kingman, J. F. C., 91 knowledge partition, 80, 84 Kolmogorov, A. N., 121 Krantz, D. H., 16 Kuipers, T. A. F., 28, 60, 190 Kulkarni, S., 105 Kyburg, H. E., 128 Lévy, P., 193 Ladelli, L., 62 Lam, B., 153 Lane, D. A., 22 Lange, M., 182 Laplace, P. S., 25, 86 large world, 5, 78 large world problem for learning, 82–85 in decision theory, 79–82 Laslier, J.-F., 41 Lasonen-Aarnio, M., 164 learning operator, 48, 201 commutative, 5, 48–50 Lehrer, E., 171, 174 Lehrer, K., 155 Leitgeb, H., 14, 17, 132, 137 Levi, I., 21, 103, 124, 128 Levine, D. K., 33, 37, 59, 62 Lewis, D. K., 20, 106–108, 129 linear averaging, 155, 161 List, C., 152, 156 logic of decision, 79 logical probability, 103

Luce’s choice axiom, 5, 6, 43–47, 97 and conditional probabilities, 44 and independence, 45–46 and large worlds, 98–101 and random utility models, 47 Luce, R. D., 5, 16, 32, 43, 48, 98 Lugosi, G., 199 McFadden, D., 47 Maher, P., 128 Mahtani, A., 130 Marden, J. R., 50 Markov exchangeability, 62–64 Markov fictitious play, 59–62, 65 and correlated equilibria, 61 Markov inductive logic, 60, 63 Markov learning operators, 72–73 commutative, 73 Markov reinforcement learning payoff based, 69–71 state based, 71–72 Marley, A. A. J., 49, 201, 203 Marschak, J., 47 martingale, 119, 177 martingale condition, 120–123 martingale convergence theorem, 120–121 martingale principle, 7, 102, 121 Mas-Colell, A., 65, 67, 199 maximally informed opinion, 33, 121, 169, 177 merging of opinions, 171–173 Miller, R. I., 175 Moss, S., 129 Mosteller, F., 101 Nachbar, J. H., 175 Narens, L., 44 Nash equilibrium, 58–59, 66 and convergence, 174–177 Nielsen, M., 181 objective prior, 103–105 paintbox process, 91 Pareto principle, 152 Paris, J., 103 partial exchangeability, 40, 196–198 and Markov exchangeability, 62 partition invariance, 81 and Luce’s choice axiom, 100 Paul, L. A., 47 payoff-based learning model, 5, 37, 84, 199 and Markov learning environment, 69 Pedersen, A. P., 107 peer disagreement, 7, 153 Peirce, C. S., 167, 181

periodic learning environment, 57 perturbed best response rule, 193 Petris, G., 62 Pettigrew, R., 16, 17, 53, 132, 137 Pitman, J., 90 Poisson–Dirichlet distribution, 93 Polemarchakis, H. M., 152 Polya urn, 91 Posch, M., 41, 59, 71 Press, W. H., 35 Price, R., 77 principle of indifference, 30, 104 probabilism, 10–12, 103, 106 probability kinematics, 6, 30, 110, 114, see also Jeffrey conditioning and commutativity, 182–183 and merging of opinions, 178–184 probe and adjust, 50–52, 73 procedural rationality, 38 propensities in reinforcement learning, 5, 41, 48, 201 conditional, 69, 72 propriety, 138 Purves, R. A., 147, 170 Putnam, H., 5, 56, 57 quadratic loss function, 16, 132, 133 and best estimates, 133–137 qualitative probability, 16 quasi-additive abstract family, 201 quasi-additive learning models, 49 Rabinowicz, W., 124 radical probabilism, 6, 102, 104, 111–114, 149, 167 and convergence to the truth, 121, 177 and merging of opinions, 178, 186 Raiffa, H., 98 Ramsey, F. P., 11, 13, 19–20, 23, 139, 144 and radical probabilism, 111 random partitions, 88 Rawls, J., 105 reflection principle, 7, 102, 115–118, 123, 127 violations of, 128 Regazzini, E., 62, 146 regret learning, 73, 199–200 regularity, 106–108, 189 Reichenbach, H., 160 reinforcement learning, 39 and the reflection principle, 117 average, 36, 39–41, 73, 95, 194, 198–199 basic model, 5, 41–43, 204–205 Bush–Mosteller, 101 in large worlds, 96–98 Rescorla, R. A., 101

Rescorla–Wagner model of learning, 101 Robbins, H. E., 35 Romeijn, J.-W., 30, 156 Rosenkrantz, R., 17 Roth, A., 41 Rothschild, M., 36 Rubinstein, A., 38, 75 rule of succession, 25, 86 de Morgan’s, 87, 93 generalized, 26 Rustichini, A., 69 Saari, D. G., 44 sampling of species problem, 6, 86 Samuelson, L., 40, 52, 75 Sanchirico, C. W., 175 Sandholm, W. H., 62 Sarin, R., 101 satisficing, 50 Sato, Y., 68 Savage, J. E., 75, 76 Savage, L. J., 6, 11, 15, 22, 33, 39, 43, 78, 106, 139, 144 Schervish, M. J., 22, 141, 146, 155, 156, 169, 184 Schmeidler, D., 52 Schoenfield, M., 132 Seidenfeld, T., 14, 22, 46, 103, 141, 146, 169, 184 Selten, R., 137 Shamma, J. S., 50, 65, 67 Shapley game, 64 Shapley, L. S., 64 Shimony, A., 107 Sigmund, K., 40, 54, 59 Simon, H. A., 4, 30, 38, 50, 74, 82 Skyrms, B., 6, 13, 20, 22, 50, 52, 57, 61, 65, 74, 98, 102, 112, 120, 130, 139, 169, 177, 194 small world, 5, 78, 80 Snir, M., 171 social choice, 152 soft Jeffrey shift, 179, 182, 183 and divergence of opinions, 181–184 Spohn, W., 124 square integrable random variable, 134 Steele, K., 153, 154, 163 Stewart, R. T., 181 straight averaging, 155 and inductive logic, 156–161 strict coherence, 107 substantive rationality, 38 Sudderth, W. D., 22, 147, 170 sufficient statistic, 197 Suppes, P., 16, 46

symmetry, 3, 4, 9, 25, 27–29, 32, 50, 73, 118, see also inductive assumptions Taking Turns game, 58, 61, 74 Talbott, W., 128 Taylor, P. D., 59 Teller, P., 20 Thalos, M., 128 Thorndike, E. L., 39 Thurstone, L. L., 47 Titelbaum, M. G., 129 Topol, R., 41 tower property of conditional expectations, 137, 157 Turing, A., 89 Tversky, A., 16, 46 Type I and Type II rationality, 38 Ulysses and the Sirens, 128, 130 uncertain evidence, 6, 109, 177, 179 and soft Jeffrey shifts, 182, 183 fluid, 7, 181, 186 solid, 7, 181, 186 uncoupled dynamics, 65, 67 Urbach, P., 14, 21, 103 utility and consistent preferences, 15 value of knowledge, 139–141 and reflection, 141–143 relation, 140

theorem, 139–140 van Fraassen, B. C., 6, 25, 102, 103, 115, 116, 127, 129, 130, 181 van Inwagen, P., 187 Vanderschraaf, P., 57, 61, 65, 74 Velasco, J., 154, 156, 161, 165 Vencovská, A., 103 Venn, J., 25 Wagner, A. R., 101 Wagner, C. G., 155, 182, 183 Wagner, E. O., 68 Wallace, D., 132, 136, 138 Walliser, B., 41 Wang, H., 138 Weisberg, J., 116, 144 Weyl, H., 25 White, R., 103 Williams, D., 120, 135 Williams, J. R. G., 18 Williamson, J., 103 Williamson, T., 103, 113, 144 Yellott, J. I., 47 Yildiz, M., 150, 185 Young, H. P., 33, 37, 50, 174–176, 199 Zabell, S. L., 28, 63, 86–94, 98, 103, 109, 111, 116, 120, 152, 170, 182, 190, 191, 199 Zollman, K. J. S., 35, 50
