Against Knowledge Closure (ISBN 9781108604093). English, 254 pages, 2019.


Table of contents:
Chapter 10 Abominable Conjunctions, Contextualism, and the Spreading Problem
10.1 Arguing against Closure Denial
10.2 Abominable Conjunctions
10.3 Abominable Conjunctions and Contextualism
10.3.1 DeRose versus Heller
10.3.2 Extrapolating from High- to Low-Standards Contexts
10.3.3 The Felicity of the Denials of Skeptical Hypotheses in Ordinary Contexts
10.3.4 Comparative Judgments
10.3.5 Generalizing
10.4 Contextualism and Anti-Skeptical Sources of Warrant
10.4.1 Contextualism and Transmission
10.4.2 Contextualism and Front-Loading
10.4.3 Contextualism and Safety
10.4.4 Contextualism and Direct Warrant
10.4.5 Contextualism and Warrant Infallibilism
10.4.6 Contextualism and Warrant by Entitlement
10.5 Abominable Conjunctions and Interest-Relative Invariantism
10.6 Abominable Conjunctions and Classical Moderate Invariantism
10.7 Abominable Conjunctions and the Knowledge Rule
10.7.1 The Knowledge-Rule Explanation
10.7.2 Gettier Versions of Abominable Conjunctions
10.7.3 Transmission and Retraction
10.7.4 Third-Person Abominable Conjunctions
10.7.5 Asserting “I Don’t Know”
10.8 Assumptions and Skepticism
10.8.1 Second-Order Skepticism
10.8.2 What Are Assumptions?
10.8.3 Dismissing versus Answering the Skeptic
10.8.4 Reasonable Assumptions
10.8.5 Summary
10.9 The Spreading Problem


AGAINST KNOWLEDGE CLOSURE

Knowledge closure is the claim that, if an agent S knows P, recognizes that P implies Q, and believes Q because it is implied by P, then S knows Q. Closure is a pivotal epistemological principle that is widely endorsed by contemporary epistemologists. Against Knowledge Closure is the first book-length treatment of the issue and the most sustained argument for closure failure to date. Unlike most prior arguments for closure failure, Marc Alspector-Kelly’s critique of closure does not presuppose any particular epistemological theory; his argument is, instead, intuitively compelling and applicable to a wide variety of epistemological views. His discussion ranges over much of the epistemological landscape, including skepticism, warrant, transmission and transmission failure, fallibilism, sensitivity, safety, evidentialism, reliabilism, contextualism, entitlement, circularity and bootstrapping, justification, and justification closure. As a result, the volume will be of interest to any epistemologist or student of epistemology and related subjects.

Marc Alspector-Kelly is Professor of Philosophy at Western Michigan University. His work in epistemology, the philosophy of science, and the history of analytic philosophy has been published in numerous leading journals including Philosophy and Phenomenological Research, Philosophy of Science, Synthese, and Philosophical Studies.

Downloaded from https://www.cambridge.org/core. Columbia University - Law Library, on 23 Jan 2020 at 18:51:05, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093


AGAINST KNOWLEDGE CLOSURE MARC ALSPECTOR-KELLY Western Michigan University


University Printing House, Cambridge, United Kingdom
One Liberty Plaza, New York, USA
Williamstown Road, Port Melbourne, Australia
Splendor Forum, Jasola District Centre, New Delhi, India
Anson Road, Singapore

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org
DOI: 10.1017/9781108604093

© Marc Alspector-Kelly 2019

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2019

Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.

A catalogue record for this publication is available from the British Library.

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Contents

Acknowledgments

1 Motivation, Strategy, and Definition
1.1 Closure as Axiom
1.2 Why Care?
1.3 Strategy
1.4 Defining Closure
1.5 KC
1.6 Transmission and Warrant
1.7 The Problem with KC
1.8 Transmission versus Penetration

2 Counterexamples
2.1 Zebra
2.2 Basis
2.3 Fallibilism
2.4 Dretske Cases
2.5 Vogel against the Counterexamples
2.6 A Plethora of Inclinations
2.7 The Argument by Counterexample
2.8 The Chapters to Follow

3 Denying Premise : Skepticism
3.1 Why Skepticism?
3.2 Downgrading
3.3 Piecemeal and Wholesale Skeptical Hypotheses
3.4 The Skeptical Closure Argument
3.5 Front-Loading
3.6 Underdetermination
3.7 Conclusion

4 Denying Premise : Warrant Transmission
4.1 Warrant Transmission and Williamson’s Insight
4.2 No Inevitable False Negatives
4.3 NIFN, Fallibilism, and Insensitivity
4.4 Method Individuation
4.5 Method Externalism
4.6 NIFN and Other Dretske Cases

5 Transmission, Skepticism, and Conditions of Warrant
5.1 Transmission and Skepticism
5.2 Transmission and Conditions of Warrant
5.3 Transmission and Safety
5.4 Transmission and Reliabilism
5.5 Transmission and Evidentialism
5.6 Summary of the Last Two Chapters

6 Front-Loading
6.1 Warrant Preservation without Transmission
6.2 Front-Loading
6.3 The Front-Loading Strategy
6.4 The Buck-Passing Argument
6.5 And Not Just Front-Loading
6.6 A Safe Way Out?
6.7 Explaining Transmission Failure
6.8 The Closure Advocate’s Dilemma

7 Denying Premise : Warrant for P as Warrant for Q
7.1 Setting Aside Buck-Passing
7.2 Putting Inference Out of a Job
7.3 The Irrelevance of B to Q

8 Denying Premise : Warrant by Background Information
8.1 Outline of the Chapter
8.2 Background Information and Wholesale Skeptical Hypotheses
8.3 Background Information and Piecemeal Skeptical Hypotheses
8.4 Explaining the Lottery Intuition
8.5 Warrant Infallibilism and the Lottery Intuition
8.6 Merricks’ Arguments for Warrant Infallibilism
8.7 Summary

9 Denying Premise : Warrant by Entitlement
9.1 Warrants by Entitlement
9.2 Entitlement and Skepticism
9.3 The Meaning of “Warrant”
9.4 Strategic Entitlement
9.5 Entitlement of Cognitive Project
9.6 Conclusion

10 Abominable Conjunctions, Contextualism, and the Spreading Problem
10.1 Arguing against Closure Denial
10.2 Abominable Conjunctions
10.3 Abominable Conjunctions and Contextualism
10.4 Contextualism and Anti-Skeptical Sources of Warrant
10.5 Abominable Conjunctions and Interest-Relative Invariantism
10.6 Abominable Conjunctions and Classical Moderate Invariantism
10.7 Abominable Conjunctions and the Knowledge Rule
10.8 Assumptions and Skepticism
10.9 The Spreading Problem

11 Bootstrapping, Epistemic Circularity, and Justification Closure
11.1 Bootstrapping
11.2 Bootstrapping and NIFN
11.3 Bootstrapping and Epistemic Circularity
11.4 More Easy Knowledge
11.5 Justification Closure
11.6 Justification, Skepticism, and Assumptions

References
Index


Acknowledgments

Two anonymous reviewers provided extensive feedback, for which I am grateful; the book is much improved as a result. It is also much improved thanks to the support and contributions, both analytical and editorial, of my wife, Tammy, and the participation of our sons, Ben, Daniel, and Jonathan, in countless conversations concerning cleverly disguised mules.



 

1 Motivation, Strategy, and Definition

1.1 Closure as Axiom

Knowledge closure is, roughly, the principle that any agent who knows P and recognizes that P implies Q knows – or is in a position to know – Q. Although the principle has been challenged – most famously by Robert Nozick and Fred Dretske – most contemporary epistemologists remain committed to it. For many, indeed, that commitment is so firm that they view a theory that rejects closure as seriously undermined, if not refuted outright, for that reason alone.

A set S is closed under an operation O when applying O to members of S delivers members of S. The set of natural numbers, for example, is closed under addition: adding two natural numbers always produces a natural number. This follows from the Peano axioms for the natural numbers and the definition of addition. The set of one’s ancestors is closed under the parent-relation: any parent of one of one’s ancestors is also one’s ancestor. This is explicable by appeal to the fact that “x is an ancestor of y” means “y is a direct or indirect descendant of x.” And the set of true propositions is closed under deductive consequence: any consequence of a true proposition is also a true proposition. This is explicable by appeal to the nature of deductive consequence and a soundness proof. Closure principles typically have some sort of explanatory ground.
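The notion of set closure, and the rough knowledge-closure principle it is meant to parallel, can be stated schematically. The notation below is an illustrative gloss (with K_S for “S knows that”), not the author’s own:

```latex
% A set S is closed under a (binary) operation O just in case:
\forall x, y \in S:\; O(x, y) \in S
% For example, the natural numbers are closed under addition:
\forall m, n \in \mathbb{N}:\; m + n \in \mathbb{N}
% The rough closure principle for knowledge has the same shape, with
% recognized entailment playing the role of the operation:
\big( K_S\,P \;\wedge\; K_S(P \rightarrow Q) \big) \;\rightarrow\; K_S\,Q
```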

  



Refinements will come in §§.–.
The loci classici of closure denial are Dretske  and Nozick , chapter .
“[T]he idea that no version of [closure] is true strikes me, and many other philosophers, as one of the least plausible ideas to come down the philosophical pike in recent years.” (Feldman , )
“[I]f a philosopher advances a view that forces us to reject closure, that should be taken as a reductio of that philosopher’s view.” (Fumerton , )
“Robert Nozick’s counterfactual analysis of knowledge is famously inconsistent with intuitive closure, but that is usually taken as a reason for rejecting the analysis, not for rejecting closure.” (Williamson b, )
Although not always. The Peano axioms themselves include two closure principles.




However, this does not seem to be the case for knowledge closure. There is no argument of the form:

(1) Knowledge is like so . . .
(2) Inference is like so . . .
(3) Therefore, knowledge closure is true,

where the premises cite uncontroversial characteristics of knowledge and inference. Nor does every serious epistemological view imply closure. While the closure-denying sensitivity accounts of Nozick and Dretske face numerous objections, they nevertheless exert considerable intuitive pull. A belief is sensitive when, were the belief false, the agent would not believe it. Sensitivity is not closed under deductive entailment. Nevertheless, there is undoubtedly something unattractive about a belief’s counting as knowledge when the agent would still have believed it if it were false. Perhaps this intuition is illusory in some way. But the intuition is there, and strong enough to motivate a variety of successor views that attempt to reconcile sensitivity with closure. So closure advocates cannot claim that every intuitively plausible epistemological view implies closure. But there is also no argument from relatively uncontroversial truisms about knowledge and deductive inference on the table. This isn’t to say that there are no arguments for closure  







I will discuss a possible exception in §. and §.
This is the simplest version of sensitivity. Actual accounts are more elaborate. Some relativize to method: if the belief were false and the agent were to employ the same method, then the agent would not believe it by that method (Nozick ). (Nozick also includes an “adherence” condition: were the belief true and the agent to employ the same method, the agent would believe it by that method.) Others attach the modality to the agent’s reason for belief rather than the belief itself: if the belief were false, the agent would not have the reason she has for believing it (Dretske ). Finally, others couch sensitivity in probabilistic rather than modal terms: the probability that the belief is true given that the agent believes it is  (Dretske ) or high but not necessarily  (Roush , who also includes a probabilistic version of Nozick’s adherence condition). And there are many other versions.
Nozick called sensitivity “variation,” and called the conjunction of variation and adherence “sensitivity” (or “tracking”). Nevertheless, “sensitivity” is reserved for variation in the subsequent literature, a terminological tradition that I follow here.
“I have hands” is sensitive: if I were not to have hands – because of an unfortunate accident with a tablesaw, for example – then I would not believe that I do. But “I am not a handless brain in a vat (BIV) stimulated to have the very experiences I do have” is not sensitive: if I were a BIV, I would still believe that I am not a BIV (since my reasons for believing this, whatever they might be, would remain). Nevertheless, “I am not a handless BIV” follows from “I have hands.”
Sosa a suggests that the intuition results from a confusion of sensitivity with its contrapositive, safety. These are not equivalent since both incorporate subjunctive conditionals, which are not truth-preserving under contraposition.
See, for example, DeRose , , and ; Roush  and ; Baumann ; Black a; and Murphy & Black .


at all; far from it. But they typically proceed by presenting considerations, not directly in favor of closure, but rather against its denial. Those arguments deserve serious attention. But it is prima facie surprising that this widely endorsed principle isn’t derivable from undisputed characteristics of knowledge and inference. Closure is instead typically treated as an independent epistemological axiom. As such, it is thought to be warranted in the way that axioms often are: it is intuitively obviously correct.

But if closure is a primitive epistemological axiom, it’s an unusually complex one. Like the parallel postulate in Euclidean geometry, whose complexity motivated attempts to derive it from the other simpler axioms, it stands out among seeming truisms concerning knowledge (“what you know you believe,” “what you know is true,” “what you know can’t be accidentally true,” and so on) as begging for derivation from simpler axioms concerning knowledge and deductive inference. The parallel postulate proved not to be so derivable. It also proved to be eliminable in favor of alternatives, giving rise to non-Euclidean geometries. Euclidean geometry is both intuitive and reasonably accurate as a representation of local observable space. But the parallel postulate turned out to be a dispensable theoretical posit rather than an unassailable geometric primitive in the representation of the geometry of the physical universe overall.

I suggest that a similar situation holds with respect to closure. Closure is an expression of an undeniable truth: deductive inference is an excellent way to extend one’s knowledge. But that undeniable truth is compatible with closure’s strict falsehood as a universal feature of epistemic space; excellence is not undermined by failure under exceptional circumstances. Just as caution must be exercised when extrapolating from the apparent geometric characteristics of our local space to the universe overall, so caution must be exercised when extrapolating from the undeniable truth that closure reflects to its supposed status as a universal epistemic truth. Closure is also a dispensable theoretical posit rather than an immobile pivot around which the epistemological landscape must turn.

See Chapter .
“This principle seems to me something like an axiom about knowledge.” (Cohen , )
“That something like [closure] is true, I will be taking as a primitive epistemic fact. I’m unable to formulate an argument that [closure] is true, just as I cannot provide an argument that killing innocent children without cause is morally wrong. But just as I nevertheless take it to be obviously true that we shouldn’t kill innocent children without cause in spite of my inability to argue for this truth, so I will be taking the truth of [closure].” (Dodd , )


1.2 Why Care?

Many riches flow from the renunciation of closure. A prominent skeptical threat – that one can’t know, for example, that one has hands unless one knows that one is not a handless brain in a vat – is defused in a way that respects our intuition that we don’t know that skeptical hypotheses are false while preventing the spread of that ignorance to more pedestrian knowledge claims. There is no need to countenance highly unintuitive “easy knowledge” inferences. A plausible solution to the problem of “bootstrapping” becomes available. And there is no need to resort to various theoretical, semantic, or pragmatic maneuvers in order to articulate views that incorporate closure while at the same time conceding that most, if not all, of our knowledge is acquired from fallible sources.

But fundamentally at stake, for me at any rate, is an untenable conception of the demands that an agent must satisfy in order to know. One knows by courtesy of internal and external conducive circumstances: you don’t know where your car is by seeing it in the parking lot unless you remember what your car looks like, there’s adequate lighting, light travels in a straight line, and so on. Call these enabling conditions. Such conditions must be in place for knowledge acquisition. But must the agent also know that they are in place? It’s hard to see why. There’s no reason in general why an agent S’s standing in a particular relation R to some fact, which requires that condition C is realized, requires that she also stand in R to C itself. My successfully maneuvering a car through an obstacle course requires that the brake pedal be appropriately connected to the brakes. But it doesn’t require that I connected them. Why should S’s knowing that P, which requires that enabling condition C is satisfied, require also that she know that C is satisfied?

Some – but not all – of the enabling conditions for S’s knowledge of P are implied by P itself.
Suppose, for example, that P is “the gas tank is empty,” which S believes as a result of consulting the gas gauge whose needle points at “E.” An enabling condition of S’s knowing that the tank is   

See Chapter . I examine the skeptical closure argument itself in Chapter .
See Chapter  for discussion of bootstrapping and easy knowledge.
These include externalist accounts of both evidence and method, contextualism, pragmatic encroachment views, and safety accounts, among many others. It also includes brute-force reconciliations of closure with views that are not, on their face, closure-friendly by simply appending a closure principle; Sherrilyn Roush’s  tracking-with-closure account is an example.


empty this way is that the needle isn’t stuck on “E.” If it were stuck on “E,” it would be so either while the tank isn’t empty or while it is, coincidentally, empty. “The tank is empty” implies that the former possibility is not realized: if the tank is empty, then it’s not the case that the tank isn’t empty while the needle is stuck. Since it’s an enabling condition of S’s knowledge that the tank is empty that the needle isn’t stuck (whether or not the tank is empty), it’s also an enabling condition of that knowledge that the needle isn’t stuck while the tank isn’t empty.

This generalizes. For any enabling condition C, since knowledge of P is not compatible with the failure of C, it is also not compatible with the failure of C while some other fact is true, including ~P. So ~(~C & ~P) is also an enabling condition, one that is implied by P. (~C & ~P) is incompatible with knowledge of P for two reasons: it is incompatible with P itself – and so with the facticity of knowledge – and it is incompatible with C, a condition of S’s knowledge of P given how she acquires that knowledge.

Finding closure intuitive, one could insist that S need only know that those enabling conditions that do follow from P are satisfied. So she needs to know that it’s not the case that the needle is stuck while the tank isn’t empty (if, at least, she recognizes the inferential relation), but she doesn’t need to know that it’s not the case that the needle is stuck while the tank is empty (and so also doesn’t need to know that the needle isn’t stuck simpliciter). But this is intuitively arbitrary, with respect to both what S needs to know and what she is in a position to know. It’s prima facie unintuitive that she needs to know that it’s not the case that the needle is stuck while the tank isn’t empty but not that the needle isn’t stuck.
And it’s similarly unintuitive that she could be in a position to know that it’s not the case that the needle is stuck while the tank isn’t empty but not in a position to know that the needle isn’t stuck. But if she does need to know that the needle isn’t stuck then skepticism seems the inevitable result. The same considerations apply to any enabling condition. So S needs to know that each such condition is satisfied. That knowledge will, moreover, have its own enabling conditions. So S must know that those conditions are satisfied as well, and that each condition of



That is, for any such condition C, there is a condition ~(~C & ~P) that follows from P, and it seems correspondingly arbitrary that S needs to, and can, know that without also knowing ~(~C & P) and so, simply, C.


that knowledge is satisfied, and so on. The imposition of such requirements seems destined for skepticism.
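The enabling-condition generalization above can be compressed into a short propositional-logic sketch. This is an illustrative reconstruction, using the chapter’s C for an arbitrary enabling condition:

```latex
% Knowledge of P is incompatible with ~C, hence with (~C & X) for any X,
% and in particular with (~C & ~P). So ~(~C & ~P) is an enabling
% condition as well, and this one follows from P:
P \vdash \neg(\neg C \wedge \neg P)
% Proof sketch: assume (\neg C \wedge \neg P); its second conjunct
% contradicts P; discharge the assumption by reductio.
% By contrast, neither \neg(\neg C \wedge P) nor C itself follows from P,
% which is what makes the restriction to P-implied enabling conditions
% look arbitrary.
```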

1.3 Strategy

It’s hard to argue against a principle that is widely treated as an epistemological axiom grounded in intuition. Even if a closure denier were to develop a theoretical account that rejects closure and that successfully answers every other conceivable objection (unlike those of Nozick and Dretske), she remains susceptible to the critique that her view doesn’t preserve closure. She would likely be accused of acquiring the fruits of her theory by theft over honest toil: of course she has an answer to the skeptic, for example, but only because she hasn’t done the necessary hard work of providing such an answer that is compatible with closure.

In order to proceed under such dialectical circumstances, the arguments I will offer against closure are, I suggest, not just intuitively compelling, but are so from a variety of epistemological standpoints. I will also show that the intuition behind closure is not as forceful as it seems at first glance, and that ultimately it does not support closure. Finally, I will show that the abominable conjunction and spreading problems directed against closure denial can be answered.

The closure denier does owe an account of when, and why, closure fails. Such an account might attempt to isolate all and only closure failures; one would certainly expect this of a full-blooded, closure-denying theory of knowledge. A more modest aim is to identify conditions under which closure fails, without claiming that these are the only such conditions. The result might be less satisfying than a “closure fails if and only if X” account. But, for the closure denier’s purposes, no more than the “if” direction is required: it would suffice, and is more secure, to present a





Indeed, as we’ll see in Chapter , skepticism results from the demand that S need only know that those conditions hold that follow from P. (At this point I’m only describing a motivation for resisting closure, not an argument for doing so. The argument comes in the rest of this book.)
This implies that some restricted version of closure is true. Since closure is surely not a property randomly distributed over inferences, there is some general characterization of those cases in which it does hold. Such a principle would, however, apply in a more restricted class of cases than would closure principles that are typically endorsed by those who identify themselves as closure advocates, and so will still count as closure denial in the relevant sense. (However, some philosophers who so identify themselves offer versions of closure that are in fact more restricted than those typically endorsed by mainstream closure advocates; Baumann  and Roush  are examples. It is, as a result, disputable whether they really should count as closure advocates.)


sufficient-but-perhaps-not-necessary account with broad theoretical and intuitive appeal. Developing a view satisfying the “only if” direction as well is likely to require a full-blooded theory of knowledge, in which case it will then run into the theft-over-honest-toil objection.

I will not, therefore, attempt to derive closure failure from some particular theory of knowledge (or class of such theories). This might disappoint some closure advocates since many of the objections against closure have in fact been directed against particular theories – especially those of Dretske and Nozick – that imply closure failure. But, on the face of it, such an argumentative strategy is inadequate; that T, which entails ~C, is false does not imply that C is true. So I will claim that there are conditions such that, when they are realized, closure fails, although there may well be other conditions with the same effect. Moreover, the cost of endorsing closure under those conditions will be very high indeed. The overall result will be that, far from closure denial’s being a theoretical disadvantage, it is incumbent on any defensible theory of knowledge that it accommodate closure failure.

In the remainder of this chapter I take up the challenging task of formulating a defensible closure principle. The next chapter presents a version of Dretske’s argument by counterexample, which appeals to putative counterexamples to closure. That argument will then structure the discussion for Chapters –, in which I examine the different strategies the closure advocate might adopt by way of responding to Dretske’s argument. The conclusion of Chapter  is that each such strategy fails, and so Dretske’s argument succeeds. In Chapter  I will respond to two popular arguments against closure denial: the abominable conjunction problem and the spreading problem.
In the course of doing so I’ll also examine closure-preserving contextualism and the non-skeptical invariantist closure denier’s response to skepticism. In Chapter  I’ll examine the bootstrapping problem, epistemic circularity, and the relationship between knowledge and justification closure.



 

Much of Hawthorne’s  defense of closure, for example, is less an attack on closure denial per se than the presentation of counterexamples to Dretske’s conclusive-reasons account of knowledge (naturally enough, since he was responding to Dretske).
I provide a more detailed description of those chapters at the end of Chapter , after Dretske’s argument is in place.
Contextualists claim that the semantic value of “know” varies across contexts of knowledge attribution. Invariantists deny that there is such variation.


1.4 Defining Closure

Notwithstanding broad agreement that some sort of closure principle is true, it turns out to be very difficult to formulate a principle that is immune to counterexamples that are recognized as such by both friends and foes of closure.

I will hereafter limit attention to single-premise closure, which concerns only inferences with one premise. As is well known, there are objections to multi-premise closure that don’t apply to single-premise closure (but not vice versa). So, if single-premise closure is undermined, then so is closure overall.

The simplest version of closure – appearing primarily in studies of epistemic logic – is that, if S knows that P and P implies Q, then S knows that Q. But this is an obviously inadequate description of actual epistemic agents. If such an agent has no grip whatsoever on the fact that P implies Q, it is highly implausible that she nevertheless must know that Q.

A common formulation declares that if S knows both that P and that P implies Q, then S knows that Q (call this the Classical Formulation). But it doesn’t follow from the antecedent that S even believes Q; knowledge of Q, however, requires belief that Q. And, even if S believes Q, it is compatible with this formulation that she doesn’t believe it because it follows from P. She could believe Q solely on the basis of wishful thinking and so, presumably, would not know it.

A more recent, and widely adopted, formulation is offered by John Hawthorne, inspired by Timothy Williamson’s suggestion that closure is an expression of the capacity of deductive inference to increase what one knows. “Williamson has an insightful take on the root of epistemic closure intuitions,” says Hawthorne, “namely the idea that ‘deduction is a way of extending one’s knowledge’.” Call this Williamson’s insight. Here is Hawthorne’s formulation, with its scope and necessity made explicit and the clauses labeled for convenience:





For similar reasons, I will not consider closure over inductive inferences. The problem for multi-premise closure is that small probabilities of error for each premise can add up so that, while each premise is probable, the conclusion is not. On probabilist conceptions of knowledge, according to which knowing P requires that P is probable on one’s evidence, closure can fail as a result. This does not apply to single-premise inference; if P implies Q, the probability of Q is at least as high as that of P. (Nevertheless, Lasonen-Aarnio  argues that the same problem can be extended to single-premise inferences.)  Hawthorne  and . Hawthorne , , fn. , quoting Williamson b, .
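To illustrate the probabilistic point in the note above with made-up numbers (the 0.95 threshold and the figures are mine, not the author’s): suppose knowing requires probability of at least 0.95 on one’s evidence, and the two premises are independent.

```latex
% Multi-premise case: each premise clears the threshold, but the
% conjunction inferred from them does not.
\Pr(P_1) = \Pr(P_2) = 0.95, \qquad
\Pr(P_1 \wedge P_2) = 0.95 \times 0.95 = 0.9025 < 0.95
% Single-premise case: validity rules out any such drop, since every
% world in which P holds is one in which Q holds:
P \models Q \;\Longrightarrow\; \Pr(Q) \ge \Pr(P)
```

This is why the probabilist objection targets multi-premise closure specifically, as the footnote says.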


Motivation, Strategy, and Definition



Hawthorne’s Formulation
Necessarily, for all agents S and propositions P and Q: if (a) S knows that P and (b) competently deduces Q from P, thereby (c) coming to believe Q, while (d) retaining her knowledge that P throughout, then (e) she knows that Q.

Clause (d) is designed to exclude cases wherein, during the course of performing the inference, the agent somehow loses her knowledge of P (because, perhaps, the performance somehow brings misleading evidence to light). Note that closure, so formulated, is diachronic: clause (a) refers to S’s knowledge at one time and clause (e) refers to her knowledge at a subsequent time. S’s performance of the inference takes up the intervening time.

. KC

Clause (b) of Hawthorne’s Formulation replaces “knows that P implies Q” in the Classical Formulation. The extent of its departure from that formulation depends on how “competent deduction” is to be interpreted. A competent deduction might consist in only a single inferential step from P to Q or a sequence of such steps from P to Q. It is, however, better to characterize the latter as involving successive instantiations of closure rather than a single instantiation. After all, an intermediate conclusion in the sequence is the next inference’s premise. If S doesn’t know that intermediate conclusion, then she doesn’t know the premise of the next inference. But, if so, it is unintuitive that she, nevertheless, must know the subsequent conclusion inferred from that premise. So knowledge of the ultimate conclusion requires that closure succeeds for each inferential step. We might as well, then, construe closure as applying to single-step inferences from the outset.

But a single-step inference seems to involve no more than the recognition that P implies Q, which is synchronic: one recognizes that P implies Q at a time, rather than across an interval of time. One might think that some span 



Hawthorne , . In Hawthorne ,  he substituted “comes to know that Q” for (e). However, and as Hawthorne recognized in the earlier work, an agent could know Q already, before performing the inference, and so satisfy the antecedent without satisfying the consequent (Hawthorne , , fn. ). But such a case should obviously not count as a counterexample to closure. So I cite his earlier formulation here. For this reason, Hawthorne’s Formulation is not, strictly speaking, a closure principle, since closure principles specify conditions on set membership (at a time). For reasons that will soon be apparent, I won’t attempt to revise it further.





of time is involved, since S initially believes P and then acquires her belief in Q by performance of the deduction. But “S competently deduces Q from P” should not be understood to imply that S believes either P or Q. The former is the purpose, in part, of clause (a), and the latter of clause (c). Appeal is also made to closure by way of explaining “retraction” phenomena, wherein S, taking herself not to know Q and realizing that Q follows from P, proceeds to deny that she knows P, despite having previously claimed knowledge of P. S’s competent, single-step deduction, then, just consists in her recognition that P implies Q, without commitment to either.

To recognize that P implies Q is not merely to know that it does (and so this does not amount to a reversion to the Classical Formulation). S might know that P implies Q by testimony from a logician, without having any grip on the inferential relation herself. Some might think that this would suffice. But suppose S knows P, and knows that P implies Q by testimony. How do these pieces of knowledge fit together in order to deliver her knowledge of Q? Presumably by a modus ponens inference: she recognizes that, since P is true and P implies Q, Q is true as well. But perhaps she only knows that (P and (P implies Q)) implies Q by testimony as well. Then how do these pieces of knowledge fit together in order to deliver her knowledge of Q? Presumably by another MP inference from (P and (P implies Q)) and (if (P and (P implies Q)) then Q) to Q. But perhaps she only knows this by testimony as well . . . 
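The regress just traced (my schematic rendering, not the author’s) can be displayed as an ever-growing list of premises, none of which, by itself, licenses detaching Q:

```latex
% Carroll's regress: each attempt to wield the implication as a further
% premise merely generates a new implication that must itself be known.
\begin{aligned}
&\text{(1)}\quad P\\
&\text{(2)}\quad P \rightarrow Q\\
&\text{(3)}\quad \bigl(P \wedge (P \rightarrow Q)\bigr) \rightarrow Q\\
&\text{(4)}\quad \Bigl(P \wedge (P \rightarrow Q) \wedge
    \bigl((P \wedge (P \rightarrow Q)) \rightarrow Q\bigr)\Bigr) \rightarrow Q\\
&\qquad\vdots
\end{aligned}
```

At no stage does adding a further known conditional, on its own, yield belief in Q; what is needed is the disposition to infer, which is the point of the paragraph that follows.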
If we model S’s relation to the fact that P implies Q as merely something S knows, and so as just another proposition that she believes and can wield as a premise, then, like Lewis Carroll’s tortoise, she will never be in a position to detach the conclusion. S’s recognition that P implies Q cannot be construed as merely something that S knows and so believes, such that an independent disposition to infer Q from her beliefs that P and that P implies Q must then be postulated. Rather, to recognize that P implies Q is to be inherently disposed to believe Q if one believes P, and not-P if one believes not-Q; if S does not have those dispositions, then S does not recognize that P implies Q. That disposition is then manifested by S’s believing Q when she believes P. 

 

Carroll . Closure is sometimes represented as involving a modus ponens inference from S’s beliefs that P and that P implies Q to Q. As per Carroll’s story, however, that’s a mistake. S infers from P to Q, not from P and (P implies Q) to Q. Lasonen-Aarnio  makes essentially the same point against knowledge-of-inference formulations of closure, albeit in the course of arguing for a competent-deduction formulation of closure. Does recognition imply knowledge? Perhaps not. Suppose that S receives excellent, though misleading, testimonial evidence to the effect that P doesn’t imply Q. On some views, excellent





This suggests the following definition of closure:

Knowledge Closure (KC)
Necessarily, for every agent S and propositions P and Q: if (a) S knows that P while (b) recognizing that P implies Q, and (c) believes Q on the basis of her belief that P and recognition that P implies Q, then (d) S knows that Q.

Knowledge closure, so formulated, is a synchronic closure principle, imposing a condition on the set of propositions known by an agent at a time. This is, I think, the most defensible formulation of knowledge closure. Unfortunately, however, there are prominent views in the literature that are universally taken to preserve closure but are incompatible with KC. So KC – and, indeed, any principle that identifies knowledge itself as the epistemic property that is closed over inference – still doesn’t suffice as the target principle for the closure debate. However, a closely related closure principle, one which concerns warrant rather than knowledge, avoids the problem.

To see the problem with KC, we need to introduce two distinctions: between closure and transmission, and between knowledge and warrant. The next section will do so, and will provide definitions of warrant transmission and warrant closure. With those definitions in hand, I will describe the problem with KC in §.. The upshot is that the knowledge closure debate really concerns warrant closure (and transmission).





but misleading evidence that R is false destroys knowledge of R, so that S no longer knows that P implies Q. But she still seems to recognize that it does, even while having evidence to the effect that it doesn’t. If so, recognition does not imply knowledge. However, it might reasonably be thought that such misleading evidence also undermines her acquisition of knowledge of Q in virtue of that recognition. If so, recognition (as cited in the closure principle) would require knowledge of the implication relation, and thereby rule out the existence of such undermining evidence. The original clause (d) – that S retains her knowledge of P throughout – is no longer required; its purpose – ensuring that S knows P while competently inferring – is achieved by the word “while” in clause (a) of KC. I have also replaced Hawthorne’s “coming to believe Q” with clause (c). Suppose that S believes Q before recognizing that it follows from P but does not (yet) know it. (Perhaps she believes it for no good reason.) Having now recognized that it follows from P, if she doesn’t then know Q, that would surely be a counterexample to closure. But Hawthorne’s Formulation won’t count it as such, since the antecedent isn’t satisfied: she doesn’t come to believe Q, since she already believed it. For the sake of convenience, I will often speak hereafter of S’s inferring from P to Q. But doing so is only shorthand for S’s believing P, recognizing that P implies Q, and believing Q on the basis of that belief and recognition.





. Transmission and Warrant

Closure is importantly distinct from transmission. Knowledge transmission takes place when knowledge of the conclusion is acquired as a result of S’s recognition that P implies Q. But it is compatible with transmission failure – S can’t acquire knowledge of Q by recognition that it follows from P, even if she knows P – that S inevitably ends up knowing Q anyway, so that closure is preserved. Clause (d) of KC doesn’t specify how S comes by her knowledge of Q, and so doesn’t require that she acquires it in virtue of her recognition that it follows from P.

Crispin Wright has long argued that there are cases of this sort; indeed, some of his examples include the very cases that have long been cited as closure failures, such as Dretske’s famous zebra case. While there is considerable disagreement with respect to the source of transmission failure, many closure advocates nevertheless agree with Wright that transmission does fail, and that its doing so helps explain why the inferences in zebra-style cases seem to go awry.

Transmission-failure cases (if they exist) are more perspicuously represented by reflecting on what it is that is preserved when transmission succeeds. S’s recognition that P implies Q is obviously not responsible for the truth of the conclusion when knowledge transmits. Since the inference is valid and the premise true, the conclusion is true whether or not S recognizes the implication. Nor does such recognition secure knowledge because it secures belief in the conclusion or because it ensures that the agent believes the conclusion on the basis of that recognition. Clause (c) of KC is included precisely because an agent could recognize the implication and, nevertheless, fail to believe the conclusion (or fail to believe it in response to that recognition). 
The epistemic property transmitted is, rather, warrant in Alvin Plantinga’s sense: that which makes for the difference between mere true belief and knowledge. This implies no theory of warrant. Warrant may or may not imply belief or truth, and it may be external or internal or have



 

See the references to Wright’s work in Chapter , fn. . In fact, Wright is concerned with the transmission of warrant rather than knowledge and, indeed, with the transmission of the legitimacy of claims to warrant rather than warrant itself. I will explore Wright’s view in Chapters  and , and passim. I will discuss transmission failure and its explanation in Chapters  and . Plantinga a and b.





elements of both. Plantinga-warrant – P-warrant, hereafter – is merely a placeholder for whatever it is that differentiates a truly believed proposition from one that is known. I can’t acquire new knowledge of Q by inference when I know it already. I can, however, acquire a new warrant for Q that way, even if I already possess a warrant for it. Warrants are essentially ways of knowing, and I might well know Q in more than one way. The question whether transmission fails is the question whether there can be cases wherein S doesn’t acquire an additional warrant for Q in virtue of her recognition that it follows from her warranted belief in P, whether or not she had a warrant for Q to start with. If she does inevitably acquire such a warrant, then the following is true:

Warrant Transmission (WT)
Necessarily, for every agent S and propositions P and Q: if (a) S’s belief that P is warranted while (b) S recognizes that Q follows from P, then (c) S acquires a warrant for Q in virtue of (a) and (b).

If S then believes Q on the basis of that warrant, then not only does S have a warrant for Q, but her belief in Q is warranted. The former only requires that S has such a warrant available to her, but does not imply that it is exploited as a basis for her belief; the latter does so imply. This is analogous to the standard distinction between propositional and doxastic justification. Warrant closure can now be formulated in such a way that it follows from WT, as it should, by weakening the consequent:

Warrant Closure (WC)
Necessarily, for every agent S and propositions P and Q: if (a) S’s belief that P is warranted while (b) S recognizes that Q follows from P, then (c) S has a warrant for Q.

While WT implies WC, the reverse implication doesn’t hold: clause (c) in WC does not imply that S’s warrant for Q is a result of her recognition that Q follows from P as required by clause (c) of WT. If Wright is correct,

  

In §§.–. I will argue that warrant in Plantinga’s sense does, in fact, imply truth. Even if so, however, this is not a consequence of its definition alone. Unless I somehow lose that knowledge in the interim. (c) should not read “S has a warranted belief in Q” because she might not believe Q at all, or only believe it as a result of wishful thinking rather than because it follows from P. Nor, I think, should (a) read “S has a warrant for P.” Notwithstanding her having such a warrant, she might nevertheless believe P only as a result of wishful thinking. It strikes me as far less plausible that she must end up with a warrant for – a way to know – Q that, she recognizes, follows from P that she only believes as a result of wishful thinking.





then there are cases in which S does not acquire a warrant in virtue of that recognition, so that WT is false, although she does have a warrant for Q from some other source, so that WC remains true.

. The Problem with KC

Not only are closure and transmission more perspicuously characterized in terms of warrant; doing so is unavoidable. For it turns out that KC – unlike WC – doesn’t allow for a view like Wright’s. But whatever one might think of such a view, it surely should not be foreclosed by the definition of closure itself. To see this, recall that KC reads as follows:

Necessarily, for every agent S and propositions P and Q: if (a) S knows that P while (b) recognizing that P implies Q, and (c) believes Q on the basis of her belief that P and recognition that P implies Q, then (d) S knows that Q.

Notice that (c) specifies the basis of S’s belief: she believes Q because P implies it. Suppose that she only believes Q for this reason. Q is also true, since she knows P – which requires that P is true – and P implies Q. Of course, if WC is false, the antecedent of KC doesn’t ensure that she has a warrant for Q as needed in order for her to know Q; KC requires that WC is true. So suppose that WC is true. Then, since she is warranted in believing P (as required by her knowing P), and recognizes that P implies Q, she also has a warrant for her true belief in Q. But that doesn’t imply that she knows Q. Knowledge surely requires, not merely that one have a warrant for one’s true belief, but that one’s belief is warranted, that is, it is based on the warrant one has for it. Otherwise, one could count as knowing a proposition that one believed only on the basis of, for example, wishful thinking.

If transmission fails in this case, S doesn’t acquire a warrant for Q in virtue of her recognition that it is implied by P. WC does require that she, nevertheless, has a warrant for Q (by, therefore, non-inferential means). But, we have supposed, she doesn’t believe Q on the basis of that warrant; she only believes Q because it is implied by P. So she doesn’t believe Q on the basis of any warrant she has; her belief isn’t warranted. So she doesn’t know Q. KC fails. 

In a similar vein, it is widely recognized that one’s belief is not justified if one merely has a justification for that belief; one’s belief must also be based on a justification one has for it. That’s the point behind the distinction between propositional and doxastic justification. The same point surely applies to knowledge.





KC is only preserved if WT is true, that is, if recognition that P implies Q always delivers a warrant. Then clause (c) of KC will ensure that her belief is based on that warrant. She will then have a true, warranted belief in Q, and, therefore, know it. But to insist that WT is true just is to deny that transmission ever fails. So KC is true only if transmission never fails. And that rules out a position like Wright’s – according to which transmission occasionally fails, although closure succeeds – by fiat.

KC would still be viable if Wright’s view were understood to require, not only that S must have an independent warrant for Q when she is warranted in believing P and transmission fails, but also that her belief in Q is warranted, that is, that it is based on a warrant that she has. Since Q is true (because P is true, as per clause (a) of KC), she would have a warranted, true belief in Q and so know it, as required by KC. But this additional condition should not be foisted on Wright. It would make the position much less plausible, since it would require that S believes Q. It is utterly implausible that real agents have so much as contemplated every Dretske-style Q proposition following from their ordinary beliefs; they may well not even have the conceptual resources to do so.

There being no obvious reason to think that Wright’s position needs this additional condition, it would be unfair to him – and to the closure advocate in general who wishes to reconcile closure with the concession that transmission fails in some cases – to impose it. At this dialectical stage, at least, Wright’s view should not be foreclosed by the definition of closure itself. So the scenarios that view conceives of as possible – wherein, although transmission fails, S has a warrant for Q – should not count against closure. The only way to ensure this is to characterize closure in terms of warrant rather than knowledge, and so by WC rather than by KC.

. Transmission versus Penetration

Transmission is sometimes characterized as requiring that the warrant for the premise is itself carried through the inference to the conclusion, so that the very same warrant for the premise becomes a warrant for the conclusion. Dretske, for example, so characterizes transmission in  

See also Lockhart , §. Silins , §. presents a similar argument. This is not a problem for KC in particular; it applies to any principle that identifies knowledge as the epistemic property closed over inference (and so also applies to the Classical Formulation as well as to Hawthorne’s Formulation).





Dretske (, ). But this is stronger than transmission – at least, it is stronger than I intend the term here. In Dretske () he argued that a reason for believing P does not necessarily constitute a reason for believing Q: I might see that there is wine in the bottle without seeing that there isn’t colored water in the bottle, although the former implies the latter. He called these “penetration” failures, claiming that they demonstrate that knowledge does not inevitably transmit, and so that closure fails. But, as Klein (), Luper () and others have pointed out, S’s warrant for Q is grounded in her recognition that P implies Q, which is not how her warrant for P was acquired. So their warrants are distinct; failure of penetration in Dretske’s sense is, therefore, unsurprising, and does not imply that S does not acquire a warrant for Q in virtue of her recognition that P implies Q. One might think that S’s warrant for P must, nevertheless, be available to S as a warrant for Q, at least when transmission succeeds. After all, Q is logically weaker (or, at least, no stronger) than P. 
If her warrant for P doesn’t suffice for Q, and yet a warrant is generated by S’s recognition that Q follows from P, then that recognition would have the seemingly magical effect of creating a new warrant for the weaker Q that was not there already in the warrant S had for the stronger P.

Some also cite the fact that any evidence that makes P probable must render Q at least as probable. If warrant is (at least in part) a matter of probabilistic evidential support, then this would appear to suggest, not only that S’s belief in Q is warranted, but that it is warranted directly by the same evidence that delivers P’s warrant (whether or not it is also warranted in virtue of S’s recognition that it follows from P).

However, it is relatively easy to come up with cases in which the warrant for P is clearly not a warrant for Q, and yet a warrant for Q is generated by inferring it from P. Presumably I can know – and so acquire a warrant 

 



Tony Brueckner, for example, characterizes this configuration of claims as “extremely odd” (Brueckner , , referencing Klein , who endorses these claims). Both Brueckner and Klein are concerned with justification closure rather than knowledge (or warrant) closure; but the point at issue applies equally well to the latter. Klein  so argues, for example, on p. . This is the one positive argument for closure referenced in fn. . Ironically, Klein offers this in support of closure in the very article – Klein  – to which Brueckner responds, nevertheless failing to notice that it does not sit well with his claim that the justification for P might not be available as a justification for Q. For those willing to endorse the reasoning in Dretske’s wine case from “there is wine in the bottle” that one knows by seeing the wine to “there is not colored water in the bottle,” that case already stands as an example (Dretske , ). For Dretske is presumably right to claim that one does not see that there is not colored water in the bottle by looking at it.





for – “the liquid in the cup is water” on the basis of how it looks, tastes, and smells. Knowing also that water is composed of H₂O molecules, I infer that there are H₂O molecules in the cup. Presumably I can come to know that there are H₂O molecules in the cup as a result. But I surely don’t acquire a warrant for “there are H₂O molecules in the cup” on the basis of how it looks, tastes, and smells alone. Someone without background knowledge that water is H₂O – or who had that background knowledge but nevertheless failed to recognize that it, together with the liquid’s being water, implies that the liquid is H₂O – would not have a warrant for “there are H₂O molecules in the cup.”

Another example: I again know that there is water in the cup on the basis of how it looks, tastes, and smells. This implies that it’s not the case that both there isn’t water in the cup and the universe is expanding. But it’s bizarre to suggest that I can learn the latter solely on the basis of how the liquid looks, tastes, and smells, and so without recognition that the negated conjunction is a logical consequence of what I learned on that basis (namely, that it is water).

And another example: logical truths follow from everything. So “that’s water” implies “it’s not the case that snow is and isn’t white.” But the basis of my warrant for the former – that the liquid looks, tastes, and smells like water – surely does not, on its own, serve to deliver a warrant for an unrelated logical truth.

Moreover, if the argument for closure is based on the claim that one’s warrant for P is available as a warrant for Q – as the appeals to logical strength and probability suggest – then there is no need for S to recognize that Q follows from P at all. That recognition, therefore, plays no role in ensuring that S has a warrant for Q. The closure principle this supports is, then, as follows: if S’s belief in P is warranted and P implies Q, then S has a warrant for Q. But this is obviously false. 
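The probabilistic appeal mentioned above can be put in one line (this is standard probability theory, my gloss rather than the author’s formalism): whenever P entails Q, conditioning on any evidence E preserves the ordering.

```latex
% Entailment is monotone under conditionalization: given E, the worlds
% where P holds form a subset of those where Q holds.
P \models Q \;\Longrightarrow\;
\Pr(Q \mid E) \;\geq\; \Pr(P \mid E)
\quad \text{for any evidence } E \text{ with } \Pr(E) > 0.
```

Notice that the inequality holds whether or not S recognizes the entailment, which is exactly why, as the text argues, it can support only the recognition-free closure principle just rejected.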
If S doesn’t even recognize that P implies Q, then she certainly doesn’t inevitably end up with a warrant for Q, even if she has a warranted belief in P. The intuitive motivation



Some might object to the mobilization of background knowledge. But such mobilization is common; in Dretske’s zebra case, for example, “that’s a zebra” is widely taken to imply “that’s not a disguised mule,” although this depends on the background knowledge that zebras are not mules. And even if we treat the background knowledge as part of the warrant, so that my putative warrant for “there’s H₂O in the cup” is based on its looking, tasting, etc., like water in conjunction with my background knowledge that water is H₂O, I obviously don’t need that background knowledge in order to be warranted in believing that there’s water in the cup. So what I do need for the latter warrant doesn’t suffice on its own to warrant “there’s H₂O in the cup.” That background belief is only relevant to my warrant for “there’s H₂O in the cup” in virtue of its role in facilitating the inference from “there’s water in the cup” to that proposition.





behind closure, after all – that S’s recognition that P implies Q allows for S’s acquisition of a warrant for Q – is now irrelevant. Indeed, one might as well just point out that, since P implies Q, anything that makes P true makes Q true as well, and offer that as an argument for closure. It supports the same untenable version of closure, after all. And while it is disputable whether knowledge of P requires that P be probable on one’s evidence, it is beyond dispute that knowledge requires truth. But an argument for closure that appeals solely to the fact that valid inference is truth-preserving is no argument at all (or is, at best, a question-begging one).

So transmission, as I will use the term, only requires that S acquire a warrant for Q as a result of inference from P; it does not require that S’s warrant for P itself suffices as a warrant for Q. This does mean that Dretske’s penetration-failure argument for closure failure doesn’t succeed; the argument of this book does not appeal to that argument. It also, however, undermines appeal to logical strength or probability in defense of closure.

Chapter  presents another of Dretske’s arguments, one which appeals to prima facie counterexamples to closure. That argument will structure the examination of the options available to the closure advocate in Chapters –.

See Williamson . For more on the suggestion that warrant for P suffices as a warrant for Q, see Chapter .


 

Counterexamples

. Zebra

Dretske’s penetration-failure argument is not his only argument for closure failure. Another rests on a simple appeal to intuition: cases are describable that seem to violate closure. The flagship example is Dretske’s famous zebra case (hereafter Zebra).

In front of S is a paddock at the zoo in which there is a zebra with typical distinctive stripes. S presumably knows that it’s a zebra on the basis of its appearance, and so is warranted in believing this. Since it’s a zebra, it’s not a mule, and therefore not a mule cleverly disguised to look just like a zebra. But, even if S recognizes that its not being a disguised mule follows from its being a zebra, it seems silly to suggest that she could acquire a warrant for its not being a disguised mule on that basis.

However, S appears to have no other source of warrant for its not being a disguised mule. Her perceptual experience seems irrelevant: it is “neutralized,” as Dretske put it, since if it were a disguised mule it would look the way it does. Her background knowledge seems inadequate: while she might have reason to think that zoos aren’t likely to disguise their animals, that seems a far cry from her knowing that this animal in particular, in this zoo in particular, has not been disguised. And she has no other obvious source of warrant: she has not washed the animal, has not conducted a DNA test, and so on.

So S has no source of warrant for its not being a disguised mule. But if WC is true then, since she is warranted in believing that it’s a zebra and recognizes that this implies that it isn’t a disguised mule, she must have a warrant for its not being a disguised mule. Therefore, WC is false. (A list of similar cases is given in §..)

The primary aim of this chapter is to present a generalization of Dretske’s argument (§.). I won’t suggest that this argument succeeds on its own. I will use it, instead, as an organizational tool: different 

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 06:35:37, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.002



Against Knowledge Closure

strategies for defending closure can be distinguished by identifying which premise of the argument they deny. Chapters – will then examine each of these strategies in detail; a brief outline of those chapters is given in §.. We’ll begin by identifying the kind of fallibilism that is crucial in cases like Zebra.

. Basis Fallibilism

Cases like Zebra exploit a certain kind of fallibilism attending most (if not all) of what we know. S believes that it's a zebra because it looks like one. In a natural but difficult-to-capture sense, her belief is based on the animal's appearing the way it does, and her knowing that it is a zebra is due, in part, to her responding to its appearing that way by forming that belief. But it could appear that way without being a zebra; it could, for example, be a disguised mule. So its appearing that way doesn't guarantee that it is a zebra; its looking like a zebra, which prompts S's belief, does not, on its own, ensure that the belief it prompts is true.

The relation between the zebra's appearance and S's belief is akin to, if not the same as, the basing relation grounding the standard distinction between propositional and doxastic justification. S is propositionally justified when she has a justification for believing P, although she might not believe P, or not believe it on the basis of that justification. And she is doxastically justified when she believes P on the basis of a justification (reason, evidence) that she has.

[Footnotes: So understood, propositional justification is conceptually prior to doxastic justification; for a contrary view see Turri . It also leaves open how the basing relation itself is to be characterized. It's plausibly at least a causal relation: S believes that it's a zebra because it looks like one. I won't attempt to delineate it more precisely than that here.]

This leaves open what sorts of things can count as bases. Some will insist that bases be individuated internally, some that they be individuated externally, and some will allow both. In Zebra, the basis for S's belief that it's a zebra might be characterized internally – as an inner experiential state of S caused by the markings on the animal's coat – or externally – as the pattern of those markings (and the outline of the animal, shading gradients, and so on) themselves that are visible from S's viewpoint. In the interest of being as ecumenical as possible, I won't pick sides here (see §.. for more on this issue) and insist only that, in some coherent sense of "looking like," S's belief is a response to the animal's looking like a zebra – thereby providing a plausible answer to the question "why does S believe that it's a zebra?" – and that it could look that way without being a zebra.

[Footnote: Note that this doesn't require an internalist characterization of "looking like." As per above, its appearance can be identified as a feature of the animal rather than of S's experience. (She isn't, after all, hallucinating when she sees the disguised mule.) The basis internalist can insert "seems to" where appropriate.]

For much of what we know, the basis of one's belief seems easily identifiable: S believes that it's a zebra because it looks like one, I believe that the Broncos won because the newspaper report said so, you believe that the tank is out of gas because the gas gauge reads "E," and so on. In some cases the identification is much more difficult: I would, for example, be hard-pressed to precisely identify the basis for my belief that the structure of DNA is helical. Perhaps, in some cases, it isn't possible for the basis to exist when the belief is false; perhaps my belief that I am in pain is an example. I will be focusing here on those cases wherein a basis does seem relatively easily identifiable, and where it is possible for that basis to exist and yet the belief be false.

[Footnotes: Perhaps we also have some knowledge without any basis at all; some of our beliefs might be warranted by default. See Chapter . The basis, however, may need to be characterized in such a way that it does not fix the content of the resulting belief, but only identifies it as a proposition of a certain kind. Suppose I believe a particular true mathematical proposition M solely on the basis of testimony from someone, S, who knows little about mathematics and nothing about the proposition in question. That basis is, intuitively, highly fallible. But the proposition is necessary. So it is not possible for me to have that specific basis – S's affirming M – when that specific proposition is false. This problem can be avoided if the basis is identified as "S affirms a mathematical proposition," without specifying which proposition it is that S affirms. It is certainly possible for S to affirm a mathematical proposition when that proposition, whatever it is, is false. This then raises the question how broad the characterization of the basis should be; I won't attempt to resolve that issue here. See Reed , Brueckner , and Hetherington  for related issues pertaining to the characterization of fallibilism.]

When we do acquire knowledge in such cases, our doing so is due, in part, to our responding to the relevant basis by forming the belief. My coming to know that it is three o'clock, for example, is due, in part, to my responding to the clock's reading "3:00" by believing that it is three o'clock. But my so responding does not, on its own, suffice for knowing, not even if my belief is true. The clock could read "3:00" because it is stuck at that time, and I could be coincidentally consulting it at three o'clock; I presumably don't know that it is three o'clock. This is a Gettier case; and a Gettier case is one in which a true belief is not known. So even though I have the same basis that I would have if I knew that it is three o'clock by consulting a working, accurate clock, I still don't know that it is. Since my belief is true, it is unwarranted (where warrant is the difference between true belief and knowledge).

So a belief's having a basis that, in some circumstances, produces a warranted belief doesn't ensure that it does so in all circumstances. One's basis may, therefore, be fallible – it is possible for the belief to be false, even though one believes it on the same basis – even though one's warrant is not; basis fallibilism does not imply warrant fallibilism.

Nor is basis fallibilism necessarily the same as evidence or justification fallibilism. Some accounts of both justification and evidence are infallibilist: justification J or evidence E doesn't suffice for belief in P unless J or E strictly implies P. If justification infallibilism is correct, then the animal's looking like a zebra doesn't constitute S's justification (or all of it); and if evidence infallibilism is correct, then the animal's looking like a zebra doesn't constitute S's evidence (or all of it). Nevertheless, S's belief that it's a zebra is prompted by its looking like one; in that sense, at least, its looking that way is S's basis for believing that it is. And it can look like a zebra without being one, whether or not its looking that way constitutes S's justification or evidence for believing that it is one. Basis fallibilism, in this sense, is true of S's belief, however things pan out for evidential and justification fallibilism.

[Footnotes: This is not to say, however, that warrant is infallible. (Or, at least, not yet. See Chapter .) See §.. for discussion of views of evidence along these lines. Timothy Williamson argues that one is justified only if one knows (Williamson, forthcoming); since knowledge is factive, one can then only be justified in believing P if P is true. Note again that this doesn't require basis internalism. See §.. for more on the issue.]

. Dretske Cases

Cases that are structurally analogous to Zebra, and so constitute prima facie counterexamples to closure, abound in the literature. Call these Dretske cases. The conclusion in all such cases is the denial of a scenario in which, although the agent's basis for her belief remains, the belief itself is false. So all such cases exploit basis fallibilism, as characterized in §.. Here is a sample of such cases:

(a) Car: S knows where her car is parked; she remembers having parked it in spot B half an hour ago. But she seems in no position to know that it hasn't just been stolen, even if she infers "it hasn't been stolen" from "it's in B," despite the fact that her belief in the former would be true if she did so infer (since knowing that the car is parked in B requires that it is in B, and so not stolen).

[Footnote: I'll omit "even if she infers the latter from the former, despite the fact that her belief in the latter would be true if she did so infer" and leave it implicit in the rest of the examples to avoid repetition.]

(b) President: S knows that Trump is the current president; she watched the inauguration on TV. But she seems in no position to know that Trump hasn't had a fatal heart attack in the last five minutes and so is no longer the president.

(c) Restaurant: S knows that "Burger Barn" is a good place to get a hamburger for lunch near the office; she ate there just last week. But she seems in no position to know that a fire didn't burn the place to the ground yesterday.

(d) Bridge: S, currently in Boston, knows that the Golden Gate Bridge stands at the mouth of the San Francisco Bay; she learned this long ago in school. But she seems in no position to know that the bridge wasn't just demolished by a falling meteorite.

(e) Gas Gauge: S knows that her car is out of gas by reading the gas gauge, which reads "empty." But she seems in no position to know that the gauge's needle didn't recently become stuck on "empty" while there is still some gas left in the tank.

(f) Cruise: S knows she won't be going on an around-the-world cruise this summer; it's far beyond her financial means. But she seems in no position to know that the lottery ticket she bought won't win, thereby providing her with the requisite means.

(g) Shopping: S knows that her husband is grocery shopping because he left a note to that effect on the kitchen table. But she seems in no position to know that he didn't get into a car accident and end up in the hospital.

(h) Keys: S knows where her car keys are; she accurately remembers having left them on the hook by the back door last night. But she seems in no position to know that they're not in her coat pocket instead and she's actually remembering where she put them the night before.

(i) Red Table: S is in a furniture store looking at a red table in white light, and knows thereby that it is red. But she seems in no position to know that it isn't a white table under a cunningly hidden red light.

(j) Directions: S knows that highway  goes to South Haven, having just asked a knowledgeable and honest gas station attendant. But she seems in no position to know that the attendant didn't lie to her for the sheer malicious fun of it.

(k) Misprint: S knows that the Broncos won last night's game, having read that result in the newspaper's accurate report of the game. But she seems in no position to know the report wasn't an erroneous misprint.

(l) BIV: S knows that she has hands (by seeing them). But she seems in no position to know that she's not a handless brain in a vat.

. Vogel against the Counterexamples

(a) through (d) originate in Jonathan Vogel's paper "Are There Counterexamples to the Closure Principle?" (Vogel a). His response to them is, in part, to suggest that any epistemic defect manifested by the relevant Q is inevitably shared by P so that, if that defect is taken to imply that Q is unknown, then it implies the same for P, notwithstanding the intuition that P is known. For example, one might suggest that S doesn't know that her car wasn't stolen in Car because it is likely that some car will be stolen – rarely does a night go by without a theft from some lot in the city – and she has no reason to think that her car is less vulnerable than the others. The background assumption might be that, "all other things being equal, it is unjustified to accept any member of a set of propositions L, such that the members of L are equiprobable and the subject knows (or has good reason to believe) that at least one member of L is false" (Vogel a, ).

But then, suggests Vogel, that assumption also undermines S's knowing that her car is where she parked it. For there is a set of propositions to which this belongs – "my car is where I parked it," "my neighbor's car is where he parked it," "my colleague's car is where she parked it," and so on – that are, given her evidence at least, also equiprobable, where she also has reason to believe that at least one is false (because one of those cars is stolen). So if this assumption undermines her knowing that her car wasn't stolen, it also undermines her knowing where her car is parked.

As it stands this is not so much an argument as a challenge: "[t]he critic of the Closure Principle has to identify some way in which beliefs in lottery propositions are epistemically defective, and this defect must not be shared by the mundane beliefs whose contents, in Car Theft cases, are known to entail those lottery propositions" (Vogel a, ). Vogel is unsurprisingly pessimistic about the critic's prospects.

[Footnote: See two paragraphs below for a general characterization of "lottery propositions."]

Such a putative defect has been on the table for some time, however, namely failure of sensitivity. P, in at least some of the cases (a) through (l), appears to be sensitive while Q is insensitive. It does, prima facie, seem to be an epistemic defect of one's belief that one would believe it even if it were false; and that does distinguish at least some of the Ps from the Qs in the way that Vogel's challenge requires.

[Footnote: In fact, some of the cases might run afoul of the prohibition against backtracking counterfactuals. We intuitively hold the past constant when assessing counterfactuals. (This applies to Car itself: it might be claimed that, if S's car were not in the spot she thinks it is in, she would still believe that it is, because we hold constant her having parked it there and so her subsequent memory.) For what it's worth, I think that the prohibition is not as rigid as this critique requires. And there are versions of sensitivity – Dretske's information-theoretic version, for example, which couches the relevant principle in probabilistic rather than modal terms – that are not susceptible to it. At any rate, many Dretske cases can be defended by appeal to sensitivity, even if not all.]

He does note that Nozick's view could account for the difference between P and Q, but indicates that "a discussion of Nozick's work is outside the scope of this essay" (Vogel a, , fn. ). He doesn't reference Dretske's conclusive-reasons account (or his successor information-theoretic account) at all, even though Dretske's zebra example prompts much of the discussion in Vogel's essay.

[Footnote: To be fair, Vogel has presented a number of criticisms of sensitivity and similar views in other work. See esp. Vogel , , and .]

Vogel suggests that (a) through (d) share certain characteristics, paradigmatically displayed by Cruise, whose Q is "S will lose the lottery." He, therefore, calls the relevant Q propositions "lottery propositions," an expression now in common use. I will nevertheless refer to them hereafter as Vogel propositions, retaining "lottery proposition" for the eponymous lottery proposition "S will lose the lottery."

First, the possibilities that appear to defeat knowledge of the implied proposition, although improbable, are not abnormal. (Compare changing the second sentence in Car to "S doesn't seem to know that her car was not evaporated by alien ray guns.") Second, there is a statistical reason (albeit a small one) to think that the possibility will be realized: people do win lotteries, cars have been stolen, restaurants have succumbed to fire, and so on. And, third, it would be arbitrary to discount that possibility in the case at hand (there is no reason to think that your ticket in particular is a loser as opposed to the others, that your car in particular definitely won't be stolen although someone's is likely to be, etc.). So, if the cases are treated consistently, you only know Q in your case if you know it in the other cases as well. But you don't know it in all the other cases; the not-Q possibility will be (or will probably be) realized in one of those other cases. So you don't know Q.

[Footnote: We have a statistical reason to believe that A is B, Vogel says, when on the basis of "relative frequencies, counting cases, and so on" we can infer that the statistical probability that A is B is greater than zero (Vogel a, fn. ). It's not clear how this applies to Bridge. We do have some statistical reason to believe that a meteorite will fall, since they occasionally do; but it is less clear that we have such a reason to believe that one big enough to take down the Golden Gate Bridge will do so.]

Vogel suggests that "it's not a disguised mule" doesn't satisfy these characteristics, since it would be abnormal for zoos to disguise their animals and there is no statistical evidence that this does sometimes happen. So, he thinks, "it's not a disguised mule" might be a better candidate for a known proposition than Dretske suggests it is.

[Footnote: In fact, this has happened: a zoo in the Gaza Strip disguised (not a mule but) a donkey as a zebra by painting it ("Gaza Zookeepers Draw Crowds with Painted Donkeys After Zebras Die," Telegraph.co.uk, The Telegraph, October , . Web. November , ). A zoo in China, moreover, passed off a mastiff as a lion ("Chinese Zoo's 'African Lion' Exposed when Dog Substitute Barks," abc.net.au, ABC News, August , . Web. December , ). Perhaps we should restrict the relevant class to "reputable zoo" or the like, although we would have to be careful to ensure that a zoo's counting as "reputable" doesn't on its own require that it doesn't disguise its animals.]

This might be thought to ground a divide-and-conquer strategy. In some cases – including Zebra, claims Vogel, as well as BIV – Q is not a Vogel proposition, and as a result it is more plausible that Q is in fact known; whereas, for those cases wherein the Q proposition is a Vogel proposition, S admittedly doesn't know Q but, for the same reason, also doesn't know P. So there aren't really any counterexamples to closure.

However, examples (a)–(k) are all cases in which the agent intuitively knows P and Q is a Vogel proposition. If this implies that the Q proposition is not known, then closure requires the same fate for the corresponding P propositions: S doesn't know where her car is parked, who the president is, where to go for lunch, how much gas is left in her tank, and so on.

[Footnote: Again, with the possible exception of (d).]

A Vogel proposition, moreover, follows from many – plausibly, the vast majority – of the propositions that we claim to know. Seemingly well-meaning testifiers have lied, reliable eyewitnesses have misidentified, reasonably accurate memory has led astray, etc. One would be hard-pressed to identify many sources of information upon which we rely that do not misfire on occasion. If, as per the divide-and-conquer strategy, we don't know such Q propositions, then we don't know the corresponding P propositions as well. Skepticism on a global scale is the result.

[Footnote: As Vogel recognizes, herein lies an important difference between local skeptical scenarios of the sort denied by the Q propositions of (a)–(k) and that of (l), namely, "S is not a handless BIV." We have no evidence that anyone has actually been, or is likely to have been, envatted.]

Vogel himself does not claim that Vogel propositions are not known. He does not, in fact, attempt to provide an account of what is known in such cases; he suspects "that such an account may not be available at all. For it may be that the Car Theft Cases together with the problem of semiskepticism reflect deep-seated, unresolved conflicts in the way we think about knowledge." However that may be, if Vogel is not a skeptic, his commitment to closure requires that the Q propositions of (a) through (k) must count as known, despite being Vogel propositions. But then his discussion of Vogel propositions as susceptible to skeptical reasoning is ultimately beside the point. His view would have to be that such reasoning is incorrect: whatever the correct characterization of knowledge actually is, it will endorse knowledge of both P and Q, notwithstanding intuitions to the contrary that are particularly forceful when Q is a Vogel proposition. But then these cases hardly constitute evidence in favor of non-skeptical closure preservation; to insist that such a position is correct is to impose a certain interpretation on such cases that is far from supported by our intuitive reactions to them. But doing so simply presupposes that closure is true; there is no theory-neutral demonstration here that these are not the counterexamples to closure that they seem to be. At least, there is no such demonstration unless Vogel claims that closure is preserved because skepticism is true. But that presumably isn't what he's claiming.

[Footnotes: Vogel a, –. The problem of semi-skepticism is the concern (for the closure denier) that any plausible explanation for why S does not know the Vogel propositions in Dretske cases will apply also to the corresponding P propositions. Appeal to contextualism does not alter this, since the anti-skeptical contextualist must claim that S knows Q, notwithstanding Q's being a Vogel proposition, in low-standards contexts (and similarly with respect to pragmatic encroachment views).]

. A Plethora of Inclinations

Vogel does concede that "[t]he Car Theft Case and its analogues provide counterexamples to the Closure Principle if we take our intuitions about such cases at face value" (Vogel a, ). But he also points out that, when confronted with the possibility denied by the Q proposition, people often retract their earlier claim to know P. This, he claims, is "just what the Closure Principle would require" (Vogel a, ). So the closure denier owes an error theory to explain why such retractions, nevertheless, occur (see Chapter ).

However, if the closure advocate is not a skeptic, she can't take the retractions at face value either, since, according to her, S knows both P and Q in at least pedestrian contexts; it's only the skeptic who endorses the retraction. So retraction is not, in fact, just what the closure principle would require, at least not if the closure advocate is not a skeptic: she would expect endorsement of Q rather than retraction of P. So the nonskeptical closure advocate owes an error theory too. Vogel recognizes this, and offers some possible explanations of retraction (Vogel a, §.). However, unless he concedes that the only defensible closure-advocating position is the skeptical variety, he can't reasonably appeal to our inclination to retract as evidence in favor of closure.

Both the non-skeptical closure advocate and denier repudiate our inclination to retract knowledge of P as somehow erroneous. The difference is only that the closure denier endorses our inclination to deny knowledge of Q, while the closure advocate treats that as erroneous as well. So closure denial is, in fact, more faithful to this pattern of intuitive inclinations, at least if we're not skeptics. Meanwhile, the skeptic owes an error theory for our initial pre-retraction inclination to affirm that we know P. Everyone has some explaining to do.

There are, it seems to me, the following reactions to Dretske cases, particularly when the Q proposition is a Vogel proposition. We are inclined to:

(a) initially affirm that S knows P;
(b) deny that S can learn Q by inference from P (given how S came by her knowledge of P in the first place);
(c) deny that S's background knowledge suffices for knowledge of Q;
(d) deny that S knows Q;
(e) deny that S knows P when confronted by the possibility denied by Q (and so when S comes to consider the question whether she knows Q);
(f) view the proposition denied in (e) and affirmed in (a) as the same, so that the former denial conflicts with, and so constitutes a retraction of, the latter assertion; and
(g) suspect that there has been some subtle change in the subject matter, or in the circumstances in which the claim that S knows P is asserted or thought, so that, notwithstanding (e) and (f), the earlier claim that S knows P was, in some sense, appropriate.

[Footnote: One might take inclinations (b) and (c) to explain (d), which would presumably also require a further inclination to affirm that there are no other sources of knowledge of Q than those cited in (b) and (c). But since (d) also might be a primitive inclination, I list it here separately.]

Inclinations (a)–(d) support non-skeptical, invariantist closure denial, while (e) and (f) (and possibly (g)) threaten it. (b)–(f) support skeptical closure advocacy, while (a) (and possibly (g)) threatens it. (a), (d), (e), and (g) support non-skeptical closure-advocating contextualism, while (f) counts against it. ((b) and (c) are arguably neutral; although they are predicted by the contextualist in high-standards contexts, they might be taken to count against the contextualist account of low-standards contexts.) All of (b)–(g) count against non-skeptical, invariantist closure advocacy; the retraction in (f) only supports closure when combined with an error theory to explain why we aren't instead inclined to assert that S knows Q (and so why we have the inclinations in (b)–(g)).

Nobody, that I am aware of, can claim support from all of (a) through (g). Under these circumstances, no one can claim victory on the basis of our inclinations in Dretske cases. To that extent, Vogel is correct: while these cases do present prima facie counterexamples to closure, they do not do so cleanly. Certainly they do not do so to such an extent that they provide decisive evidence for closure denial as against these other views. I don't, as a result, appeal to them as such. One aim in presenting them, instead, is to emphasize how ubiquitous they are. Another is to use them as an organizational tool: closure-preserving views can be distinguished by where their advocates understand Dretske's argument based on such examples to go off the rails.

[Footnote: Invariantism is the claim that "S knows that P" is not ambiguous across contexts of knowledge attribution (as proposed by the contextualist). The same applies to interest-relative invariantism.]

. The Argument by Counterexample

To that latter end a generalization of Dretske's argument is below. For reasons canvassed in Chapter , it is couched in terms of warrant rather than knowledge.

.. The Argument by Counterexample

In at least some Dretske cases wherein S recognizes that P implies Q:

(1) S is warranted in believing P.
(2) However, transmission fails: S can't acquire a warrant for Q in virtue of her recognition that it follows from P and her warrant for P.
(3) Moreover, her warrant for P itself does not directly warrant Q, where that warrant is direct if S's warrant for P suffices on its own to deliver a warrant for Q, and so regardless of whether she recognizes that Q follows from P;
(4) her background information does not deliver a warrant for Q; and
(5) she has no default warrant for Q.
(6) Therefore, S has no other, transmission-independent, source of warrant for Q. (3–5)
(7) But if WC is true then, if S is warranted in believing P, either transmission succeeds from P to Q or she has another source of warrant for Q.
(8) So WC is false. (1, 2, 6, 7)

[Footnotes: If, as Dretske and others suggest, transmission requires penetration – the warrant for P becomes a warrant for Q – then premise (3) would imply transmission failure, and so premise (2). But, as per §., that suggestion is false; the availability or otherwise of the warrant for P as a warrant for Q is an independent issue. It is important to remember here that "warrant" is P-warrant: the difference between knowledge and true belief. Having some reason or justification to believe Q does not imply having such a warrant. S has a default warrant for Q if she has a warrant even while having no evidence, grounds, support, reason, etc. in favor of Q. Suppose that S is warranted in believing P and has no independent warrant for Q. Then she can only be warranted in believing Q in virtue of her recognition that P implies Q. Suppose now also that transmission fails: she cannot acquire a warrant for Q in virtue of her recognition that P implies Q. So, even if she does recognize that P implies Q, she will not have a warrant for Q. But WC requires that, if she is warranted in believing P and recognizes that P implies Q, she will have a warrant for Q. So WC implies that either transmission succeeds or she has some other source of warrant for Q.]

Assuming that the possible sources of warrant for Q aside from transmission from P are exhausted by the options canvassed in premises (3)–(5), those premises imply (6): S has no source of warrant for Q other than by transmission from P. The closure advocate can't deny (7); it's a direct consequence of WC. So the closure advocate must deny one (or more) of premises (1)–(5). That is, for every Dretske case she must either:

(a) deny that S is warranted in believing P (denial of premise (1));
(b) insist that transmission does succeed from P to Q (denial of premise (2));
(c) suggest that S's warrant for P also suffices as a direct warrant for Q (denial of premise (3));
(d) claim that S's background information delivers a direct warrant for Q (denial of premise (4)); or
(e) suggest that S has a default warrant for Q (denial of premise (5)).

. The Chapters to Follow

In Chapters –, we’ll examine each of these options. Chapter  concerns option (a), which, as we’ll see, amounts to endorsing skepticism. Chapters  and  concern option (b). Chapter  will present an argument against transmission in Dretske cases, one that does not rely on any particular theory of warrant. Chapter  will defend that conclusion further by exploring the limitations of option (b) as a response to skepticism, and also by arguing that three popular conditions on warrant don’t transmit in Dretske cases. Chapter  will present an argument – one that also does not rely on any particular theory of warrant – to the conclusion that options (c), (d), and (e) all imply skepticism. As a result, no non-skeptical attempt to preserve closure by proposing transmission-independent sources of warrant for Q can succeed. Notwithstanding that result, Chapter  will examine option (c) and Chapter  option (d): we’ll see that the suggestion that S has either a direct warrant for Q or a warrant for Q on the basis of background information is untenable. Chapter , finally, will contest option (e): particularly with respect to Q propositions that deny certain “piecemeal” skeptical hypotheses, the suggestion that S has a default warrant for those propositions is also untenable, even if we again put aside the skeptical threat identified in Chapter . The overall result will be that the argument by counterexample succeeds: Dretske cases constitute counterexamples, not only to transmission, but also to closure.

Chapter  will defend closure denial against the abominable conjunction and spreading problems. We will also consider the relationship between contextualism (and similar views) and closure, as well as the non-skeptical invariantist closure denier’s response to skepticism. Chapter , finally, will apply the results of the previous chapters to bootstrapping, epistemic circularity, and justification closure.


 

Denying Premise : Skepticism

. Why Skepticism?

Closure is preserved, at least with respect to Dretske cases, if premise  of the argument by counterexample – that S knows P – is denied. Since such cases can be easily generated for virtually any piece of knowledge grounded on a fallible basis, and given that most if not all of our knowledge is so grounded, this response is tantamount to skepticism: we know little or nothing of what we ordinarily take ourselves to know.

It is worth noting that the skeptical strategy is prima facie unattractive as a response to the argument by counterexample, and not just because very few epistemologists are skeptics. The intuitive force behind closure is, after all, Williamson’s insight: deductive inference is a way of extending one’s knowledge. But insisting on closure at the price of reducing one’s knowledge to the point where one knows nothing is an odd way to respect that intuition. It is surely more attractive to concede that there are exceptions to a putatively universal method of extending knowledge than it is to concede that there is no knowledge to extend.

According to the closure denier, the skeptic is half-right. She is correct – and in agreement with intuition – to claim that we don’t know the relevant Q. But that ignorance can no longer infect our knowledge of P through a conduit opened by the inferential relation between P and Q. Knowing P does not inevitably put us in a position to know Q; so not being in a position to know Q does not inevitably undermine our knowledge of P. We can, thus, reconcile our intuitive ignorance of the falsehood of the skeptical hypothesis with our intuitive possession of ordinary knowledge.

The optimistic closure advocate, however, hopes for more: we know, in at least some contexts and circumstances, both P and Q. Meanwhile, the skeptical closure advocate hopes to wield closure to convince the closure-affirming optimist to come over to her side.

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 06:40:34, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.003




In §. I will first consider a possible compromise between the skeptic and the optimist who hopes to protect ordinary knowledge from skeptical assault, and find it wanting. Next, I will draw a distinction between different kinds of skeptical hypotheses that the skeptic might hope to wield, one that will play a significant role in the rest of this book (§.). With that distinction in hand, I then turn to the famous skeptical closure argument, which mobilizes our apparent failure to know that skeptical hypotheses are false, in conjunction with closure, against our mundane claims to know (§.). I conclude that the skeptical closure argument fails, and that alternative arguments that the skeptic might offer in its stead rest on far more disputable principles than closure (§.–.).

. Downgrading

A possible compromise between skeptic and optimist is to concede to the skeptic that we don’t know either P or Q, while suggesting that there are related claims that we do know, namely that Q, and so P, are probably true. After all, while it is intuitive that we don’t know Q, our background knowledge seems – at least in Dretske cases whose Q propositions are Vogel propositions – to provide the wherewithal to recognize that Q is likely. While S’s background knowledge concerning the infrequency of car theft may not suffice for her to know that her car wasn’t stolen, for example, it does arguably suffice for her to know that it probably wasn’t stolen; correspondingly, while she doesn’t know that her car is in space B, she does know that it is probably there. Call this downgrading.

See Nelkin , for example.

The costs of downgrading are, however, prohibitive. For one thing, it isn’t really much of a compromise from the optimist’s standpoint. In light of the ubiquity of Dretske cases, the result will still be that we know very little, if anything, of what we take ourselves to know. As exemplified by Misprint, for example, there are precious few, if any, testimonial sources of information that are so reliable that there isn’t some reason to think that those sources occasionally, if rarely, produce misinformation. On the contrary, there is very typically reason to think that they do so; misprints in newspapers, for example, do occur. But it’s a pretty substantial capitulation to the skeptic to concede that we don’t know the enormous body of information that we take ourselves to have acquired by the various sources of testimony we exploit, even when mitigated by the concession that we know it to be probable. The same goes for other putative sources of information.

Correspondingly, downgrading lines up badly with everyday uses of “know.” Claims to know where our car is parked, that the Broncos won, that the animal is a zebra, and so on, are utterly commonplace. On the view under consideration, most (if not all) such claims are false.

The downgrader might respond that, although downgrading is strictly speaking always correct, there are practical reasons to act as though it isn’t. It would, after all, be tedious to keep adding “probably” to everything we say; we can communicate just as effectively by dropping the qualification and engaging in the “loose talk” that we do engage in.

However, the downgrading iterates. S’s belief that Trump is probably the president, for example, depends on background knowledge; and that background knowledge is also susceptible to downgrading. Suppose that S’s estimation of the probability that Trump is president reflects the fact that Trump could have a sudden, fatal heart attack. That estimation depends on background information concerning Trump’s age and condition and the frequency of such attacks for those of that age and condition. But that background information itself will imply various Q propositions that S will, on this view, only probably know. As per Misprint, the contents of the newspaper report from which S learned Trump’s age and condition, for example, imply that that report was not an erroneous misprint. But since, as S is aware, newspapers do occasionally publish misprints, S only knows that a misprint is improbable. So, according to the downgrading response, S only knows what Trump’s age and condition probably are. So she doesn’t know outright that Trump is probably the president; she only knows that it is probable that he is probably the president.
The basis for her belief that newspapers rarely publish misprints, whatever it is, will presumably also be subject to error, however small. So analogous reasoning implies that she only knows that it is probable that misprints are rare. So she only knows that it is probable that it is probable that Trump is probably the president. And so on. 



Nelkin  doesn’t seem to recognize that this is so. She claims that “we are not in Bill’s situation very often,” where Bill, who knows that Mary intends to be in New York tomorrow, recognizes also that she might, very improbably, win the lottery for which she has a ticket, in which case she won’t be in New York. Nelkin is content to deny that Bill can know that Mary will be in New York, although he can know that she will probably be there. But Vogel propositions follow from all manner of ordinary knowledge claims; the same treatment will then ultimately deliver the skeptical position described above.
Analogous iterated downgrading will apply to her putative knowledge that people of Trump’s age and condition only rarely have heart attacks.





Perhaps there are propositions to the effect that something is probable that survive downgrading. But they certainly won’t be the propositions that we ordinarily claim to know; S doesn’t know outright, for example, that Trump is probably the president. It was, however, by claiming that we do at least know that those ordinary propositions are probably true that the downgrader hoped to mitigate the skeptical implications of her view.

.

Piecemeal and Wholesale Skeptical Hypotheses

The skeptic hopes to wield closure in her defense: since S doesn’t know that the skeptical hypothesis is false – and so doesn’t have a warrant for it – WC implies that she doesn’t have a warrant – and so doesn’t know – mundane propositions that imply that it is false. We’ll examine that argument in detail in §..

Strictly speaking, S could fail to know that the skeptical hypothesis is false while, nevertheless, having a warrant for it: the hypothesis could be true (in which case she obviously doesn’t know that it is false); she could not believe that it is false; or she could believe that it is false but not on the basis of that warrant. But the skeptic does not claim that the skeptical hypothesis is true; she doesn’t deny that S believes that it is false; and she doesn’t claim that S’s belief isn’t based on a warrant that she nevertheless has. The skeptic is gunning for S’s putative warrant for the claim that the hypothesis is false.

But it is first worthwhile to distinguish between two types of skeptical hypothesis to which the skeptic might appeal. The skeptic does not typically rely on such local skeptical hypotheses as that the animal is a disguised mule, the car was stolen, Trump had a heart attack, and so on. Instead, she hopes to wield a one-size-fits-all skeptical hypothesis: one might be a brain in a vat, dreaming, the plaything of an evil genius, a victim in the Matrix, and so on. Call this wholesale skepticism, in contrast to the piecemeal skeptical approach that appeals to our seeming ignorance of Q in the particular circumstances described in Dretske cases like (a) through (k).

However, no one hypothesis is such that its denial follows from all of the mundane knowledge claims that the skeptic hopes to undermine. “I am a brain in a vat” (BIV), for example, won’t serve by way of undermining knowledge that I have hands, since I could be such a brain to which hands are still attached (as per The Matrix). So the hypothesis must be “I am a handless BIV” in order to ensure that the requisite inferential relation is in place; and similarly for other target pieces of mundane knowledge. There is, therefore, not so much an all-purpose skeptical hypothesis as an hypothesis sketch whose instances are tailored to fit various ordinary claims to know: one is a BIV-in-circumstances-in-which-P-is-false, whatever P might be. Indeed, the requisite hypotheses can’t even share the property of positing that I am a BIV, since “I have a brain” and “vats exist” (and any other proposition implied by my being a BIV, such as “material objects exist,” “there are neurons,” etc.) don’t imply that I’m not a BIV. But surely my beliefs that I have a brain, that vats exist, etc. shouldn’t escape the skeptic’s clutches. So, for at least some ordinary beliefs, the BIV scenario can’t function as even an outline for skeptical hypothesis construction.

There are, moreover, significant advantages – from the skeptic’s standpoint – in appealing to piecemeal hypotheses. For one thing, although many do report an intuition that they do not know wholesale hypotheses to be false, intuitions to the contrary are sometimes affirmed. But it’s my impression, at least, that intuitions to the effect that we don’t know that piecemeal hypotheses are false – that S does not, in her circumstances, know that her car isn’t stolen, that the newspaper report isn’t a misprint, and so on – are somewhat more secure.

Wholesale hypotheses are also not typically Vogel propositions. Of the three characteristics Vogel identified for such propositions – the state of affairs it describes is not abnormal, there is at least some statistical evidence for its occurrence, and it would be arbitrary to endorse its denial over alternatives in which it is true – only the last might apply. Some epistemologists suggest that hypotheses that do not display such characteristics are precisely those that we do not need to rule out in order to know the mundane propositions that imply their falsehood. Klein (), for example, suggests that we only need to rule out skeptical hypotheses when we have some evidence in their favor.
And it is, he suggests, in the nature of a wholesale skeptical hypothesis like “I am a BIV” that we cannot have evidence for it. The only possible such evidence is empirical evidence – I can have no purely a priori reason to believe that I am not a BIV – and if I am a BIV then all empirical evidence is pseudo-evidence, since all such putative evidence is only the result of computer-generated stimulations. But the same cannot be said of Dretske cases whose Q propositions are Vogel propositions. Not only is evidence for the relevant piecemeal hypotheses entirely conceivable, we have some such evidence: cars in relevantly similar circumstances have been stolen, relevantly similar gauges have broken, and so on. So this response cannot be wielded against such hypotheses.

Others suggest that we know the falsehood of wholesale skeptical hypotheses by default. Like Wittgenstein’s “hinge” propositions, they constitute an essential backdrop against which our intellectual exchange with the world takes place. They, therefore, occupy a unique place in our intellectual lives as preconditions of inquiry rather than products of it; we must, therefore, treat them as known by courtesy of that unique role, on pain of having no intellectual life at all. However attractive or otherwise such a line of response might be against wholesale hypotheses, it is far less plausible when applied against piecemeal hypotheses. Such local, specific, contingent claims as “that is not a disguised mule,” “my car wasn’t stolen,” and the like are poor candidates for hinge proposition status. So the default-knowledge strategy is particularly implausible when directed against piecemeal skepticism.

The nearest possible world in which wholesale skeptical hypotheses are realized is also distant, at least as our technological capacities stand now: we are a long way from being able to actually envat a brain. However, the same cannot obviously be said of the hypotheses contemplated in (a)–(k), since cars are stolen, people have heart attacks, and so on. So those tempted to respond to skeptical hypotheses by suggesting that their realization occurs in possible worlds too distant to trouble our ordinary knowledge cannot easily offer the same response to the piecemeal skeptic.

See Roush  for an argument that no suitably general BIV-style hypothesis will serve the skeptic’s purposes. Nor will the dream hypothesis provide the requisite generality. “I dreamt last night” is true in both the ordinary and dream scenarios. And if I’m an ordinary embodied person doing the dreaming, then many other ordinary propositions are true as well. Perhaps the evil genius hypothesis will do the trick (although “there are those who wish to deceive me,” at least, is true in both scenarios). “To many of us it just does not seem so uniformly plausible that one cannot be said correctly to know that one is not at this very moment being fed experiences while envatted.” (Sosa a, ) See also DeRose , chapter . The dream hypothesis is a possible exception, since we do dream.

    

Klein claims that transmission from mundane propositions to the falsehood of the skeptical hypothesis succeeds, so that closure is preserved.
Klein is well aware of this, claiming that background information suffices to counter that evidence. See Chapter  concerning the plausibility of the claim that background information does so suffice.
Strategies along these lines are pursued by Wright  and , and Pritchard  and , among others. I will explore these strategies – and Wright’s approach in particular – in Chapter .
The dream hypothesis is again a plausible exception. See Sosa  for a safety theorist’s concession that appeal to safety alone won’t neutralize this hypothesis.
See §..





The skeptic’s claim that we don’t know Q propositions when those propositions deny piecemeal skeptical hypotheses is, therefore, immune to a wide variety of challenges that have been directed against the skeptic’s appeal to wholesale hypotheses. This is not to say, however, that the skeptic has to choose. The optimistic closure advocate will have to contend with the fact that we are inclined to affirm that we don’t know the denial of both wholesale and piecemeal skeptical hypotheses. Since both follow from pedestrian knowledge claims, she must insist that we are not really so ignorant. She then needs to explain how we come by that knowledge, and provide an error theory accounting for our intuitions to the contrary. The variety of skeptical hypotheses makes this all the more challenging.

. The Skeptical Closure Argument

.. Defending the Skeptical Closure Argument

We can now consider the skeptical closure argument. The argument almost invariably invokes wholesale skeptical hypotheses. But in light of the advantages for the skeptic in appealing to piecemeal rather than wholesale skeptical hypotheses highlighted in the last section, I’ll illustrate the argument by utilizing a piecemeal hypothesis, namely, “it’s a disguised mule” from Zebra. The standard presentation of the argument (as applied to Zebra) runs as follows:

() S doesn’t know that the animal is not a disguised mule.
() If she doesn’t know that it’s not a disguised mule, then she doesn’t know that it’s a zebra.
() Therefore, she doesn’t know that it’s a zebra.

Premise  is supposed to be grounded in closure. The obvious way to so ground it is to treat it as closure’s contrapositive, so that the relevant closure principle is “If S knows P (and P implies Q), then S knows Q.” But, as noted in §., that principle is obviously false. The skeptic might try the Classical Formulation, “if S knows P and knows that P implies Q, then S knows Q.” But as per that same section, that principle is false too. The skeptic might instead try KC:

Necessarily, for every agent S and propositions P and Q: if (a) S knows that P while (b) recognizing that P implies Q, and (c) S believes Q on the basis of her belief that P and recognition that P implies Q, then (d) S knows that Q.





The relevant contrapositive is: If S doesn’t know that Q, and recognizes that P implies Q and believes Q on the basis of her belief that P and recognition that P implies Q, then S doesn’t know that P.

But, as Warfield and David () indicate, the skeptic can only wield this argument against believers who not only don’t know that the skeptical hypothesis is false, but also satisfy the other clauses in the antecedent. So the argument applies only to those agents who recognize that P implies Q, believe Q, and base that belief on their belief in P and recognition that P implies Q. Warfield and David then point out that these are highly implausible as general claims about ordinary believers. Q might well, after all, have never occurred to them; they may not even have the conceptual resources needed to consider it. And, even if they do, it is very unlikely that they would believe it on the basis of its following from P. It would, for example, be intuitively bizarre for S to come to believe that it’s not a disguised mule by inferring this from its being a zebra when she believes the latter on the basis of its looking like one. Warfield and David conclude that it is difficult (or impossible) to formulate closure in such a way that it can be wielded against ordinary knowledge. But I suggest a different lesson. For suppose that the implausible is true: S does acquire her belief that it’s not a disguised mule by inference from her belief that it’s a zebra. Nevertheless, it is undoubtedly not in the spirit of the skeptical argument that her knowledge that it’s a zebra could be rescued from the skeptic’s clutches by her either ceasing to believe that it’s not a disguised mule or believing it on some other basis than its following from “it’s a zebra.” The idea is not that she doesn’t know that it’s not a disguised mule, but that she can’t know that, and the fact that she can’t implies that she doesn’t know that it’s a zebra, whether or not she believes that it’s not a disguised mule or acquires that belief by inference from its being a zebra. 
The skeptic can mold this into an argument by pointing out that, if S can’t know that it’s not a disguised mule – if she can have no source of warrant for that proposition whatsoever – then she can neither know it by means of inference from “it’s a zebra” nor by any other means. But KC ensures that one of these ways must be available if S knows that it’s a zebra: she must either be able to learn that it isn’t a disguised mule by inference from “it’s a zebra” or can know it in some other way. So S doesn’t know that it’s a zebra. It now doesn’t matter that S doesn’t in fact satisfy clauses (b) and (c) of KC; even if she had done so, she wouldn’t know that it’s not a disguised mule (since she can have no warrant for that proposition whatsoever). And that can only be so, given KC, if she doesn’t know that it’s a zebra.

Since the argument really concerns the warrants that are available to S for “it’s not a disguised mule” – and since, as we saw in §., closure needs to be formulated in terms of warrant anyway – I’ll use WC in setting out the argument.

The Skeptical Closure Argument

() No warrant whatsoever for “it’s not a disguised mule” is available to S (given her circumstances).
() Therefore, she can’t acquire a warrant for “it’s not a disguised mule” by inference from “it’s a zebra” and she has no other source of warrant for it ().
() However, if S’s belief that it’s a zebra is warranted, then she must either be able to acquire a warrant for “it’s not a disguised mule” by inference from “it’s a zebra” or have another source of warrant for it (WC).
() So S’s belief that it’s a zebra is not warranted (, ).
() Therefore, S doesn’t know that it’s a zebra ().

Of course, the skeptic is not only gunning for S’s belief that it’s a zebra; she intends the argument to apply to all, or at least the vast majority, of our ordinary claims to know. So the argument requires a generalization step asserting that, for each knowledge claim that is included in the skeptic’s target set, there is a skeptical hypothesis to which premise  (and so also  and ) applies. Wholesale hypotheses are precisely designed to apply to a wide variety of empirical claims, although, as we saw in §., there are limits to that application. The ease with which piecemeal hypotheses can be tailored for specific claims taken individually also provides reason to believe that the generalization step is justified. I won’t explore the issue further here.

Suppose the antecedent of  is true and the consequent false: S’s belief that it’s a zebra is warranted as per clause (a) of WC, but she has no source of warrant for “it’s not a disguised mule,” not even by inference from “it’s a zebra.” Then, even if she recognized that the former follows from the latter as per clause (b) of WC, S would still have no warrant for the former, and so would violate the consequent (c) of WC, despite having satisfied its antecedent. But WC is necessary, so this can’t happen. So  follows from WC.

.. Premise  Needs Argument

The skeptic, then, has an argument to offer that mobilizes a plausible closure principle in premise , and so that answers Warfield and David’s objection. But why believe premise ? Why believe that S not only doesn’t have a warrant for “it’s not a disguised mule,” but can’t have one? The skeptic might be tempted to baldly appeal to the intuition that S can’t have such a warrant given her circumstances and leave it at that. But the optimist can counter by appeal to the bald intuition that she does know, and so has a warrant for, “it’s a zebra.” The result appears to be a stalemate. And, as pointed out in §., the optimist seems to come out on top in that dialectical situation. For, again, Williamson’s insight underlying closure concerns the capacity of inference to extend knowledge, not to eliminate it. If the choice is between extending knowledge to the denial of skeptical hypotheses and conceding that there is no knowledge to extend, the optimistic closure advocate might insist that the former response to this conflict of intuitions is the more reasonable one.

So it behooves the skeptic to argue for premise . The inevitable approach begins by pointing out that, if it is a disguised mule, it will still look like a zebra; the denial of the skeptical hypothesis implies that she has the same evidence that she actually has. The skeptic then argues that this shows that S can’t know that it’s not a disguised mule. I’ll examine that argument – which I’ll call the indistinguishability argument – in §§.. and ... But first I’ll consider – and reject – a response to it offered by advocates of externalist views of evidence.

.. Evidential Externalism

The prevalence of wholesale (versus piecemeal) skeptical hypotheses has significantly skewed discussion of the indistinguishability argument – and of skepticism overall – by skeptics and anti-skeptics alike. The argument is very typically presented in terms of the identity of one’s experiences in the ordinary non-skeptical and wholesale skeptical scenarios. This shows, the skeptic insists, that one’s evidence in the two scenarios is identical. But, since one’s evidence doesn’t discriminate between them, it can’t deliver a warrant for the hypothesis that one does not occupy the skeptical scenario versus the hypothesis that one does. This way of presenting the argument has invited a variety of responses that deny that one’s evidence in the two scenarios really is the same. If those responses are correct, the skeptic’s argument doesn’t get off the ground.

See Moore a.

The disjunctivist, for example, claims that S’s perceptual evidence in the ordinary “good case” – wherein she isn’t in a skeptical scenario – and in the “bad case” – wherein she is in a skeptical scenario – is not the same (and so is not limited to her internal states). On John McDowell’s version of this approach this is because, in the good case, S’s perceptual evidence is that she sees that she has hands, where this evidential state is factive: she can’t see that she has hands unless she has them. In the bad case, however, she doesn’t of course see that she has hands, since she doesn’t have any hands to see. So, contrary to the skeptic’s claim, her evidence in the good and bad cases is not the same; evidence is not limited to internal states.

On Timothy Williamson’s E = K proposal, one’s evidence consists in all and only what one knows. In the good case, S knows that she has hands; so “I have hands” is part of her evidence in the good case. But in the bad case she doesn’t know that she has hands (since she doesn’t have hands). So, again, S has evidence in the good case that she does not have in the bad case; evidence is not limited to internal states.

Piecemeal skeptical hypotheses don’t, however, require that S’s evidence be limited to internal states, at least not if the basis B constitutes that evidence. For in many cases B can be taken to concern some feature of S’s external environment: what’s on the surface of the zebra, the report in the newspaper, where the gas gauge needle points, the contents of a bank account, and so on. There is no obvious restriction to impose on B, except that S’s belief in P is related to B in whatever way is required for that belief to constitute a response to B. B could even concern states that are unobservable to the naked eye. It is, for example, widely accepted that the universe is expanding. The standardly cited basis for believing this is the cosmological red shift. That the universe is expanding implies that it is not the case that the universe isn’t expanding and the red shift is in fact a result of an increase in photon mass.
That latter claim is a piecemeal skeptical hypothesis vis-à-vis the expansion hypothesis. But the red shift is certainly not naked-eye detectable, and not in any way a description of the astrophysicist’s experience (although it is accessible to the astrophysicist through sophisticated experimental design). It’s no surprise that, on the wholesale skeptical approach, the basis – and so S’s evidence, if the basis is taken to constitute that evidence – is limited to S’s internal states. For the one-size-fits-all approach of the wholesale   

See, for example, McDowell  and .
See Williamson a and b.
See §..
This alternative has been seriously postulated; see Cartwright, John, "Cosmologist Claims the Universe May Not Be Expanding," Nature: International Weekly Journal of Science,  July , www.nature.com/news/cosmologist-claims-universe-may-not-be-expanding-..


Denying Premise : Skepticism



skeptic requires that the skeptical hypothesis – or at least its sketch – be applicable to just about any proposition concerning the “external” world. And that requirement pushes the basis – which is to remain in the skeptical scenario by design – inside S’s mental life. But that is an artifact of the wholesale approach; evidential internalism is not presupposed by the closure-based skeptical argumentative strategy when it mobilizes piecemeal skeptical hypotheses. The evidential externalist might, however, insist that, when his view is applied to the red shift example, S’s evidence still implies, or includes, the universe’s expansion. S cannot therefore have the evidence she has when the universe is not expanding. If E = K, for example, and (we’ll assume) astrophysicists know that the universe is expanding thanks to the red shift, then “the universe is expanding” is part of their evidence in the good case that is not present in the bad case. And, similarly, S’s evidence includes the fact that it is a zebra and not just that it looks like one, that the tank is empty and not just that the gauge reads “E,” that the Broncos won and not just that the report said so, and so on. So, if E = K and S knows P, then no alternative skeptical hypothesis can be constructed wherein S doesn’t know P but her evidence remains the same. On E = K, there can’t be a world in which S’s evidence for P is the same as it actually is, and yet P, which S actually knows, is false. For, if P is false in a world, S doesn’t know it in that world. So she doesn’t have the same evidence that she has in the actual world. So there can’t be a skeptical hypothesis that describes such a world. But basis fallibilism remains true. For there are certainly worlds in which S doesn’t know P and in which the basis remains: the zebra is disguised, the newspaper reported last week’s win, the red shift is due to an increase in photon mass, and so on. 
And it is still intuitively disturbing that, if the skeptical scenario is realized, the basis – the state of affairs to which S responds by believing P – remains. The skeptic can just give Williamson his expansive concept of evidence and suggest that it is still true that S’s “shmevidence” – the basis of S’s warrant for P – is 

This is, to my mind, a serious problem with E = K. Whenever we learn that a proposition P is true on the basis of some body of evidence E that we know, P itself joins our body of evidence (because we now know it). So before we learn P, E did not include P, but after learning P – on the basis of E – it does, and is much more decisive as a result (since E, which includes P, implies P). But this seems crazy; the evidence we had for P before we noted its support for P and thereby learned P is surely the same evidence that we have for P after we learn it. And it seems equally crazy to suggest that, after learning P by appeal to E, we have decisive – indeed, trivially supporting – evidence for P, for every P we know.





present in both the good and bad cases and that fact still seems to threaten S’s knowledge of Q, whether or not the threat is expressed in evidential rather than shmevidential terms. At least in the piecemeal cases, this requires no commitment to evidential (or shmevidential) internalism. The same goes for the McDowellean approach. If it implies that identification of the red shift constitutes the “detection of the universe’s expansion” – which is factive, implying that the universe is expanding – so that her evidence, including that detection, implies that it is expanding, then there is no world in which we can have that evidence and in which it isn’t expanding. And, again, the skeptic can point out that it is possible for the red shift itself – that which, after all, the scientists themselves refer to as the evidence for the expansion – to exist without expansion, which is all she needs for the construction of her skeptical hypothesis. The threat presented by that hypothesis is, at least intuitively, undiminished by the disjunctivist’s insistence that S’s evidence, unlike her basis, implies that the universe is expanding. Disjunctivism, at least as standardly presented, is a theory of the agent’s direct perception of her proximate physical environment, contrasted with views that posit an internally specifiable experiential state common to both ordinary and wholesale skeptical scenarios. As such, it need not be extended in the manner contemplated above to such indirect sources of information as that provided by the red shift. It might well be that disjunctivists would not wish to push their disjunctivism out so far. But to the extent that they don’t do so, they can provide no response to the piecemeal skeptic who appeals to the evidence present in both good and bad cases when the bad cases lie beyond the boundary to which disjunctivists extend their factive view of evidence. 
It is, nevertheless, true that the wholesale skeptic does appeal to the agent’s experience as an internally specifiable basis (whether or not this is identified as S’s evidence) in common between the good and wholesale-skeptical bad cases. Again, that is an artifact of the intended scope of such hypotheses and need not apply to the piecemeal cases. But perhaps the disjunctivist can at least block the wholesale skeptic’s appeal to a common evidential ground. Disjunctivists do, however, concede that S is unable to distinguish the good from the wholesale-skeptical bad case. So, even if S’s evidence is characterized as a factive state only manifested in the good case, the skeptic

See McDowell , for example.




can point out that, although that evidential state itself is not realized in the bad case, a state that is indistinguishable to S from such a state is realized. She can then utilize this fact in an argument against S’s knowing that the wholesale skeptical scenario is false. In any event, the availability of a mundane, external, non-factive basis manifested in piecemeal skeptical scenarios seems to suffice for the skeptic’s purposes, whatever the fortunes of wholesale skeptical hypotheses might be at the hands of the evidential externalist.

The Indistinguishability Argument

We’ll now put aside the evidential externalist response and examine the skeptic’s indistinguishability argument in support of the first premise of the skeptical closure argument, namely, that S doesn’t know that it’s not a disguised mule. Here is a version of the argument:

Indistinguishability Argument
(1) S would have the same basis that she actually has for “it’s a zebra” if it were a disguised mule.
(2) If S would have the same basis for “it’s a zebra” that she actually has if it were a disguised mule, then that basis does not suffice to warrant belief that it’s not a disguised mule.
(3) So S does not have a warrant for believing that it’s not a disguised mule.

Having put aside the evidential externalist response, we can concede premise (1) to the skeptic. The question now is why we should believe premise (2). There are two general approaches to consider. On one, the fact that S would have the same basis for “it’s a zebra” that she actually has if it were a disguised mule initially undermines S’s warrant for her belief that it’s a zebra. For if she doesn’t have a warrant for her belief that it’s a zebra, then she can’t acquire a warrant for its not being a disguised mule by inference from it. So she doesn’t have a warrant for its not being a disguised mule; the first premise of the skeptical closure argument is true.

See Wright  and  for such an argument.
Moreover, since the basis need not be construed as S’s perceptual evidence (or all of it), the facticity of S’s perceptual evidence need not imply the facticity of that basis.
It is not obvious – to me, at any rate – that the postulation of a common internal basis with respect to wholesale skeptical hypotheses is equivalent to the positing of an evidential “highest common factor” mental state to which disjunctivists object.


But this is an unpromising way to interpret the indistinguishability argument. For it means that the conclusion of that argument – S doesn’t have a warrant for its not being a disguised mule – proceeds through the intermediate conclusion that she doesn’t have a warrant for its being a zebra. But that is also the conclusion of the skeptical closure argument itself. So, an intermediate conclusion of the indistinguishability argument – which argument is intended to establish the first premise of the skeptical closure argument – already establishes the conclusion of the skeptical closure argument. There is no sensible dialectical role for the closure argument to play. So the skeptic can’t view the indistinguishability argument as establishing, on its own, that S doesn’t have a warrant for the claim that it’s a zebra; for all that argument shows, the skeptic must concede, she might still have such a warrant. In support of this concessive approach, she might point out that the falsehood of “it’s a zebra” doesn’t imply that S would still have the basis she actually has; most things that aren’t zebras, after all, don’t look like zebras. The falsehood of the claim that it’s not a disguised mule, by contrast, does imply that it still looks like a zebra. So, perhaps, the fact that her basis would remain if the claim that it isn’t a disguised mule were false undermines her warrant for that claim in a way that doesn’t necessarily undermine her warrant for its being a zebra. But why should the optimist concede that S’s warrant for “it’s not a disguised mule” is undermined by the fact that she would have the same basis if it were a disguised mule? The skeptic might be tempted to appeal to underdetermination: the hypothesis that it’s not a disguised mule predicts the same evidence as does the hypothesis that it is a disguised mule; and whenever two competing hypotheses predict the same evidence, the skeptic might claim, that evidence can’t warrant either hypothesis.
But this applies equally well to the contest between “it’s a zebra” and “it’s a disguised mule”: the disguised-mule scenario implies that S has the same evidence – construing her basis as that evidence – that one would





Peter Klein calls this “virtual circularity” and claims that it implies that the closure argument can’t deliver a warrant for its conclusion (Klein ; his claim actually concerns justification rather than warrant, but I suspect he would apply it to warrant as well). It’s not obvious to me that a warrant can’t be delivered this way. But, regardless, it does mean that there is no point in proceeding with the closure argument.
Of course, the skeptic intends to establish that S doesn’t, after all, have a warrant for “it’s a zebra.” But that’s the conclusion of the skeptical closure argument. As we’ve just seen, her argument for one of its premises – the indistinguishability argument – can’t already foreclose the possibility that she has such a warrant without rendering the closure argument pointless.




expect if it’s a zebra, namely, that it looks like one. So if the fact that two hypotheses predict the same evidence prevents that evidence from delivering a warrant for either, it counts against S’s warrant for “it’s a zebra.” But that is, again, the conclusion of the overall skeptical closure argument. So, if appeal to underdetermination succeeds at all, it also renders the skeptical closure argument superfluous. The skeptic might be tempted to appeal instead to the insensitivity of S’s belief that it isn’t a disguised mule: if it were a disguised mule, S would still believe that it isn’t. That doesn’t apply to “it’s a zebra” since, if it weren’t a zebra, it very likely wouldn’t look like one; the nearest world in which it isn’t a zebra, presumably, isn’t one in which it’s a disguised mule, but instead one in which it is a flamingo, or hippo, or . . . But, of course, classical sensitivity implies closure failure. So a skeptic using this reasoning will have shot themselves in the foot: they will have appealed to a sensitivity condition on warrant that implies closure failure, in the course of arguing for skepticism on the basis of closure. The upshot is that the skeptic’s appeal to the indistinguishability argument, offered in support of the first premise of the skeptical closure argument – that S doesn’t have a warrant for “it’s not a disguised mule” – is on unstable ground. The indistinguishability argument can’t establish that S has no warrant for “it’s a zebra” on its own, for there will then be no point in continuing with the closure argument directed toward the same conclusion. 
So the indistinguishability argument must establish that S doesn’t have a warrant for “it’s not a disguised mule,” without also establishing that she doesn’t have a warrant for “it’s a zebra.” But the considerations that she might be tempted to offer in support of that argument’s second premise seem to either apply equally well against her warrant for “it’s a zebra” (as does underdetermination) or undermine closure itself (as does insensitivity). So appeal to indistinguishability in support of the first premise of the closure argument – that S doesn’t have a warrant for “it’s not a disguised mule” – is unpromising. But it’s hard to see what else the skeptic might offer in support of that premise. 



In fact, it applies better to the contest between “it’s a zebra” and “it’s a disguised mule.” For while “it’s a zebra” predicts that it will look like a zebra – we would expect it to look like a zebra if it is one – “it’s not a disguised mule” grounds no prediction at all. Since almost everything that exists isn’t a disguised mule – and doesn’t look like a zebra – there is no common appearance that those things that aren’t disguised mules can be expected to present.
See §..
Klein  points this out on p. .
There are views that attempt to reconcile closure with sensitivity; see the citations in fn. . But these are explicitly anti-skeptical views that endorse knowledge of (and so warrant for) the denial of the skeptical hypothesis; the skeptic can hardly appeal to them in the course of arguing for skepticism.




Against Knowledge Closure .. Indistinguishability and Transmission

I turn now to a different problem for the indistinguishability argument. Its conclusion asserts that S has no warrant for “it’s not a disguised mule.” But appeal to the fact that it would look like a zebra if it were a disguised mule seems, at best, to license the more limited claim that its appearance cannot directly warrant S’s belief that it’s not a disguised mule. She can’t acquire a warrant for its not being a disguised mule by appeal to its looking like a zebra, the thought goes, because it would look precisely that way if it were a disguised mule. Even if correct, however, that more limited conclusion leaves open the possibility that S could acquire a warrant for “it’s not a disguised mule,” not directly by appeal to its appearance, but indirectly, by inference from “it’s a zebra.” Recall that the indistinguishability argument must be interpreted in such a way as to leave open the possibility that the animal’s appearance could deliver a warrant for “it’s a zebra” (for otherwise, proceeding with the closure argument would be pointless). So, for all that argument establishes, S might have a warrant for “it’s a zebra” on the basis of its appearance. But if she had such a warrant, she should then be able to acquire a warrant for “it’s not a disguised mule” by inference from it. 
As we noted in §., transmission is not – and does not imply – penetration: that S acquires a warrant for “it’s not a disguised mule” by inference from “it’s a zebra” doesn’t mean that the warrant she has for “it’s a zebra,” on the basis of its appearance, must suffice as a direct warrant for “it’s not a disguised mule.” So an argument to the effect that the animal’s appearance doesn’t directly warrant “it’s not a disguised mule” does not undermine the possibility that S could acquire a warrant for “it’s not a disguised mule” by inference from “it’s a zebra.” So even if the indistinguishability argument does establish that the animal’s appearance doesn’t deliver a direct warrant for “its not being a disguised mule,” the skeptic needs another argument, one that rules out the possibility that S could acquire a warrant by inference from “it’s a zebra.” However such an argument might go, it can’t proceed by suggesting that the reason S can’t acquire a warrant for “it’s not a disguised mule” by transmission from “it’s a zebra” is that she has no warrant for “it’s a zebra”



Recall that to say that the warrant from B to Q is direct is to say that S’s warrant from B to Q is not generated by inference from P, itself warranted in virtue of B, but by unmediated appeal to B itself. So S would remain warranted in believing Q even if she did not recognize that P implies Q.
I will be presenting an argument to this effect myself in Chapter .




(so that there is no warrant to transmit). For the conclusion of that initial sub-argument – that S has no warrant for “it’s a zebra” – would again replicate the conclusion of the overall skeptical closure argument. If the closure argument is to retain a dialectical point, the argument against transmission must establish that S couldn’t acquire a warrant for Q by inference from P even if she were warranted in believing P. That is, it would have to establish that transmission fails in this case (as well as in other skeptical arguments against ordinary knowledge). Even if, however, an argument against transmission does succeed, that still doesn’t show that S has no warrant for “it’s not a disguised mule.” For she might have relevant background information: given the disastrous consequences for a zoo’s reputation that would result if the deception were discovered, it is unlikely that this particular zoo would engage in such deception. Perhaps that suffices on its own to deliver a warrant for “it’s not a disguised mule.” Or perhaps S has a default warrant: it is the sort of proposition that, in S’s circumstances at least, is automatically warranted even when she has no basis that delivers a warrant. So the skeptic needs arguments against these possible sources of warrant as well.

Front-Loading

Front-Loading and Closure

Suppose, then, that the skeptic offers an argument against transmission from “it’s a zebra” to “it’s not a disguised mule” and provides other arguments against warrants by background information and by default. An optimist who concedes that these arguments succeed has two options: continue to endorse closure and swallow the skeptical consequences of that commitment, or deny closure. The skeptic, of course, urges the first option. But why not take the second? Because, of course, closure is highly intuitive. But as we’ve seen, the intuition behind closure – Williamson’s insight – concerns the ability of deductive inference to extend knowledge; that is, it concerns transmission. But the skeptic must claim that transmission fails: even if S had a warrant for “it’s a zebra,” she couldn’t acquire a warrant for “it’s not a disguised  

As noted in §., the latter is implausible vis-à-vis piecemeal skeptical hypotheses; but perhaps we at least have such warrants for the denials of wholesale skeptical hypotheses.
I will be presenting such arguments myself in Chapter  (against warrant by background information) and Chapter  (against warrant by default).


mule” by inference from it. And the optimist (we are assuming) concedes that the skeptic is right about this. So why should the optimist continue to affirm closure when doing so has such disastrous skeptical consequences, and when she has already conceded that the intuition behind closure – that deductive inference expands knowledge – doesn’t apply? The skeptic might respond by claiming that S’s warrant for “it’s a zebra” requires that she already has a warrant for “it’s not a disguised mule.” Since it would look like a zebra both if it were a zebra and if it were a disguised mule, she needs to rule out the possibility that it is a disguised mule – by having a warrant against it – before appeal to the animal’s appearance can deliver a warrant for “it’s a zebra.” The skeptic then claims that S has no such warrant. Call this front-loading, since it requires a prior warrant for the conclusion in order for S to acquire her warrant for the premise. If front-loading is required, then closure is preserved: S can only have a warrant for “it’s a zebra” if she also has a warrant for “it’s not a disguised mule.” So, even though she can’t acquire a warrant for the latter by inference from the former, she still can only be warranted in believing either both or neither, as closure requires. It no longer matters whether S can acquire a warrant for “it’s not a disguised mule” by inference from “it’s a zebra”; any such warrant would come too late to provide the background warrant for “it’s not a disguised mule” that S’s warrant for “it’s a zebra” requires. So the skeptic can now drop her opposition to transmission, resting her case solely on the claim that a prior warrant for “it’s not a disguised mule” is required for S’s warrant for “it’s a zebra.” The skeptic does, however, need to argue that S has no other source of warrant for “it’s not a disguised mule,” and so that no such warrant is provided either by background information or by default.
But, as per the last paragraph of the previous section, she needs those arguments anyway.   



As per the last paragraph of the previous section, this needs argument.
I will discuss front-loading in detail in Chapter .
The skeptic might suggest that it is precisely because warrant for “it’s not a disguised mule” is a precondition of S’s warrant for “it’s a zebra” that transmission fails. This is essentially Crispin Wright’s approach, which I will consider in Chapter . However, the skeptic need not, strictly speaking, make this explanatory claim: she could suggest, instead, that, although warrant for “it’s not a disguised mule” is a condition of her warrant for “it’s a zebra,” and transmission fails, these are independent features of the case.
There is actually a third option consistent with closure: although S is warranted in believing that it’s not a disguised mule, she’s not warranted in believing that it’s a zebra. Obviously, however, any argument for such a position would not employ closure running from the premise that she has no warrant for “it’s not a disguised mule” to the conclusion that she has no warrant for “it’s a zebra.”




The resulting skeptical argument runs as follows:

The Skeptical Front-Loading Argument
(1) S’s warrant for “it’s a zebra” – on the basis of the animal’s appearance – does not suffice as a direct warrant for “it’s not a disguised mule” (indistinguishability argument).
(2) S’s background knowledge does not suffice to warrant “it’s not a disguised mule” (argument to be provided).
(3) “It’s not a disguised mule” is not warranted by default (argument to be provided).
(4) S has no inference-independent warrant for “it’s not a disguised mule” (1–3).
(5) Prior (and so inference-independent) warrant for “it’s not a disguised mule” is a condition of S’s warrant for “it’s a zebra” (front-loading).
(6) S has no warrant for “it’s a zebra” (4, 5).
(7) S doesn’t know that it’s a zebra (6).

Ironically, however, this argument doesn’t appeal to closure at all. If S needs a prior warrant for “it’s not a disguised mule” in order to acquire her warrant for “it’s a zebra,” and can’t have it, that alone implies that she isn’t warranted in believing that it’s a zebra. While premise (5) – the front-loading requirement – does ensure that closure is preserved in this case (and other analogous Dretske cases), it is not implied by closure. Closure only requires that, whenever S is warranted in believing P and recognizes that P implies Q, she is warranted in believing Q. She could, consistently with closure, acquire her warrant for Q by transmission from P, or end up with a warrant for Q in some other way, even if she need not have it to start with in order to acquire her warrant for P. Indeed, it’s compatible with closure that she is warranted in believing “it’s a zebra” without having a warrant for “it’s not a disguised mule” at all. If she doesn’t recognize that the former implies the latter, for example, then she won’t satisfy the antecedent of WC. Her failure to satisfy its consequent – by not having a warrant for “it’s not a disguised mule” – won’t then violate that principle.
Closure requires nothing so demanding as that S’s warrant for “it’s a zebra” can only be acquired if she already has a warrant for “it’s not a disguised mule.” 
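The difference can be put schematically. The shorthand below is mine, not the book’s WC formulation verbatim: write $W_S(\cdot)$ for “S has a warrant for.”

```latex
% Closure: a constraint on S's overall epistemic state. If the antecedent
% holds -- warrant for P, recognized entailment, belief in Q on that basis --
% then S has a warrant for Q, however that warrant was acquired.
\[
\big(\,W_S(P)\ \wedge\ S\ \text{recognizes that}\ P \Rightarrow Q\ \wedge\ S\ \text{believes}\ Q\ \text{accordingly}\,\big)\ \rightarrow\ W_S(Q)
\]

% Front-loading: a constraint on the ORDER of warrant acquisition. Without
% an antecedent warrant for Q, no warrant for P can be acquired at all.
\[
\neg W_S(Q)\ \text{(prior)}\ \rightarrow\ S\ \text{cannot acquire}\ W_S(P)
\]
```

Front-loading guarantees the “both or neither” pattern that closure requires in this case; but the converse fails, since closure is silent about which warrant must be in place first.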

Does front-loading (as the skeptic wields it) imply closure? Not necessarily. The skeptic does not – had better not – insist that knowledge of any P requires a prior warrant for every proposition that follows from P. So, even if the denial of every skeptical hypothesis must be front-loaded, this leaves open the possibility that there are other propositions that are implied by propositions we know, but for which front-loading is not required. But transmission to them might also fail: we might not be


So, as with the indistinguishability argument, appeal to front-loading renders the skeptical closure argument superfluous yet again.

Problems with Front-Loading

It is, moreover, difficult to see how the front-loading claim – that S’s warrant for “it’s a zebra” requires a prior warrant for “it’s not a disguised mule” – can be defended without appeal to basis infallibilism. The obvious reason why someone would suggest that front-loading is required is that the animal’s looking like a zebra is to be expected if it is either a zebra or a mule disguised to look like one. So that basis, the thought goes, can only take S as far as that disjunction; to get from there to “it’s a zebra” requires eliminating “it’s a disguised mule.” So she needs a warrant for “it’s not a disguised mule.” However, for any proposition P, if S’s basis B for believing P is fallible, then there is an alternative scenario in which P is false that is compatible with B. So, if the skeptic is correct, then for every fallibly based belief P, S must possess a preceding warrant for the denial of every ~P & B scenario, which warrant presumably rests on another basis B’. But if B’ is also fallible, then a (~P & B) & B’ scenario is possible. So S needs a warrant against that scenario . . . This sequence can only terminate in warrants with infallible bases. It’s no surprise that front-loading delivers skepticism. But no basis fallibilist should concede it. Front-loading is also far more contentious than closure, even putting aside its commitment to infallibilism. The skeptical closure argument is appealing precisely because it mobilizes a principle – closure – that has very wide appeal. But front-loading is much less attractive. It’s far from obvious that S can only acquire the knowledge that it’s a zebra on the basis of its looking like one if she already has a warrant against its being a disguised mule. Recall that it is P-warrant at issue, the difference between true belief and knowledge. 
Even if she must be justified in believing that it is not a disguised mule in order to acquire her warrant for “it’s a zebra” – which is already contentious – that won’t suffice for warrant; Gettier cases demonstrate that warrant requires something more than (or something different from) justification.
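The regress charged above can be laid out explicitly. The indexed bases $B_0, B_1, B_2, \ldots$ are my notation, extending the text’s $B$ and $B'$:

```latex
% S believes P on fallible basis B_0, so a (\neg P \wedge B_0) scenario is
% possible. Front-loading then demands a prior warrant for its denial:
\[
W_S\big(\neg(\neg P \wedge B_0)\big), \quad \text{resting on some basis } B_1.
\]
% If B_1 is fallible in turn, a scenario in which that warranted claim is
% false while B_1 obtains is possible, demanding a further prior warrant:
\[
W_S\big(\neg((\neg P \wedge B_0) \wedge B_1)\big), \quad \text{resting on } B_2.
\]
% In general, at stage n front-loading demands
\[
W_S\big(\neg(\neg P \wedge B_0 \wedge B_1 \wedge \cdots \wedge B_n)\big), \quad \text{resting on } B_{n+1},
\]
% and the sequence terminates only if some B_k is an infallible basis.
```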


able to acquire a warrant for the implied proposition by inference from the proposition we know. And we may have no other source of warrant for them. That skeptical hypotheses must be front-loaded rules none of this out. But then it’s compatible with closure failure.
This is Wright’s reasoning for front-loading. See §..
See §..
See §. for a more detailed version of this argument.




S would also need a prior warrant against every other piecemeal and wholesale skeptical hypothesis. The reasoning that suggests that she needs a prior warrant against “it’s a disguised mule” applies equally well to those hypotheses, even if they haven’t ever occurred to her and even if she doesn’t have the requisite conceptual resources to consider them. As noted above, closure doesn’t require anything like this. So, while closure had some chance of being a principle that is sufficiently widely endorsed that the skeptic could appeal to it without being accused of begging the question, the same cannot be said for front-loading. Closure doesn’t imply it, it is far less intuitive, and it presupposes basis infallibilism; the optimist would be within her rights to accuse the skeptic of invoking a principle that, far from constituting common ground between them, is as contentious as skepticism itself.

. Underdetermination

.. Underdetermination and Front-Loading

The skeptic might try to bolster her case for front-loading by appeal to underdetermination. Since S’s evidence – the animal’s appearance – is to be expected if it is either a zebra or a disguised mule, that evidence doesn’t favor the one hypothesis over the other. But one can only be warranted in believing an hypothesis if one’s evidence favors it over competitor hypotheses. So S needs a warrant against its being a disguised mule in order to acquire a warrant for its being a zebra on the basis of its appearance.

A plausible reason for believing that underdetermination blocks warrant is the suggestion that warrant for P requires that P is probable on one’s evidence. This is controversial; warrant – the difference between true belief and knowledge – might not be a matter of probability on one’s evidence at all. But suppose that it is. Competitor hypotheses are contraries: they can’t both be true. The probability axioms imply that the probability on one’s evidence of a contrary to “it’s a zebra” – such as “it’s a disguised mule” – is no higher than 1 minus the probability of “it’s a zebra” on that evidence. So if the probability of “it’s a zebra” is greater than .5 – as required, surely, in order for it to be high enough for warrant – then the probability of “it’s a disguised mule” is below .5, and so less than that of “it’s a zebra.” So, if S has a warrant for “it’s a zebra” – requiring, we’re assuming, that it is 

If P and Q are contraries then P implies ~Q. So p(P) ≤ p(~Q). And p(~Q) = 1 − p(Q). So p(P) ≤ 1 − p(Q). That implies (by algebra) that p(Q) ≤ 1 − p(P).
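The footnote’s algebra is easy to check numerically. A minimal sketch, with hypothetical probability values chosen only for illustration (the variable names are mine, not the author’s):

```python
# Toy check of the probability claims about contrary hypotheses.
# P = "it's a zebra", Q = "it's a disguised mule"; contraries can't both be true.
# The evidence-conditional probabilities below are invented for illustration.
p_zebra = 0.9            # p(P | evidence), assumed above .5 for warrant
p_disguised_mule = 0.05  # p(Q | evidence); the rest covers other contraries

# Since P and Q are contraries, their probabilities sum to at most 1.
assert p_zebra + p_disguised_mule <= 1

# If p(P) > .5, the axioms force p(Q) < .5 ...
assert p_zebra > 0.5 and p_disguised_mule < 0.5

# ... and p(~Q) = 1 - p(Q) >= p(P): the competitor's falsehood is at least
# as probable on the evidence as "it's a zebra" itself.
p_not_disguised_mule = 1 - p_disguised_mule
assert p_not_disguised_mule >= p_zebra
```

Any assignment satisfying the contrary constraint reproduces the pattern: high probability for “it’s a zebra” mechanically makes “it’s not a disguised mule” at least as probable, which is exactly why high probability alone cannot be what distinguishes warranted from unwarranted belief here.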




Against Knowledge Closure

probable on her evidence – then that evidence must favor this claim over “it’s a disguised mule,” as the underdetermination claim suggests. If one’s belief is underdetermined, then, given that assumption, it is unwarranted.

The axioms also imply that the probability that a competitor to “it’s a zebra” is false – such as “it’s a disguised mule” – must be at least as high as the probability of “it’s a zebra.” Assuming still that warrant requires high probability, this might be taken to indicate that, if one has a warrant for “it’s a zebra,” one must also have a warrant for “it’s not a disguised mule.” But it is uncontroversial that one could fail to have a warrant for a proposition despite its being probable on one’s evidence. Gettier cases often concern propositions that are probable on the agent’s evidence, although the agent doesn’t know them (and so doesn’t have a warrant for them). So S can have a warrant for “it’s a zebra” – it’s probable on her evidence and she isn’t Gettiered – and yet not have a warrant for “it’s not a disguised mule” because, although it’s also probable on her evidence, she is Gettiered. So the claim that warrant requires high probability on one’s evidence doesn’t support front-loading. It can only do so if high probability is sufficient as well as necessary for warrant; but Gettier cases demonstrate that it isn’t sufficient. And, of course, high probability on one’s evidence might not even be necessary for warrant. Appeal to underdetermination doesn’t support front-loading.

The skeptic might insist that one can’t have a warrant for an hypothesis unless one has a warrant for the falsehood of its competitors; that they are improbable on one’s evidence doesn’t suffice. 
But this just amounts to insisting that S’s warrant for “it’s a zebra” requires a prior warrant for “it’s not a disguised mule.” And that’s front-loading; the underdetermination argument, so interpreted, just is the front-loading argument itself, with all its difficulties, rather than a more plausible argument in support of it.   

See fn. . Gettier . In Gettier cases the relevant belief is true; so failure to know implies lack of (Plantinga-) warrant For example: S believes that it’s a zebra because it looks like one, and it looks like one because it is (and so not because it’s been disguised). That belief is presumably warranted; it isn’t Gettiered. She also believes that it is not a disguised mule, not by inference from “it’s a zebra” – she thinks that zebras are a kind of mule – but on the basis of a microscopic analysis of the animal’s coat, which indicates that the coloring is natural and so not a disguise. That result is correct. And it is probable on her evidence; she’s well aware that the analysis is very reliable. But it so happens that it was not administered properly, and so wouldn’t reveal that the animal was disguised if it were; it is only coincidentally correct. S is Gettiered, and so doesn’t have a warrant for its not being a disguised mule, despite being warranted in believing that it is a zebra.


.. Underdetermination Alone

The skeptic should also resist the temptation to appeal to underdetermination directly, forgoing the skeptical closure and front-loading arguments. The underdetermination argument for skepticism claims that S has no evidence favoring “it’s a zebra” over “it’s a disguised mule” and concludes that S has no warrant for the former claim. But much will depend on what counts as S’s evidence. If only the fact that it looks like a zebra so counts, then the underdetermination argument is implausible. For there are other considerations – not now counted among S’s evidence – that nevertheless plausibly favor the former over the latter claim. S’s background information concerning the likelihood that zoos would engage in deception, for example, seems relevant. If S’s evidence does incorporate other such considerations, however, there is little reason to think that S’s evidence overall doesn’t favor one hypothesis over the other, since it may well include considerations over and above the animal’s appearance (such as background information).

The upshot is that the front-loading and underdetermination arguments mobilize claims that are, arguably, as disputable as skepticism itself, so that only the skeptic would be inclined to endorse them. The optimist could then reasonably contend that the skeptic’s reliance on them is question-begging.

. Conclusion

The skeptic hopes to offer an argument that appeals to a principle – closure – that she can take to be common ground between her and her optimist opponent. But she hopes in vain. She needs an argument for her premise that S doesn’t know that it’s not a disguised mule. But the considerations she might offer in support of that premise either make the closure argument itself superfluous or undermine closure. And even if the optimist is initially inclined to endorse closure, the commitments she 

It is more difficult to argue that we have, or could have, such relevant background information against wholesale skeptical hypotheses. It is, in fact, not obvious to me that we could not have such information; see §.. But, even if correct, other considerations might come into play: perhaps the ordinary hypothesis is simpler or more explanatory than the skeptical hypothesis, or perhaps the denial of the skeptical hypothesis is so fundamental to our intellectual lives that we are licensed to favor it for that reason alone. Note that to say that these considerations favor the ordinary over the skeptical hypothesis is not to say that they deliver a warrant for (or knowledge of ) either the ordinary hypothesis or the denial of the skeptical hypothesis.





would need to take on in order to concede that the skeptic’s argument succeeds – in particular, that transmission fails – seriously undermine that inclination. The alternative argumentative strategies that the skeptic might offer – the front-loading and underdetermination arguments – invoke claims that are far less widely endorsed than is closure; so much so, indeed, that they plausibly beg the question. However, the optimist who hopes to reconcile closure with S’s warrant for “it’s a zebra” does owe an account of how she can have a warrant for “it’s not a disguised mule.” The options are:

(1) that transmission succeeds, so S can acquire a warrant for “it’s not a disguised mule” by inference from “it’s a zebra”;
(2) that S’s warrant for “it’s a zebra” itself suffices as a direct warrant for “it’s not a disguised mule”;
(3) that S’s background information suffices to provide a warrant for “it’s not a disguised mule”; or
(4) that “it’s not a disguised mule” is warranted for S by default.

In Chapters  and , we will consider the first option; Chapters – will take up the rest.


 

Denying Premise  Warrant Transmission

. Warrant Transmission and Williamson’s Insight

The argument by counterexample against closure, recall, runs as follows: (1) S is warranted in believing P. However, (2) transmission fails: S can’t acquire a warrant for Q in virtue of her recognition that it follows from P and her warrant for P. Moreover, (3) her warrant for P itself does not constitute a warrant for Q; (4) her background information does not deliver a warrant for Q; and (5) she has no default warrant for Q. (6) Therefore, S has no other, transmission-independent, source of warrant for Q. (7) But if WC is true then, if S is warranted in believing P, either transmission succeeds from P to Q or S has another source of warrant for Q. (8) So WC is false.

This chapter and Chapter  concern the closure-defending strategy of denying premise (2), and so claiming that transmission succeeds in Dretske cases. That strategy amounts to affirming WT from Chapter . WT states that, necessarily, for every agent S and propositions P and Q: if (a) S’s belief that P is warranted while (b) S recognizes that Q follows from P, then (c) S acquires a warrant for Q in virtue of (a) and (b).

Closure – WC in particular – requires only that an agent who satisfies (a) and (b) ends up with a warrant for Q by some means or other; it does not require that she acquires a warrant in virtue of her recognition that Q follows from P in particular. To affirm WT is, therefore, to endorse a stronger principle than WC: if WT is true, then WC is true as well (but not vice versa). So if WT is true, closure is preserved. 

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 06:43:28, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.004




In Chapter  I referenced Williamson’s insight that “deduction is a way of extending one’s knowledge.” Williamson – as well as Hawthorne, who appeals to Williamson’s insight when defining closure – takes that insight to support closure. It can only do so, however, if it is read as admitting of no exceptions. Nobody denies that deduction very typically extends knowledge. That concession is nevertheless compatible with closure failure in rare cases, and so compatible with the views of closure deniers; the closure denier does not, of course, deny that inference typically delivers new warrants, but only that it invariably does so. So Williamson’s insight, insofar as it does ground a defense of warrant closure, can only do so if its scope is universal: inference from a warranted belief inevitably delivers a new warrant. So read, Williamson’s insight just is an endorsement of WT: the suggestion is that we intuit that warrant transmission inevitably succeeds. The claim that transmission does succeed in Dretske cases – and so that WT is true – is, however, very unintuitive. The suggestion that S could learn “it’s not a disguised mule” by inference from “it’s a zebra” when the latter is known on the basis of its appearance, that one could learn that the newspaper report is not a misprint by inference from what one read in the report, and so on, strikes one, not just as false, but as downright bizarre. Especially when Dretske cases are kept in view, it is not obvious – to me, at any rate – that there really is as robust an intuition as Williamson and Hawthorne must claim there to be. Notice that a bare intuition that closure itself holds under all circumstances would not suffice. Since WT is stronger than closure, closure could hold in circumstances wherein transmission fails. Perhaps, whenever transmission fails, the manner of its failure somehow ensures that the agent ends up with a warrant for Q. I’ll explore that possibility in Chapters  and . 
The point for now is that a bare intuition that closure is true, even if taken to be probative, doesn’t on its own ensure that it is true because the stronger principle WT is true as well.

Suppose the WT advocate concedes that, although we intuit that inference very typically delivers new warrants, it isn’t obvious that we also intuit that there can be no exceptions. She might, nevertheless, insist that we do intuit that closure itself admits of no exceptions, and that the only 

New knowledge of Q won’t be delivered by inference from P if S knows Q already. But, as noted in §., one can acquire a new warrant for a proposition that one already knows. So, having a warrant for Q is no bar to acquiring another one; if inference inevitably delivers knowledge of Q except when one already knows Q, then it inevitably delivers a new warrant in every case, as required by WT.


Denying Premise : Warrant Transmission



way the latter intuition can be respected is by affirming WT. As we noted in §., we are also inclined to deny that S has a warrant for Q from other sources than by inference in Dretske cases. The WT advocate might endorse that inclination as well. But then S will have no warrant at all for Q unless it is conceded that she acquires one by inference from P. So, under the circumstances, we can only honor the intuition that closure is true by affirming the stronger WT, even if we don’t have a robust intuition that transmission inevitably succeeds. The conviction that closure is true, together with the concession that we have no inference-independent warrant for Q in Dretske cases, leaves no other option.

But the result is a three-way clash of intuitions; it is far from clear why we should resolve it in the manner proposed by the WT advocate. We are inclined to: (i) affirm closure, (ii) deny that S has an inference-independent warrant for Q in Dretske cases, and (iii) deny that transmission succeeds in those cases. These can’t all be correct. The WT advocate recommends that we reject (iii), notwithstanding the fact that it is highly unintuitive to do so. But the closure advocate could instead reject (ii), and so insist that S does have an inference-independent warrant for Q. And, of course, the closure denier recommends instead that we reject (i). The WT advocate can’t claim decisive intuitive support for her choice, since some intuition or other has to be rejected, and there is no obvious reason why it is (iii) that has to go. That option is not even required in order to do justice to closure.

Alternatively, the WT advocate could insist that we do have a robust intuition, not only that closure is true, but also that transmission inevitably succeeds, so that WT is true as well. Indeed, she might not unreasonably claim that the intuition that closure is true is precisely the intuition that transmission does inevitably succeed. 
After all, the closure principle – WC – explicitly cites S’s recognition that P implies Q. What could be the point of its doing so, if not that her recognition of that implication inevitably delivers a warrant? Recall also from §. that KC – and any other closure principle that identifies knowledge as the epistemic property closed over inference – only 

“Here’s the way to think about closure. Sometimes we know the consequent by inferring it from the antecedent and sometimes we know it prior to knowing the antecedent. But however we know the consequent, it remains impossible to know the antecedent without being in a position to know the consequent because if one did not already know the consequent, one could still infer it” (Cohen , ). Since prior knowledge of the consequent is no bar to the acquisition of another warrant by inference, Cohen’s suggestion that we are inevitably in a position to know the consequent – that is, that we can have a warrant for it – requires that we can always acquire such a warrant by inference, so that WT is true.





succeeds if WT is true. Although closure can be formulated so as not to require WT – that is, by WC – the fact that closure has, up to now, been invariably formulated as a condition on one’s knowledge is itself evidence that the intuition behind closure is indeed that WT is true.

Suppose that we do have a robust intuition that WT is true. We, nevertheless, also have a competing, robust intuition that transmission fails in Dretske cases. The WT advocate would have to insist that Williamson’s insight – construed as the intuition that WT is true – overrides the intuition that transmission fails in Dretske cases. Given the strength of the latter intuition, this is an heroic course. However, it is very difficult to adjudicate a battle of intuitions between one affirming a general principle and others affirming exceptions to it.

In light of this, I won’t rest on appeal to the bare intuition that WT fails in Dretske cases, but will instead offer an argument for its failure. In §. I will present that argument and illustrate its application to Zebra. I will then defend it against various objections (§.–.) and indicate how the argument applies to other Dretske cases (§.). Then, in Chapter , I will argue that epistemologists need not be dismayed by the failure of WT. Far from it: WT contributes nothing to the fight against skepticism, and a number of popular conditions on warrant don’t transmit.

.

No Inevitable False Negatives

In Zebra, S notes that the animal looks like a zebra and believes that it’s a zebra as a result. She also recognizes that its being a zebra implies that it’s not a mule, and so that it is not a disguised mule, and believes that it is not a disguised mule on that basis. Suppose that it is in fact a disguised mule. Suppose also that S evaluates the question whether it’s a disguised mule as before: she identifies the 



This isn’t quite correct. There are debates concerning the status of justification closure and warrant closure, although the concept of warrant invoked in the latter debate, prompted primarily by Wright’s work on transmission failure, is more akin to justification than Plantinga-warrant. But the debate concerning knowledge closure has a life of its own, and the closure principle inevitably invoked within it identifies knowledge itself as the property closed over inference rather than Plantinga-warrant.
To say that it is disguised to look like a zebra is not to say that it looks like a zebra to most people, but specifically that it looks like a zebra to S. That is, it is disguised in such a way that, given S’s vantage point, discriminatory capacities, and background species-classification dispositions, she will judge that it is a zebra. If she is examining the animal close up, the disguise is correspondingly detailed. If she’s a zoologist – so that some attempts at disguise that would fool an ordinary visitor won’t fool her – then the disguise is correspondingly more sophisticated (it includes the zebra’s distinctive bristle mane, for example). Otherwise, her basis for believing that it is a zebra – which





animal’s species on the basis of its appearance and determines what follows concerning whether it is a disguised mule. Under those two suppositions, S will arrive at the result that it isn’t a disguised mule. It’s almost irresistible to express this in counterfactual terms: if it were a disguised mule, then S, using the same method to evaluate whether it is a disguised mule that she actually uses, would still arrive at the belief that it isn’t a disguised mule.

This is in contrast to S’s belief that it’s a zebra. If it weren’t a zebra, it would be, not a disguised mule, but a hippo, or a flamingo, or the like. So she would not believe that it’s a zebra on the basis of its appearance if it weren’t a zebra. And the sensitivity analysis of transmission and, perhaps, closure failure is off and running.

But the counterfactual expression is much weaker than that licensed by S’s situation. Assuming the standard Lewis/Stalnaker possible worlds semantics for counterfactuals, that expression requires only that, in the nearest disguised-mule world wherein S uses the same method to evaluate whether it is a disguised mule, she will arrive at the result that it’s not a disguised mule. It is compatible with this that, in disguised-mule worlds further out, that method doesn’t generate the same result. But there are no such worlds. In every disguised-mule world in which S does what she actually does to evaluate whether it’s a disguised mule – that is, she consults the animal’s appearance to identify its species and determines what that implies concerning whether it’s a disguised mule – she will arrive at the result that it isn’t a disguised mule. The subjunctive conditional is true; but it is true because what it requires of the nearest disguised-mule worlds is true of any such world. 
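The Lewis/Stalnaker truth condition, and the stronger fact S’s situation licenses, can be made concrete in a toy model. Everything below is invented for illustration: the worlds, the similarity distances, and the names (`counterfactual`, `is_dm`, `says_not`) are mine, not the author’s or Lewis’s.

```python
# Toy Lewis/Stalnaker evaluator: "were P true, Q would be true" holds iff
# Q is true at the nearest (most similar) world where P is true.

def counterfactual(p, q, worlds):
    """worlds: list of (distance, facts) pairs; smaller distance = more similar."""
    ordered = sorted(worlds, key=lambda w: w[0])
    p_worlds = [facts for dist, facts in ordered if p(facts)]
    if not p_worlds:
        return True  # vacuously true: no P-worlds at all
    return q(p_worlds[0])  # evaluate Q only at the nearest P-world

# Each world records whether it's a disguised mule and what S's method,
# applied in that world, concludes. (In every disguised-mule world the
# disguise works, so the method still says "not a disguised mule".)
worlds = [
    (0, {"disguised_mule": False, "method_says_not_disguised": True}),  # actual
    (1, {"disguised_mule": True,  "method_says_not_disguised": True}),  # near
    (2, {"disguised_mule": True,  "method_says_not_disguised": True}),  # far
]

is_dm = lambda w: w["disguised_mule"]
says_not = lambda w: w["method_says_not_disguised"]

# The counterfactual itself only inspects the nearest disguised-mule world ...
assert counterfactual(is_dm, says_not, worlds)
# ... but in this model the consequent holds in EVERY disguised-mule world,
# which is the stronger fact the text says S's situation licenses.
assert all(says_not(facts) for _, facts in worlds if is_dm(facts))
```

The design point is the contrast the text draws: the semantics makes the conditional’s truth depend only on the nearest antecedent-worlds, while S’s method guarantees the consequent at antecedent-worlds of every distance.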
This is a priori: given only a (complete enough) description of her method, one can discern that it will inevitably deliver that conclusion whenever that conclusion is false. If it isn’t a disguised mule, then it either isn’t a mule or isn’t disguised to look like a zebra (or both). This would be so if, for example, it’s a flamingo





might include a bristle mane – would be absent in the disguised-mule scenario. But it’s constitutive of a Dretske case that the conclusion’s being false implies that S has precisely the same basis for believing the premise that actually led her to affirm it.
See Stalnaker  and Lewis . On their account, a counterfactual conditional – were P true then Q would be true – is correct when, in the nearest possible world(s) in which P is true, Q is true as well. That’s compatible with worlds farther out in which P is true but Q false.
Note that this requires that she is in position to perform the relevant inference. If she is somehow impaired in such a way that she cannot infer from “it’s a zebra” to “it’s not a disguised mule,” then she is not in a position to evaluate the latter proposition in the same way that she actually does, and so cannot employ the same method for that evaluation. Worlds in which she does employ that method are worlds in which she has that requisite inferential capacity and applies it in order to evaluate whether it is a disguised mule. (They are also worlds in which she judges it to be a zebra on the basis of its appearance; see fn. .)





that looks like a flamingo, a mule that naturally looks like a mule, a donkey disguised to look like a zebra, or a zebra that looks naturally like a zebra. But in any such case it either looks like a mule or looks like something else. If it looks like something else (a zebra, flamingo, or donkey), then S, appealing to its appearance to identify its species and determining what follows with respect to whether it is a disguised mule, will still believe that it’s not a mule and so infer that it’s not a disguised mule. But if it looks like a mule, her inference can’t route through a belief concerning the animal’s species acquired on the basis of its appearance to either the truth or falsehood of “it’s not a disguised mule.” So her method for evaluating “it’s not a disguised mule” is inapplicable.

So, although S’s method will, when applicable, deliver the result that it isn’t a disguised mule when it isn’t, it is guaranteed to deliver the result that it isn’t a disguised mule when it is. That it will deliver the right answer (if applicable) when it isn’t a disguised mule is, presumably, unproblematic. That it is guaranteed to deliver the same answer whenever it is a disguised mule, is, however, deeply disturbing.

Consider an analogous case. A house inspector is trying out a new test to determine whether there is lead paint on the wall. Unbeknownst to the inspector, however, the test will deliver the result “lead is absent” whenever it is applied to any paint, whether or not lead is present. This is not a result of the inspector’s mishandling of the test, but is, rather, a consequence of the very nature of the test itself: anyone who understands how the test works will realize that it inevitably delivers one and only one answer. Such a test is, in this sense, constitutively guaranteed to deliver a false negative result in every case in which lead is present. No such test can inform the inspector that there’s no lead paint, even if there isn’t. 
S’s method to evaluate whether it’s a disguised mule is also guaranteed to deliver a false negative result whenever it is a disguised mule. It is, for that same reason, also incapable of indicating to S whether it is a disguised mule. No method to evaluate a proposition that is constitutively guaranteed in this way to deliver the result that the proposition is false whenever it is true can deliver a warrant – a way to know – that it is false.
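The guarantee can be exhibited directly. A minimal sketch of S’s method as a function from the animal’s apparent species to a verdict; the encoding and the name `s_method` are my illustration, not the author’s formulation:

```python
# Sketch of S's method: read off the apparent species, believe the animal IS
# that species, and infer the verdict on "it's a disguised mule" from that
# belief. The encoding of worlds is invented for illustration.

def s_method(apparent_species):
    """Return S's verdict on 'it's a disguised mule', or None when the
    method is inapplicable (it looks like a mule, so no species-based
    inference to the truth or falsehood of the conclusion runs)."""
    if apparent_species == "mule":
        return None
    # Whatever non-mule species it looks like, S believes it is that
    # species, and a zebra/flamingo/donkey is not a mule, disguised or not.
    return "not a disguised mule"

# It's constitutive of the Dretske case that in every disguised-mule world
# the disguise works: the animal looks like a zebra to S. So the method
# returns the false negative in every such world:
disguised_mule_worlds = ["zebra"]  # how the animal looks to S in those worlds
assert all(s_method(look) == "not a disguised mule"
           for look in disguised_mule_worlds)

# When it isn't a disguised mule, the method (where applicable) is right:
ordinary_worlds = ["zebra", "flamingo", "donkey"]
assert all(s_method(look) == "not a disguised mule" for look in ordinary_worlds)

# And when it looks like a mule, the method simply doesn't apply:
assert s_method("mule") is None
```

Nothing in the sketch depends on probabilities or world-remoteness: the false negative in every positive case follows from the method’s description alone, which is the a priori guarantee NIFN targets.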

 

She can, however, still infer from “it looks like a mule” to “it doesn’t look like a zebra,” and from there to “it’s not a mule disguised to look like a zebra.”
I’m treating the proposition being evaluated as “it’s a disguised mule” (and “it’s lead paint”). One could equally well treat the proposition under evaluation as “it’s not a disguised mule” (and “it’s not lead paint”). If so, the method inevitably delivers false positives and is equally objectionable. Nothing substantial hinges on whether we treat the relevant method as investigating P versus ~P.





Nothing changes if “it’s a disguised mule” is unlikely, or if the nearest world in which it is true is distant. To continue the analogy, suppose that no, or few, of the houses in the neighborhood (or city, or country, or . . .) in which the inspector works were painted with lead paint, so that it’s improbable that the paint being tested has lead in it. Also, this house was built after  when lead paint was banned, so that the nearest world in which the tested paint contains lead is remote. On some views, improbability or world-remoteness alone ensures that the house inspector has a warrant for believing that there is no lead paint. On other views, the inspector would need background information concerning the use of lead paint in the region or the age of the house. But, in either case, a warrant is not acquired solely as a result of his use of a test that is inherently guaranteed to deliver the result that lead is absent. If the inspector does have a warrant, it is probability, modality, or background information that delivers it; it is not delivered by the test.

Or consider an extension of Zebra: the “Zoo-Testing-R-Us” inspector, charged with investigating whether this particular zoo engages in animal deception, peers within every cage and paddock, believes that the animal is what it looks to be, and infers that it is not something-else-disguised-to-look-like-that. He then delivers the zoo a clean bill of health: every animal is what it seems to be. In fact, he delivers that same bill to every zoo he visits; he has yet to run into a case of animal deception (which explains his popularity). That is, of course, because he can’t; his method is constitutively blind to the possibility that the animals are disguised. The suggestion that his optimistic assessment is warranted – and so that he knows that no deceptions have taken place – is ludicrous, even if he’s right. 
I submit that no method of inquiry into whether P that is constitutively guaranteed to deliver ~P whenever P is true can deliver a warrant for – a way to know – ~P. Any method of knowledge acquisition concerning whether P must, surely, involve at least enough discriminatory capacity that that very method itself doesn’t guarantee that it will inevitably deliver false negatives in every positive case. Call this the no inevitable false negatives (NIFN) rule:  

Improbability is not the same as world-remoteness. As lottery cases illustrate, it can be improbable that P is true – one wins the lottery – even though the nearest world in which it is true is nearby. It would do so, for example, on a simple safety account according to which S’s belief is warranted when it is true in every nearby world in which S believes it.




Against Knowledge Closure

NIFN If a putative method M to evaluate P is such that M itself strictly implies that it will evaluate P as false whenever P is true, then M cannot deliver a warrant for ~P.
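NIFN’s “strictly implies” clause can be rendered schematically as a necessitated conditional. In the sketch below, Out(M, P) stands for the verdict M returns on P and W(M, ~P) for M’s delivering a warrant for ~P; both predicate labels are my own, introduced only for this rendering, not the author’s notation.

```latex
% Schematic rendering of NIFN. Out(M,P) is the verdict method M
% returns on P; W(M, ~P) says that M delivers a warrant for ~P.
% Both predicate labels are mine, not the author's.
\[
\forall M\,\forall P\;\Bigl[\,\Box\bigl(P \rightarrow \mathrm{Out}(M,P)=\mathrm{false}\bigr)\;\rightarrow\;\lnot\,\mathrm{W}(M,\lnot P)\,\Bigr]
\]
```

Reading: if it is necessary that M returns the verdict “false” on P whenever P is in fact true, then M delivers no warrant for ~P.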

Given NIFN and the fact that S is warranted in believing that it’s a zebra – which assumption is appropriate since we are investigating a non-skeptical defense of closure – WT fails. S can’t acquire a warrant for “it’s not a disguised mule” by inference from “it’s a zebra” that she is warranted in believing on the basis of its appearance. Aside from its obvious intuitive plausibility, I have no further argument for NIFN, since any further argument would require appeal to a particular theory of warrant. In an effort to deliver results that have probative force regardless of one’s theoretical viewpoint, I can’t appeal to any particular such viewpoint by way of arguing for transmission failure. And I suppose some closure advocates might insist that closure is so forcefully intuitive that they are prepared to bite even this bullet. But recall that it’s not closure that is at stake at the moment but transmission. Williamson’s insight – construed as the claim that deductive inference inevitably generates a warrant – doesn’t only confront certain intuitive exceptions; it confronts a highly intuitive limitation on possible sources of warrant. Since WT is stronger than WC, its failure remains compatible with WC. At the very least, the advocate of WT cannot override appeal to Dretske cases as exceptions to WT solely by appeal to the putative intuition that WT is true when there is another general principle that is at least as intuitively plausible – NIFN – implying that they are the exceptions that they seem to be.
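The Zebra diagnosis can be made concrete with a small toy model. The three-atom world space, its two constraints, and the two candidate methods below are my own simplifying assumptions, offered only as an illustration of the NIFN check, not as the author’s formalism. The check asks whether a method delivers the verdict “it’s not a disguised mule” in every world in which the animal is a disguised mule.

```python
from itertools import product

# Atoms: P = "it's a zebra", E = "it's a mule disguised as a zebra",
#        B = "it looks like a zebra".
# Constraints (simplifying assumptions): a successful disguise guarantees
# a zebra-like look (E -> B), and a zebra is not a disguised mule.
worlds = [
    {"P": p, "E": e, "B": b}
    for p, e, b in product([True, False], repeat=3)
    if (not e or b) and not (p and e)
]

def looks_method(w):
    """S's actual method: believe 'it's a zebra' iff it looks like one,
    then infer Q = 'it's not a disguised mule'. Returns the verdict on Q,
    or None if no zebra-belief (and so no inference) arises."""
    return True if w["B"] else None

def dna_method(w):
    """Contrast method: believe 'it's a zebra' only on a (stipulated
    error-free) DNA test, then infer Q from that belief."""
    return True if w["P"] else None

def violates_nifn(method):
    # NIFN is violated when the method delivers Q = 'not a disguised
    # mule' in EVERY world in which it IS a disguised mule.
    mule_worlds = [w for w in worlds if w["E"]]
    return all(method(w) is True for w in mule_worlds)

print(violates_nifn(looks_method))  # True: constitutively blind to disguise
print(violates_nifn(dna_method))    # False: no zebra verdict in mule worlds
```

The appearance-based method violates NIFN because its verdict tracks only how the animal looks, which the disguise fixes; the stipulated DNA method does not, since in disguised-mule worlds it never yields the zebra belief from which the inference to Q proceeds.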

. NIFN, Fallibilism, and Insensitivity

Notice that violation of NIFN is much worse than mere basis fallibilism. Basis fallibilism implies that one can know P despite the fact that there is at least one possible world in which P is false and yet one’s basis remains. But not only is there at least one such world for S; in every world in which it is a disguised mule, S’s basis for believing that it isn’t remains. Although some find fallibilism disconcerting, it is not nearly so intolerable as is violation of NIFN.

I do, however, argue in Chapter  that transmission fails on three popular conditions of warrant whenever NIFN is violated.


Denying Premise : Warrant Transmission



Violation of NIFN is also much worse than mere insensitivity. As a result, standard counterexamples to sensitivity are not counterexamples to NIFN. Consider Kripke’s “red barn” case: in fake barn country there are many blue barn façades but no red barn façades. S believes that the structure she’s looking at is a red barn and infers that it’s a barn. The former belief is sensitive: there being no red barn façades, it would only look like a red barn if it were. But the latter is not: if it weren’t a barn it would (or might) be a blue barn façade, in which case S would believe that it is a blue barn and still infer that it is a barn. So, on the sensitivity account, S knows that it’s a red barn but doesn’t know that it’s a barn. This is widely viewed as an embarrassment for sensitivity views (although it is only so if sensitivity is both necessary and sufficient for warrant). But NIFN doesn’t require that S does know that it’s a red barn; it is not a theory of warrant at all, but only rules out certain ways of acquiring warrant. So it’s consistent with NIFN that closure is preserved in this case because S doesn’t know either proposition. It’s also consistent with NIFN that S knows both propositions. The negation of the conclusion is “it’s not a barn.” In the nearest worlds in which the conclusion is false S is still in fake barn country, so it will (or might) look that way when it isn’t a barn. But not every world in which it’s not a barn is one in which S is in fake barn country; in some of those worlds, she’s in a normal countryside wherein structures that look like barns are barns. In such worlds, the structure S is looking at (if there is one) will not look like a barn to S. So she won’t believe that it is a barn of a particular color, and won’t infer that it’s a barn. So NIFN is not violated.
(Compare Zebra: in every world in which it’s a disguised mule, it will look (to S) like a zebra, so that she will be able to infer that it’s not a disguised mule, and will do so if she evaluates that proposition in the same way she actually does.) So appeal to NIFN neither implies closure failure in the red barn case nor settles whether she knows either or both propositions. Nor does NIFN run afoul of DeRose’s “simple skepticism.” As he points out, “I don’t falsely believe that I have hands” is insensitive – if I did falsely so believe, I would still believe that I don’t falsely so believe – and yet we intuitively judge that I know it. But it doesn’t violate NIFN. Suppose I infer this from “I have hands,” and I believe that because I seem to see them. That I do falsely believe that I have hands – I don’t have them and yet believe I do – doesn’t imply on its own that this method

Kripke .



DeRose , §. See also Vogel ’s “Omar’s new shoes” example.





will deliver the result that I don’t falsely believe that I have hands, because it doesn’t imply that I seem to see hands. In some worlds in which I falsely believe I have hands, I might do so for entirely different reasons (by refusing to accept the horrific implications of the evidence of my senses, for example). Nor is NIFN violated in standard backtracking cases against sensitivity. In Vogel’s ice cube case, for example, I believe that the ice cubes in a glass that I put on the picnic table an hour ago in  F weather have melted (because I know what happens to ice cubes in that temperature and I remember putting the glass on the table). I plausibly know that they have melted. And yet, “if the ice cubes hadn’t melted, I wouldn’t believe that they had by the same method” seems false: if they hadn’t melted, this would be because someone put the glass in a cooler, or in the fridge, or some such thing. Unaware of the rescue, I would still believe that the ice has melted by the same method, namely, appeal to my memory. But “the ice cubes have melted” doesn’t imply on its own that the method I use to evaluate this – namely, recalling what I did with the glass – will deliver the result that the ice cubes haven’t melted. For there are worlds in which I didn’t take the glass outside in the first place but instead put it in the fridge, and so in which consultation of my memory leads me to believe that the ice hasn’t yet melted. While the prohibition against backtracking eliminates treating such worlds as the nearest wherein the ice hasn’t melted for the purpose of evaluating the subjunctive conditional, it does not eliminate such worlds altogether. 
The same goes for other typical cases in which the belief in P appears to be insensitive despite the fact that the agent intuitively knows P. While the nearest not-P worlds are arguably ones in which the agent still believes P by the same method, there is nevertheless a plethora of not-P worlds in which the agent’s method does not deliver P. It certainly does not follow from the very method S employs to evaluate P that ~P will be delivered in every world in which P is true, as is the case when NIFN is violated. The epistemic failing manifested by violation of NIFN is far worse than mere insensitivity.  
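The contrast between the two quantifiers (insensitivity looks only at the nearest ~Q worlds; NIFN violation requires every ~Q world) can be sketched in the same toy style for the red barn case. The world space and the appearance-based method below are my own simplifying assumptions, not the author’s apparatus.

```python
from itertools import product

# Structures and regions; in fake-barn country the facades are all blue,
# and (by assumption) there are no facades outside it.
worlds = [
    {"structure": s, "region": r}
    for s, r in product(["barn", "blue_facade", "shed"],
                        ["fake_country", "normal_country"])
    if not (s == "blue_facade" and r == "normal_country")
]

def verdict_barn(w):
    """S's method: believe 'it's a barn' iff it presents a barn-like
    appearance (real barns and facades both do)."""
    return w["structure"] in ("barn", "blue_facade")

not_barn_worlds = [w for w in worlds if w["structure"] != "barn"]

# Insensitivity needs only the NEAREST not-barn world: a facade world,
# in which the method still delivers 'barn'.
nearest_not_barn = {"structure": "blue_facade", "region": "fake_country"}
insensitive = verdict_barn(nearest_not_barn)

# NIFN violation needs EVERY not-barn world; a normal-country shed,
# which does not look like a barn, breaks the pattern.
nifn_violated = all(verdict_barn(w) for w in not_barn_worlds)

print(insensitive)     # True: the belief is insensitive
print(nifn_violated)   # False: NIFN is not violated
```

So the belief that it’s a barn can fail sensitivity while the method still falls short of the far worse failing NIFN targets: delivering the wrong verdict in every world in which the conclusion is false.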

 Vogel . In Alspector-Kelly , §, I discuss an example of knowledge by induction. One more example. In John Hawthorne’s salmon case, I eat less than a pound of salmon and infer that I have eaten less than  pounds (Hawthorne , –). In the nearest world in which I don’t eat less than a pound, I would eat . pounds, and wouldn’t believe that I had eaten less than a pound. However, if I had eaten  pounds, I would hallucinate that I had eaten less than a pound, and so infer that I had eaten less than  pounds. So “I ate less than a pound” is sensitive but “I ate less than  pounds” is not. As with the red barn case, this is taken to be an





. Method Individuation

I’ve characterized S’s method for evaluating “it’s a disguised mule” as, first, identifying the animal’s species on the basis of its appearance and, second, determining what follows concerning whether it is a disguised mule. One might object that this is a conveniently selective description of S’s method, designed to secure the result that it will output “it’s not a disguised mule” whenever it is a disguised mule. But her method could be characterized in such a way that this result is not secured. For example, perhaps her method is simply “determine what follows from ‘it’s a zebra’ concerning whether it’s a disguised mule.” Or perhaps it’s “determine what follows concerning whether it’s a disguised mule from its species.” Or perhaps it’s “determine what follows concerning whether it’s a disguised mule from what I know.” Or . . . S’s method, under any of the latter descriptions, does not inevitably deliver the result that it’s not a disguised mule when it is. If her method is, for example, “determine what follows concerning whether it’s a disguised mule from ‘it’s a zebra’,” whether that delivers the result that it’s not a disguised mule when it is a disguised mule depends on her basis for “it’s a zebra.” If that basis is the animal’s appearance, then “it’s not a disguised mule” does inevitably result. But if it is, instead, the results of a DNA test then, if it is a disguised mule, she will not believe that it’s a zebra, and so not infer that it’s not a disguised mule from that belief. This is a version of the generality problem that afflicts all appeals to method. There are a variety of proposed solutions to the problem, the evaluation of which would take us too far afield here.

 

embarrassment for sensitivity views (Dretske’s in particular). But, as with that case, NIFN doesn’t imply that I do know that I have eaten less than a pound, since it is not a theory of (a sufficient condition of) warrant. Nor does it imply that I don’t know that I’ve eaten less than  pounds. In some worlds in which I have eaten  pounds I don’t suffer such hallucinations, and so won’t have the same basis for believing that I have eaten less than  pounds. Analogous comments apply to the other examples that Hawthorne wields against Dretske.

And if the result is that it is a zebra and she does so infer, that inference intuitively transmits.

For some recent proposals, see Baumann , Becker , Comesaña , McEvoy , Wallbridge , and Wunderlich .

It might strike some as unintuitive to describe S’s method as “identify its species on the basis of its appearance and determine what follows concerning whether it is a disguised mule.” But that’s because it would be bizarre for anyone to take this seriously as a way to learn anything about whether it’s a disguised mule. Suppose that S heard a rumor that the animal in the zebra paddock is a disguised mule. She investigates by performing a DNA test, which informs her that it’s a zebra. She then infers that the rumor is false. It’s perfectly natural to characterize her method as one of identifying the animal’s species by a DNA test and then determining what follows concerning whether it’s a disguised mule.





In the present context, however, this problem doesn’t need to be resolved. To see why, consider a variation of the lead paint test example. Suppose that the test is somewhat better than as described above. If the wall paint being tested is oil-based, then the test reads “lead is absent” when and only when it is absent and “lead is present” when and only when it is present. However, the test will read “lead is absent” whenever applied to latex paint, whether or not lead is present. This is a perfectly good test for the presence of lead in oil paints, but no test at all for latex paints. One might characterize the situation by saying that there is one test – a lead paint test – that works when applied to some paints but not when applied to others. Or one might treat it as effectively two different tests: an oil-paint test, which works perfectly well when correctly applied (to oil paint), and a latex-paint test, which doesn’t work at all (when applied to latex). It doesn’t matter; in either case, the test as applied to latex paint is utterly useless because it is guaranteed to produce the wrong result whenever lead is present. Similarly, we might characterize S’s method for evaluating whether it’s a disguised mule more generally (as “inference from what she knows,” for example). If we do so, then the method is objectionable when it is applied in circumstances that are such as to guarantee that she will get the wrong answer whenever it is a disguised mule (as in Zebra). NIFN could be revised to accommodate this, as follows:

NIFN-C If a putative method M to evaluate P is such that, in the circumstances in which it is applied, M itself strictly implies that it will evaluate P as false whenever P is true, then M does not deliver a warrant for ~P in those circumstances.

Or, we can fold those circumstances into the method, so that it is “identify the animal’s species on the basis of its appearance and determine what follows concerning whether it’s a disguised mule,” in which case the method is objectionable because it will inevitably deliver the wrong answer when it is a disguised mule. So long as there exists a specification of S’s method that generates this result, it will violate NIFN under that specification. NIFN could be revised to accommodate this as follows:

NIFN-S If there exists a specification of S’s method M such that M itself strictly implies that it will evaluate P as false whenever P is true, then M does not deliver a warrant for ~P.

It will also continue to do so under any more detailed specification.





Since the circumstances referenced in NIFN-C can be rolled into a specification of S’s method as per NIFN-S, they amount to the same principle. And, as the lead paint example suggests, they are as intuitive as is NIFN itself. I will stick with NIFN hereafter to keep things simple.

. Method Externalism

Tim Black advocates a method-relative sensitivity account that, he suggests, preserves closure. The key to his view is an externalist account of method: methods are not individuated only by internal features of the agent, but also by features of the agent’s interaction with her environment. So, when we assess the sensitivity of the agent’s method, we must consider worlds in which those external features that are relevant to the identity of the method are the same as well. When S sees that she has hands, for example, she uses the method of visual perception. And visual perception requires, among other things, eyes; the blind do not see anything. But a BIV has no eyes. So anybody who uses visual perception can’t be a BIV. So, in any – and, therefore, the nearest – world in which S uses the same method she actually uses to acquire the belief that she has hands, she isn’t a BIV. She is, therefore, also not a BIV in any world in which she uses the same method she actually uses to acquire the belief that she isn’t a BIV. For that method consists in her inferring that she is not a BIV from her having hands, which she learns by visual perception. So in any – and, therefore, the nearest – world in which she is a BIV, she doesn’t believe that she isn’t a BIV by the same method she actually uses. Her belief that she isn’t a BIV is, therefore, method-sensitive. But so is her belief that she has hands. If a method-sensitive true belief is known, then she knows both that she has hands and that she isn’t a handless BIV; closure is preserved.



Black a. Black actually suggests that S might believe that she is not a BIV either directly by perception or via inference from “I have hands” to “I’m not a (handless) BIV.” But I don’t understand the former suggestion. That I’m not a BIV doesn’t seem like the sort of thing I can directly see, even if seeing is externally individuated. It’s not like my reassuring myself that I’m not a Kafkaesque insect by looking in the mirror; it’s because of what insects looks like, and the fact that I don’t look like that, that what I see is of use. But I can’t look in a mirror (or anywhere else) and see that I’m not a BIV in the same way. In the nearest world in which she doesn’t have hands – as a result of an unfortunate accident with a table saw, for example – she doesn’t believe that she has hands by the same method as the actual, namely visual perception (of her handless arms). So her belief that she has hands is methodsensitive.





For the same reason, a sighted person’s belief that she is not a BIV – when inferred from her belief that she has hands, itself acquired by visual perception – doesn’t violate NIFN. Her method is certainly not guaranteed to deliver the result that she is not a BIV when she is a BIV; if she is a BIV, she can’t use that method at all. In any world in which she can use that method she isn’t a BIV, since only sighted people can use visual perception. As with evidential externalism discussed in §.., however, this response doesn’t transfer to the piecemeal skeptical hypotheses invoked in other Dretske cases. S is surely using perception, even when characterized in externalist fashion, when she looks inside the zebra paddock and sees the disguised mule, reads the newspaper’s erroneous report that the Broncos won, notes the reading on the stuck gas gauge, and so on. The method externalist could respond by insisting that these are not in fact cases in which the same method is used; while these methods are external, they are not external enough. While S does see, for example, she does not see a zebra, or an accurate newspaper report, or a working gas gauge. But, in order to apply to every Dretske case, this response would have to imply method infallibilism: method M can’t deliver P unless P is true. For suppose otherwise. Then there is a world in which, although P is false, use of M delivers the verdict that P is true. Suppose also that S occupies such a world, and so acquires belief in P by M when P is false. Suppose, finally, that she infers from P to the denial of the claim that she believes P by M when P is false. The claim being denied is a skeptical hypothesis; call it SK. S has, then, inferred from P to ~SK. But she occupies a world in which SK is true: she does believe P by M when P is false. Her method for arriving at ~SK is, first, using M to arrive at P and, second, inferring from P to ~SK. But in any SK world in which she does this, she will arrive at ~SK. 
So her belief in ~SK will violate NIFN. So violation of NIFN can only be avoided if, contrary to our original supposition, M can’t deliver P unless P is true. But then every method for the evaluation of any proposition is guaranteed to get the right result. That’s an obviously untenable conception of method; surely it’s possible, of at least some methods that we employ, that they could deliver an incorrect result. Black himself surely doesn’t intend this. But then there are Dretske cases that do run afoul of NIFN; appeal to method externalism won’t rule them out. 



Consider, for example, Directions: S asks a gas station attendant for the route to South Haven, who gives her incorrect directions for the sheer malicious fun of it. Surely S’s method is the same as it would be if the attendant had given her the correct directions, namely, asking the attendant for directions. Violation of NIFN also implies method insensitivity: if the method delivers Q in every ~Q world wherein S uses the same method, then it does so in the nearest such world. So, on Black’s account,





. NIFN and Other Dretske Cases

The features we’ve identified in the zebra case are exhibited by every other Dretske case (a)–(l) of §.. In Car, for example, in any world in which (i) S’s car was stolen and so is now not in B, (ii) S evaluates whether it is in B by appeal to her memory (which informs her that she parked it in B), and (iii) S determines what follows from this concerning whether her car was stolen, S will infer that it isn’t stolen. In all these cases there are three basic components. First, there is the proposition P from which the conclusion is inferred. Second, there is the basis B in response to which S acquires the belief that P. And, third, there is a feature E which explains how P could be false despite B: the fact that the animal was disguised explains how it could not be a zebra despite looking like one; that the car was stolen explains how it could not be in B despite S’s remembering having parked it there; that the gauge is stuck explains how the tank could not be empty despite the gauge’s reading “empty”; and so on. These are “how-possibly” explanations: they explain how it is possible that P could be false despite B’s being true. Q – the proposition inferred from P – is then ~(D & E & B), where D is or implies ~P (possibly given background knowledge). In the zebra case, P is “it’s a zebra,” B is “it looks like a zebra,” E is “it’s disguised to look the way it does” (which explains how it could look like a zebra and yet not be one), and D is “it’s a mule” (which, given background knowledge that mules are not zebras, implies “it’s not a zebra”). So Q is “it’s not a mule that is disguised so that it looks like a zebra.” Table . illustrates P, B, E, and Q for some other Dretske cases. In each case ~Q implies B. So S, applying the same method that she actually used – appealing to B in order to evaluate P and then inferring what she can about whether ~Q – will arrive at Q.
By NIFN, S will not have acquired a warrant for Q that way. But S knows P, we are assuming,





she doesn’t know Q. So, assuming S’s true belief in P is known, closure fails on his account as well, unless he embraces method infallibilism. (See, however, Black b.)

The possibility of such a world demonstrates that the warrant for P gleaned in virtue of B is basis-fallible. As far as I can see, every such world is one in which a condition of warrant has failed. But this entails that there is no ~P & B world in which all such conditions are in place; warrant itself is infallible. I think that this is correct (keeping in mind that we are here speaking of Plantinga-warrant). But I won’t press that claim here. (See, however, Chapter .) Recall from §. that this does not imply basis infallibilism, since that depends on whether there are any ~P & B worlds.

In the cases listed – and unlike “it’s a mule,” which implies that it’s not a zebra – D just is ~P.




Table . Dretske case structure

Case: Car
P: The car is in B.
B: I remember parking the car in B.
E: The car was stolen.
Q: It’s not the case that the car was stolen (E) and so not in B (~P) while I still remember having parked it in B (B).

Case: Gas Gauge
P: The tank is empty.
B: The gas gauge reads “empty.”
E: The gauge is stuck.
Q: It’s not the case that the gauge is stuck (E) on “empty” (B) while the tank is not empty (~P).

Case: Cruise
P: I can’t afford a cruise vacation.
B: There is a modest amount of money in my bank account.
E: I won the lottery.
Q: It’s not the case that I won the lottery (E) and so can afford a cruise vacation (~P) notwithstanding my modest bank balance (B).

Case: BIV
P: I have hands.
B: I seem to see hands.
E: I’m a BIV.
Q: I’m not a handless (~P) BIV (E) stimulated to have the experience of having hands (B).

and so is warranted in believing it. So, although S satisfies the antecedent of WT – her belief in P is warranted while she recognizes that Q follows from P – she does not satisfy the consequent of WT: she does not acquire a warrant for Q in virtue of that belief and recognition. Dretske cases are, therefore, exceptions to WT: transmission fails. That doesn’t necessarily mean that closure fails; WT is stronger than WC, so failure of WT doesn’t imply the same fate for WC. But, if WC doesn’t fail, then some other source of warrant must be available to S. The closure advocate is well-advised to propose some other such source. Chapter  will reinforce this by showing that no harm will come to a variety of epistemological views by the renunciation of WT.



Denying that S knows P is the skeptical strategy reviewed in Chapter ; we are now considering non-skeptical strategies.


 

Transmission, Skepticism, and Conditions of Warrant

. Transmission and Skepticism

In Chapter  I argued that WT fails. In this chapter I argue that epistemologists have little reason to find that conclusion distressing. I do so, first, by arguing that WT cannot be relied on to provide a response to skepticism (this section). Second, I review three popular conditions on warrant – safety, reliabilism, and evidentialism – and show that they all fail to transmit across inference in Dretske cases (§.–.). So advocates of those conditions have no reason to fear denial of WT. Quite the contrary: their own proposals give them reason to reject it. Some epistemologists look to transmission in order to answer the skeptic. G. E. Moore, for example, famously suggested that one can rule out skeptical hypotheses by inference from one’s knowledge that one has hands. The WT advocate might, then, protest that giving up WT will leave us open to skeptical attack, and so comes at too high a price. It turns out, however, that WT can contribute nothing to the fight against skepticism. The question whether we know that skeptical hypotheses are false has little intrinsic interest aside from the potential impact of the answer on our ordinary knowledge. It would, presumably, be nice to know that I’m not a brain in a vat (and that it’s not a disguised mule, that the gas gauge isn’t stuck on “empty,” and so on). But if my failing to know this doesn’t pose any threat to my ability to know ordinary things – that I have hands, that it’s a zebra, that the tank is empty, etc. – then it is far less newsworthy, at least in the epistemological context. The skeptic’s primary aim is not to show that we don’t know that skeptical hypotheses are false; that is only a weapon that she hopes to wield against our ordinary knowledge. Skepticism just is the claim that we don’t

Moore b (Moore was actually concerned to refute idealism rather than skepticism). See also Klein  and Pryor .







have ordinary knowledge; nobody who thinks that we have such knowledge is a skeptic, whatever their views might be concerning whether we know the denials of skeptical hypotheses. So, if ordinary knowledge is not contingent on our knowing that skeptical hypotheses are false – more precisely, on our having a warrant for their denial – then the epistemologist’s anti-skeptical duty is done. The WT advocate’s response to skepticism, however, assumes that we have ordinary knowledge. Inference only delivers a warrant for a conclusion if the premise is warranted. So S can only respond to the skeptic in the envisaged manner if she is warranted in believing the ordinary claim already (and so knows it if it is true). But if she is, then the argument from ordinary claim P to anti-skeptical Q is superfluous; her ordinary knowledge is already secure, whatever the status of her conclusion might be. And if she can’t be warranted in believing P without having a warrant for Q already – as per front-loading – then transmission is powerless to deliver the needed antecedent warrant for Q. Since inference only delivers a warrant for a conclusion if the premise is already warranted, a warrant for Q acquired by inference from P would come too late. The skeptical threat is sometimes presented as a trilemma. We are, it is claimed, inclined to endorse the following three mutually inconsistent claims, where H is “I have hands” and BIV is “I am a handless BIV”:

(1) S knows H.
(2) S doesn’t know ~BIV.
(3) If S knows H then S knows ~BIV (from knowledge closure).

The WT advocate offers a resolution: S does know H, and learns ~BIV by inference from it. So (2) is false, notwithstanding our inclination to affirm it. (3) is false, however, on any reasonable view (including WT). If S doesn’t even recognize that H implies ~BIV, then it’s no surprise that, although she knows H, she doesn’t know ~BIV. That’s why S’s recognition that P implies Q is included in the antecedent of KC. And, as we saw in §., closure needs to be formulated in terms of warrant – that is, by WC – anyway. Here’s an appropriate revision of the trilemma:



There are skeptical arguments that don’t invoke the claim that we don’t know the denials of skeptical hypotheses (such as Agrippa’s trilemma). But WT is unlikely to contribute anything to the resolution of those arguments. Affirming it won’t help resolve that trilemma, for example. Front-loading, recall, is the claim that S can only acquire her warrant for P if she already has a warrant for Q. See §.. Also see Chapter  for an evaluation of front-loading.


Transmission, Skepticism, and Conditions of Warrant



(1′) S is warranted in believing H.
(2′) S can have no warrant for ~BIV.
(3′) Necessarily, if S is warranted in believing H, then she can either acquire a warrant for ~BIV by inference from H – transmission succeeds – or she has another source of warrant for ~BIV.

The WT advocate claims again that (2′) is false: if S is warranted in believing H and recognizes that H implies ~BIV, she will acquire a warrant for ~BIV in virtue of that recognition. So she can have a warrant for ~BIV if she does have a warrant for H. (1′) is then consistent with (3′): although the antecedent of (3′) is true, so is (the first disjunct of) the consequent.

But the trilemma is resolved whether or not transmission succeeds. Suppose that it fails: S can’t acquire a warrant for ~BIV by inference from H, even if she is warranted in believing H. Suppose also that (1′) is true: S is warranted in believing H. Her being so warranted either requires that she already has a warrant for ~BIV (and so before she infers it from H) or it doesn’t. If it does, then, even though she can’t acquire a warrant for ~BIV by inference from H, she still has a warrant for ~BIV. (3′) is then true: if she satisfies the antecedent then she satisfies the (second disjunct of the) consequent. But (2′) is false: she can have a warrant for ~BIV (since she does). The trilemma is resolved, in the same way that the WT advocate resolves it.

If S’s being warranted in believing H doesn’t require a prior warrant for ~BIV, then she is warranted in believing H whether or not she can proceed to acquire a warrant for ~BIV by inference from it (since her doing so would require that she is warranted in believing H before inferring anyway). So transmission’s failure is no bar to her being warranted in believing H. And she can be so warranted even if she has no other source of warrant for ~BIV (since she needs no prior warrant for ~BIV at all).
But then () is false: she can be warranted in believing H even if she can’t acquire a warrant for ~BIV by inference from H and has no other source of warrant for it. So, assuming that S is warranted in believing H – an assumption that the WT advocate also makes – the trilemma is resolved: () is false if transmission fails, and so whether or not it does fail.



(3′) is false even if, as a matter of fact, S can acquire a warrant for ~BIV by inferring it from H. (3′) requires that she couldn’t be warranted in believing H if she couldn’t acquire a warrant for ~BIV that way (and had no other source of warrant for it). That she can acquire a warrant that way doesn’t imply that she must be able to do so in order to be warranted in believing H.




Against Knowledge Closure

WT, therefore, contributes nothing to the anti-skeptical project. The skeptical fire has either already been extinguished before reaching ordinary knowledge – S either has a prior warrant for anti-skeptical Q or doesn’t need one – or it has already consumed that knowledge – S needs a prior warrant for Q and doesn’t have one. In either case, the WT advocate shows up too late to do any good.

But perhaps WT can help with the subsequent recovery operation, even if it couldn’t put out the flames. Closure is, the WT advocate might insist, still worth preserving; WT can at least achieve this much, since it provides S with a way to acquire a warrant for Q whenever she is warranted in believing P, as closure requires. So WT still has something to offer.

But, in fact, it doesn’t. To see why not, note that closure – WC in particular – amounts to a disjunction: either S will invariably acquire a warrant for Q whenever she infers it from warranted P (the transmission disjunct) or, although she sometimes can’t acquire a warrant for Q by inference from warranted P – transmission sometimes fails – she will inevitably have a warrant for Q from some other source when it does fail (the other-source disjunct). To be convinced that closure is true is to be convinced either that the transmission disjunct is true, that the other-source disjunct is true, or that the disjunction is true without being persuaded as to which it is.

If you’re convinced that closure is true because you’re convinced that the transmission disjunct is true, then your conviction that closure is true just is a conviction that WT is true. But then you can hardly appeal to closure preservation as an independent consideration in support of WT; if you had no reason to affirm transmission, you’d have no reason to affirm closure. If you are convinced that the other-source disjunct is true, that just is to be convinced that WT is false – since WT entails that transmission never fails – and so hardly supports WT.
And if you are not persuaded that one disjunct in particular is true, while being convinced that one or the other is true, your conviction provides no reason to believe that the 





The same applies to Black’s method externalism that we considered in §.. We know that we have hands, Black suggests, because that belief is sensitive (and indeed so whether or not methods are individuated externally). But, if so, the skeptical threat is already defused, whether or not we also know that we are not BIVs. So whatever virtue there might be in knowing the latter, it’s not the protection of ordinary knowledge from skeptical threat. Recall that the WT advocate’s resolution of the trilemma involves denying (2′) while keeping (3′) (which is an endorsement of closure), whereas the trilemma’s resolution if transmission fails and a prior warrant for Q is not required is due to the falsehood of (3′) (and so implies that closure fails). So WT would ensure that it’s resolved in a manner consistent with closure. You can’t be rationally convinced that both are true; they’re mutually exclusive.




transmission disjunct is true in particular. So in no case can appeal to closure preservation provide independent support for WT.

So WT is out of a job. It can’t contribute to the fight against skepticism. And it can’t be motivated by an independent determination to preserve closure. The WT advocate has, then, nothing to appeal to other than a bare intuition that WT is true. But we’ve been here before: even if we do have such an intuition, we also have a competing, strong intuition that transmission fails in Dretske cases. Moreover, WT can only be true if NIFN – itself a highly intuitive constraint on possible sources of warrant – is false.
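The disjunctive reading of WC relied on above can be displayed as follows; the regimentation is mine, and $W^{\mathrm{inf}}$ and $W^{\mathrm{other}}$ are labels I introduce for warrant acquired via the inference and warrant from some other source:

```latex
\mathrm{WC}:\quad
  W_S(P) \wedge \mathrm{Rec}_S(P \Rightarrow Q)
  \;\longrightarrow\;
  \underbrace{W^{\mathrm{inf}}_S(Q)}_{\text{transmission disjunct}}
  \;\vee\;
  \underbrace{W^{\mathrm{other}}_S(Q)}_{\text{other-source disjunct}}
```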

. Transmission and Conditions of Warrant

A variety of proposed conditions on warrant are on the table. Some are offered as necessary conditions, and some as both necessary and sufficient. Advocates of these conditions might worry that renouncing WT is incompatible with their view. But such a worry would be unfounded, at least with respect to three popular such proposals. For not only is transmission failure compatible with those proposals, but the proposed condition is itself not transmitted through inference in Dretske cases.

Warrant transmission is not merely the claim that, if S is warranted in believing P and recognizes that P implies Q, then she has a warrant for Q. That’s warrant closure. Transmission requires that S acquires a warrant – one that she would not otherwise have – in virtue of her recognition that P implies Q; if she didn’t recognize this, then she would not have acquired that warrant, and so would have fewer warrants than she does have. Correspondingly, the transmission of a condition of warrant from P to Q is not merely the claim that, if S’s belief in P satisfies that condition and she recognizes that P implies Q, then her belief in Q satisfies that condition. That would be closure for that condition. Transmission requires that S’s belief in Q satisfies that condition in virtue of her recognition that





Evidence that either Peter or Paul committed the murder – they were the only ones at the scene – might well constitute some evidence that Paul did it that you did not have before. But it would be ridiculous to suggest that you should believe that he did it merely because that would “preserve” the fact that one of them did. And, sometimes, sufficient but not necessary. Although Roush  describes her view as a variant of Nozick’s tracking account, incorporating probabilistic renderings of sensitivity and adherence, she proposes that one can know Q by inference from P when P is known (or, in Roush , when P satisfies the tracking conditions), even though one’s belief in Q doesn’t satisfy the tracking conditions. So satisfaction of those conditions is a sufficient but not necessary condition of warrant.


P implies Q. That recognition must play an essential role: if her belief in Q didn’t satisfy that condition before so inferring, it does so now; and if it did satisfy that condition beforehand for some reason or other, it does so now for an additional reason (or does so to a greater extent than before, if the condition is measurable in degrees).

But the proposed conditions of warrant are not so transmitted. S’s inferring Q from P, which satisfies that condition, does nothing to ensure that her belief in Q satisfies that condition (or satisfies it to a greater extent than before). If her belief in Q didn’t satisfy that condition before she so inferred, then it doesn’t do so afterward. And if it did satisfy that condition beforehand for some reason (or reasons), it only satisfies it for the same reason (or reasons) afterward.

This might not imply on its own that warrant transmission itself fails. It could be that some other condition of warrant requires that S recognize that P implies Q, so that her recognition is essential to warrant acquisition for Q. But it does imply that the advocate of the proposed condition need not view transmission failure as a threat. Even if transmission succeeds, it’s not because the proposed condition is transmitted.

This is also compatible with closure. Q might still satisfy every condition of warrant. The proposal might even imply that it does (if the condition on offer is the sole condition of warrant), and so preserves closure. But, if so, the proposal doesn’t preserve closure because it implies that transmission succeeds. Indeed, it provides a reason to believe that it doesn’t succeed, since the condition on offer doesn’t transmit.
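For a candidate condition of warrant $C$, the contrast between closure and transmission just drawn can be put schematically (the notation is mine, not the book’s):

```latex
% Closure for C: the conclusion belief satisfies C whenever the premise
% belief does and S recognizes the implication.
\text{$C$-closure}:\quad
  C(B_P) \wedge \mathrm{Rec}_S(P \Rightarrow Q) \longrightarrow C(B_Q)

% Transmission for C adds an essentiality requirement: B_Q satisfies C
% (or satisfies it for an additional reason, or to a greater degree)
% in virtue of the recognition -- absent it, that satisfaction would lapse.
\text{$C$-transmission}:\quad
  C(B_Q) \text{ holds, in part, in virtue of } \mathrm{Rec}_S(P \Rightarrow Q)
```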





Whether transmission failure for a condition of warrant implies transmission failure for warrant itself is a delicate, and possibly terminological, issue. Suppose that warrant requires that two conditions, A and B, be satisfied. Suppose also that, if S is warranted in believing P and P implies Q, those facts alone imply that Q satisfies A (and so whether or not S recognizes that P implies Q). Finally, suppose that, while those facts don’t alone imply that Q satisfies condition B, they do so in conjunction with the fact that S recognizes that P implies Q. Her recognition that P implies Q is then essential to Q’s satisfaction of B, but not to its satisfaction of A. Is warrant itself transmitted? One might say “no”: transmission requires that S’s recognition of the implication is essential to the acquisition of every condition of warrant; otherwise, it is only part of warrant that is transmitted, not warrant itself. If that’s right, then the fact that a necessary condition of warrant is not transmitted implies that warrant itself isn’t transmitted, and so that WT fails. Or, one might say “yes”: after all, S wouldn’t acquire a warrant for Q unless she recognized that P implies it (because that’s required for condition B). If so, while a condition of warrant isn’t transmitted (namely, A), nevertheless warrant itself is transmitted. So WT succeeds. I’m inclined to the first answer; the idea behind transmission, it seems to me, is that everything that warrant needs is delivered by the inference, and so not delivered without it. But I won’t insist on that here. For related discussion see Warfield , Brueckner , and Murphy . As we’ll see, reliabilism is an exception; it does imply closure (and so transmission) failure.




The proposed conditions on warrant we’ll consider are safety (§.), reliability (§.), and evidential support (§.). (We already know that classical sensitivity doesn’t inevitably transmit, since it violates closure.)

.

Transmission and Safety

According to the safety account, S is only warranted in believing P when S’s belief is not easily false; that is, in nearby worlds in which S believes P, P is true. For example, S’s belief that it’s a zebra is (presumably) safe: since the consequences would be disastrous if it were revealed that the zoo disguises its animals, zoo animals are not disguised in nearby worlds, but are instead the animals they appear to be. So, in nearby worlds in which S believes that it is a zebra on the basis of its appearance, that belief is correct. But S’s inference to “it’s not a disguised mule” violates NIFN: in any disguised-mule world in which she arrives at that belief in the same manner as she actually does – that is, by inference from “it’s a zebra” believed on the basis of its appearance – she will end up believing that it isn’t a disguised mule. Its being a disguised mule hardly prevents her coming to that belief in this way. So a disguised-mule world in which she does so is nearer than one in which she doesn’t (since she actually does so). So, if there are any nearby disguised-mule worlds at all, she will believe that it is not a disguised mule in, at least, the nearest such world. So there will be a nearby world in which she believes that it is not a disguised mule when it is. Her belief that it isn’t a disguised mule will be unsafe. That belief will, then, only be safe if there are no nearby disguised-mule worlds at all. Call such beliefs far-safe, as opposed to the near-safe beliefs that are false in nearby worlds but wherein S doesn’t believe them. A far-safe belief is safe solely in virtue of its modal profile: there are no nearby worlds in which S believes it and it is false simply because there are no nearby worlds in which it is false at all.
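Writing $N(w_{@})$ for the set of worlds near the actual world $w_{@}$, the three notions in play can be rendered as follows (my formalization of the simple safety condition stated above):

```latex
\text{safe}(P):\quad
  \forall w \in N(w_{@})\,\bigl( B^{w}_S(P) \rightarrow P \text{ is true at } w \bigr)

% Far-safe: safe solely in virtue of the modal profile of P itself.
\text{far-safe}(P):\quad
  \forall w \in N(w_{@})\,\bigl( P \text{ is true at } w \bigr)
  \qquad\text{(no nearby $\neg P$ worlds at all)}

% Near-safe: P is false in some nearby worlds, but S doesn't believe it there.
\text{near-safe}(P):\quad
  \exists w \in N(w_{@})\,\bigl( \neg P \text{ at } w \bigr)
  \;\text{ and }\;
  \forall w \in N(w_{@})\,\bigl( \neg P \text{ at } w \rightarrow \neg B^{w}_S(P) \bigr)
```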



 



This is the simplest formulation of safety; there are many variations on the theme. Advocates include Williamson b, Pritchard a, , and , Sosa a and b, and Luper  and . Or, at least, they are not disguised in most such worlds. See fn. . If her belief is not safe, then it is not warranted on the safety account, and so not known. As per Chapter , generalizing this to the other Dretske cases is tantamount to skepticism. But the safety theorist is no skeptic. It is also method-unsafe: there is a nearby world in which she utilizes the same method and ends up believing that it isn’t a disguised mule when it is.


Moreover, S’s belief that it isn’t a disguised mule, if she does believe it, must be far-safe in order for her belief that it’s a zebra to be safe (and so warranted). Disguised-mule worlds in which she comes to believe that it’s a zebra on the basis of its appearance are nearer than those in which she doesn’t (since she actually does so). So she will believe that it’s a zebra in the nearest disguised-mule (and so non-zebra) world. So if any such world is nearby, she will believe that it’s a zebra in a nearby non-zebra world; her belief that it’s a zebra will be unsafe. So her belief that it’s a zebra can only be safe, and so warranted, if there aren’t any nearby disguised-mule worlds at all. And if there aren’t, as we saw, her belief that it isn’t a disguised mule is far-safe. So her belief that it’s a zebra is only safe if her belief that it’s not a disguised mule (if she has one) is far-safe. But the latter belief’s being safe is only a reflection of the fact that the nearest disguised-mule worlds are far enough away that they can be ignored; her inference contributes nothing to its safety.

Suppose that, on the way to the zoo, S believes that there are no disguised animals in the zoo – and so no disguised mule in the zebra paddock – because she assumes that the zoo proprietors would not disguise their animals for fear of the PR disaster that would result if the deception were uncovered and revealed to the public. But in fact the proprietors do plan to replace the zebra with a disguised mule if it dies, notwithstanding that risk. The zebra is also very sick. So there is a nearby world in which the zebra has died and the animal in the zebra paddock is a disguised mule. Her belief is unsafe. Upon arriving at the zoo, S looks in the cage, sees what looks to be a zebra, believes that it’s a zebra as a result, and infers that it’s not a disguised mule.
S will still believe that it’s a zebra and infer that it’s not a disguised mule in the nearest disguised-mule world (since that’s what she actually does); and there is a nearby disguised-mule world. So her belief will be as unsafe as it was on the way to the zoo: in the nearest of the nearby disguised-mule worlds she still believes that it’s not a disguised mule. Suppose instead that, after she arrives at the zoo, she learns from a disgruntled employee that the proprietors will have replaced the zebra with 

On Pritchard’s  version of safety, her belief that it isn’t a disguised mule is safe if it is true in most nearby worlds in which she believes it, and true in all of the very nearest such worlds. This tolerates a few nearby (but not nearest) worlds in which she believes that it isn’t a disguised mule when it is. But, for every nearby disguised-mule world, there is a nearby such world in which S believes that it isn’t a disguised mule, since a disguised-mule world in which she arrives at this belief in the same way as in the actual world will be nearer than one in which she doesn’t do so. So whether there are enough nearby worlds in which she believes falsely that it isn’t a disguisedmule world to render the belief unsafe is again solely a function of the number of nearby disguised-mule worlds.




a disguised mule if it has died, but that the employee doesn’t know whether it has died. So she administers a DNA test to the animal in the zebra paddock, the results of which inform her that it is a zebra. She infers from this that it’s not a disguised mule. The nearest worlds in which the DNA test delivers the result that it’s a zebra when it isn’t are, we can assume, distant. In virtue of the information she learned from the employee, she won’t believe that it’s a zebra unless the DNA test says that it’s a zebra, and won’t believe that it’s not a disguised mule unless the results of the DNA test imply that it isn’t. So, although there are nearby worlds in which it is a disguised mule, she now won’t believe that it isn’t a disguised mule in those worlds. She only believes that in worlds in which the results of the DNA test imply it; and they don’t do so in any nearby disguised-mule (and so non-zebra) worlds. Her belief that it’s not a disguised mule is now safe, and so safer than it was on the way to the zoo.

In the first scenario – when S believes that it’s a zebra on the basis of its appearance – her inferred belief that it’s not a disguised mule is no safer than it was before she so inferred. Whereas, in the second scenario – when she believes that it’s a zebra on the basis of the DNA test – her inferred belief that it’s not a disguised mule is now safe, notwithstanding having been unsafe beforehand. So safety doesn’t inevitably transmit: whether the inferred belief is safer than it would otherwise be depends on her reason for believing the premise. And when the inference violates NIFN – as in the first, but not the second, case – safety doesn’t transmit: that she performs the inference makes the inferred belief no safer than before. Safety doesn’t transmit through inference in Dretske cases.

. Transmission and Reliabilism

According to the reliabilist, warrant for P requires that S’s belief in P be reliably produced; that is, the process producing it typically outputs true belief in the actual and, perhaps, nearby counterfactual worlds. Reliabilism is usually formulated as a theory of justification rather than of warrant



There are other examples that indicate that closure (and so transmission) fails on the safety account; see Murphy  and Alspector-Kelly . (Those examples are not, however, Dretske cases, nor do they involve violation of NIFN.) See, for example, Goldman  and  and Kornblith . This is, in particular, process reliabilism. “Reliabilism” is sometimes used more broadly to refer to any view that incorporates some sort of truth-conducive feature, including sensitivity, safety, and some virtue-theoretic accounts (as offered by Sosa  and , Greco , , and , and Pritchard , for example). Classical sensitivity implies closure failure; and we discussed safety in §.. The argument below applies equally well to virtue-reliabilist accounts; the reason on offer for denying


(understanding warrant in the Plantinga sense). But reliabilists very typically understand knowledge to require justification; if so, reliability is also a necessary condition of warrant.

It might seem at first glance that the reliabilist has good reason to think that reliability transmits in Dretske cases. S’s belief that it’s a zebra is, presumably, reliably produced; the case can certainly be developed in such a way that her judgments of animal species (of the sort, at least, encountered in zoos) on the basis of their appearance are correct a high proportion of the time. She then infers “it’s not a disguised mule” from “it’s a zebra.” That is a belief-dependent process, taking beliefs as input and delivering beliefs as output. We measure the conditional reliability of such processes: given that the input beliefs are true, what is the truth-frequency of the output beliefs? This is not a straightforward function of the fact that the inference itself is valid; what’s at issue is whether S’s own psychological inferential process is reliable, and its being so is not determined (or not solely determined) by the validity of the inferential relation itself. But, as with S’s belief that it’s a zebra, there is no reason why the case can’t be specified in such a way that, whatever is required in order for a psychological inferential process to be conditionally reliable, the process involved in S’s acquisition of her belief that it is not a disguised mule counts as one. So the input belief is (unconditionally) reliably produced; and the output belief is conditionally reliably produced. On standard process reliabilist accounts, this implies that the output belief is (unconditionally) reliably produced. So that S so infers ensures that her belief is reliably produced. But that her belief that it’s a zebra is reliably produced doesn’t ensure on its own that her belief that it’s not a disguised mule, if she has



 



that S’s process in arriving at Q in Dretske cases is reliable is equally a reason for denying that S’s belief in Q is delivered by a disposition to arrive at the truth. The generality problem is an issue here: is the process one of identifying zebras on the basis of the animal’s appearance? All species? Those in zoos? Those that are easily so identifiable? I won’t attempt a solution to that problem here, but will instead assume – as is surely reasonable – that the Zebra case can be elaborated in such a way that the process involved in her coming to believe that it’s a zebra is a reliable one. See Goldman . Indeed, it might count as perfectly conditionally reliable, or close to it; S’s inferential process, however identified, might be such that she only so infers when the inference is valid (which is, of course, truth-preserving). According to one of Goldman ’s proposed conditions on justified belief, if the input belief is justified and the belief-dependent process conditionally reliable, then the output is justified. If justification is reliability, this amounts to the claim that the output is unconditionally reliably produced when the input is reliably produced and the belief-dependent process is conditionally reliable.




one, is reliably produced; she might believe the latter as a result of wishful thinking (which is obviously unreliable). So reliability is transmitted across the inference.

But there is good reason to resist this conclusion. For one thing, it is far from intuitive that that belief is reliably produced. Recall the “Zoo-Testing-R-Us” inspector from §., whose method of investigation involves taking each animal to be as it appears, drawing the conclusion that it is not a-disguised-something-other-than-it-seems-to-be, and arriving at the result that none of the animals in this zoo (or any other) are disguised. The suggestion that his conclusion that none of the animals in this (or any other) zoo are disguised is reliably produced is, on its face, ludicrous. The manner in which the “Zoo-Testing-R-Us” inspector comes by his judgment is utterly useless as a way to determine whether the animals in a zoo have been disguised.

And there is a good explanation for this intuition. While S’s (and the inspector’s) process (or combination of processes) does arrive at true beliefs most (or all) of the time, this is an artifact of the circumstances in which the process is instantiated. In particular, actual zoos rarely disguise their animals, and rarely do so in nearby worlds (or so we can assume). So S’s process is reliable simply because, although it will inevitably produce false negatives in positive cases – that the animal isn’t disguised when it is – it so happens that, in the world (or worlds) against which the reliability of the process is being evaluated, positive cases are rare.

Suppose that I have a stick upon which is written “that’s not a diamond.” I point it at various objects and acquire the belief that the object pointed at is not a diamond solely on the basis of the stick’s saying so. This process will inevitably deliver false negatives in any positive case (that is, when pointed at a diamond).
Nevertheless, the frequency of my being right will be very high, simply because diamonds are rare. And it will continue to be so in nearby possible worlds; diamonds will, presumably, also be rare in such worlds. 
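The base-rate effect the stick example trades on can be made concrete with a small simulation. The sketch below is mine, not the author’s, and the 1-in-1000 base rate is an illustrative assumption: the stick process scores a high truth-frequency while failing in every positive case.

```python
# A process that always outputs the belief "that's not a diamond",
# evaluated against a population in which diamonds are rare.
N = 100_000
is_diamond = [i % 1000 == 0 for i in range(N)]   # 1-in-1000 base rate: exactly 100 diamonds

def stick_says_diamond(obj_is_diamond: bool) -> bool:
    """The stick's verdict: True would mean 'diamond'; it never says that."""
    return False

beliefs = [stick_says_diamond(d) for d in is_diamond]

# Truth-frequency of the output beliefs (the raw frequency-of-truth measure):
accuracy = sum(b == a for b, a in zip(beliefs, is_diamond)) / N

# Performance restricted to positive cases (actual diamonds):
positives = [b for b, a in zip(beliefs, is_diamond) if a]
true_positive_rate = sum(positives) / len(positives)

print(f"truth-frequency: {accuracy:.3f}")               # 0.999 -- "reliable" by raw frequency
print(f"true-positive rate: {true_positive_rate:.3f}")  # 0.000 -- every positive is a false negative
```

The high truth-frequency is entirely an artifact of the rarity of positive cases, which is the point of the passage.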

 

Or, more precisely, they are not disguised in a certain variety of ways: the zebra is not a disguised mule, or horse, or donkey, or . . ., the groundhogs are not disguised rats, or hamsters, or . . ., and so on. This list can be made as long as one likes. The investigators could also, presumably inductively, infer that the animals aren’t disguised at all, having ruled out the possibility that they are disguised in the most plausible of ways. That inference can also presumably be specified in such a way as to be conditionally reliable. And so I don’t believe this on the basis of background information concerning the rarity of diamonds (which would involve a different process). Diamonds aren’t actually that rare compared to other gems. But they are rare among objects in general.


But if this counts as a reliable process, then so much the worse for reliabilism. It certainly won’t deliver knowledge, even when it produces the right result. Nor is this a Gettier case. But if justification is a matter of reliable belief production, and a true belief that is justified and not Gettiered is known, then these should be cases of knowledge. So the reliabilist had better not count them as reliably produced beliefs, at least not if they view knowledge as reliably produced true belief that is not Gettiered.

This indicates that the reliability of a process can’t solely be a measure of the frequency of truth in actual (and, perhaps, nearby possible) circumstances; frequency of truth needs to be determined across a range of both positive and negative cases. If we are trying to find out whether there are disguised animals in the zoo, the process that we use to evaluate this must be such that it will deliver a correct answer a high proportion of the time when applied both to cases in which the animals are disguised and to cases in which they aren’t. A process that inevitably delivers false negatives when applied to positive cases is not a reliable process. But any process that involves violation of NIFN will do so, including the inferences in Dretske cases.

If I determine whether newspaper reports are misprints by, first, believing the contents of the report and, second, inferring from those contents that the report itself is not a misprint, that process will inevitably deliver false negatives in every positive case: its output will be that the report is not a misprint whenever it is. And that it does so is recognizable by anyone who understands what the process is. A belief so produced surely doesn’t count as reliably produced simply because misprints are rare. Or, at least, if it does, then this would be a good point at which to get off the reliabilist bus.
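The proposed revision can be sketched concretely: score the process over both positive and negative cases rather than at the actual base rate. The implementation below is my illustration, not the author’s; the 50/50 evaluation range is one natural way to implement the suggestion.

```python
def misprint_verdict(report_is_misprint: bool) -> bool:
    """Believe the report's contents, then infer 'this is not a misprint'.
    The verdict True ('misprint') is never reached, whatever the truth."""
    return False

def truth_frequency(process, cases):
    """Fraction of cases in which the process's output belief matches the facts."""
    return sum(process(c) == c for c in cases) / len(cases)

# Evaluated at the actual base rate (misprints rare): looks reliable.
actual_world = [False] * 999 + [True]            # 1-in-1000 misprints (illustrative)
print(truth_frequency(misprint_verdict, actual_world))    # 0.999

# Evaluated across a balanced range of positive and negative cases: exposed.
balanced_range = [False] * 500 + [True] * 500
print(truth_frequency(misprint_verdict, balanced_range))  # 0.5
```

Scored over the balanced range, the NIFN-violating process does no better than chance, which is the verdict the passage says such processes deserve.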





Investigative procedures that inevitably produce false negatives would be treated as obviously unreliable well outside of the epistemological context. Medical diagnostic procedures that had this characteristic would certainly count as such, however rare the conditions they are used to diagnose might be. (Imagine a test for a rare type of cancer that is guaranteed to deliver the result that the patient doesn’t have it, whether or not they do.) Note that this is not a variant of the “Norman” and “Truetemp” counterexamples to reliabilism (BonJour  and Lehrer , respectively). In both cases, the process involved is reliable when measured against negative cases: if the president is not in New York, Norman’s clairvoyance will not generate the belief that he is; and if the temperature is not  , Truetemp’s “tempucomp” won’t produce the belief that it is. That is, they will produce a high proportion of true beliefs when measured against a wide variety of president-locations or temperatures. Nor does S have any defeating evidence (or reliably produced defeater) against the reliability of her process. Quite the contrary: assuming her background information is like ours – indicating that zoo animals are unlikely to be disguised, that newspaper misprints are rare, and so on – then she has substantial evidence that her process is reliable (if measured solely against actual and nearby possible cases).

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 06:44:24, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.005

Transmission, Skepticism, and Conditions of Warrant



Assuming this is correct, then standard reliabilist accounts, according to which reliability is measured by frequency of truth in the actual world, or actual and nearby possible worlds, need revision. This also implies that the output of a conditionally reliable process whose input beliefs are reliably produced is not necessarily reliably produced (and so, if reliability is justification, not necessarily justified). And it implies that reliability is not inevitably transmitted through a conditionally reliable belief-forming process. Moreover, if reliability is a necessary condition of warrant then, since S’s belief that it is not a disguised mule is not reliably produced, it is unwarranted. But her belief that it is a zebra is presumably both reliably produced and known (and so warranted). So closure fails on an adequate reliabilist account as well.

. Transmission and Evidentialism

In way of an internalist view, consider Conee and Feldman-style evidentialism, according to which S’s belief is justified when it is sufficiently supported by S’s evidence, where S’s evidence is internally specifiable. Evidentialism, like reliabilism, is typically presented as a theory of justification rather than of warrant. But assuming again that justification is a constituent of knowledge – which evidentialists virtually always endorse – the view posits a necessary condition of warrant. Although I suspect that many evidentialists would endorse it, it is not necessary to saddle the evidentialist who advocates WT with the claim that the evidence that suffices for P will inevitably suffice for Q (and so that transmission is penetration). For S’s recognition that P implies Q is also






Reliabilists might, instead, suggest that their view doesn’t imply that the beliefs generated in Dretske cases are reliably produced by the standard measures. Perhaps, for example, the process involved in generating my belief that the newspaper report isn’t a misprint can be somehow characterized so that the frequency with which it delivers true belief in actual and nearby worlds is low. If so, that’s fine. We can still assume that the input belief – that the Broncos won, believed on the basis of the report – is reliably produced. So my inferring to the conclusion that the report is not a misprint does not transmit reliability, however the process that produced it is ultimately characterized. See the essays in Conee and Feldman . There are many other views that could reasonably be called evidentialist – Williamson’s, for example – but that involve wildly different views of evidence. I’m focusing on Conee and Feldman’s version as an exemplar of an internalist account (which Williamson’s certainly isn’t). See, however, fn. . It won’t be a sufficient condition since Gettier cases need ruling out as well, and they can’t be ruled out by appeal to S’s evidence. See §..




Against Knowledge Closure

internal, and so constitutes a part of S’s overall evidence for Q that is obviously not part of S’s evidence for P. So assume that S’s total relevant evidence E for Q consists in her evidence for P – namely, her basis B – and her recognition that Q follows from P. If E suffices for P, how could it, in conjunction with S’s recognition of the inferential relation, not suffice for Q? After all, given that P implies Q, the probability of Q given B cannot be lower than the probability of P given B. And the agent’s recognition that P implies Q is surely not going to undermine the probability that B confers on Q. So E must make Q at least as probable as P. However, Q’s being false implies B. And, if Q is false, S will presumably still recognize that P implies Q. So, when Q is false – it’s a disguised mule, for example – S will have available the same evidence for Q that she would have in the “ordinary” scenario in which Q is true, namely when it’s a zebra displaying its natural stripes. So, although S appeals to E in support of Q, if Q is false, E will remain: B is still true and S still recognizes that P implies Q. But, arguably, a body of evidence E contributes no evidential support to an hypothesis H when the denial of H implies E. This is both highly intuitive and follows from the popular view that E only incrementally confirms H when the probability of H given E is higher than the prior probability of H. Assuming that ~H implies E and the prior probability of E is between 0 and 1 – a safe assumption since, for example, it is neither







Note, however, that the internalist evidentialist can’t cite P itself as her evidence for Q, since P will typically concern external matters. She might cite the fact that S believes P (but not P itself ) as such evidence. But that won’t matter for the argument to follow; see fn. . Actually, as I’ve emphasized, B need not be internal; it might concern the surface characteristics of a zoo animal, where a gas gauge needle points, or even the cosmological red shift (see §..). But, for the purpose of considering internalist evidentialism, we can treat B as referring, instead, to internal states generated by the relevant external states: S’s seeming to see a zebra, which is caused by the surface characteristics of the animal, S’s seeming to see a gauge’s needle pointing at E, which is caused by its actually pointing at E, and so on. If there aren’t really such internal evidence-bearing states, shared by both ordinary and skeptical scenarios – as disjunctivists insist there aren’t – then so much the worse for internalist evidentialism. As we saw in §., however, it is dangerous for the closure advocate to appeal to this in defense of closure, since S’s recognition that P implies Q is irrelevant to the fact that Q must be at least as probable as P; only the fact that P implies Q matters. It is all the more dangerous for the advocate of WT to do so, since her claim is precisely that S acquires a warrant in virtue of S’s recognition that P implies Q. Even if S’s evidence includes that recognition, it is idle in such an argument. Q’s being false doesn’t strictly imply that S will recognize that P implies Q (or that she believes P). But it would be ridiculous to suggest that her actually inferring from P to Q delivers evidence for Q that she wouldn’t otherwise have solely because it’s possible for her to fail to recognize that P implies Q (or not believe P) while Q is false. 
It’s not as though she’d inevitably have evidence to the effect that P doesn’t imply Q (or new reason to doubt P) if Q is false.




certain that the animal in the paddock will look like a zebra nor certain that it won’t – it’s a theorem of the probability calculus that the probability of H given E is lower, not higher, than the prior probability of H. So, far from confirming “it’s not a disguised mule,” S’s evidence disconfirms it. It remains true that p(Q|E) ≥ p(P|E), since P implies Q. Since p(Q) > p(Q|E) – assuming that 0 < p(E) < 1 – this implies that p(Q) > p(P|E): the prior probability that it’s not a disguised mule is higher than the probability that it’s a zebra given that it looks like one (and that P implies Q). On a subjectivist rendering this requires that S be more confident that the animal is not a disguised mule than that it’s a zebra given that it looks like one. On a more objectivist rendering, S’s background evidence for “it’s not a disguised mule” – concerning the likelihood that zoos would disguise their animals, presumably – must render that proposition more likely than does “it’s a zebra” given that it looks like one. But, in either case, her recognition that “it’s a zebra” implies “it’s not a disguised mule,” together with its looking like a zebra, adds nothing to the initial evidence S has for “it’s not a disguised mule.” If sufficient evidence is a condition of warrant, then that condition does not transmit: S already has all the evidence she can get for its not being a disguised mule, whether or not she recognizes that this follows from its being a zebra and notes that the animal looks like a zebra. This is recognizable without the probabilistic machinery so long as it is conceded that (i) evidence E can’t count as evidence, to any extent, in favor of both Q and ~Q, and (ii) E does count, to some extent, as evidence





If ~H implies E then ~H is logically equivalent to (~H & E); ~H and (~H & E) are, therefore, equiprobable. p(~H|E) = p(~H & E)/p(E) (by definition), so p(~H|E) also equals p(~H)/p(E). Assuming that 0 < p(E) < 1, p(~H)/p(E) > p(~H). So p(~H|E) > p(~H). Since p(~H) = 1 – p(H), p(~H|E) = 1 – p(H|E). So p(H|E) < p(H). Wright  and Vogel  both point out that there can be cases wherein E′ does seem to provide sufficient evidence for H that S did not have before, although p(H|E′) is less than p(H). These are cases wherein S previously had sufficient evidence E for H and then acquires E′ that has three features: E′ constitutes a defeater for E (so that E is no longer relevant); E′ suffices on its own to provide sufficient evidence for H; and p(H|E′) < p(H|E). In S’s new evidential circumstances her evidence, consisting solely in E′, still suffices for H; but p(H|E′) is nevertheless lower than p(H) (being the probability of H delivered by E before acquisition of E′). But Dretske cases do not have these features. If S does have prior evidence for Q (in virtue of background knowledge, say), then B (and her recognition of the implication) doesn’t defeat that prior evidence. (Its looking like a zebra doesn’t undermine background knowledge concerning the likelihood that zoos would disguise their animals, for example.) So the fact that p(Q|E) < p(Q) can’t be excused by appeal to an adverse effect that E has on S’s prior evidence for Q. Note that this result extends to externalist evidentialist views so long as B constitutes S’s evidence for P. For ~Q will still imply B, where B is, in at least many Dretske cases, an external state of affairs (the surface characteristics of the animal, the position of the needle on the gauge, the contents of the newspaper report, and so on).
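The probabilistic step can be checked mechanically. The following is a minimal sketch of my own; the four-world model is an illustrative assumption. It verifies that whenever ~H implies E and 0 < p(E) < 1, conditioning on E raises p(~H) and so lowers p(H):

```python
from fractions import Fraction as F

# Four equiprobable worlds. ~H implies E: the only ~H world is an E world.
worlds = [
    {"H": True,  "E": True},
    {"H": True,  "E": False},
    {"H": True,  "E": True},
    {"H": False, "E": True},
]

def p(pred):
    # Probability of an event under the uniform measure on `worlds`.
    return F(sum(1 for w in worlds if pred(w)), len(worlds))

p_E = p(lambda w: w["E"])                                  # 3/4, so 0 < p(E) < 1
p_notH = p(lambda w: not w["H"])                           # 1/4
p_notH_given_E = p(lambda w: not w["H"] and w["E"]) / p_E  # = p(~H)/p(E)

# Since ~H implies E, p(~H & E) = p(~H); dividing by p(E) < 1 inflates it.
assert p_notH_given_E == p_notH / p_E
assert p_notH_given_E > p_notH          # p(~H|E) > p(~H)
assert 1 - p_notH_given_E < 1 - p_notH  # hence p(H|E) < p(H)
```

Exact rational arithmetic (via `fractions`) keeps the check free of floating-point noise; any model in which every ~H world is an E world and E is neither certain nor impossible will yield the same verdict.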





for ~Q. It’s hard to imagine a view repudiating (i). If E is evidence that ~Q is true, it’s evidence that Q is false. But nothing can be evidence that a proposition is both true and false. And the fact that ~Q implies E makes it very difficult to deny (ii). Before S walks up to the paddock – and, so, before she has any reason to think that it looks like some animal in particular – she has no reason to think either that it’s a zebra or that it’s a mule disguised to look like a zebra. She then looks in and discovers that it looks like a zebra. Although she does have more evidence than she had before to believe that it’s a zebra – enough, perhaps, to warrant that belief – she presumably also has more evidence than she had before to believe that it’s a mule disguised to look like a zebra (albeit not, presumably, enough to warrant that belief ). After all, there are a lot of cages in the zoo; it could have looked like an elephant, flamingo, seal, etc. At least it does look like a zebra, as the latter hypothesis predicts, which it might well not have. Suppose that S actually has reason to believe that the animal in the cage, whatever it is, is disguised (she just overheard a zoo employee saying as much). When she peers in the cage, she unquestionably acquires more evidence than she had before that it is an animal – plausibly a mule, there not being many other animals with a suitable shape – that is disguised to look like a zebra in particular. It’s hard to see why the same would not be so when S initially had more reason to think that it isn’t disguised than that it is. In that case her evidence doesn’t, in the end, support the claim that it is a mule disguised to look like a zebra. But she still has more evidence to that effect than she had before she peered into the cage. 
Assuming that its looking like a zebra does provide some evidence, however slight and inadequate overall, that it is a mule disguised to look like a zebra, and that that evidence can’t support that proposition’s being both true and false, then its appearance can’t provide any evidence for “it’s not a disguised mule” whatsoever. And her recognition that it must not be a disguised mule if it is a zebra is entirely idle, since she’ll be in the same position to recognize that if it is a disguised mule. 
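The point can be illustrated with toy numbers, all of them hypothetical assumptions of mine rather than the author’s. Give “zebra,” “disguised mule,” and “non-zebra that doesn’t look like a zebra” prior probabilities, let E be “it looks like a zebra,” and update by Bayes’ theorem. E confirms both “it’s a zebra” and “it’s a disguised mule,” and thereby disconfirms Q, “it’s not a disguised mule”:

```python
# Hypothetical priors over what's in the paddock (illustrative numbers only).
priors = {"zebra": 0.90, "disguised_mule": 0.001, "other_nonzebra": 0.099}

# Likelihood of E = "it looks like a zebra" on each hypothesis. A disguised
# mule is guaranteed to look like a zebra (~Q implies E); a zebra almost
# certainly does (it could be painted); the other non-zebras don't.
likelihood = {"zebra": 0.99, "disguised_mule": 1.0, "other_nonzebra": 0.0}

p_E = sum(priors[h] * likelihood[h] for h in priors)  # 0 < p(E) < 1

def posterior(h):
    # Bayes' theorem: p(h|E) = p(E|h) * p(h) / p(E).
    return priors[h] * likelihood[h] / p_E

# E confirms "it's a zebra" and also confirms "it's a disguised mule" ...
assert posterior("zebra") > priors["zebra"]
assert posterior("disguised_mule") > priors["disguised_mule"]

# ... and so disconfirms Q = "it's not a disguised mule."
p_Q_prior = 1 - priors["disguised_mule"]
p_Q_posterior = 1 - posterior("disguised_mule")
assert p_Q_posterior < p_Q_prior

# As the text notes, p(Q) exceeds p(P|E) on these numbers as well.
assert p_Q_prior > posterior("zebra")
```

On these (assumed) numbers the animal’s appearance raises the probability of the disguised-mule hypothesis slightly, which is all clause (ii) in the text requires.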

The fact that “it’s a zebra” and “it’s a mule disguised to look like a zebra” are contraries – they can’t both be true – doesn’t mean that claiming that the animal’s appearance is evidence for both runs afoul of the principle that evidence can’t support both a proposition and its denial. “It’s a disguised mule” isn’t the contradictory of “it’s a zebra”; “it’s not a zebra” is. And the probability of that goes down. It does so, not because the probability of the disguised-mule way of not being a zebra decreases, but because the probability that it is a non-zebra that doesn’t look like a zebra (undisguised mule, flamingo, etc.) decreases. In fact, it goes to zero: if it looks like a zebra then it’s false that it doesn’t look like a zebra. The remaining probability is then distributed (although not




So evidence – a condition of warrant on the present view – doesn’t transmit in Dretske cases. Like the safety theorist and reliabilist, the evidentialist has no reason to view transmission failure as a threat. On the contrary, unless she has some reason to think that some other condition of warrant does invariably transmit, she should think that WT fails.

. Summary of the Last Two Chapters

The last three sections only consider “first generation” versions of safety, reliabilist, and evidentialist views. Many variations and combinations of them have subsequently been developed, as have alternative conditions of warrant in completely different veins. There is obviously no space to consider them all here. The point is not that no such view can be formulated in such a way as to ensure that the proposed condition transmits. It is, instead, that nothing in the core conception of those views encourages the development of successors that do so. It is unintuitive that, in Dretske cases, S’s inference makes the inferred belief any more safe, or reliably produced, or evidentially supported than it would be if she didn’t so infer. And the classical formulations of those views (or, in the case of reliabilism, defensible versions of them) don’t motivate bucking those intuitions. The only motivation for developing successor views that do preserve transmission is then the bare intuition that warrant is inevitably transmitted across deductive inference, that is, Williamson’s insight. But in the previous chapter we saw that this intuition, such as it is, is matched by both of the opposing intuitions that warrant does not transmit in Dretske cases and that violations of NIFN don’t deliver warrant. The closure advocate would, then, be better served by pursuing a different strategy in order to defend closure against the argument by counterexample. In Chapter , however, we’ll see that it will be very difficult for her to do so without capitulating to the skeptic.



equally) over the various ways in which something can look like a zebra, including both its being one and its being a disguised mule. But what about an externalist view of evidence? Suppose, for example, E = K as Williamson suggests. Does this evidence transmit? Well, it does if S’s knowing that P and recognizing that this implies Q ensures that S knows Q, but S’s knowing P doesn’t do so on its own. That is, evidence transmits if and only if knowledge transmits. But knowledge transmits only if warrant transmits (see §.). So evidence transmits, on Williamson’s view, only if warrant transmits. Although that doesn’t rule out warrant transmission, it hardly provides reason to believe it. It’s consistent with E = K that S doesn’t know Q even though she knows P and recognizes that P implies Q. Nothing in a disjunctivist account requires that transmission succeeds either. That S sees that it’s a zebra, for example, doesn’t imply that she has direct perceptual evidence for “it’s not a disguised mule.” It is, in fact, highly implausible that she does, even on a disjunctivist account.


 

Front-Loading

. Warrant Preservation without Transmission

Recall that WC – warrant closure – reads as follows:

Necessarily, for every agent S and propositions P and Q: if (a) S’s belief that P is warranted while (b) S recognizes that Q follows from P, then (c) S has a warrant for Q.

WT differs from WC only in clause (c), replacing the above with:

(c) S acquires a warrant for Q in virtue of (a) and (b).

We will take on board the conclusion of Chapters  and , that WT fails in Dretske cases: S cannot acquire a warrant for Q by inferring it from P, even if her belief in P is warranted. This remains compatible with closure because WC does not imply WT: even if S’s inference from P to Q does not transmit warrant to Q, it might still be the case that S has a warrant for Q by other means. But if WC is true then so is this:

Warrant Preservation (WP)
Necessarily, for every agent S and propositions P and Q: if (a) S is warranted in believing P and (b) P implies Q but (c) S cannot acquire a warrant for Q by inferring it from P even if she is warranted in believing P, then (d) S nevertheless has a warrant for Q.



Suppose the antecedent of WC is true: S is warranted in believing P and recognizes that P implies Q. Suppose also that WP is false: it is possible for its antecedent to be true while its consequent is false. Suppose, then, that its antecedent is true and its consequent false. Since (c) of WP is true – transmission fails – S does not acquire a warrant for Q by inference from P. Since (d) of WP is false, S does not have any warrant for Q from any other source either. So the antecedent of WC is true but the consequent – S has a warrant for Q – is false. So it is possible for the antecedent of WC to be true and the consequent false. So WC is false (it’s prefaced by “necessarily”). So, if WP is false, so is WC. So WC can only be true if WP is true as well.
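The derivation above is truth-functional at heart. The following is a minimal sketch of my own propositional skeleton (ignoring the “necessarily” operator and the quantifiers, and folding “S is warranted in P, P implies Q, and S recognizes this” into a single letter); it confirms by brute force that no assignment satisfies WC while falsifying WP:

```python
from itertools import product

# a: S is warranted in believing P and recognizes that P implies Q
# t: S acquires a warrant for Q by inferring it from P (transmission succeeds)
# o: S has a warrant for Q from some other source
for a, t, o in product([False, True], repeat=3):
    wc = (not a) or t or o         # WC: a -> (S has a warrant for Q, i.e. t or o)
    wp = (not (a and not t)) or o  # WP: (a and transmission fails) -> o
    # WC can be true only if WP is: no assignment makes WC true and WP false.
    assert not (wc and not wp)
```

This only checks the propositional form of the argument; the modal strength of WC and WP (the “necessarily” prefix) is what licenses reading the quantification over assignments as quantification over possibilities.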



Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 06:45:17, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.006




So if transmission fails (as per (c) of WP) but warrant is closed, then S must have a warrant for Q from some other source if she is warranted in believing P and P implies Q. Although the failure of WT doesn’t imply the same fate for WC, it does raise the question why we should continue to endorse the latter (and so WP). It is compelling to think that, in general, if S knows P and recognizes that Q follows from it, then she knows (or at least is in a position to know) Q. But what’s compelling, in particular, is the idea that her recognition of that inferential relation provides her a way to come to know Q: if S doesn’t know Q already, she can acquire knowledge of Q in virtue of her recognition of that inferential relation. As per Williamson’s insight, it’s the idea that S is able to acquire knowledge – and so warrant – by inference that powers the intuitive engine driving the advocacy of closure. But that’s precisely what doesn’t happen when transmission fails: S’s inference from P to Q cannot deliver a warrant for Q that she did not already have, even if she is warranted in believing P. The closure advocate needs to explain why we should think that closure is, nevertheless, preserved in such cases. After all, it’s not intuitive to think in general that S, knowing P, must know Q when she doesn’t even recognize that Q follows from P. It is hardly more intuitive to think that S, knowing P, must know Q when she wouldn’t be able to learn Q as a result, even if she did recognize that it follows from P. So why believe WP? That is, why believe that, whenever transmission fails, some source of warrant for Q must step in to save the day for WC? An answer must somehow appeal to the failure of transmission itself. Clause (c) is not redundant in WP: the fact that P implies Q does not, on its own, ensure that S, who is warranted in believing P, has a warrant for Q. 
That amounts to the obviously false closure principle that S, warranted in believing P, must have a warrant for any Q that follows from it, even if S doesn’t recognize that it so follows. So, if WP is to be plausible, it must be so because something about transmission failure ensures that S ends up with a warrant for Q whenever she has a warrant for P.

. Front-Loading

The most common – perhaps the only – answer is that, in Dretske cases, S’s having a prior warrant for Q is a condition of her acquiring her warrant for P in the first place. This, it is then claimed, explains why she cannot

See the quote from Cohen  in Chapter , fn. .





acquire it by means of inference from P. In order for S to acquire a warrant for its being a zebra on the basis of its appearance, for example, she must have already ruled out – by having a warrant for the denial of – its being a disguised mule; and that’s why she can’t acquire a warrant for its not being a disguised mule in virtue of the inference. The explanation for transmission failure, therefore, implies WP and so saves closure, at least in Dretske cases. That S needs a prior warrant for “it’s not a disguised mule” was the crucial assumption in the skeptical front-loading argument discussed in §.. That argument also required the following premises:

() S’s warrant W for P does not suffice as a direct warrant for Q.
() S’s background knowledge does not suffice to warrant Q.
() Q is not warranted by default.

These are not, however, so indisputable that the skeptic can treat them as common ground between her and her optimist opponent. So the optimist can suggest that at least one of ()–() is false in Dretske cases. Call someone who advocates front-loading for Dretske cases a “front-loader.” Optimists can be front-loaders too, in way of defending WP. But that they can be doesn’t demonstrate that they should be. Why think that front-loading is required in Dretske cases?

. The Front-Loading Strategy

In Dretske cases, Q’s being false implies that B, the basis of S’s warrant for P, remains; for example, if it’s a disguised mule it will still look like a zebra. So, in this sense at least, its looking like a zebra does not discriminate between the ordinary scenario in which it is a zebra and that in which it’s a disguised mule. One might then suggest that the animal’s appearance can, at best, deliver a warrant for the disjunction “it’s a zebra or a disguised

A variation on this theme is that “it’s not a disguised mule” is an implicit premise in her inference, so that the argument is premise-circular. But even if, as per Wright’s view, S needs a prior warrant for “it’s not a disguised mule,” that doesn’t mean that it’s an implicit premise in her inference. This would require that S is warranted in believing it, since transmission requires that S be warranted in believing the (implicit and explicit) premises of her argument. But Wright suggests only that S have a warrant for “it’s not a disguised mule,” not also that she have a warranted belief in it, and there’s no reason to saddle him with the latter claim. Also, the relevant implicit premise would not be “it’s not a disguised mule,” but just “it’s not disguised to look like a zebra” or, perhaps, just “it’s not disguised to look the way it does.” And “it’s a zebra” doesn’t imply these claims. (Wright makes a similar point in Wright , –.) At any rate, the discussion to follow in the text applies equally to this variation, so I need take no stand on the issue here.




mule”; it won’t warrant “it’s a zebra” on its own unless S has an antecedent warrant against its being a disguised mule. This is Crispin Wright’s reasoning:

[T]he subjective state involved in such a perception [of a zebra] is shared with a perception of mules, artfully disguised to look exactly like zebras . . . So if I am rationally to claim to be perceiving zebras, and to possess a warrant for P on that basis, I better be in a position to discount the mule hypothesis. (Otherwise the suggestion that it only seems to me that I am perceiving zebras is open.) But that is the same as to say that I better be in a position to affirm that the animals in question are not cleverly disguised mules – precisely Q. So that is something that I need to be in position to claim on independent grounds prior to claiming perceptual warrant for P, and on which my claim to warrant for P therefore rests. It is accordingly not something for which I can acquire a claim to warrant by the argument itself.

Wright, along with Martin Davies, introduced the crucial distinction between transmission and closure, and offered explanations of transmission failure that preserve closure along these lines. As a result, his name is most closely associated with what I’m calling the front-loading reconciliation of transmission failure with closure. Wright’s view is, however, something of a moving target: it has evolved over the years and still appears to be evolving. In addition, his concept of warrant seems closer to justification than to Plantinga-warrant, despite the fact that the latter is the concept of warrant immediately relevant to the topic of knowledge closure. Moreover, he has increasingly focused attention on the transmission of claims to warrant rather than of warrant itself (as reflected in the 







Wright , . Wright references the internal “subjective state” of the agent. I’ve insisted that S’s basis might well concern external states. But, as we did when considering internalist evidentialism in §., we can treat the relevant basis as the internal state brought about by the external state. Or, we can generalize Wright’s argument: so long as S’s belief in P is a response to basis B (no matter what external or internal state B might be) and ~Q implies B, S can’t discriminate between P and ~Q by appeal to B; so S needs a prior warrant for Q in order for B to deliver that warrant for P. See Wright , , , , , , ,  and , and Davies , , and . There is now a burgeoning literature on warrant transmission, including Alspector-Kelly , Chandler , Ebert , McKinsey , McLaughin , and , Moretti , Okasha , Pryor , Silins , Smith , and Zalabardo  (and many others). The most surprising recent development is his concession that transmission failure does not prevent the acquisition of an evidential warrant in Dretske cases. See Wright , – (and Chapter , fn. ). For someone who thinks that justification is a component of Plantinga-warrant, justification is of course relevant. But justification can fall short of Plantinga-warrant in a number of ways. Perhaps S is justified in believing P, but not to such an extent as is required by knowledge. Or perhaps S is Gettiered, in which case she is justified but not warranted. See Alspector-Kelly .





passage above). As a result it is even less clear that his account translates as one that preserves knowledge closure. I won’t explore Wright’s views and their translatability to the knowledge closure issue in detail here. I will instead just treat the argument represented in the quotation above as the front-loader’s explanation for why prior warrant for Q is a condition of S’s warrant for P in Dretske cases, as well as for why transmission fails (note the last sentence in the quotation). That argument is, so far as I am aware, the only one on offer for why frontloading would be required in Dretske cases. The argument underlying the front-loading strategy runs as follows: The Front-Loading Argument (FLA) In Dretske cases (wherein P implies Q, where Q is ~(D & E & B)): () () () () () ()

~Q implies B; therefore B is to be expected in both P and ~Q scenarios; therefore B on its own only warrants P v ~Q; therefore S must rule out ~Q before she can acquire a warrant for P from B; therefore Prior warrant for Q is a condition of S’s warrant for P from B; therefore Transmission from P to Q, when warrant for P is acquired from B, fails.

Call the transition from () to () the front-loading explanation of transmission failure. It ensures that, when transmission fails – in Dretske cases, at least – S will inevitably possess a warrant for Q as WP requires. So closure is preserved as well. In the next section I will present an argument against front-loading – the buck-passing argument – the conclusion of which is that it entails skepticism. In §. we’ll see that that argument also extends to any position that preserves WP, whether or not it involves front-loading. At first glance, it might seem as though the safety account avoids this skeptical result. In §. we’ll see that this is more a liability than an advantage of that account. In §. I will turn to the front-loading explanation of transmission failure, arguing that it is, at best, incomplete. As we’ll see, the explanation   

 See, for example, Wright . See Alspector-Kelly . Recall that D is or implies ~P, and E explains the compatibility of ~P and B. “Expected,” not “implied by”; although ~Q implies B, P does not. That it is a disguised mule implies that it looks like a zebra, but that it is a zebra doesn’t; for example, it could be a zebra cleverly painted to look like a mule.

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 06:45:17, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.006

Front-Loading



of transmission failure offered in Chapter , based on NIFN, fares much better, and isn’t susceptible to the buck-passing argument. However, it also provides no support for closure. The upshot is a dilemma for the closure advocate: she either insists that transmission always succeeds, which generates the difficulties reviewed in Chapters  and , or she endorses front-loading, which doesn’t explain transmission failure and leads to skepticism.

. The Buck-Passing Argument

One concern with the front-loading strategy is that, even if successful, it will only apply to Dretske cases. But WP requires that S end up with a warrant for Q in all cases in which transmission fails. So the WP advocate must either claim that Dretske cases are the only instances of transmission failure, or that the FLA applies to other cases wherein transmission fails, or that there is some other explanation for transmission failure in those cases that also implies WP. Other sources of transmission failure have been proposed, however, and they are not obviously either candidates for the front-loading strategy or amenable to some other explanation implying WP.

Another concern is that the inference from (3) to (4) in the FLA discounts the “dogmatic” position, according to which S need not have a warrant against the skeptical hypothesis in order to acquire her warrant for P (although she also can’t have evidence in favor of that hypothesis).

A more pressing concern, however, is that the FLA implies basis infallibilism. To say that a warrant W for P, acquired in response to basis B, is basis-fallible is to say that there is at least one possible world in which, although P is false, B remains; that is, at least one skeptical scenario is possible. Since S’s basis B is present in any such scenario, the FLA requires that S have a prior warrant against that scenario. This applies to

 

There is, for example, the widely recognized accumulation-of-risk argument that applies to instances of multi-premise closure. Lasonen-Aarnio  has, moreover, extended that argument to apply to single-premise closure as well. Since the failure of closure follows from the accumulation-of-risk argument, so does transmission failure. But nothing in that argument provides any assurance that WP is true. There might also be competing explanations of transmission failure in Dretske cases that don’t preserve WP. And indeed there are; see §.. See Pryor  and . This is not to say that Wright has ignored the dogmatist alternative; see Wright . Skeptical scenarios, both wholesale and piecemeal, are precisely scenarios in which B is true while P is false; different such scenarios vary by how the compatibility between B and ~P is explained (the animal is disguised, or it’s a hologram, or S is hallucinating, or . . .).




Against Knowledge Closure

every skeptical scenario. So S must have a warrant against every skeptical scenario. But then every B-world that is not eliminated by the propositions for which S has a warrant must also be a P-world. So those propositions, in conjunction with B, strictly imply P. And warrants for those propositions must be had before S infers anything from P.

This isn’t run-of-the-mill infallibilism. It’s not that B itself – the basis upon which S acquires her belief in P, which some might describe as her evidence – can’t be true without P’s being true; it can obviously look like a zebra, for example, without being one. But it does mean that P can’t be false when the conjunction of B and the other propositions for which S has a warrant are true.

This is already worrisome. In an ampliative inference, premise P supports conclusion C, although it is possible for P to be true and C false. But if the FLA is correct, such an inference can only succeed when S already has a warrant against (P & ~C), since that is a skeptical scenario in which C is false and yet S’s basis for it – that it is supported by P – remains. There are then no “pure” ampliative inferences at all: the propositions that the agent is warranted in believing, together with P, strictly imply C. That ampliative inference can only succeed in such a context is, to put it mildly, contentious.

The front-loader might attempt to mitigate this by pointing out that, although B together with S’s other warranted propositions strictly imply P, the bases of S’s warrants for those other propositions need not themselves be infallible. Although S must have a prior warrant against its being a disguised mule, for example, the basis of that warrant can itself be compatible with its being a disguised mule. The view doesn’t imply basis infallibilism all the way down. Except that it does. For the same issue arises with respect to S’s warrant against each skeptical scenario.
Whatever S’s warrant for “it’s not a disguised mule” might be, for example, its basis is surely not going to deductively imply that it isn’t a disguised mule. But, if not, then there are possible scenarios in which that basis exists and yet it is a disguised mule. 

The resulting view is not quite that the apparently ampliative inferences are actually enthymematic deductive inferences, where the implicit premises encode denials of the various skeptical alternatives. Front-loading only requires that S have a warrant against those alternatives, not that she is warranted in believing them, or that her inference mobilizes those beliefs as implicit premises. Nevertheless, it’s close: she has warrants for a body of propositions such that, if she believes them on the basis of those warrants, she is able to mount a deductive argument that implies the conclusion of the original ampliative argument.





The FLA then requires that S have a prior warrant against those scenarios in order to have a warrant for “it’s not a disguised mule,” and so in order to have a warrant for “it’s a zebra.” And the warrant-enabling buck is passed.

Suppose, for example, that S is familiar with a truthful public relations campaign indicating that the zoo is solely concerned with species preservation. She believes, as a result, that the zoo wouldn’t disguise a cheap mule to look like an expensive zebra solely for the sake of the cost savings, and so that the animal she is looking at is not a disguised mule. There is a possible world, however, wherein it is a disguised mule and the PR campaign is merely a misleading ploy by the money-grubbing zoo proprietors to maximize entrance fees. S’s basis – the PR campaign – remains in that world. So the FLA requires that S have a prior warrant against such a conspiracy. Whatever that basis might be, it’s also surely not going to entail that there is no such conspiracy. So another prior warrant is required to oppose the possibility that that basis exists in the presence of conspiracy; and the buck is passed on again.

The buck must stop somewhere. There can’t be an infinite sequence of warrants, each presupposing other prior warrants, since the prior warrants must be in place in order for the subsequent warrants to be acquired. S’s acquisition of her warrant for P couldn’t get off the ground. But it can’t stop at any warrant whose basis tolerates a skeptical scenario in which that basis exists and yet the proposition it warrants is false. For then the FLA kicks in again and a prior warrant against that skeptical scenario is needed. The sequence must, therefore, bottom out in a warrant whose basis isn’t consistent with the skeptical scenario it counts against. That is, it must terminate in an infallible basis: there are no possible worlds in which that basis exists and yet the anti-skeptical proposition it warrants is false.
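The shape of the regress can be sketched in code. Everything here is illustrative and not a formalism from the text: the further bases ("basis-1", "basis-2", ...) are invented names, and the point is purely structural, namely that so long as each basis in the chain is fallible, each warrant demands a prior one.

```python
# Toy sketch of the buck-passing regress (my own illustration). Each
# fallible warrant for a proposition from a basis spawns a skeptical
# scenario -- the basis holds yet the proposition is false -- whose
# denial needs a prior warrant in turn.

def buck_chain(prop, basis, steps=4):
    """Unfold the first few prior warrants demanded, assuming every
    basis in the chain is fallible (hypothetical basis names)."""
    chain = []
    for n in range(steps):
        chain.append((prop, basis))
        scenario = f"({basis} holds & {prop} is false)"  # skeptical scenario
        prop = f"not {scenario}"
        basis = f"basis-{n + 1}"  # invented name for the next basis
    return chain

for prop, basis in buck_chain("it's not a disguised mule", "the PR campaign"):
    print(f"needs: warrant for {prop!r}, from {basis!r}")
```

Only a basis that strictly implies its proposition would fail to generate a next link; that infallible basis is the buck-stopping point the text argues we never reach.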
Of course, there is never just one skeptical scenario. There are, for example, any number of ways for something that isn’t a zebra to nevertheless look like one that don’t involve a paint job on a mule. And the same goes for the other warrants down the line: the conspiracy scenario is not the only possible one in which the PR campaign remains and yet the 

Peter Klein might disagree (see Klein , for example). But his “infinitist” view of justification is far from widely endorsed. And, even if correct, infinitism with respect to justification is not infinitism with respect to warrant. Warrant requires that other conditions be in place in addition to or, perhaps, instead of justification. At minimum, for example, warrant requires an anti-Gettier condition. So an additional argument is needed to show that every condition of warrant is realized for each proposition in the infinite series. Some of the arguments for justification infinitism, moreover, don’t apply to warrant infinitism. For example, even if, for every justified P, I must have some reasonable response to the question “why do I believe P?” (and circular reasoning is disallowed), that doesn’t mean that I will inevitably have a warrant for that response.





animal is disguised. The result is a branching network of warrants, each presupposing prior warrants, presupposing prior warrants in turn, and so on, where each branch terminates in a proposition whose basis strictly implies the proposition it warrants on its own.

Note that each proposition warranted within the network, except for “it’s a zebra” itself, is the denial of a skeptical hypothesis. Those hypotheses just get more detailed as we travel down the branch. For “it’s not a disguised mule” to be warranted, for example, S needs a prior warrant against the conspiracy scenario in which it is a disguised mule and the PR campaign remains. That warrant requires a prior warrant against the scenario in which it is a disguised mule, the PR campaign remains as a result of conspiracy, and S’s basis against conspiracy (whatever it is) remains; and so on. The terminating warrant of each branch implies the denial of a variation of the skeptical hypothesis one level up, which is itself a variation of the skeptical hypothesis one level above that, and so on up to “it’s not a disguised mule.”

No skeptical hypothesis, no matter how detailed, can have a fallible warrant against it without an infallible basis against every more detailed version of it somewhere down the line. So S must have an infallible basis against every skeptical hypothesis if she has warrants against any of them at all. But that requires that those bases together strictly imply the denials of those hypotheses. They, therefore, strictly imply P in conjunction with the original basis B. For they can only fail to do so if they are consistent with some skeptical B & ~P scenario. But that will leave the denial of some skeptical scenario unwarranted, which front-loading doesn’t permit (since only the disjunction of P and that hypothesis would be warranted, not P itself).
It is, however, utterly implausible that we have, or could have, a collection of bases – a body of evidence, if we can so describe it – that strictly implies everything we know. This is all the more obvious when considering the most likely bases against skeptical scenarios. If, for example, S has a warrant against the disguised mule scenario (or variations thereof ), it will presumably consist in background information that suggests that zoos – at least, reputable ones – are unlikely to resort to such deception. There is no question that such information, whatever it might be, is compatible with deception, however much it might count against it. 

Note that appeal to default warrants won’t help. Default warrants are fallible: it is compatible with my having such a warrant that the skeptical hypothesis it warrants is nevertheless true. But then the front-loading strategy requires a warrant against that scenario as much as against any other.





The Cartesian aspiration that, if only we dig down far enough, we will uncover a vein of information that will provide a skeptical-hypothesis-proof certification of all our knowledge was renounced long ago as quixotic. To insist that nothing is warranted until that elusive vein is found is to hold our knowledge hostage to an unattainable ideal. The front-loading strategy implies skepticism. Call this the buck-passing argument.

. And Not Just Front-Loading

Perhaps there are other ways to defend WP besides appeal to the front-loading strategy. If so, the closure advocate might still hope that he can avoid the buck-passing argument. Unfortunately, any view that preserves WP will be susceptible to that argument. Or, at least, it will on the assumption guiding this chapter, that transmission fails in Dretske cases.

If S’s warrant for P grounded in B is basis-fallible, then there is a (B & ~P) possible world. But ~(B & ~P) follows from P; it is a Q proposition in a Dretske case with P as premise. So, on that assumption, transmission from that P to that Q fails. WP then kicks in, requiring that S have another warrant for Q. The same will then apply to that warrant: if it is fallible, then there is a world in which the basis for its warrant B remains and yet Q is false, the denial of which follows from Q. That inference – from Q to ~(B & ~Q) – is also a Dretske case. So transmission fails there too. WP kicks in again, requiring that S have a prior warrant for ~(B & ~Q). And so on, until we again bottom out in a collection of bases that together imply the denial of every skeptical alternative to P, and so strictly imply P.

Front-loading is no longer being assumed, so S need not necessarily have those warrants in order to acquire her warrant for P from B. But WP, nevertheless, requires that S have them anyway. And, since transmission fails, she can’t get them by inference. That’s all that’s needed for the buck-passing argument to apply. The upshot is that WP itself, whether grounded

Vogel  suggests that, in at least some cases, S’s evidence E for H can itself intuitively suffice as evidence against the skeptical hypothesis (E & ~H). I don’t think that his examples succeed, but there is not the space to consider the issue here. Even if they do succeed, so that there are some cases in which E provides sufficient evidence for ~(E & ~H), this is a long way from showing that E always so suffices. And in the relevant cases here, it intuitively doesn’t. It’s hardly intuitive that the animals’ looking like zebras constitutes evidence – to any extent – for the claim that they aren’t disguised mules, or that that the PR campaign suggesting that this zoo doesn’t disguise its animals is evidence that that very PR campaign is not a misleading attempt to increase ticket sales, and so on.





in front-loading or some other account, implies skepticism. There can be no non-skeptical reconciliation of closure with transmission failure in Dretske cases.

. A Safe Way Out?

Recall from §. that S’s belief that it’s a zebra can only be safe if her belief that it’s not a disguised mule (if she has one) is far-safe, where a belief is far-safe if it is safe merely because the nearest world in which it is false is distant. It is, therefore, safe whatever S’s basis for it may be, and even if she has no basis at all. The same applies to every other skeptical hypothesis: their denials must be far-safe, and so safe no matter what S’s basis is. So, far from requiring more than can reasonably be expected of S in order for her knowledge that it’s a zebra to be insulated from skepticism, the safety account requires nothing of her at all. The same goes for the other Dretske cases. So the safety account appears to escape the buck-passing argument.

This is, however, ultimately more of a liability than a benefit for the safety account. There are three problems. The first concerns the motivation behind the view (§..). The second highlights some unattractive consequences of treating the Q propositions of Dretske cases as far-safe (§..). And the third concerns the plausibility that Q propositions are in fact inevitably far-safe (§..).

.. Safety and Closure

It is often touted as an advantage of the safety account over its predecessor, sensitivity, that it preserves closure. But we saw in §. that safety doesn’t transmit in Dretske cases. And anyone who concedes that transmission fails in Dretske cases confronts the question why we should nevertheless endorse closure, given that the standard motivation – Williamson’s insight – no longer applies.

The front-loader has an answer. The standard motivation doesn’t apply – transmission fails – only because S’s warrant for “it’s a zebra” requires a prior warrant for “it’s not a disguised mule”; otherwise, the basis to which S appeals only warrants the disjunction “zebra or disguised mule.” Whatever one might think of that response, it would at least explain why closure is preserved in Dretske cases.

But the safety theorist is not a front-loader. All that’s required in order for S’s belief that it’s a zebra to be safe is that there are no nearby worlds in





which it is false and yet she believes it. That does require that there are no nearby disguised-mule worlds, since the nearest such worlds are also worlds in which it isn’t a zebra and yet she still believes that it is. But it doesn’t require that S have a warrant for “it’s not a disguised mule.”

Consider a safety/sensitivity hybrid view: for propositions that are false in nearby worlds, warrant requires that they are not believed in any such world. For propositions that are only false in distant worlds, however, warrant requires that they are not believed in the nearest such world. The nearest world in which it’s not a zebra (because it’s a flamingo, hippo, etc.) is nearby. So S’s warrant for “it’s a zebra” requires, as before, that the nearest disguised-mule world is distant, and for precisely the same reason: her belief that it’s a zebra must be safe to be warranted, and it can only be so if there is no nearby disguised-mule world. On the original safety view, S’s belief that it’s not a disguised mule, if she has one, can be warranted, since it will be safe (because far-safe). Closure is then preserved in this case. On the hybrid view, S’s belief in that proposition will not be warranted, since it is insensitive. Closure then fails.

But the advocate of the former view can’t claim that her view is preferable because the safety of S’s belief that it’s a zebra requires that her belief that it’s not a disguised mule be warranted. It only requires this on the assumption that her view is correct. What the safety of her belief that it’s a zebra does require is that the nearest disguised-mule world is distant. But that’s true on both views. So the safety theorist can’t appeal either to Williamson’s insight or to front-loading by way of explaining why closure should be preserved. So it is hard to see why closure preservation would count as an advantage of the safety account when the typical reasons for endorsing closure – transmission or front-loading – don’t apply.
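The contrast between the two views can be made concrete with a toy possible-worlds model. The distances and the "nearby" cut-off below are stipulated purely for illustration; nothing in the text assigns numbers.

```python
# Toy possible-worlds model of safety, far-safety, and sensitivity
# (stipulated distances; an illustration, not the author's formalism).

NEARBY = 10  # stipulated cut-off for a "nearby" world

# Each world: its distance from actuality, what is true there, and what S
# believes there. "not_mule" is the Q proposition "it's not a disguised mule".
worlds = [
    {"dist": 0,  "zebra": True,  "not_mule": True,  "bel_zebra": True,  "bel_not_mule": True},
    {"dist": 3,  "zebra": False, "not_mule": True,  "bel_zebra": False, "bel_not_mule": True},  # a flamingo instead
    {"dist": 50, "zebra": False, "not_mule": False, "bel_zebra": True,  "bel_not_mule": True},  # disguised mule
]

def safe(truth, belief):
    """Safe: no nearby world where the proposition is false yet S believes it."""
    return not any(w["dist"] <= NEARBY and not w[truth] and w[belief] for w in worlds)

def far_safe(truth):
    """Far-safe: the nearest world where the proposition is false is distant."""
    return min(w["dist"] for w in worlds if not w[truth]) > NEARBY

def sensitive(truth, belief):
    """Sensitive: S doesn't believe the proposition in the nearest world where it's false."""
    nearest = min((w for w in worlds if not w[truth]), key=lambda w: w["dist"])
    return not nearest[belief]

# "It's a zebra" is safe: the only false-but-believed world is the distant one.
assert safe("zebra", "bel_zebra")

# "It's not a disguised mule" is far-safe, hence safe on the pure safety view,
# whatever S's basis for believing it...
assert far_safe("not_mule") and safe("not_mule", "bel_not_mule")

# ...but insensitive, so unwarranted on the hybrid view: closure fails there.
assert not sensitive("not_mule", "bel_not_mule")
```

On these stipulations both views agree that S's knowing "it's a zebra" requires the nearest disguised-mule world to be distant; they part ways only over whether her belief in the Q proposition is thereby warranted.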
.. Knowing Q for Free

Safety theorists emphasize the far-safety of the denials of wholesale skeptical hypotheses. Since the nearest world in which S is a BIV is distant, the fact that she believes that she is not a BIV in such worlds does nothing to



This is because S’s actual belief that it is a zebra is a result of its looking like one. So disguised-mule worlds – wherein it still looks like a zebra – in which S responds to its looking like one are nearer than those in which she doesn’t. Heller  advocates a contextualist version of such a view; see §...





undermine the safety of that belief. While it might be odd to suggest that S can be warranted in believing that she is not a BIV without any evidence in support of that belief, it is also odd to think that she could have any such evidence. Wholesale skeptical hypotheses are precisely designed so that any putative evidence to which she might appeal will remain if the hypothesis is true. The safety theorist might, then, turn our apparent lack of evidence for the falsehood of the BIV scenario to her advantage. She can explain why we know that it is false, despite that lack: our warrant for ~BIV doesn’t require evidence.

It is quite another matter, however, to propose the same explanation for the denials of piecemeal skeptical hypotheses, such as “it’s a disguised mule.” The safety theorist must claim that its denial is also far-safe. If so, then, assuming that safety suffices for warrant, S will know that it is not a disguised mule merely by believing it; she need have no basis for that belief whatsoever. Similarly, S can know that the gas gauge isn’t broken, that the newspaper report isn’t a misprint, that the president didn’t have a heart attack, etc. without any basis for these claims. If she believes these purely as a result of wishful thinking and in the face of overwhelming (but misleading) evidence to the contrary, that will do.

This is implausible, to put it mildly. On the safety account, far-safe beliefs require no empirical evidence (because they require no evidence whatsoever). But it is obviously a contingent matter whether it’s a disguised mule or the gas gauge is broken or the report is a misprint, and so on. So S would know contingent matters of fact without any empirical evidence for them. This is bad enough when it comes to knowledge of wholesale skeptical hypotheses.
But at least then one might argue that the proposition known is a Wittgensteinian “hinge” upon which empirical inquiry turns, and so deserving of its privileged status despite the contingency of the proposition known. But it is implausible that “it’s not a disguised mule,” “the gas gauge isn’t broken,” “the report isn’t a misprint” and so on count as hinge propositions. After all, it takes little effort to imagine the sort of empirical evidence that would be relevant to their  

Or, at least, it will seem to the agent as though that evidence remains. This also applies to Sosa’s well-known “garbage chute” example, wherein S drops garbage down a chute and believes that it is in the basement as a result (Sosa a). Suppose there is a nearby world in which it isn’t in the basement (it catches on a nail). S will still believe that it is in the basement in the nearest such world; her belief is unsafe. So it can only be safe if there is no such world nearby: her belief is far-safe. But then it doesn’t matter why S believes it. Even if she (falsely and irrationally) thinks the chute is a teleporter and so that the garbage was beamed into the basement, the belief that it is in the basement is still safe. See Alspector-Kelly , §.





evaluation: a DNA test, a mechanic’s examination of the gauge, asking a spectator at the game, and so on.

The safety theorist can’t just concede that these beliefs are unsafe, since that implies that S isn’t warranted in believing the corresponding P propositions, and so doesn’t know them. Generalizing that result delivers skepticism. But safety theorists aren’t skeptics. The safety theorist might respond instead by suggesting that safety is only a necessary but not sufficient condition of warrant; S needs evidence as well. But there are three problems with this.

First, nothing in the account so amended provides any assurance that S will have the requisite evidence for Q, and so that WP is true; an independent argument would be needed to demonstrate that S can’t have a safe, evidentially supported belief in P without also having evidence for Q.

Second, the resulting commitment is also susceptible to the buck-passing argument. Assuming that S’s evidence E for Q doesn’t strictly imply Q, this view requires that S also has evidence for ~(E & ~Q), since (E & ~Q) is a skeptical hypothesis with respect to Q. And assuming that the evidence E for ~(E & ~Q) is also fallible, S needs evidence for ~(E & (E & ~Q)); and so on, terminating in a body of evidence that implies Q. If a view requires a warrant against skeptical hypotheses in order to acquire knowledge of ordinary propositions, and includes evidence possession as a condition of warrant, that view will require infallible evidence against those hypotheses. We obviously have nothing of the sort.

Third, evidence would also be required for knowledge of the denial of wholesale as well as piecemeal skeptical hypotheses. But it was precisely the fact that it is so difficult to point to any such evidence against wholesale hypotheses that the safety theorist intended to exploit.
That advantage dissolves if an evidence requirement is imposed on warrant; only if safety suffices for warrant can our knowing the falsehood of wholesale skeptical hypotheses be reconciled with our lack of evidence for them.







On at least some interpretations of “hinge proposition,” such propositions are presupposed by any empirical inquiry; they are the hinges upon which such inquiry turns. They are, therefore, not themselves subject to empirical investigation. Pritchard (among others) suggests that background information suffices to provide that evidence (Pritchard , ). See Chapter  for consideration of the view that background information suffices for warrant. Theoretically the safety theorist could impose an evidential requirement only for the denials of piecemeal hypotheses while lifting that requirement for the denials of wholesale hypotheses. But this seems ad hoc: the agent is conveniently released from the evidential requirement precisely when she is unable to meet it.




Against Knowledge Closure ..

Are Piecemeal Scenarios Always Distant?

There is, moreover, little reason to think that the nearest worlds in which the Q propositions of Dretske cases are false are invariably distant. Many Q propositions are Vogel propositions: there are analogous actual cases in which Q is false that are, from the agent’s standpoint, indistinguishable from the present case in relevant ways. Cars have been stolen, gas gauges have broken, restaurants have burned down, newspapers have produced misprints, and so on. While rare, these events are not the least bit abnormal. That seems to provide reason to believe that the nearest worlds in which a particular such Q proposition is false are not very distant.

True, there are more and less secure parking lots, restaurants that pay closer and less attention to fire code, and so on. And to the extent that S’s circumstances fall on the worse end of these scales, the intuition that S knows P recedes. But we nevertheless credit people with knowing such P propositions in situations that are not so far toward the secure end of the scale that there are no similar cases in which tragedy has struck. Cars have been stolen from very secure parking lots, restaurants that scrupulously follow fire regulations have succumbed to the flames, and so on.

The safety theorist can always insist that further detail will reveal a disanalogy that makes a significant difference to the relevant world’s distance from the actual. There must be some difference, after all, since disaster did strike in the analogous case and did not strike in the target case wherein S knows P. But it can’t be that any difference between a target case in which S’s belief that P is true and an analogous scenario in which the corresponding belief is false ensures that the nearest world in which S’s belief is false is distant. All true propositions would then be far-safe, since there’s no nearby world in which they’re false. But that’s ridiculous.
So the safety theorist must claim that every Dretske case – or, at least, enough of them to avoid skepticism – is special: the nearest worlds in which their Q propositions are false are distant, even though there are cases in which seemingly analogous Q propositions are actually false (and so are false in the nearest of all possible worlds). That’s implausible on the face of it; aside from an independent commitment to safety theory itself, there’s no reason to believe it.



The same might be said of the wholesale skeptical hypothesis that I am dreaming. We do dream, after all. See Sosa .





And, indeed, safety theorists themselves provide reason to disbelieve it. “S can’t afford an around-the-world cruise” implies “it’s not the case that S has enough money for such a vacation because S won the lottery.” Not only is it very common to grant that ticket holders don’t know that they’ve lost the lottery before the draw, but safety theorists themselves cite this as evidence for their view on the ground that it explains why they don’t know that they lost. The nearest world in which S wins, the story goes, is very close indeed; only a little repositioning of the balls with S’s numbers on them would be needed. And that explains our intuition that S doesn’t know that she’s lost the lottery before the draw. But if the nearest world in which S wins the lottery is nearby, then so is the nearest world in which she can afford a cruise. So S doesn’t know that she can’t afford such a vacation either.

The safety theorist might suggest that S doesn’t know that she can’t afford a cruise after all. But she can’t make the same move in all the other Dretske cases whose Q propositions are Vogel propositions; the ubiquity of such cases means that doing so would deliver skepticism. She could try to distinguish the lottery case from the others. But considerations supporting the claim that the world in which S wins the lottery is near – some ticket is likely to win, and there isn’t much difference between the situation of the one that wins and those that lose – seem to apply equally well to these other cases.

So the safety theorist is in an untenable position. Her view does preserve closure, at least for Dretske cases; and it does so in a way that appears to avoid the buck-passing argument confronting other views that preserve WP. However, it undercuts the motivation behind closure. It also implies the highly implausible claim that the denials of piecemeal skeptical hypotheses are knowable without any evidence whatsoever.
If it avoids this by imposing an evidence requirement on our knowledge that skeptical hypotheses are false, then it is susceptible to the buck-passing argument once again. And it implies that the nearest worlds in which Q propositions are false are distant, which is unintuitive when the Q proposition is a Vogel proposition, and undermines the safety theorist’s own explanation of the intuition that S does not know that she lost the lottery. Safety theory is not a safe way out of the buck-passing argument against WP.

 

See Pritchard , for example. There are, however, other cases wherein closure fails on the safety account. See Chapter , fn. .




Against Knowledge Closure

. Explaining Transmission Failure

I’ll put the buck-passing argument aside for now and consider the front-loader’s explanation of transmission failure (that is, the transition from premise (5) to (6) of the FLA). We’ll see that this explanation is, at best, incomplete, and that the explanation for transmission failure based on NIFN fares much better. However, the latter explanation doesn’t imply that closure is preserved when transmission fails.

.. Varieties of Circularity

Suppose the front-loading strategy succeeds: S both needs and has a prior warrant for Q (and for the denial of every other piecemeal and wholesale skeptical hypothesis) in order to acquire her warrant for P. According to the front-loading explanation, this implies without further ado that transmission fails. But why think that it does?

Admittedly, S can’t acquire her first warrant for Q by inference from P this way, since she needed a warrant for Q to start with. But one can have any number of warrants for a proposition. So S’s having a prior warrant for Q does not stand in the way of her acquiring a second warrant by inference. So it does not obviously prevent her acquiring another warrant for Q by inference from P, even though she needed an initial warrant for Q in order to acquire her warrant for P in the first place.

If the argument is that S can’t acquire a warrant for Q by inference from P because she must already be warranted in believing Q in order to acquire her warrant for P, that argument only succeeds on its face if the only warrant that could be available to her for Q, if any, is via inference from P. For she can only obtain such a warrant if she is initially warranted in believing P. But her warrant for P requires that she already have a warrant for Q. And, by hypothesis, she can have no pre-inferential warrant for Q. So she is not warranted in believing P, and so can’t obtain a second warrant for Q.

And, in fact, she could even acquire knowledge of Q this way. She couldn’t do this if she had to know Q to start with. But the front-loading explanation doesn’t require that she initially knows Q; it only requires that she has a warrant for it. So she need not initially believe Q on the basis of that warrant, which is (presumably) required for knowledge. So she could start out with a warrant for, but not a warranted belief in, Q, acquire a warranted belief in P, recognize that P implies Q, believe Q as a result, and thereby acquire her second warrant for Q but her first warranted belief in Q. Assuming that Q is true, she comes to know Q for the first time.





So long, however, as S acquires her initial warrant for Q by independent means that are available prior to the inference, her line of reasoning to Q does not trace a circular path. The argument remains circular in one respect: she must (the front-loader claims) be initially warranted in believing Q in order to acquire another warrant for Q by inference from P. But it is not obvious that the resulting argument is viciously circular. This is not premise circularity; Q is not (or at least, not inevitably) a premise in S’s argument for Q. The premise for that argument is P; and P on its own implies Q. Call this warrant circularity.

It is true that, in a dialectical context in which it is an open question whether S has a warrant for Q, a warrant-circular argument for Q could not be deployed to close it. For acknowledging that she is warranted in believing P requires conceding already that she has a prior warrant for Q. But it is unquestionably coherent to claim that, although S does have a warrant for Q prior to inference from P, she also acquires another warrant for Q by inference from P. S could already be warranted in believing that  is the road to South Haven by testimony from a gas station attendant, and then infer it from a map. It is unclear why, in Dretske cases, the fact (if it is one) that she needed an initial warrant for Q in order to acquire a second one by inference from P would render such a claim incoherent.

This illustrates a general distinction between structural circularity, wherein prior warrant for the proposition expressed by the conclusion is somehow a condition of warrant transmission, and path circularity, wherein warrant for the conclusion is a condition of transmission and yet the agent appeals to the argument itself as providing her with the requisite warrant. Path circularity is unquestionably vicious: the agent can’t acquire prior warrant for the conclusion by the argument itself, since that warrant would come too late. But structural circularity does not imply this: so long as the agent acquires a warrant for the proposition expressed by the conclusion prior to the inference, nothing in the nature of structural circularity itself prevents her acquiring a second warrant for the conclusion by the inference. The argument would be structurally circular; but her reasoning would not follow a circular route. Warrant circularity is a species of structural circularity. But it is not inevitably a species of path circularity.

Recall from fn.  that Q need not be an implicit premise in the argument. Nicolas Silins makes a similar point in Silins , §..

Neither is premise circularity. Take, for example, the standard example “God wrote the Bible, God never lies, and the Bible says that God exists; therefore, God exists.” The conclusion is (arguably) an implicit premise. If that argument is the agent’s only possible source of warrant for that implicit premise, the argument is path-circular, and so vicious. But suppose the agent initially believes that God exists, wrote the Bible, and never lies by divine revelation which, let’s suppose, delivers warrant for these propositions. Believing as a result that every sentence in the Bible is true, she notes that it includes the sentence “God exists,” which, she then infers, is true. Her argument is no longer path-circular. And it is correspondingly less obvious, to me at least, that the argument would be viciously circular. It cannot of course deliver her first warrant for God’s existence. But it could – compatibly with the premise-circular structure, at least – deliver a second one. And it would be odd to take her to acquire warrants for every other sentence in the Bible and yet not for the one sentence “God exists,” simply because it would be her second warrant for that sentence.

Wright himself presents such a case, namely, his “sheep-counting” case, in Wright , . See Wright  and  (and Chapter ). See Vogel b and BonJour , for example.

.. Possible Examples of Warrant-Circular Arguments that Transmit

There are, moreover, plausible cases of warrant-circular reasoning wherein warrant transmits. Suppose, for example, that we need a prior warrant for the claim that the senses are reliable in order to acquire a warrant for our empirically grounded beliefs (as conservatives like Wright suggest). And suppose we have it. Perhaps it is delivered in virtue of a default entitlement as Wright claims. Or perhaps we can mount an inference to the best explanation: the “external world” hypothesis – which incorporates the proposition that sensory experience is reliable – is the best explanation of certain stability and coherence characteristics of our sensory experience.

With that warrant in hand, we acquire empirical information about the world around us. Some of that information concerns the physics, optics, neurophysiology, and so on, involved in the acquisition of that very information. Suppose we assemble that information and discover that it supports the conclusion that our senses are reliable, as would have to be the case in order for us to have acquired that information. We now have an empirically grounded argument for the conclusion that our senses are reliable, the very proposition for which we needed an initial warrant in order to acquire that information in the first place. And yet it would be odd, at least, to point to this fact and conclude that we have not accumulated empirical information bearing on the reliability of our senses after all.

No such circularity attends our investigation of the human endocrine system, or of the environmental conditions on Mars, although we rely on our senses for the acquisition of that information. So presumably that information is honestly acquired. But it would be bizarre to attempt to somehow cordon off any and all investigation of the human biological systems involved in the delivery of empirical information as illegitimate – because it is by means of the operation of those systems that





we conduct that investigation – while allowing investigation concerning any other aspect of the natural world to proceed. Given the interconnectedness between the former and the latter, such selective inquiry is surely not even possible.

Here’s another example. Bob invents a silver detector consisting of a box connected to a wand; waving the wand over a nearby object containing silver causes the wand to beep. There is a crucial electrical connection within the box that requires a wire with high conductivity; only silver will do the trick. If a less conductive metal is used, then it will beep randomly. Bob determines, by some independent, utterly reliable method, that a particular piece of wire is silver and puts it in the detector. Thanks to his meticulous assembly of the device and his knowledge that the wire is silver, he now knows that it is an infallible detector of silver. While demonstrating it to possible investors, he waves the wand over the box itself. It beeps. “Why did it beep?” the investors ask. “Well, the wand only beeps if there’s silver present,” he responds with a smile, “so there’s silver in the box.”

Bob can’t acquire a warrant for “there is silver in the box” for the first time during the demonstration. He had to know – and so be warranted in believing – that it was true to start with – that he used a silver wire in its construction – in order to know that the detector isn’t beeping randomly. So his having a prior warrant for “there is silver in the box” is a condition of his warrant for his belief that it only beeps if there’s silver present, from which he inferred that there is silver in the box when it beeped. That inference is warrant-circular. It is, nevertheless, plausible that he acquired a warrant for the proposition that there is silver in the box when he waved the wand over it.
It would seem arbitrary, after all, to concede that, when he waves the wand over other objects and it beeps, it informs him that silver is present and yet doesn’t so inform him when he happens to wave it over the box itself, just because he knew that it contains silver already. He had to know that there’s silver in the box in order to acquire that information when it beeps over other objects, after all. Why should the fact that he also needed to know this in order to acquire that information when waving it over the box itself make any difference?



An advantage of this example over the previous one is that it is disputable whether we do need a prior warrant for the claim that our senses are reliable. (The dogmatist claims that we don’t.) But in the example to follow this isn’t disputable.




.. Warrant Circularity and Violation of NIFN

A notable feature of these two examples is that, unlike Dretske cases, the conclusion’s being false doesn’t imply that the agent will end up with the basis she has for believing the premise. That is, they don’t violate NIFN. Perhaps the nearest possible worlds in which our senses are unreliable are BIV-style worlds wherein our experiences (and, so, apparent scientific results) will be just as they are, although I don’t see any reason to think so. But there are presumably worlds in which our senses are unreliable and our (chaotic) experiences reflect this fact. And there are worlds in which there isn’t silver in the box and the wand doesn’t beep when waved over it (since it only beeps randomly when any metal other than silver is used).

The advocate of the front-loading strategy might enthusiastically agree, and claim that the fact that NIFN fails in Dretske cases is crucial to her explanation of transmission failure. But it isn’t. The front-loading argument, recall, runs as follows:

(1) ~Q implies B; therefore,
(2) B is to be expected in both P and ~Q scenarios; therefore,
(3) B on its own only warrants P v ~Q; therefore,
(4) S must rule out ~Q before she can acquire a warrant for P on the basis of B; therefore,
(5) prior warrant for Q is a condition of S’s warrant for P on B; therefore,
(6) transmission from P to Q, when warrant for P is acquired from B, fails.

The fact that ~Q implies B is, so to speak, the engine driving the argument from (1) to (5): it’s because B does not discriminate between P and ~Q that S must have a prior warrant for Q. But, in the step from (5) to (6) – which provides the explanation for transmission failure – the fact that ~Q implies B has done its job. So long as (5) is true – a prior warrant for Q is needed – transmission fails (according to this argument) whether or not ~Q implies B. Since Bob needs a prior warrant for “there’s silver in the box,” the argument from (5) to (6) implies that transmission fails there too, even though “there isn’t silver in the box” doesn’t imply that it will beep (it will then do so randomly). But it is, at least, far from obvious that transmission does fail in this case. Similar remarks apply to the reliability-of-the-senses example.

At a minimum, the front-loading strategist’s explanation of transmission failure is incomplete. So long as S does acquire a prior warrant for Q by independent means, the way seems to be clear for her to acquire a second warrant for Q by inference from P, even though her warrant for P required





that prior warrant for Q. It, therefore, remains a mystery why she cannot then acquire a second warrant for Q this way, particularly when there are cases – such as the silver detector case – in which she seems to be able to do precisely that. If transmission does nevertheless fail in such cases, the explanation for why this is so has yet to be provided.

.. Virtues of the NIFN Explanation

Since the front-loading explanation of transmission failure in Dretske cases generates skepticism and is, at best, incomplete, there is good reason to look for an alternative. And one is on offer, namely that based on NIFN. Since the falsehood of a Q proposition implies B, S’s method – evaluate P by appeal to B and determine whether Q by inference from P – will inevitably deliver Q when it is false. But no such method – one that is guaranteed to deliver a particular outcome whenever that outcome is false – can deliver a warrant for that outcome.

Unlike the front-loading explanation, the fact that ~Q implies B is integral to the NIFN explanation. Transmission, therefore, need not fail when the conclusion’s being false does not imply B. It, therefore, need not fail in the reliability-of-the-senses and silver detector examples; neither violates NIFN.

Skepticism does not, moreover, follow from NIFN, since NIFN does not impose the requirement that S end up with a warrant for Q when transmission fails. It is precisely that requirement, imposed by WP, that generates the buck-passing argument. And, for the same reason, the NIFN explanation provides no comfort to the closure advocate: it’s entirely compatible with it that S not end up with a warrant for Q and so that closure is false. Transmission fails, according to NIFN, because the method involved is constitutively blind to the falsehood of Q. That will be so whether or not S has a transmission-independent warrant for Q. So, for all that explanation requires, S might well be warranted in believing P without having a warrant for Q.

And, indeed, it intuitively makes no difference whether the agent does need a prior warrant for Q; transmission intuitively fails regardless, as NIFN predicts. It is far from obvious that S must have a warrant for “it’s

To be clear, I don’t claim here that transmission does succeed in these cases. As per §., I offer violation of NIFN as only a sufficient condition for transmission failure (and, in some cases, for closure failure as well). There may well be other sources of transmission failure. But the NIFN explanation doesn’t imply that transmission does fail in these cases, which, given that intuition is at least indecisive with respect to whether transmission does fail, is a good thing.





not a disguised mule” in order to learn that it’s a zebra on the basis of its appearance, for example, at least if she’s visiting a reputable zoo. And yet the intuition that transmission fails is very strong.

Suppose, however, the zoo is a tourist-trap petting zoo of the sort that might well be willing to disguise a mule to please its child visitors. S may well then need a warrant for “it’s not a disguised mule” before she can acquire a warrant for its being a zebra on the basis of its appearance. And it remains highly unintuitive that she can acquire a warrant for this proposition by inference from “it’s a zebra.” But it was also highly unintuitive – equally highly unintuitive – that she could do so when visiting the reputable zoo.

While the force of the intuition that she needs a warrant for “it’s not a disguised mule” before she can learn that it is a zebra varies between these two cases, the intuition that transmission fails does not. This is to be expected on the NIFN explanation: according to it, transmission fails whether or not S needs a prior warrant for “it’s not a disguised mule” in order to acquire her warrant for “it’s a zebra.” But this is not what we would expect if the explanation offered by the front-loading strategist is correct. The intuition that transmission fails should vary with the intuition that prior warrant for “it’s not a disguised mule” is needed, since it is precisely because it is needed, according to that explanation, that transmission does fail.

Finally, NIFN is complete in a way that, we saw, the front-loading explanation is not. If NIFN, a highly intuitive principle, is true, then transmission must fail in Dretske cases. But, even if front-loading is true – S needs a prior warrant for Q – that does not, on its own, imply that transmission fails.

So there are good reasons to favor the NIFN explanation over the front-loading explanation. But NIFN doesn’t imply WP. Indeed, it undermines the intuition behind the advocacy of closure.
That intuition – Williamson’s insight – concerns the capacity of deductive inference to extend knowledge. But it doesn’t apply when transmission intuitively fails. And NIFN – which is, I think, the source of our intuition that transmission does fail in Dretske cases – doesn’t imply that closure is preserved when transmission fails. The claim that closure enjoys unqualified and unassailable intuitive support as an axiomatic epistemic principle is dramatically overstated. 

Recall the “Zoo-Testing-R-Us” example from §.: the sense that the inspector’s methodology is ludicrous doesn’t vary depending on whether the zoo he is “investigating” is a reputable zoo or a tourist-trap petting zoo.





. The Closure Advocate’s Dilemma

Taking these results together with those of the last two chapters, we see that the closure advocate confronts a dilemma. She could claim that transmission always succeeds in Dretske cases. But that claim is highly unintuitive, requires endorsing violations of NIFN, doesn’t defuse skeptical challenges, and is unmotivated from the standpoint of three prominent conditions on warrant. Or, she can concede that transmission fails in Dretske cases. But doing so while insisting that closure is preserved requires endorsing WP; and WP generates skepticism. Front-loading, moreover – the most prominent, and perhaps the only, systematic attempt to preserve WP while conceding that transmission sometimes fails – doesn’t even suffice as an explanation of transmission failure. And the explanation that does suffice – based on NIFN – provides no support for closure whatsoever. The costs of closure preservation are prohibitive.

Notwithstanding the skeptical threat posed by the buck-passing argument, we will examine putative sources of warrant for the Q propositions of Dretske cases in Chapters –, starting with the suggestion that S’s warrant for P suffices as a direct warrant for Q.


 

Denying Premise : Warrant for P as Warrant for Q

. Setting Aside Buck-Passing

We saw in Chapter  that the closure advocate who concedes that transmission fails in Dretske cases is committed to WP, the claim that an alternative warrant for the conclusion is inevitably available to S when transmission does fail. But that commitment generates skepticism, as indicated by the buck-passing argument. Closure advocates who concede that transmission fails in those cases typically don’t offer systematic accounts of transmission failure ensuring WP at all; Wright and other front-loaders are exceptions. Instead, one finds, at best, proposed alternative sources of warrant for the Q propositions of particular Dretske cases, without awareness of the skeptical threat looming over any general proposal that would suffice for WP. So, even without evaluating those proposals, the damage has been done.

Nevertheless, we will survey putative alternative sources of warrant in this and the next two chapters. There are three possible such sources. The first is that S’s warrant for P also directly warrants Q, that is, independently of S’s inference from P to Q. In this (short) chapter we’ll consider that proposal. The second is that S’s background information suffices on its own to warrant Q. That is the subject of Chapter . The third is that S has a default warrant for Q. We’ll consider that proposal in Chapter .

. Putting Inference Out of a Job

On the first proposal, closure is preserved in Dretske cases because S’s warrant for P, acquired on basis B, also suffices as a direct warrant for Q. An advocate would want to claim that it does so only when transmission fails. For if it does so whenever P implies Q, then S need never recognize that Q does follow from P; she will have a warrant for Q regardless. But this implies the highly implausible closure principle that

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 13:40:06, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.007




S invariably has a warrant for every proposition following from a proposition she is warranted in believing, even if she has absolutely no clue that the one follows from the other. This closure principle is less obviously false than the corresponding knowledge closure principle that S, knowing P, also knows any Q following from it. That requires that S believe Q and base that belief on the warrant she has for Q, in addition to merely having it. But the former principle remains untenable. Q could be a mathematical theorem that is extremely difficult to prove and well beyond S’s mathematical abilities, for example, despite following from far more mundane mathematical principles that she does know. Perhaps she could acquire a warrant for Q if she worked long and hard enough; but it would be bizarre to suggest that she has it in her possession already.

Suppose that Goldbach’s conjecture is true and provable. No proof for it has yet been found. Do we, nevertheless, have a warrant for it now in our possession? Did we have a warrant for Gödel’s incompleteness theorem before Gödel proved it? Surely the discovery of such a proof just is the identification of a reason to believe the proposition so that, if we believe it in response to that reason, that belief will be warranted. It’s not a reason that we had all along and perversely ignored.

We also noted in §. that such a view would be an odd one for the closure advocate to adopt, even one who concedes that transmission sometimes fails. It is one thing to endorse closure out of respect for the ability of inference to expand knowledge while conceding that there is the occasional exception. It is quite another, and antithetical to the very idea of a closure principle, to endorse a view that makes inference completely dispensable as a means of warrant acquisition.
So somehow the advocate of this proposal needs to claim that, although S’s warrant for P suffices as a direct warrant for Q when transmission fails, it does not do so when transmission succeeds. But it is hard to imagine how such a proposal might go. If there is any reason to think that the former warrant suffices for the latter, it is surely because P implies Q, whether or not she can learn Q by inference from P. As we saw in §., for example, some have argued that B must suffice as a basis for a warrant for Q in light of the fact that, if P implies Q, the 

Note that one could have the warrant in hand – one could be aware of the proof – and still not believe the proposition. A philosopher of mathematics who just can’t accept the remarkable implications of Gödel’s incompleteness theorem might refuse to accept it, despite having a perfectly good reason to think that it’s true. He would then have a warrant for that theorem, but not be warranted in believing it (because he doesn’t believe it at all).





probability of P given B cannot exceed that of Q given B. But this applies merely in virtue of the fact that P implies Q. It, therefore, applies when transmission succeeds as much as when it fails, and so implies the untenable closure principle as per above.
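The monotonicity fact invoked here is a standard consequence of the probability calculus; a minimal derivation (not spelled out in the text) might run:

```latex
% If P implies Q, then P is logically equivalent to P \wedge Q,
% so for any basis B with p(B) > 0:
\[
p(P \mid B) \;=\; p(P \wedge Q \mid B) \;\le\; p(Q \mid B),
\]
% since a conjunction is never more probable than either of its conjuncts.
```

This is why, as the text notes, the constraint holds merely in virtue of the implication from P to Q, regardless of whether transmission succeeds or fails.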

. The Irrelevance of B to Q

The suggestion that S’s warrant for P, grounded in B, also suffices as a direct warrant for Q is, in any event, untenable. For one thing, it is highly unintuitive. As Dretske suggested, that the animal looks like a zebra seems “neutralized,” since it would look that way if it were a disguised mule. And there are good reasons to think that Dretske is right.

“It’s not a disguised mule” – unlike “it is a disguised mule” – supports no prediction whatsoever. A zebra is an example of something that isn’t a disguised mule. But it is far from the only one; other examples include the Eiffel Tower, Genghis Khan, and Estonia. In fact, they include just about everything – and, perhaps, everything – that there is; and a vanishingly small number of those look like zebras. So, if its looking like a zebra somehow supports its not being a disguised mule, this isn’t because the latter as an hypothesis predicts the former. Quite the contrary: if the latter is true, the former is overwhelmingly unlikely.

Moreover, as we saw in §., its looking like a zebra is arguably evidence for its being a disguised mule, at least to some extent. For it could have looked like a flamingo (or some other non-zebra) rather than a zebra. If it had, that would decisively undermine “it’s a mule disguised to look like a zebra.” But the animal looks instead precisely as “it’s a disguised mule” predicts. That it does look that way plausibly provides S with more reason to believe that it is a disguised mule than she had before looking in the cage, even if not enough to warrant believing it. But evidence that a claim is true is evidence against its being false. Its looking like a zebra therefore undermines, rather than supports, “it’s not a disguised mule.”

 

Dretske , . As per §., this also follows from a prominent view of incremental evidential support according to which E supports H if and only if the probability of H given E is greater than the prior probability of H. Assuming that H implies E and that  < p(E) < , the probability of ~H given E is lower than the prior probability of ~H. So the probability that it’s not a disguised mule given that it looks like a zebra is lower, not higher, than the prior probability that it’s not a disguised mule. It’s hard to see how its looking like a zebra can then directly support its not being a disguised mule. See Sharon and Spectre , who wield this point in a general attack on knowledge closure by appeal to failure of evidential closure.





So, if it isn’t a disguised mule, it very likely doesn’t look like a zebra; and if it does look like a zebra, that plausibly undermines the claim that it’s not a disguised mule. It’s hard to see how its looking like a zebra can nevertheless deliver a warrant for “it’s not a disguised mule.” One might reply that, while its looking like a zebra doesn’t on its own support the claim that it’s not a disguised mule, it does support the claim that it’s a zebra. Given the right background view of perceptual evidence – such as Pryor’s dogmatist account – that can suffice to deliver a warrant for its being a zebra. S can then infer to “it’s not a disguised mule” and acquire a warrant for it thereby. But the issue at hand is precisely whether its looking like a zebra can directly warrant its not being a disguised mule when inference from “it’s a zebra” doesn’t transmit warrant. So the relevance of its looking like a zebra to its being one is entirely beside the point. Appealing to the animal’s appearance for a warrant for “it’s not a disguised mule,” moreover, violates NIFN. Suppose that, intuition and the above arguments notwithstanding, S makes such an appeal. The animal will, however, still look like a zebra if it is a disguised mule. So her method – appealing to its appearance – is guaranteed to deliver the result that it isn’t a disguised mule when it is. So that method violates NIFN. Direct appeal to the animal’s appearance is, therefore, no more plausible a method of warrant acquisition for “it’s not a disguised mule” than is inferring it from “it’s a zebra” when the latter is based on the animal’s appearance. Insofar as NIFN provides a reason to think that transmission fails in Dretske cases, it also provides a reason to think that its looking like a zebra cannot provide a direct warrant for its not being a disguised mule. All of this generalizes to the other Dretske cases. 
The upshot is that the attempt to defend WP by claiming that B directly warrants Q in those cases fails. If it does directly warrant Q, then it presumably does so in every case in which S is warranted in believing P and P implies Q. But the result is an entirely implausible closure principle. It is, moreover, vastly improbable that B is true if Q is; so B certainly doesn’t support Q because Q predicts B. And since ~Q does predict B, B undermines rather than supports Q. Finally, appeal to B in support of Q violates NIFN, a highly plausible constraint on warrant acquisition. So S’s warrant for P, based on B, cannot deliver the warrant for Q that the closure advocate needs. In the next chapter, we’ll consider whether S’s background information can do so instead.


 

Denying Premise  Warrant by Background Information

.

Outline of the Chapter

We saw in Chapter  that S’s warrant for P, based on B, can’t constitute a warrant for Q in Dretske cases. The next proposal to consider is that S’s background information delivers the needed warrant. Combined with the front-loading account, this is, I think, the most intuitively plausible closure-preserving reaction to Dretske cases. It is conceded that S can’t acquire a warrant for “it’s not a disguised mule” either by appealing directly to its appearance or indirectly, by inference from “it’s a zebra” when the latter is based on its appearance. But she needs such a warrant, precisely because it will look like a zebra if it is a disguised mule; she needs to rule that possibility out (or at least have the wherewithal to do so). Fortunately, she is well aware of the disastrous consequences for the zoo if deception were uncovered, and so well aware that it is unlikely that the zoo would perpetrate such deception. That delivers the needed warrant, one that doesn’t rely at all on what the animal looks like. Since this is what I take to be the most plausible response to Dretske cases, responding to it will take some work. In the next section we’ll consider whether the view can be applied to wholesale skeptical hypotheses. It can’t, it turns out: if S does have background information counting against the skeptical hypothesis, that information also includes the very pedestrian claims that we are trying to insulate from skeptical attack. Appeal to background information is, as a result, superfluous. The rest of the chapter concerns the suggestion that background information can warrant the denials of piecemeal skeptical hypotheses. I’ll initially



Without the front-loading requirement – according to which S’s warrant for P requires an initial warrant for Q – I see no reason to think that S’s background information must inevitably deliver a warrant for Q. Some of the work has already been done; since this is a front-loading account, it is susceptible to the buck-passing argument. But we’re now putting that aside.



Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 08:39:51, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.008




focus the discussion on the lottery proposition “S will lose the lottery,” which is the denial of the piecemeal skeptical hypothesis that S will win the lottery in the Cruise case of §.. I’ll review a number of proffered explanations for the widely recognized intuition that S doesn’t know the lottery proposition – concentrating in particular on Hawthorne’s “parity reasoning” proposal – and argue that they are not up to the task (§.). I will then offer a different explanation for the lottery intuition: warrant is infallible (§.). The putative warrant provided by background information concerning the probability of S’s losing cannot, however, constitute an infallible warrant. It is therefore no warrant at all. This result extends to other Dretske cases whose Q propositions are Vogel propositions. Trenton Merricks has also argued that warrant is infallible. I present his two arguments and defend them against criticisms in §.. The upshot is that warrant is infallible. Since it is, and since background information cannot deliver an infallible warrant for those Q propositions in Dretske cases that are Vogel propositions, it delivers no warrant at all. I provide a summary in §..

Background Information and Wholesale Skeptical Hypotheses

At first glance, it might seem that appeal to background information is ill-suited to provide a warrant for the denial of such wholesale skeptical hypotheses as that S is a BIV. If that hypothesis is true, after all, none of S’s putative background empirical information really is information, since very little of it would be correct. However, the fact that S’s putative background information would not be information if she were a BIV does not in itself imply that it isn’t information if she isn’t a BIV. Appeal to such information would only be prohibited if, in general, information I cannot be reasonably applied against hypothesis H if H implies that I would not be information. But



That’s not quite right. The relevant skeptical hypothesis being denied is “S can afford an around-the-world cruise because she will win the lottery.” But, on the background warrant model, she knows that this is false because she knows on the basis of background knowledge that she won’t win the lottery, from which she infers the falsehood of the skeptical hypothesis. The closure advocate will obviously take that inference to transmit warrant. So the crucial issue is whether she knows that she won’t win by appeal to background information. Recall that a Vogel proposition has three characteristics, epitomized by the lottery proposition “S will lose the lottery.” First, it is not abnormal for it to be false: there’s nothing abnormal about S’s winning the lottery. Second, there is some statistical reason to think that it is false: there is a nonzero chance that S’s ticket will win. And, third, it would be arbitrary for S to discount the possibility that it is false: S has no more reason to think that her ticket will lose than that any other will do so.




Against Knowledge Closure

such a principle is contentious. Some skeptics will insist on it: we can’t appeal to any putative information when responding to the skeptic if it is not information in the skeptical scenario. But many skeptical arguments do not presuppose this principle. The skeptical closure argument does not do so, and neither does the skeptical front-loading argument nor the skeptical underdetermination argument. Those arguments are ultimately directed against the internal coherence of our overall commitments. The advocate of the skeptical closure argument, for example, points out that we are inclined to both endorse closure and concede that we don’t know that the skeptical hypothesis is false; this implies, the skeptic suggests, that we don’t have the pedestrian knowledge that we think we have. The skeptic isn’t claiming that we must establish ex nihilo that our pedestrian knowledge claims are correct – that the information to which we appeal as the basis for such claims really is information – but instead that our other inclinations – with respect to closure and the skeptical hypothesis itself – undermine those claims. Responding to that argument only requires demonstrating either that those commitments are consistent or that one of those other inclinations is misleading. It doesn’t require demonstrating that our pedestrian knowledge claims are correct. The relevant background information won’t be, for example, “I have hands.” That can only be wielded against the skeptic by being mobilized as a premise in a Moorean argument; and that requires that transmission succeeds in Dretske cases. We are, however, currently considering a view that concedes that transmission fails in those cases. But relevant background information might exist nonetheless. That information includes the current limits of our technological capacities: we don’t – yet – have the technological wherewithal to envat a brain. Assuming that information to be correct, S can’t be a BIV.
So her background information might, after all, suffice to deliver a warrant against the BIV hypothesis. However, when the skeptical hypothesis is wholesale, appeal to background information arguably violates NIFN. The truth of such an hypothesis does not only imply that S has the same basis B that she actually has (namely, that she seems to have hands); it also implies that she has the same background information that she actually has. So her method – appeal   

Unsurprisingly, the response I favor is to deny closure. If correct, that suffices; it isn’t also necessary to demonstrate that our pedestrian knowledge claims are correct. We also saw in §. that the Moorean response contributes nothing to the fight against skepticism. The dream hypothesis is, however, another matter. We do dream, and it’s far from clear that background information prevents our having the sort of dream proposed by the dream skeptic.





to what she takes to be background information – will inevitably deliver the result that she is not a BIV when she is. One might respond by pointing out that S only seems to have that information if she is a BIV; she doesn’t actually have it. So appeal to that information doesn’t violate NIFN: if she is a BIV she will not evaluate P by the same method she actually uses, since the method she actually uses involves appeal to information she would not have if she were a BIV. While many are tempted to characterize her information in internalist fashion – so that it only directly concerns the character of her experience rather than states of the external world – that temptation can be and has been resisted. I’m not unsympathetic; as per §.., the basis in response to which S forms her belief need not be characterized in an internalist fashion. But if we do permit appeal to information about external matters, that information will include the very pedestrian claims – such as that S has hands – that we are attempting to protect from skeptical attack. That information is, surely, at least as secure as is the information that we don’t have the technological capacities required to envat a brain. The latter must be gathered from a variety of sources and brought to bear, and will ultimately rest on similarly direct perceptual beliefs somewhere down the line. But then there is no longer a point in appealing to the latter information, at least not if our goal is to secure ordinary knowledge from skeptical attack. On the view we are considering, S needs a warrant for the claim that she’s not a BIV in order to acquire a warrant for pedestrian knowledge claims, and acquires the former warrant by appeal to background information. But if that background information already includes the pedestrian claims themselves, then she doesn’t need a warrant against the skeptical hypothesis after all. 
And if it doesn’t include those pedestrian claims, then it’s hard to see how it could nevertheless include that background information. As we saw in §. with the Moorean appeal to transmission from “I have hands” to “I’m not a BIV,” appeal to background information is either superfluous for or incapable of insulating pedestrian knowledge from the threat posed by wholesale skeptical hypotheses.

Background Information and Piecemeal Skeptical Hypotheses

The situation with respect to piecemeal skeptical hypotheses is, however, quite different. Appeal to background information doesn’t violate NIFN, even if S’s information concerns only her internal states. It doesn’t follow from the animal’s being a disguised mule, for example, that S will still have the experiences, whatever they might be, that suggest that it is implausible





that zoos would engage in such deceptive practices. So, even if putative information I can’t be reasonably applied against hypothesis H when I wouldn’t be information if H were true, S can still appeal to that background information in response to the disguised-mule skeptic. However, it’s unintuitive that such background information does suffice to deliver a warrant for Q propositions that deny piecemeal skeptical hypotheses, particularly when those propositions are Vogel propositions. Famously, background information concerning the probability of losing a lottery does not intuitively suffice for knowing that one has lost before the draw, even though background information renders it highly probable – as probable as one would like – that one has lost. The same applies to other Vogel propositions: background information concerning the infrequency of misprints, restaurant fires, car thefts, heart attacks, broken gas gauges, and so on does not intuitively suffice for knowing that this particular report, restaurant, car, president, or gauge has not succumbed. It’s one thing to suggest that background information suffices to deliver knowledge of such generalizations as “reputable newspapers rarely include misprints,” “restaurant fires are uncommon,” “heart attacks are rare,” and so on. It’s quite another to insist that it suffices to deliver knowledge that this particular newspaper report, restaurant, or president is not an exception, particularly when it is conceded that such exceptions exist and nothing in particular, that S is aware of at least, distinguishes the case at hand.

Explaining the Lottery Intuition

The Lottery Intuition, Sensitivity, and Safety

In a standard lottery case, although background evidence implies that it is unlikely that any particular ticket will win, it also implies that some ticket will win. One might exploit this in explaining our intuition that S doesn’t know that she lost the lottery. For suppose she does know that. Given that the evidence she has with respect to her ticket – it is highly probable that it loses – is available for every other ticket, S will also presumably know that

Or afterward, so long as one’s background information does not include information concerning the outcome of the draw. In fact, in most actual lotteries there is no guarantee of a winner; the winning numbers are randomly selected, and it may be that nobody bought a ticket with those numbers (or selected those numbers). I will nevertheless refer to lotteries with a guaranteed winner as “standard”; it’s the standard example in epistemological discussions of the lottery case, if not in real life.





every other ticket will lose. But she can’t know of every ticket that it will lose. For one will win; but S can’t know a false proposition. However, this doesn’t prevent S’s knowing, of the losing tickets, that they are losers, and so knowing this of each ticket but one. Of course, she would presumably take herself to know, of the winning ticket, that it lost as well. But we take ourselves to know many things that we don’t in fact know because they are false; that doesn’t imply that we know nothing. And yet we still intuit that S doesn’t know, of the losing tickets, that they are losers. And, anyway, there are lotteries in which there is no guarantee that there will be a winner, and for which the intuition that S doesn’t know that she lost remains. So that intuition can’t depend on there being a winner. Nor can it depend on its being probable that there is a winner. As Keith DeRose points out, our intuition that S doesn’t know that her ticket is a loser is elicited in scenarios in which it is unlikely that anyone will win. Suppose a billionaire holds a one-time lottery, and you are one of the  million people who have received a numbered ticket. A number has been drawn at random from among  million numbers. If the number drawn matches that on one of the  million tickets, the lucky holder of that ticket wins a fabulous fortune; otherwise, nobody receives any money. The chances that you’ve won are  in  million; the chances that somebody or other has won are  in . In all likelihood, then, there is no winner. You certainly don’t believe there’s an actual winner. Do you know you are a loser? Can you flat-out assert that you are a loser? No, it still seems.
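The arithmetic behind the billionaire case is easy to verify. The figures below are illustrative stand-ins, not DeRose’s own: suppose n ticket holders, each holding a distinct number, and a winning number drawn at random from a pool of m numbers.

```python
# Illustrative (hypothetical) figures for a DeRose-style billionaire lottery:
# n ticket holders with distinct numbers; one number drawn from a pool of m.
n_tickets = 1_000_000
pool_size = 1_000_000_000

p_you_win = 1 / pool_size                 # chance your ticket matches the draw
p_somebody_wins = n_tickets / pool_size   # chance the draw matches any ticket

print(p_you_win)        # 1e-09
print(p_somebody_wins)  # 0.001 -- in all likelihood there is no winner
```

Even though each ticket is a near-certain loser, the draw probably matches no ticket at all; that is what lets the case detach the lottery intuition from the existence (or even the likelihood) of a winner.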

DeRose offers the “Subjunctive Conditionals Account” instead:

SCA
(a) although you believe you are a loser, we realize that you would believe this even if it were false (even if you were the winner), and
(b) we tend to judge that S doesn’t know that P when we think that S would believe that P even if P were false.

Hawthorne points out, however, that the SCA implies that we will tend to judge that S does not know that she did not win a lottery for which she didn’t even buy a ticket. For if she had won, then she would have bought 

 

Suppose, however, that S reviews the tickets one by one when the winning ticket happens to be last in line. Having rejected every ticket before the last as losers – which, suppose, she knows to be losers on the basis of the probability of their losing – she is in a position to infer that the last is the winner. Assuming closure, she would then know that it is the winner. But, surely, she doesn’t know that.  DeRose ,  DeRose , . Hawthorne , . Hawthorne also reviews a number of other objections to appealing to sensitivity to explain the lottery intuition in Hawthorne , –.





a ticket; but she would still believe that she didn’t win on the basis of the severe odds against. The SCA advocate might respond by relativizing to method. So revised, SCA would read as follows:

SCA-M
(a) although you believe you are a loser, we realize that you would believe this by the same method you actually use even if it were false (even if you were the winner), and
(b) we tend to judge that S doesn’t know that P when we think that S would believe that P by the same method even if P were false.

Clause (a) of SCA-M is now unsatisfied: had S won, she would have bought a ticket; so she would not believe that she lost by the same method, that is, because she hadn’t bought a ticket. So SCA-M doesn’t imply that we would tend to judge that S doesn’t know that she lost. But suppose that, had S won, this would not have been because she bought a ticket but because her spouse bought one for her. Both S and her spouse take a very dim view of lotteries, and so are very disinclined to buy lottery tickets; however, S’s spouse is very slightly less so disinclined than is S. Had S won, it would not occur to her that her spouse would buy one for her; she would still take herself not to have won since she didn’t buy a ticket. SCA-M implies that we, aware of the situation, would tend to judge that S doesn’t know that she didn’t win the lottery. But that seems arbitrary. It hardly seems to matter who would have bought the ticket if S won; that S’s spouse is very slightly less inclined to buy one than is S rather than vice versa doesn’t seem to matter when both are so very disinclined and the difference in their inclinations is very slight. In fact, neither bought a ticket, and much would have to change before either would so much as consider doing so. If we are nevertheless inclined to judge that S doesn’t know that she didn’t win when it’s her spouse that would have bought the ticket if she had won, it’s hard to see why we wouldn’t come to the same judgment if, instead, it’s S herself who would have bought one. Safety theorists offer a different explanation: we intuit that S doesn’t know she lost the lottery (for which she has bought a ticket) because we recognize that she still believes she lost in a (very) nearby world in which she won. But suppose that, in fact, a conspiracy has been perpetrated in her area: the vendors to which she has access have been provided with fake tickets. S knows nothing of this, and so believes that she has bought a





genuine ticket. As a result, the nearest world in which she wins is distant: she would have to travel a considerable distance to acquire a real ticket and so have any chance of winning, something that she is unable to do. Her belief that she will lose the lottery is now safe, because it is far-safe: the nearest world in which she believes that she loses and yet wins is distant, simply because the nearest world in which she wins (as a result of buying the real, winning ticket) is distant. It, nevertheless, remains unintuitive that she knows that she will lose when she bases her belief solely on the statistical improbability of winning.

Parity Reasoning

Hawthorne suggests instead that the intuition that S doesn’t know that she lost is elicited in response to “parity reasoning”: One conceptualizes the proposition that p as the proposition that one particular member of a set of subcases (p1, . . ., pn) will (or does) not obtain, where one has no appreciably stronger reason for thinking that any given member of the set will not obtain than one has for thinking that any other particular member will not obtain. Insofar as one reckons it absurd to suppose that one is able to know of each (p1, . . ., pn) that it will not obtain, one then reckons oneself unable to know that p.

We do intuit both that S doesn’t know that her ticket will lose and that she doesn’t know, of each other ticket, that it will lose. That our intuition is the same for each ticket is explicable by appeal to consistency: we can’t reasonably treat cases differently if there is no basis for discrimination between them. But this leaves open the direction our reasoning takes. Hawthorne proposes the “downward” direction, from a survey of each case to that of S. But the “upward” direction is at least as plausible: since S’s epistemic circumstances don’t suffice, for whatever reason, for her to know that her ticket will lose, and those circumstances are identical with respect to any of the other tickets, she doesn’t know of any of the others that they lose either. So we need some reason to think that the relevant reasoning is in the downward direction that Hawthorne proposes rather than the upward. If it’s not because S doesn’t know that she will lose that she doesn’t know the same of the other tickets with respect to which she’s equally positioned, then what explains why we judge, for each ticket, that she doesn’t know that it will lose? 

Hawthorne , .





Hawthorne offers the following explanation: There are essentially two kinds of thoughts that can be at work here. (i) Sometimes one knows that one of (p1, . . ., pn) will obtain, as when there is a guaranteed winner in the lottery. (ii) Even where (i) doesn’t apply, one may recognize that it is absurd to think that one can know the generalization that all of (p1, . . ., pn) do not obtain, or that ~(p1 or . . . or pn), and on that basis, reckon that one cannot know of each p1, . . ., pn that it does not obtain.

The first thought does explain why it is absurd to suppose that one knows, of each ticket, that it will lose. But it does not, as we saw, explain why it is absurd to suppose that one knows, of each losing ticket, that it will lose, even in lotteries with a guaranteed winner, or that every ticket will lose in a lottery with no winner. That explanation relies on the second-thought line of reasoning. One might try to represent it as follows:

Parity Reasoning 1
(1) S can’t know this: “every ticket is a loser”; therefore
(2) S can’t know this of each ticket: “it’s a loser”; therefore,
(3) S can’t know this of her ticket: “it’s a loser.”

This is not a perspicuous representation of the second thought, however. (1) and (2) are clearly true in a standard lottery, since one ticket will win. But we don’t need (1) to recognize that (2) is true on that basis. That basis, moreover, doesn’t explain the transition from (2) to (3). Although one ticket will win, S’s ticket might well still lose; so although (1) (and (2)) is true for that reason, (3) can still be false. And anyway, that’s the reasoning behind the first thought; the second-thought reasoning is not supposed to rely on the assumption that there is a winning ticket. So (1)–(3) need to be rewritten in such a way that S’s failure to know isn’t attributable to her belief’s being false. Here’s the result:

Parity Reasoning 2
(1*) S can’t know this: “every ticket is a loser,” even if every ticket loses; therefore
(2*) S can’t know this of each losing ticket: “it’s a loser”; therefore
(3*) (Even) if S’s ticket is a loser, she can’t know this of her ticket: “it’s a loser.”

Hawthorne , . Note that (2*) does not read as follows: “S can’t know this: each losing ticket is a loser” (and (3*) does not read “S can’t know this: if my ticket is a loser, then it’s a loser”). These are logical truths that S can, presumably, know.





(*) does imply (*): if she can’t know, even of each losing ticket, that it is a loser, then she can’t know this of her losing ticket (since it’s one of those tickets). But, again, that our judgments of (*) and (*) stand or fall together is explicable by appeal to consistency, and so leaves open the question whether we are inferring from (*) to (*), as Hawthorne suggests, or instead from (*) to (*). So the fate of Hawthorne’s explanation hinges on the plausibility that our intuition that (*) is true explains our judgment that (*) is true. There are, however, two reasons to think otherwise. First, parity reasoning, so represented, still doesn’t explain why we are inclined to endorse (*) (or (*)). Second, even if it did, it still wouldn’t explain our inclination to deny Vogel propositions that are not the eponymous lottery proposition. The first is the subject of the next section; we’ll discuss the second in §... .. Probability and the Direction of Reasoning The first issue concerns the source of our inclination to endorse (*): if in fact every ticket is a loser, then why can’t S know that they’re all losers? Hawthorne is not explicit about this. But one explanation does spring to mind: it’s improbable that they all lose, even if they do. So if warrant requires high probability on one’s evidence, then S can’t know that they all lose, even if they do. That Hawthorne views this as the relevant thought is reinforced by his description of parity reasoning. We recognize that S has no “appreciably stronger reason for thinking” that her ticket is a loser than she does for any other ticket. But why is that? The obvious answer is that they are (roughly) equiprobable. Even if “some tickets enjoy a slightly better chance of winning,” he says, “it is still the case that one’s epistemic position with regard to each subcase is not appreciably different.” The dependency of strength of epistemic position on probability here is obvious. 
One might worry that the members of some subset of tickets could have a significantly greater chance of winning than other subsets. Hawthorne responds that, so long as the chance for each member within the subsets is the same and relatively low in all subsets – in Hawthorne’s example it’s no greater than . – then parity reasoning applies to the subsets taken individually. The driving force behind all of this is that 

Hawthorne , .



Hawthorne , , fn. .





strength of epistemic position is determined here, in large measure at least, by the chances. However, the probability that every ticket loses in DeRose’s billionaire lottery case is high (.). So our inclination to deny that one can know that every ticket loses in that case remains unexplained.

Moreover, if the triggering thought concerns the probability of losing, then (*) doesn’t support (*). Even if it is improbable that every ticket loses, it is still probable, of each ticket, that it loses. And, since S’s ticket is one of those, it’s probable that her ticket loses; (*) doesn’t support (*). Assuming (as per (*) and (*)) that we restrict attention to the losing tickets – and so to all but one ticket in a lottery with a guaranteed winner or to every ticket in a lottery with no winner – the relevant propositions are also true; that knowledge is factive presents no bar to S’s knowing that they lose. So if the source of our intuition for (*) is its improbability, that doesn’t explain our intuition that (*) or (*) is also true.

There are, moreover, typically no inevitable (or even probable) “winners” in Dretske cases that involve Vogel propositions that are not the lottery proposition itself. Although cars have been stolen, for example, it’s hardly inevitable that some car or other will be stolen today. Indeed, it may well be improbable that, on a particular day, some car will be stolen. Nevertheless, we are inclined to affirm that S can’t know that every car wasn’t stolen, even if none of them were stolen. So, as in DeRose’s billionaire lottery case, appeal to probability can’t explain that inclination. Of course, the wider the net is cast either temporally (cars parked in lots this week, this month, this year . . .) or geographically (cars parked in this lot, lots in this city, in this state . . .), the more probable it is that some car has been stolen.
But the intuition that S doesn’t know that every car wasn’t stolen – or that her car wasn’t stolen – doesn’t get any stronger as a result; it doesn’t seem to depend on how widely the net is cast at all.

As Hawthorne points out, the inference from (*) to (*) follows from multi-premise closure (MPC). According to MPC, if S can know each of a set of propositions, then she can know any proposition following from

A difficult case for Hawthorne is one in which no ticket has the same chance of winning as any other, and yet none of them are likely to win. Partitioning into subsets of tickets with equal chances delivers subsets with only one member each; parity reasoning is obviously inapplicable. If there are enough tickets and the tickets are ordered by their chances, then the chance difference between adjacent tickets might well be small. But the difference between the first and the last might also be significant, and there’s no principled way to divide them into subsets whose members have only “slightly different” chances of losing, to which parity reasoning might apply. The intuition that S doesn’t know of any ticket that it loses is as strong as ever, and doesn’t seem to depend on any such arbitrary partitioning decisions.


Denying Premise : Warrant by Background Information



them (so long as she recognizes that it does so follow). So, by contraposition, if she can’t know the latter, she can’t know every one of the former. But it is precisely because the conjunction of individually probable propositions can be improbable that those inclined to think that warrant requires high probability are disinclined to endorse MPC.

Hawthorne might counter that we find MPC intuitive. Since we also find (*) intuitive, the inference from (*) to (*) is supported by our intuitions. But if the explanation for our endorsement of (*) is that it is improbable that every ticket loses, that doesn’t explain our willingness to endorse the inference from (*) to (*); that it is improbable that every ticket loses doesn’t imply, for each ticket, that it is improbable that it loses. So there’s a tension between this explanation of our intuition that (*) is true and our endorsement of MPC: the former explanation militates against the latter endorsement. Unless Hawthorne has some other explanation for why we intuit that (*) is true, our intuition that (*) (or (*)) is true remains a mystery.

It’s hard to see what that other explanation might be. That is, it is hard to do so unless we reason in the upward direction. We do intuit that S doesn’t know that her ticket is a loser. But it probably is. So even Hawthorne must grant that this intuition is not the result of a conviction that its losing is improbable. But S is as well (or badly) placed to know, of each other ticket, that it loses as she is of her ticket. So, if she doesn’t know that her ticket loses, then she doesn’t know that of any other ticket; (*) explains (*). And, if every ticket loses, then her ticket loses. So, if she doesn’t know that her ticket loses, she doesn’t know that every ticket loses; (*) also explains (*).
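The point that individually probable conjuncts can form an improbable conjunction is simple arithmetic. The sketch below is my illustration, not the author’s; the ticket count, the per-ticket probability, and the “high probability” threshold are all made-up assumptions. It shows how each single “ticket i loses” proposition can clear a hypothetical probability bar on warrant while the conjunction “every ticket loses” falls far below it.

```python
# Hypothetical no-winner lottery: n tickets, each losing independently
# with probability p_each. All numbers are illustrative assumptions.
n = 100          # number of tickets
p_each = 0.99    # probability, for each ticket, that it loses
threshold = 0.95 # hypothetical high-probability requirement on warrant

# Each conjunct clears the bar on its own.
each_warranted = p_each >= threshold          # True for every ticket

# The conjunction "all n tickets lose" does not.
p_all = p_each ** n                           # roughly 0.366
conjunction_warranted = p_all >= threshold    # False
```

This is exactly why a high-probability requirement on warrant tells against MPC: knowledge of each conjunct would not guarantee that the conjunction meets the same bar.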
That does instantiate (the contrapositive of ) single-premise closure, from “S doesn’t know that her ticket loses” to “S doesn’t know that every ticket loses.” But Hawthorne endorses single-premise closure. While reasoning in the downward  

He defends MPC against lottery- and preface-paradox objections in Hawthorne , –. I, however, don’t. But that doesn’t mean that I take these inferences to fail. To deny closure is not to claim that inference never transmits warrant, but only that it fails to do so in certain rare cases, and I see no need to concede that this is one of them. NIFN does not require doing so. Suppose that S is warranted in believing (x)Px on basis B, that she infers to each of P1, . . ., Pn, where n exhausts the range of x, and that (as per (*)) she can’t know any of P1, . . ., Pn. If she can’t know any of them, then she can’t acquire a warrant for them by inference from warranted (x)Px. (Recall that we are allowing that they are all true, so it is only warrant failure that prevents her knowing them.) So transmission fails for each inference. If violation of NIFN is responsible for this, then each of ~P1, . . ., ~Pn implies B, the basis for their common premise (x)Px. But n exhausts the range of x. So ~(x)Px – which is equivalent to (~P1 v ~P2 v ~P3 . . . v ~Pn) – also implies B. S’s putative warrant for (x)Px on B then also violates NIFN. But then S is not warranted in believing (x)Px after all. So either S is not warranted in believing (x)Px or those inferences don’t violate NIFN. So transmission

direction doesn’t explain the pattern of our intuitions here, reasoning in the upward direction does.

The upshot is that, in the absence of some explanation of our inclination to endorse (*) other than the improbability of every ticket’s losing, the downward direction of reasoning involved in Hawthorne’s parity explanation – from (*) to (*) and from there to (*) – doesn’t succeed. But the upward direction does: (*) explains both (*) and (*). Those explanations, however, begin with the intuition that S doesn’t know that she lost, despite its being probable that she does. They don’t, therefore, explain that intuition.

The Absence of Parity Reasoning in Other Vogel Cases

The second reason for dissatisfaction with the parity-reasoning explanation is that it is far less plausible that we engage in such reasoning when contemplating Vogel propositions other than the lottery proposition. Hawthorne intends his explanation to extend to these; Knowledge and Lotteries is not merely about knowledge of lotteries. He suggests that our intuition with respect to Vogel’s “heartbreaker” case – wherein we consider the proposition that all  golfers in a tournament will get a hole-in-one on a particularly difficult hole – is that we know that this proposition is false. This is, he claims, because parity reasoning is not particularly natural:

Of course, in the lottery case, the kind of structuring of epistemic space that triggers such parity arguments is quite natural, whereas it feels somewhat contrived in the case of the Heartbreaker . . . It is true that with some work, a conception of epistemic space can be effected that triggers the relevant kind of parity argument – but effecting such a conception does take some work. This can hardly strike us as a great mystery: while basic mastery of the idea of a lottery encodes some such division, the same is not so for golf tournaments.

But the same is also not so for parked cars, presidents, gas gauges, restaurants, and the like: basic mastery of the relevant concepts does not

 



can’t fail as a result of NIFN; it’s not possible for S to be warranted in believing (x)Px while not warranted in believing each of P1, . . ., Pn by inference from (x)Px as a result of NIFN violation.
Hawthorne . I don’t actually share this intuition; it strikes me as similar to cases in which we are willing to say something along the lines of “come on; you know you won’t win the lottery,” which we need not take literally.
Hawthorne , .

encode any such division either. One can, with some work, view one’s car’s occupancy of space B today as one of some number of space-occupancy-days (in this lot? all lots? this week? this year?), one’s gas gauge as one among many gas gauges (automobile gauges? gauges in general?), and this restaurant as one among many restaurants (businesses? buildings?). But there’s nothing particularly natural about doing so; for a start, the choice of a comparison class seems utterly arbitrary and yet can have a dramatic impact on the probabilities involved (whereas that choice is fixed for the lottery proposition by the structure of the lottery and the number of tickets sold). It is certainly no more natural to do so than it is in the heartbreaker case.

So it is hard to see how Hawthorne’s explanation is to be extended to these cases. He does recognize that “parity reasoning is not the only source of skeptical doubt.” But he surely does intend such reasoning to explain our skeptical intuitions in at least the car theft, president, and misprint cases; he explicitly mentions them as examples of lottery propositions. The parity-reasoning explanation, therefore, seems, at best, to explain only our skeptical reaction to the eponymous lottery proposition; it doesn’t explain our skeptical response to other Vogel propositions. And – as per the first concern – it doesn’t even provide the requisite explanation for the lottery proposition.

. Warrant Infallibilism and the Lottery Intuition

In this section I will propose a different explanation of the lottery intuition: one can only be warranted in believing P if P is true. That is, warrant is infallible. But S’s putative warrant for “I will lose the lottery” is compatible with S’s winning. So her putative warrant for that proposition is no warrant at all. Trenton Merricks has offered two arguments for warrant infallibilism as well. In §., I’ll describe his arguments and defend them against criticisms.



 

He offers “duplicate reasoning” – everything could seem just as it actually does and yet P be false – perhaps in light of the fact that it is very difficult to fit our intuition that we don’t know the falsehood of wholesale skeptical hypotheses, such as that I am a BIV, into the parity-reasoning mold.
Hawthorne , .
Appeal to NIFN doesn’t explain the lottery intuition either. It is not inevitable, given only that S wins, that she will believe that she lost in light of the chances of her doing so. She could, for example, have bought all of the tickets, in which case the chance of her winning (given that it is a standard lottery) is 1.


Explanation Expectation

Consider a case in which we are initially inclined to judge that S knows P on basis B. Now suppose that we learn that P is, in fact, false. We then expect there to be some explanation of how P could be false notwithstanding B. For example, presumably S can know that the Broncos won in virtue of reading a newspaper report to that effect. If it had turned out that the Broncos had lost and yet the newspaper still reported that they won, we would expect there to be some explanation of the compatibility of those facts. Perhaps, for example, the typesetter accidentally entered the score for last week’s game that the Broncos did win. In the scenario in which S does know, the typesetter correctly entered the scores for this week’s game; that he entered the wrong scores in the scenario in which they lost provides the explanation we seek.

An explanation of the compatibility of A and B – of how A could be true despite B – is not an explanation of A itself, but is instead a “how-possibly” explanation. The explanation is called for because the conjunction of those two states must be abnormal in some way: it is not possible for A and B to both be true in normal circumstances. Since A and B are both true in fact, the actual situation must somehow depart from the normal in order to allow for that fact. For example, the query “how could Bob be driving a Ferrari F? He flips burgers at McDonald’s!” does not call for an explanation of Bob’s driving a Ferrari – it would be beside the point to indicate that he bought one yesterday – but references the fact that burger flippers don’t earn nearly enough to afford a car that costs $. million. So, in the normal course of events – wherein Bob’s earnings would constitute his sole source of income – he could not possibly afford it. So there must be some departure from the normal that accounts for the discrepancy (such as his winning the lottery).
In the newspaper case, the normal scenario is that in which S does acquire the knowledge that the Broncos won by reading the newspaper report. When we learn that the report is false, this triggers a call for explanation: how could they have lost when the newspaper reported that they won? This is not a request for an explanation of the Broncos’ loss – it would be beside the point to indicate that their entire starting lineup is out with the flu – but instead concerns how the situation departs from the normal scenario in which S knows that they won, which departure explains how the newspaper could have come to report that they won when they lost.




The Missing Missing Explanation and Vogel Propositions

Notably, none of this applies in the lottery case. Suppose we grant that S can know that she will lose on the basis of the improbability of her winning. Now consider the scenario in which she wins. Although unexpected, there would be absolutely nothing abnormal about her doing so given that basis. There is no characteristic of either S’s cognition or the surrounding circumstances that we understand to obtain when S wins that explains how it is possible that she won despite her statistical basis for believing that she lost. She did not misapprehend the probabilities involved (the lottery wasn’t rigged in her favor, for example); they are relevant to the proposition that she lost in precisely the way that she took them to be; and they are adequate for the delivery of warrant (we are assuming) when that proposition is true. The answer to the question “but how could it be possible for her to have won when the probability of her losing was so high?,” insofar as there is an answer to give, is just “well, there was always a chance.” It was always possible for her to win given that basis and her circumstances; there is nothing to explain.

The same applies to the other Vogel cases. Background knowledge delivers, at best, the information that the car’s being stolen, the president’s having a heart attack, the gas gauge’s breaking, and so on, are improbable. But if the improbable were to occur, the only answer to the question “but how could the car possibly have been stolen/the president have had a heart attack/the gauge have broken when it was so unlikely?,” insofar as there is an answer to give, is “well, there was always a chance.” There is no characteristic that we understand to obtain in the scenario in which the car is stolen, the presence of which explains how it could be stolen compatibly with S’s evidence against its being stolen, which is absent when it is not stolen.
Her evidence always allowed for that possibility, and there is no other difference, within or outside S, at which we can point an accusing finger when that evidence misleads.

Of course, there is an explanation of why the car was stolen: the thief found that particular model attractive, perhaps. There is, in the same way, an explanation of why S’s ticket won: the balls in the cage were distributed in such a way that they dropped in the order corresponding to S’s ticket. Such explanations can be elaborated in greater detail if desired. But these are explanations for why the relevant proposition is false; they are not explanations of how it could possibly be false despite the presence


of S’s basis for believing it. Suppose that S knows that the newspaper’s report isn’t erroneous solely on the basis of the infrequency of such errors. Now consider the scenario in which the report is erroneous. The explanation for why the report is erroneous is that the typesetter accidentally entered the scores from last week’s game. But that doesn’t explain how S could possibly believe that the newspaper report was not erroneous given her background information concerning the infrequency of misprints. There is no explanation to provide, other than “well, there was always a chance.”

So, in cases wherein we initially intuit that S knows P, if we then learn that P is false we intuit that there must be some difference from the knowledge case that explains how it is possible for S to believe P on the same basis and yet P be false. But in Dretske cases wherein we have only a statistical basis for Q, if we learn that Q is false we do not intuit that, in addition to the bare fact that Q is false and whatever explains that fact, there is another departure which explains how it is possible for S to believe Q on that basis and yet Q be false. In the former cases, there is a missing explanation and, in the latter, the missing explanation is itself missing. This is, I suggest, a clue for the explanation of our intuition that background statistical evidence doesn’t deliver knowledge of the lottery and other Vogel propositions.

.. The Missing Missing Explanation and Warrant Infallibilism

What explains our expectation that there must be some explanation of the compatibility of ~P and B when, in the normal course of events, S would know P by appeal to B? Here’s an answer: when S acquires knowledge of P in response to B, her warrant for P – which includes, but is not exhausted by, B – strictly implies P. That is, given the additional background conditions that must be in place in order for B to deliver a warrant for P, it is not possible for B to be true when P is false.
So, if B is true when P is false, some condition of 





Steven Wright joke: “A cop stopped me for speeding. He said, ‘Why were you going so fast?’ I said, ‘See this thing my foot is on? It’s called an accelerator. When you push down on it, it sends more gas to the engine. The whole car just takes right off’.”
This isn’t true of all Dretske cases. It follows from “it’s a zebra” that it’s not a five-foot-tall carbon atom disguised to look like a zebra. Background knowledge concerning the size of carbon atoms – which is not merely statistical in this case – does (presumably) suffice to know that it’s not a five-foot-tall carbon atom.
Nelkin  voices a similar theme.




warrant (other than B itself) must have failed, the failure of which explains that compatibility. When, for example, we learn that the Broncos lost notwithstanding the newspaper’s report that they won, this tells us that some condition of S’s warrant for “the Broncos won” must have failed, whose failure explains how it is possible for S to believe that the Broncos won when they didn’t. It’s a condition of S’s warrant that the typesetter didn’t enter the scores from last week’s game; that this condition fails provides the requisite explanation.

In §. I called such background conditions enabling conditions of S’s warrant for P on B, the set of which is C. The resulting view is that B, in conjunction with all of the enabling conditions of S’s warrant for P on B, strictly implies P: it is impossible for B & C to be true and P false. Warrant is infallible.

In the lottery case, however, there is no condition to point to, the failure of which explains the compatibility of S’s win with the statistical evidence against it. But, since warrant is infallible, there must be some such condition that is present when she loses and absent when she wins if her statistical evidence does provide a warrant when she loses. Since there is no such condition, that evidence does not deliver a warrant for her belief: she doesn’t know that she lost, even when she does. As a result, we intuit that she doesn’t know that she lost.

But perhaps there is some such explanation, even in the lottery case. What could it be? It can’t be the fact that the belief is false itself; that doesn’t explain the compatibility of B with ~P, but merely cites the latter. Nor can it be whatever explains P’s being false; that still wouldn’t explain the compatibility of ~P with B. Nor do popular conditions on warrant provide the requisite explanation.
There is no false assumption upon which she relies in her justification for believing that she loses (as per the no-false-lemmas anti-Gettier condition). And, even if there were one, it would remain in place if she won. Her winning does imply that her belief that she will lose is both insensitive and unsafe. But they remain so even if she

 

As DeRose ,  says, “Hypotheses are supposed to explain; skeptical hypotheses should explain how we might come to believe something despite its being false.” Effective skeptical hypotheses specify the condition of warrant that has failed, the failing of which explains how it is possible for you to believe what you do without that belief’s being true.
See, for example, Clark . Obviously, “I will lose” is not itself such an assumption.
The actual world itself is both the nearest world to the actual and among nearby worlds. So the nearest world in which Q is false is one in which S believes it (and by the same method); and there is a nearby world in which S believes it (by the same method) and it is false.


loses. And if her belief that she loses is reliably produced, justified, supported by the evidence, or epistemically virtuous when she loses, it is equally so when she wins. The only exception of which I am aware is the defeasibility condition: S knows P only if there is no defeater, that is, no truth such that, if S were justified in believing it, her belief that P would be unjustified. “She wins” would be a defeater, so defined, for her belief that she loses: she isn’t justified in believing that she loses if she is justified in believing that she will win. So, if warrant requires indefeasibility then, even if S is warranted in believing that she lost when she did, she can’t also be warranted in believing that she lost when she won. But “she wins” shouldn’t count as a relevant defeater anyway. The idea behind the no-defeater account, after all, is that one doesn’t know when there exists evidence against one’s belief that P such that, if one were aware of it, it would undermine one’s justification for P. It is, however, at least odd to characterize the bare fact that P is false as potential evidence against one’s belief that P. And, anyway, even granting that S’s false belief is unwarranted because it is defeated and defeated solely because it is false, this still does not offer the explanation we seek, namely, of the compatibility of its falsehood with S’s basis. If the Broncos lost, notwithstanding the newspaper’s report to the contrary, S’s belief that they won is defeated simply because it is false. But that doesn’t explain how it can be false notwithstanding the report; it only repeats the fact that it is false. So the failure of some other condition of warrant must explain that compatibility. All of the above goes for other Dretske cases whose Q propositions are Vogel propositions. In each such case, there is nothing to appeal to in way of explaining how S could end up believing Q when it is false compatibly with her statistical evidence. 
But then no condition of warrant has failed when Q is false. If no condition has failed, and warrant is infallible, then S is not warranted in believing Q on that evidence, even if she’s right.

.. Warrant Infallibilism and Gettier Cases

Suppose that X – the reporter switched the scores with last week’s game, the gas gauge is stuck, and so on – is the explanation of how it is possible



In the nearest world in which she wins she still believes that she loses (and by the same method, namely, estimation of the probability of losing); and there is a nearby world in which she wins but still believes that she loses, and by the same method.
See Lehrer and Paxson .

for P to be false (the Broncos lost, the tank isn’t empty) while B is true (the paper reported that the Broncos won, the gauge reads “empty”). On the above view, ~X is a condition of S’s warrant for P on B. So the presence of X will prevent S’s acquiring a warrant for P in response to B, even when P is true. This is typically possible: while X is compatible with (B & ~P) – since it explains how this conjunction is possible – it is also typically compatible with (B & P). The typesetter’s having entered the scores from last week’s game explains how the Broncos could have lost this week’s game notwithstanding the paper’s report to the contrary. But they could have won this week’s game too, in which case the report would be coincidentally correct. The result is a Gettier case: although it is now true that the Broncos won, and S is intuitively justified in believing that they did – because the newspaper still reports that they did – S doesn’t know this.

In general, the feature in virtue of which a standard Gettier case is a Gettier case – in virtue of which the belief is only accidentally true – is the same feature which, if the belief were false, would explain how it could be false despite the fact that the basis remains. The implicit backdrop to any Gettier case is a scenario in which S does know P on the same basis as in the Gettier scenario. Typically, for example, S can know that there is a sheep in the field by seeing what looks to be a sheep in the field. Suppose that there is no sheep in the field at all. We want an explanation: how could there be no sheep in the field when she sees something that looks like one? Explanation: she’s looking at a sheepdog. The Gettier case is constructed by retaining both the basis – she sees what looks to be a sheep – and the feature providing that explanation – it’s a sheepdog – and then arranging matters so that the belief is nevertheless true – there’s a sheep behind the barn.
That warrant is infallible explains why the relevant beliefs in Gettier cases are not known. Suppose that warrant is infallible. Then, when S knows that there is a sheep in the field on the basis of seeing what looks like a sheep, she acquires that knowledge in circumstances that are such that she couldn’t have seen what looks like a sheep in the field unless there was one. So if, in another scenario, there isn’t a sheep in the field despite her seeing what looks like a sheep, some condition of warrant has failed, the failure of which differentiates that case from the knowledge case. And indeed, she can’t know that there’s a sheep in the field on the basis of seeing something that looks like a sheep if what she’s looking at is a sheepdog. That it’s a sheepdog explains how she could see what looks like a sheep when there’s no sheep in the field. If the feature of the second


scenario that provides that explanation is retained in a third scenario in which P and B are true – she’s still looking at a sheepdog – then warrant fails in that scenario as well, since a condition of warrant for P on B has failed. So S doesn’t know P in the third scenario, despite P’s being true.

.. Warrant Infallibilism versus Other Infallibilisms

Recall from §. the difference between basis and warrant infallibilism. Warrant infallibilism is the claim that a warranted belief must be true. Basis infallibilism, however, is the distinct and highly implausible doctrine that, for everything one knows, the basis alone – which is only a component of one’s warrant – guarantees that the belief is true. The latter is the doctrine that WP implies, as we saw in §., and does so to its detriment, since few if any of our putative knowledge claims enjoy infallible bases. But warrant infallibilism does not generate skepticism.

Basis infallibilism requires that B on its own strictly implies P, whereas warrant infallibilism requires that B, in conjunction with all other conditions of warrant – that is, all other features that must be in place in order for S to acquire a warrant for P on B – strictly implies P. So basis infallibilism is far more demanding; knowing that the tank is empty, for example, by appeal to the gauge doesn’t satisfy it. But that doesn’t mean that it doesn’t satisfy warrant infallibilism. Given that all other conditions of warrant are satisfied – the gauge is not stuck, it is appropriately connected to the tank, it is correctly calibrated, S reads and interprets the gauge correctly, and so on – S will believe that the tank is empty only if it is. So warrant infallibilism doesn’t imply skepticism; it is not susceptible to the buck-passing argument.
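The contrast between basis and warrant infallibilism in the gas-gauge case can be modeled crudely: the basis B (“the gauge reads empty”) alone leaves possible worlds open in which P (“the tank is empty”) is false, while B together with the enabling conditions C does not. The toy possible-worlds sketch below is my illustration, not the author’s; the particular enabling conditions and the “physical law” built into the model are simplifying assumptions.

```python
# Toy possible-worlds model of the gas-gauge case (an illustrative
# assumption, not the author's formalism).
from itertools import product

worlds = []
for tank_empty, stuck, connected, calibrated in product([True, False], repeat=4):
    # Assumed "physical law" of the model: a working gauge tracks the tank;
    # a stuck, disconnected, or miscalibrated gauge may read anything.
    working = (not stuck) and connected and calibrated
    readings = [tank_empty] if working else [True, False]
    for gauge_reads_empty in readings:
        worlds.append({"P": tank_empty,          # the tank is empty
                       "B": gauge_reads_empty,   # the gauge reads "empty"
                       "C": working})            # enabling conditions hold

# Basis infallibilism fails: some world has B true while P is false.
basis_fallible = any(w["B"] and not w["P"] for w in worlds)

# Warrant infallibilism holds: no world has B & C true while P is false.
warrant_infallible = not any(w["B"] and w["C"] and not w["P"] for w in worlds)
```

On this model, basis_fallible and warrant_infallible both come out true, mirroring the claim that B alone doesn’t strictly imply P but B in conjunction with C does.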
Warrant infallibilism is also distinct from justification or evidence infallibilism, the doctrines that one cannot be justified/have evidence for P if there are possible worlds in which one’s justification/evidence remains when P is false. So long as one can be justified/have evidence for P but not have a warrant for P, as per Gettier cases, then warrant infallibilism is compatible with justification/evidence fallibilism. Warrant infallibilism, although controversial, is far more widely accepted than is basis (or justification or evidence) infallibilism. Many proposed conditions on warrant – safety, sensitivity, no-defeater – imply 

It is important to remember that the concept of warrant in question is Plantinga-warrant; some other uses of “warrant” amount to justification, in which case warrant fallibilism is entirely plausible.


Denying Premise : Warrant by Background Information



it. And warrant infallibilism is seen by some to be one of the lessons of the post-Gettier literature.

. Merricks’ Arguments for Warrant Infallibilism

In sum, the assumption that a belief can’t be warranted and yet false – that warrant is infallible – explains why we expect there to be an explanation of how P could be false when B is true in cases that are otherwise analogous to those wherein S acquires knowledge in response to B. It explains why we intuit that S doesn’t know that she lost the lottery, and that we don’t know other Vogel propositions. And it explains why Gettier cases are not cases of knowledge. These considerations provide good reason to think that warrant is infallible.

Trenton Merricks has offered two related arguments for the same conclusion. I think those arguments succeed. In the next section I will present his first argument and respond to criticisms of it. In §.. I will present his second argument, and in §..–.. I will defend it against criticisms. His arguments, together with the considerations of §., provide overwhelming reason to believe that warrant is infallible. Given that it is, background information cannot deliver warrant for Q propositions in Dretske cases that are Vogel propositions.

.. Merricks’ Warrant Transfer Argument

Merricks’ first argument for warrant infallibilism runs as follows:

Warrant Transfer Argument
(1) Necessarily, if a belief can be warranted and false, then its warrant can be transferred to an accidentally true belief.
(2) Necessarily, warrant cannot be transferred to an accidentally true belief.
(3) So a belief cannot be warranted and false.

Premise 2 is relatively uncontroversial. It is widely accepted that Gettier examples indicate that no accidentally true beliefs are known. But warrant

 

If S’s belief that P is sensitive, for example, then she doesn’t believe it in the nearest world in which P is false. But the nearest world to the actual is the actual world itself. So if P is actually false then her belief is insensitive. Sensitivity implies truth.
See Zagzebski , for example.
Merricks  and . This is a swifter presentation of the argument than Merricks’ original. See – of Merricks .





just is the difference between true belief and knowledge. So if accidentally true beliefs are unknown they are unwarranted (and so are not warranted by transfer).

Merricks argues for premise 1 by extrapolation from Gettier cases. Suppose, for example, that Smith has a false but warranted belief that Jones owns a Ford Escort, and infers that Jones owns a car. In fact, although Jones doesn’t own an Escort, he does own a Honda, although Smith has no reason to believe that he does. Smith has inferred an accidentally true belief from a false yet warranted belief. Assuming that warrant transmits through this inference, a warrant has been transmitted to an accidentally true belief, as per premise 1.

This defense of premise 1 has come under attack by Ryan (), Howard-Snyder and Howard-Snyder (), and Coffman (). The critique each offers is essentially the same. They don’t deny that an accidentally true belief does follow from the false and yet warranted beliefs they postulate, presumably because they recognize the ease with which Gettier cases can be constructed that involve inference from putatively warranted yet false beliefs. Instead, they point out that, precisely because the conclusion is accidentally true and the premise (by hypothesis) is warranted, the warrant fallibilist will deny that warrant does transmit from a warranted false belief to an accidentally true belief. The plausibility of the claim that warrant does transmit in these cases, they suggest, is “counterbalanced” by the plausibility that there are warranted but false beliefs. And, as Coffman points out, neither justification nor knowledge closure requires that the conclusion is warranted in these cases. Since the conclusion of a Gettier case is justified although unwarranted, the fact that both it and the warranted premise are justified does not require that both are also warranted.
And since the premise of the relevant Gettier case is, although putatively warranted, false, it is also not known; and neither is the (accidentally true and so unwarranted) conclusion. So denying that warrant transmits doesn’t run afoul of justification or knowledge closure. However, it does run afoul of WC. WC requires that, when S is warranted in believing P and recognizes that P implies Q, S has a warrant for Q. But in Gettier cases S does not have a warrant for the conclusion  

Daniel Howard-Snyder and Frances Howard-Snyder are coauthors of the relevant article (). I will contract the article’s authorship to “the Howard-Snyders” hereafter for the sake of readability.
Notwithstanding the argument of this book, it seems to me that the claim that warrant does transmit in the relevant cases is far more initially plausible than the claim that there are false and yet warranted beliefs. But that won’t matter for the argument to follow.





despite recognizing the implication. So, if S does have a warrant for the premise, then WC fails.

It is, moreover, not an option to affirm knowledge closure – KC – while denying WC. As per §., KC presupposes that WC is true. And, as we saw in the same section, knowledge closure can be properly formulated only in terms of warrant, that is, by WC. KC, we saw, is true only if transmission always succeeds, that is, if WT is true. To endorse WT, however, is to deny the possibility of cases in which, although transmission fails, closure succeeds. But we are now exploring possible sources of warrant for Q propositions – and exploring background information as a possible such source in particular – precisely in order to accommodate the conclusion from Chapters  and  that transmission doesn’t succeed in Dretske cases.

The upshot is that, while warrant fallibilists who deny closure could endorse this response to Merricks’ warrant transfer argument, those who endorse closure cannot do so. Knowledge closure requires WC. And if WC is true then, assuming that the conclusions in Gettier cases are not warranted, Merricks’ warrant transfer argument demonstrates that warrant is infallible.

.. Merricks’ Supervenience Argument

Merricks’ second argument appeals to the constructability of Gettier cases built upon the same putatively warranted but false belief (rather than one inferred from it). Call this, for reasons that will soon be clear, his supervenience argument. It runs as follows:

Supervenience Argument
(1) If a belief can be at once warranted and false, then it can be warranted and accidentally true.
(2) No belief can be warranted and accidentally true.
(3) So no belief can be at once warranted and false.
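Both of Merricks’ arguments share a single modus tollens shape. Rendered schematically – the symbolization is mine, offered only as an illustration, not as Merricks’ or the book’s official formulation – with W(b) for “b is warranted,” F(b) for “b is false,” and A(b) for “b is accidentally true,” the supervenience argument, for instance, runs:

```latex
% Schematic form of the Supervenience Argument (symbolization illustrative only).
% W(b): b is warranted; F(b): b is false; A(b): b is accidentally true.
\begin{align*}
\text{(1)}\quad & \Diamond\,\exists b\,\bigl(W(b) \wedge F(b)\bigr)
                  \;\rightarrow\; \Diamond\,\exists b\,\bigl(W(b) \wedge A(b)\bigr) \\
\text{(2)}\quad & \neg\Diamond\,\exists b\,\bigl(W(b) \wedge A(b)\bigr) \\
\text{(3)}\quad & \therefore\; \neg\Diamond\,\exists b\,\bigl(W(b) \wedge F(b)\bigr)
                  \qquad \text{(modus tollens on (1) and (2))}
\end{align*}
```

The warrant transfer argument has the same form, with (1) read as a claim about the transferability of warrant rather than about the constructability of a corresponding accidentally true belief.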





These are also not the sort of cases that lead closure deniers to deny closure. The inference from “Jones owns an Escort” to “Jones owns a car” doesn’t violate NIFN. Nor would the conclusion be insensitive when Smith knows the premise. And anyway, as Merricks points out, classical sensitivity is warrant-infallibilist; the fallibilist could hardly reasonably appeal to it as bolstering her defense of fallibilism. See Merricks , , fn. .
One might suggest that, since I deny WC, I can’t appeal to Merricks’ argument in support of warrant infallibilism. But, of course, to deny closure isn’t to claim that transmission always fails; it’s only to claim that there are particular exceptions. And, as per the previous footnote, the inferences in question are not the relevant exceptions.
This argument is developed primarily in Merricks .





Premise 2 is, again, uncontroversial. In defense of premise 1, Merricks again extrapolates from a Gettier case. In the actual world A, Smith forms a false belief that Jones owns an Escort: he sees Jones driving an Escort and Jones proudly displays an Escort ownership certificate to Smith, but in actuality the car Jones drives is a rental and the certificate is forged. Now consider the world W, which is identical to A except that, seconds before Smith forms his belief, Jones’ aunt dies in obscurity, thousands of miles away, and bequeaths an Escort to Jones. The result is a Gettier case: in that world Smith’s belief is accidentally true.

Merricks claims that there is no improvement in S’s “overall epistemic situation” in A versus W. The only difference is the aunt’s dying thousands of miles away, bequeathing the Escort. That difference has the effect of making Smith’s belief true. But that is, if anything, an improvement in W over A rather than vice versa; at least, in W, Smith’s belief is true, which is presumably better than its being false.

Merricks also claims that warrant supervenes on one’s overall epistemic situation in such a way that, if an agent is warranted in one scenario but not in the other, there must be some improvement in the former as compared to the latter overall epistemic situation. It is highly implausible that one can acquire a warrant that one did not have by moving to a scenario in which one is epistemically worse off. Since there is no such improvement in A as compared to W, if Smith is warranted in A he must also be warranted in W. But Smith is not warranted in W (wherein his belief is accidentally true). So he is not warranted in A.

To generalize from the example: for any putatively warranted but false belief there is a world in which the same belief is accidentally true but wherein the agent’s overall epistemic situation is no worse than it is in the actual world (and in which it is arguably better since the belief is now true).
Given that loss of warrant can only result from a worse overall epistemic situation, the agent’s belief must be warranted in the world in which it is accidentally true as well. But accidentally true beliefs are not warranted. So there are no warranted but false beliefs.
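The supervenience reasoning just generalized can be spelled out step by step. Writing W_A(b) and W_W(b) for “b is warranted in A (in W),” E(w) for the agent’s overall epistemic situation in scenario w, and ≻ for “is better than” – notation of my own, used only for illustration – the inference is:

```latex
% The supervenience step, made explicit (notation illustrative only).
\begin{align*}
\text{(i)}\quad   & \bigl(W_A(b) \wedge \neg W_W(b)\bigr) \rightarrow E(A) \succ E(W)
   && \text{warrant supervenes on the overall epistemic situation} \\
\text{(ii)}\quad  & \neg\bigl(E(A) \succ E(W)\bigr)
   && \text{no epistemic improvement in } A \text{ over } W \\
\text{(iii)}\quad & \neg W_W(b)
   && \text{accidentally true beliefs are unwarranted} \\
\therefore\quad   & \neg W_A(b)
   && \text{Smith is not warranted in } A \text{ either}
\end{align*}
```

From (i) and (ii) it follows that if Smith is warranted in A he is warranted in W; (iii) then yields the conclusion by modus tollens.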



This version of Merricks’ second argument is developed in Merricks . A stronger version of the argument is constructible that utilizes a much weaker version of premise 1, as follows:
(1) If there are any warranted yet false beliefs, then at least one could be accidentally true.
(2) No belief can be warranted and accidentally true.
(3) Therefore, there are no warranted yet false beliefs.
Defending this version of (1) only requires that, among the most plausible candidates for warranted yet false belief, a corresponding Gettier version can be constructed. The argument





The argument is intuitively forceful. Gettier cases are sometimes described as involving two pieces of luck, bad and good: the bad luck would normally lead the agent to a false belief – the Escort that Smith saw Jones driving around was a rental and the certificate is a forgery, and so Jones owns no Escort – but then is compensated by good luck which makes the belief true in an unexpected fashion – the aunt dies and bequeaths an Escort. The situations in both A and W share the bad luck, but only in W is it compensated by the good. It is odd to suggest that, nevertheless, Smith is epistemically better off in A – to the point of being warranted in A but not in W – despite the absence of good luck in A and the presence of bad luck in both.

.. The Howard-Snyders, Bad Luck, and Good Luck

Nevertheless, the Howard-Snyders respond by identifying what they propose to be an improvement in Smith’s epistemic situation in A over W, namely, that the following subjunctive conditional is satisfied:

NA (“No Accident”) Were S’s belief to be true it would not be accidentally so.
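It may help to fix NA’s truth condition on the standard possible-worlds semantics that the discussion below presupposes. The gloss, and the Stalnaker-style selection function f, are my own shorthand, not the Howard-Snyders’:

```latex
% f(p, w): the nearest world to w at which p (the content of S's belief) is true.
% NA holds at w iff, at the nearest p-world, the belief is not accidentally true.
\[
  w \models \mathrm{NA} \;\iff\; f(p, w) \models \neg\,\mathrm{Accidental}(p)
\]
```

As the next paragraphs show, the dispute turns on which world f(p, ·) selects: in W it selects W itself, while in A it can select a world in which the belief is non-accidentally true.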

Given standard possible-worlds semantics, NA fails in W: for in the nearest world to W in which the belief is true – namely, W itself – Smith’s belief is accidentally true. But A can be specified in such a way that NA is true because W is not the nearest world in which Jones owns an Escort. Jones’ aunt is actually hale and hearty (or doesn’t own an Escort, or doesn’t exist at all). So the nearest world – W* – in which Jones does own an Escort is one in which the Escort that Smith sees Jones driving about is in fact his and the certificate genuine. Smith’s belief in that world, although true, is not accidentally so (and is, presumably, knowledge). So, in the nearest

 



cannot, therefore, be blocked by claiming that there are some warranted yet false beliefs for which no Gettier version can be constructed. See Zagzebski  and Turri .
As a result, Ryan is incorrect when she says that “It seems reasonable to think that in [A] S fails to know P simply because P is false” (Ryan , ). Not only is P – “Jones owns an Escort” – false in A; “Smith sees Jones drive a rental and the certificate is forged” – the bad luck – is also true in A, and preserved in W, but not in the world in which Smith knows.
Howard-Snyder & Howard-Snyder , .
Ryan  proposes, instead, “p is not accidentally true” as the improvement in A over W. But this is equivalent to “either p is false or nonaccidentally true.” As Merricks points out, p’s being false on its own is no epistemic improvement in A over W, and therefore neither is satisfaction of this disjunction (Merricks , , fn. ). The Howard-Snyders, who agree with this critique, suggest that their subjunctive conditional does better, since it imposes a subjunctive condition on both true and false putatively warranted beliefs.





world in which the belief is true, it is not accidentally so. Since the subjunctive conditional’s holding is an epistemic improvement – it is better that, if one’s belief were true, it would not be accidentally so – there is an improvement in Smith’s overall epistemic situation in A as compared to W. So Merricks’ argument fails: the loss of warrant may be due to the fact that NA, a condition of warrant, is true in A but false in W.

There is good reason to be suspicious of this argument. If Smith’s belief in A is warranted, then the bad-luck features of the case – that the car is a rental and that the certificate is forged – do not stand in the way of that warrant. So retaining those features in W shouldn’t stand in the way of Smith’s being warranted. Nor does the fact that Jones’ aunt bequeaths an Escort in W undermine Smith’s warrant. Suppose that there is no bad luck: the Escort that Smith sees Jones driving is in fact his, and the papers he shows Smith are genuine. Suppose also that Jones’ aunt bequeaths another Escort to Jones. The fact that she does so obviously does not undermine Smith’s warrant for believing that Jones owns an Escort; Smith remains warranted in believing – indeed, knows – that Jones does own an Escort (although he doesn’t know that Jones owns two). So the bad-luck characteristics present in both A and W should make no difference to Smith’s warrant, and neither should the fact that his belief is true because Jones’ aunt bequeaths an Escort in W. And yet, the combination of these features in W is supposed to strip Smith of the warrant he has in A. It’s very hard to see why that would be.
Merricks responds to the Howard-Snyders’ proposal, in part, by insisting that the subjunctive conditional, if true, is so in virtue of underlying aspects of the epistemological situation, and so whether its satisfaction constitutes an epistemological improvement depends on the corresponding status of those underlying aspects. The Howard-Snyders respond by pointing out that the belief is accidentally true in W but not in A, which is an underlying difference between the two worlds that constitutes an improvement in A over W. But being accidentally true is itself a characteristic that is grounded in further underlying features of Smith’s epistemic situation: for the belief to be accidentally true just is for it to be true in a manner consistent with the bad luck present in both A and W. But the belief’s being true is not to blame for loss of warrant in W. So, notwithstanding NA, there is 

Merricks , , fn. .



Howard-Snyder & Howard-Snyder , .





no improvement in A over W; only the bad luck is relevant, and it is present in both.

.. NA and Epistemic Improvement

At first glance, satisfaction of NA seems to be an epistemic improvement. It might suggest a comparison between two agents, one who is, as it were, epistemically skilled and the other incompetent, so that the nearest world in which the first arrives at the truth through skill is closer for the first agent than for the second; the second could only be right in as near a world by sheer luck. But this doesn’t apply to Smith. The nearest world in which Smith knows that Jones owns an Escort is W*, wherein Jones does own the Escort that Smith sees him driving. In one scenario, that’s the nearest world in which Smith’s belief is true. As a result, NA is true. In the other, there’s an intervening world – W – wherein his belief is accidentally true. As a result, NA is false. But the distance to W* – the nearest in which Smith knows – is the same in both scenarios; Smith is no better placed to acquire knowledge in the one scenario than in the other. And it hardly counts as a disadvantage that his belief is true in some intervening worlds in the second scenario. Quite the contrary: although it is as difficult for Smith to know in the first scenario as it is in the second, at least it would be somewhat easier for him to be right in the second.

Consider an analogy. In scenario A, a skilled archer in an archery contest in dead calm weather aims at a target that, in the normal course of events, she would easily hit. A bitter competitor has, however, placed a powerful electromagnet disguised as a nearby tree, which he turns on when the archer fires, thereby pulling the steel arrowhead ten degrees off course. The nearest world in which the archer hits the target is that in which the competitor has second thoughts and so does not turn on the magnet. So, in the nearest world in which she hits the target, her doing so is a manifestation of skill.
Scenario B is identical to A except that there are occasional gusts of wind (although not when she actually releases). The competitor is utterly determined to win at all costs, so the nearest world in which he has second thoughts and so doesn’t turn on the magnet is distant. But the nearest 

Merricks suggests that W is nearer than is W* (Merricks , , fn. ). But that is not inevitable; it depends on how difficult it is to make the belief true in a manner consistent with the bad-luck feature, which will vary from case to case.





world in which there is a gust of wind that compensates for the ten-degree drift when the magnet is on is nearby. As a result, the nearest world in which the target is hit is now one in which it is hit by accident: although the magnet is on, the gust of wind conveniently brings it back to the target. The nearest world in which the archer’s hitting the target is a manifestation of her skill – because the competitor has turned off the magnet – is equidistant in both scenarios. In A, the arrow misses the target in every intervening world until we reach that world. In B, it hits the target in some intervening worlds by accident. There’s obviously no sense in which the archer is somehow better off in A than in B.

When these scenarios are not in mind and one considers only the conditional “were the archer to hit the target, her doing so would not be accidental” (versus “not a result of skill”), one is led to imagine two different archers, with differing levels of skill, aiming at the same target in the same conditions. Whether the conditional is true then depends on the distance to the nearest world in which the archer hits the target as a result of skill. It’s much further out for the less skilled archer, so the nearest world in which it is hit by accident is more likely to intervene. As a result, the conditional is true for the skilled archer and false for the less skilled.

But when it’s the very same archer in the two scenarios with the same level of skill, in conditions that are equally hostile to her hitting the target by skill – as in A and B above – this is no longer so. Whether the conditional is true depends only on the distance to the nearest world in which a convenient environmental feature intervenes to compensate for those hostile conditions. If such a world is nearer than the nearest in which the conditions are no longer hostile, that is, if anything, an improvement in the archer’s circumstances.
It is, at least, easier for the target to be hit notwithstanding those hostile conditions. 





Although it is improbable that the gust of wind would blow in the precise way required, that does not make the nearest world in which it does so a distant one. (It is improbable that one wins the lottery, and yet the nearest world in which one does so is quite nearby.)
The concept of “accident” in play here is the same as in Gettier cases. Given only the bad-luck features of S’s circumstances – she’s looking at a sheepdog/the magnet is on – S would be expected to fail to have a true belief/to hit the target. But atypical good luck – there’s a sheep behind the barn/there’s a compensating gust of wind – delivers the intended result that her belief is true/the arrow hits the target.
A disanalogy is that a wind gust could be bad news as well as good as far as hitting the target goes: even if the magnet is on, the wind could blow the arrow further off course instead of righting it. But Jones’ aunt’s bequeathing an Escort can be nothing but good news; if the bad luck is present – the car is a rental and the certificate a forgery – her doing so can only ensure that Smith’s belief is true.





In the same way, satisfaction of NA is not an epistemic improvement in Smith’s case. In the two scenarios – one in which NA is true and the other in which it is false – it’s the very same believer in circumstances equally hostile to his knowing. The difference is only a reflection of a condition – Jones’ aunt could easily bequeath an Escort – that makes it somewhat easier for his belief to be true. That is also, if anything, an improvement. Any impression to the contrary is due to the relevance of NA when considering different agents with distinct epistemic capacities. But in the two relevant scenarios here, there’s only Smith.

. Summary

The upshot is that Merricks’ supervenience argument for warrant infallibilism succeeds: for any plausible scenario in which a belief is warranted and yet false, there is a corresponding scenario in which that belief is accidentally true but which represents no epistemic improvement. So, assuming both that warrant cannot be lost without some deterioration in the epistemic situation and that accidentally true beliefs cannot be warranted, there can be no warranted false beliefs.

We’ve also seen that Merricks’ warrant transfer argument for warrant infallibilism succeeds. And we’ve noted that warrant infallibilism accounts for our expectation that, if the belief is false, there will be an explanation of how it could be false notwithstanding the presence of the same basis that delivers knowledge in the normal course of events. It also explains why we deny knowledge in Gettier cases. And, finally, it explains why we intuitively (and correctly) deny that one can know that one will lose the lottery on the basis of background information that only renders it probable that one will lose, and similarly in the other Vogel cases.

Warrant is infallible. Since it is, background information does not suffice to deliver a warrant for knowledge of Vogel propositions. For that background information, at best, renders the belief probable, and nothing in S’s surrounding circumstances makes it any more certain. But, since warrant is infallible, that’s not enough. For it would be compatible with such a background warrant that one’s belief, although warranted, is false. But there are no warranted false beliefs. In Chapter , we’ll consider whether S might instead be warranted in believing the denials of skeptical hypotheses by default.

This disanalogy is, thus, only grist for the argument that Smith’s epistemic circumstances are, if anything, improved when NA is false.


 

Denying Premise  Warrant by Entitlement

. Warrants by Entitlement

We’ve canvassed three possible sources of warrant for Q so far – by transmission from P, itself warranted on basis B; by direct appeal to B; and on the basis of background information – and found them wanting. I am aware of no other possible source of warrant to which the closure advocate might appeal. But there remains one option: Q propositions are warranted despite the fact that there is no source of such warrant because they are warranted by default; no source is required.

This has become a popular theme as a way of responding to wholesale skeptical arguments, one that originates in Wittgenstein’s characterization of “hinge” propositions. Perhaps the best-known version of the approach is Wright’s “entitlement strategy” (§.). In §. I will provide some reasons to think that what Wright means by “warrant” can’t be the Plantinga-warrant that is relevant to the present topic. I will then put that aside and consider whether the strategy succeeds against skeptical hypotheses. There are two versions of Wright’s strategy to consider: strategic entitlement (§.) and entitlement of cognitive project (§.). As we’ll see, neither strategy delivers the needed warrant for Q propositions of Dretske





As with the three previous putative sources of warrant, appeal to warrant by default is also susceptible to the buck-passing argument. Such warrants are fallible: even if S has a default warrant against the hypothesis that she is a BIV, it is compatible with her possession of that warrant that she, nevertheless, is a BIV. WP then requires that she have a warrant against the skeptical scenario “I am a BIV with a default warrant against the hypothesis that I am a BIV.” Whatever that warrant might be, if it is also fallible then S needs a warrant against the skeptical scenario in which she has that warrant and the previous skeptical hypothesis is true. And so on. However, as with the three previous putative sources of warrant, I’m putting this aside for the purpose of examining warrant by default for the denials of skeptical hypotheses.
Wittgenstein . For implementations of the default warrant strategy see Wright  and , Pritchard , , and , and Coliva  and .



Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:48:41, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.009

Denying Premise : Warrant by Entitlement



cases. I will briefly consider Annalisa Coliva’s and Duncan Pritchard’s versions of the entitlement strategy in the concluding section (§.).

. Entitlement and Skepticism

Wright’s first response to the skeptic is concessive: the skeptic is correct that we don’t have and can’t acquire evidence (either empirical or a priori) that we can mobilize against the denial of such wholesale skeptical hypotheses as “I am a BIV.” Any putative evidence to which we might appeal would of necessity be delivered by a project of inquiry that succeeds only if the skeptical hypothesis is false. (Appeal to my having hands, for example, assumes that my perceptual system is operating correctly, which in turn requires that I’m not a BIV.) So that evidence can’t, Wright concedes, deliver an evidential warrant for the claim that the hypothesis is false. The skeptic then draws the conclusion that we can have no warrant for the denial of that hypothesis.

Wright points out, however, that there is a lacuna in the skeptic’s argument: she is assuming that the fact that no evidence for that denial can be provided implies that no warrant is possible, which in turn requires that the only available warrants are those delivered by appeal to (empirical or a priori) evidence. The skeptic’s conclusion is, therefore, avoidable by countenancing a kind of warrant that does not require evidence:

Suppose there is a type of rational warrant which one does not have to do any specific evidential work to earn: better, a type of rational warrant whose possession does not require the existence of evidence – in the broadest sense, encompassing both a priori and empirical considerations – for the truth of the warranted proposition. Call it entitlement. If I am entitled to accept P, then my doing so is beyond rational reproach even though I can point to no cognitive accomplishment in my life, whether empirical or a priori, inferential or noninferential, whose upshot could reasonably be contended to be that I had come to know that P, or had succeeded in getting evidence justifying P.

If I have a warrant by entitlement for the denial of the skeptical hypothesis, then the requirements imposed by front-loading are met and I can acquire an empirical warrant for the mundane propositions that the skeptic denies that I can acquire. 



Safety theories offer an alternative default-warrant approach to Q propositions: warrant for P requires that Q propositions are far-safe and, since a belief is far-safe solely in virtue of its modal profile, S’s belief in Q is automatically warranted. We’ve already explored that approach in §..
Wright , –.

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:48:41, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.009



Against Knowledge Closure

As Wright recognizes, this view implies failure of closure over warrants acquired by evidence: I can be evidentially warranted in believing that I have hands (thanks, in part, to an entitlement to the denial of “I am a handless BIV”) and yet not be evidentially warranted in believing the denial of the skeptical hypothesis, which denial follows from my having hands, even if I recognize that it so follows. However, Wright insists, it does preserve warrant closure overall, since – so long as I am warranted in believing that I have hands, which requires a prior anti-skeptical warrant by entitlement – I will have a warrant for “I have hands” only when I have a warrant for “I am not a handless BIV.” Since WC is not limited to evidentially grounded warrants, it looks as though this strategy can save WC so long as the Q propositions of Dretske cases are candidates for warrant by entitlement.

. The Meaning of “Warrant” It is questionable, however, whether Wright’s entitlement strategy is relevant to WC. While WC doesn’t require that S’s warrant for Q be evidentially grounded, it must, nonetheless, be a Plantinga-warrant (“P-warrant”), being the difference between knowledge and true belief. For it’s only the closure of P-warrants that is relevant to the status of knowledge closure. But, as indicated in §., it is far from obvious that what Wright means by “warrant” is, or suffices for, P-warrant. As per the above quotation, my being warranted, for Wright, appears to consist merely in my being “beyond rational reproach” for accepting the relevant proposition. But it is far from clear that being beyond rational reproach suffices even for justification. It certainly does not suffice for P-warrant; agents in Gettier cases are presumably beyond rational reproach, but they are not P-warranted. The quotation above itself suggests that Wright recognizes this, since the fact that “I can point to no cognitive accomplishment in my life . . . whose upshot could reasonably be contended to be that I had come to know that P” is, nevertheless, compatible with my being entitlementwarranted. So, even if there are entitlements in Wright’s sense, there being so does not ensure that WC is true.  

See, however, fn. . In his  paper, Wright also says that “[b]y a ‘non-evidential’ warrant, I have in mind grounds, or reasons, to accept a proposition that consist neither in the possession of evidence for its truth, nor in the occurrence of any kind of cognitive achievement . . . which would normally be regarded as apt to


Denying Premise : Warrant by Entitlement



Wright is, moreover, insistent that the attitude warranted by an entitlement is not a belief. The relevant attitude is instead “trust,” where trusting P is not merely acting as one would if one believed P, but is also incompatible with doubt with respect to P. But how is trust differentiated from belief if it is to be incompatible with doubt? In virtue, Wright suggests, of the fact that belief is essentially rationally controlled by evidence. Since entitlements are precisely warrants that are not acquired by appeal to evidence, they cannot license belief:

    Very roughly, if we think of “belief,” in its core uses, as denoting a normatively constrained and normatively constraining state – a state identified by its “in-” and “out-rules,” as it were: something essentially rationally controlled by evidence and essentially rationally committal to thought and action – then the general idea I am canvassing is that it will be necessary, in trying to make something of the notion of rational entitlement, to think in terms of attitudinal states which share much of the second ingredient – the element and style of commitments involved – with belief, but not the first.

But then entitlements can’t be P-warrants. P-warrant precisely consists in the difference between true belief and knowledge (which requires belief); nothing can fill that role that is essentially incapable of being attributed to a belief.

One might counter that we should expand the candidate attitudes involved in knowledge to include trust: one can know P without believing P so long as one trusts P, P is true, and one is entitled to that trust. Alternatively, one might claim that Wright’s conception of belief is unduly restrictive. Although beliefs have the “out-rules” he ascribes to them, they do not require the “in-rules”: one can believe that P even though that belief is not controlled by evidence. Then entitlement can, in some cases, constitute the difference between knowledge and true belief as P-warrant requires.

Nevertheless, warrant, as Wright describes it, does not suffice for P-warrant. So having a warrant by entitlement in Wright’s sense for the denial of skeptical hypotheses does not deliver what the (P-warrant) closure advocate needs. I will, nevertheless, treat the entitlement strategy as an attempt to characterize a default P-warrant for the rest of this chapter in order to ensure its relevance for the knowledge closure debate, whether or not this is Wright’s own intention.



ground knowledge or justified belief that P” (Wright , ). Since it is precisely a P-warrant that grounds knowledge, this again suggests that Wright’s warrant is not P-warrant (or even justification). Wright , ; emphasis in the original.


.

Strategic Entitlement

It remains to be seen, however, whether Q propositions are candidates for warrants by entitlement. This is difficult to evaluate since Wright proposes four different possible forms of entitlement, proposes them somewhat tentatively, and each is unclear in certain respects. Only two of the four forms of entitlement are (as far as I can determine) relevant to the present issue: strategic entitlement and entitlement of cognitive project.

The paradigm for strategic entitlement is Reichenbach’s “vindication” of induction. Reliance on induction presupposes commitment to the principle of the uniformity of nature, for which we are, nevertheless, in no position to acquire evidence (since, as Hume pointed out, the application of any such evidence would presuppose that very principle). We are, the claim goes, nevertheless entitled to trust that principle. For either it’s true or it isn’t. If it isn’t true then there is no method to acquire reliable expectations about the future, which are necessary in order for us to lead “secure, let alone happy and valuable lives.” If it is true then induction will be (let’s assume) the most reliable method. So trusting it is a “dominant” strategy: we will be better off if we rely on it when it is true than if we don’t rely on it when it is true, and we will be no worse off by relying on it when it is false than by not relying on it when it is false.

Wright suggests that this strategy could also be applied to general features of our cognitive interaction with the world, such as the reliability of our perceptual system as a guide to the nature of the surrounding material world. For we need to navigate our way around that world, and do so by relying on our perceptual faculties. If they are untrustworthy, then, having no other faculties to rely on, we will do badly whether or not we rely on them. If they are trustworthy, then we will also do badly if we do not rely on them and will do well if we do.
So trusting them is a dominant strategy, and therefore an entitlement.

.. Strategic Entitlement and Wholesale Skeptical Hypotheses

It is unintuitive that a strategic entitlement can deliver P-warrants for the denials of wholesale skeptical hypotheses. However fundamental to our intellectual lives it might be that they are false, that they are so is a contingent matter. The claim that we can be in a position to know contingent propositions merely by entitlement is difficult to accept. It is all the more so if the entitlement is due solely to the practical consequences of trusting versus failing to trust that they are false.

Even putting that aside, it is doubtful that the strategic reasoning applied to induction and perceptual reliability also applies to Q propositions that constitute the denial of wholesale skeptical hypotheses. Trust is a “pro-attitude” toward a proposition, indeed the same pro-attitude involved in belief (but without the “in-rule” of being regulated by evidence). So failure to trust involves not having that pro-attitude. Presumably, then, ambivalence – having no opinion either way – will count, as will the con-attitudes from mild suspicion to outright denial.

Suppose that I fail to trust that I am not a BIV in one of these ways. The strategic entitlement approach requires that, if I’m not a BIV, the consequences of my failing to so trust are worse than they would be if I did so trust. But there are arguably no differences whatsoever in the consequences resulting from my trusting versus failing to trust that I am not a BIV. Suppose I believe that I am a BIV when I’m not. Of course, if that belief leads me to refrain from eating (because I think it’s not real food) or to jump out of windows (because I think I don’t really fall), then nasty consequences will indeed ensue. But it’s far from obvious that these are rational responses to that conviction. Wholesale skeptical hypotheses are precisely designed so that their being true has no impact on the course of my subjective experience. Recognizing this, I know that the world would seem to respond to my behavior (or “behavior,” since BIVs do not physically behave) if I were a BIV in precisely the way it would if I were not. So why would I do (or “do”) anything different?

Wright , .
Wright presents this application in his , .
After all, behavior (not eating, jumping out of windows) that would lead to harm to myself if I were not a BIV will still lead to harm to myself – painful experiences, at least – if I were a BIV, since the program generating my experiences perfectly emulates the experiential upshot resulting from my behavior in a non-BIV world.

Compare this with the Reichenbachian argument. Suppose I fail to trust that nature is uniform – suppose, indeed, that I believe that it isn’t. Then I will believe that there are simply no reliable ways to anticipate the future; I then might as well jump out the window since, I believe, no action of mine can secure my safety more effectively than any other. If nature isn’t uniform then I’m right. But if it is uniform, I will have led a short and nasty life when I could have avoided doing so. So trusting that nature is uniform, at least to the extent of acting as though it is, is a dominant strategy.

But believing that I am a BIV doesn’t license any such risky behavior. I recognize that the experiential consequences resulting from my “behavior” will correspond precisely to those that would result from my behavior if I’m not a BIV. So I have every reason to behave precisely as I would if I believed that I’m not a BIV. So, if I am in fact not a BIV, then the same experiential consequences of my actions will ensue if I trust that I’m not a BIV as will ensue if I don’t trust that I’m not a BIV; and the same goes if I am a BIV. Trusting that I am not a BIV is not a dominant strategy.

One might point out that we take our behavior to generate many consequences that don’t impinge on the course of our own experience; much of a parent’s behavior, for example, is directed toward the well-being of his children rather than himself. If I believe that I am a BIV, then I believe that I don’t really have children whose well-being is affected by my actions. I might, then, behave quite differently; the result might be a far more egoistic lifestyle. Suppose that we count among the relevant benefits any positive altruistic effects of my behavior – those that impinge on others – as well as those pleasurable consequences that I experience. If I believe that I am a BIV when I’m not, then I will lose out on the altruistic benefits. However, if I trust that I’m not a BIV when I am, then I will have invested much time and energy in the attempt to improve the well-being of people who don’t exist. Altruism presumably generates better consequences overall than does egoism when I am in fact not a BIV. But altruism arguably generates worse consequences than egoism when I am in fact a BIV: misguided efforts for the benefit of people who do not exist are a waste of time that could have been devoted to my own interests. Trust that I’m not a BIV is still not a dominant strategy.

Presumably, any differences between trusting that I am not a BIV and failing to so trust will be all the more evident when the latter is realized by my actually believing that I am a BIV.
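The contrast between the Reichenbachian argument and the BIV case can be set out in decision matrices. The payoff entries below are my own illustrative glosses on the reasoning just given, not Wright’s:

```latex
% Uniformity of nature: trusting (weakly) dominates not trusting.
\[
\begin{array}{l|cc}
 & \text{nature uniform} & \text{nature haphazard} \\ \hline
\text{trust} & \text{reliable expectations (good)} & \text{no reliable method (bad)} \\
\text{don't trust} & \text{short, nasty life (bad)} & \text{no reliable method (bad)}
\end{array}
\]

% BIV hypothesis: every cell is experientially the same, so nothing dominates.
\[
\begin{array}{l|cc}
 & \lnot\text{BIV} & \text{BIV} \\ \hline
\text{trust } \lnot\text{BIV} & \text{same experiences} & \text{same experiences} \\
\text{don't trust } \lnot\text{BIV} & \text{same experiences} & \text{same experiences}
\end{array}
\]
```

Because the rows of the second matrix are identical in every state, trusting that one is not a BIV cannot emerge as a dominant strategy in the way that trusting the uniformity of nature does in the first.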
But if the strategy succeeds when applied to trust in the general reliability of our perceptual system, how could it not succeed when applied to the denials of wholesale skeptical hypotheses? After all, those hypotheses could not be true if that system is reliable.

There are, however, different ways in which that system might be unreliable. It might be consequentially unreliable: like a pilot who places trust in a heads-up display that is in fact seriously malfunctioning to report direction, altitude, etc., the perceptual system might be unreliable in such a way that its being so generates consequences that are detectably different from (and, relevantly for the purposes of the strategic entitlement method, much less desirable than) those resulting from its being reliable. But when a wholesale skeptical hypothesis is true the perceptual system is inconsequentially unreliable: there are no discernible differences for the agent resulting from its being unreliable versus its being reliable. For that reason, the strategic entitlement approach doesn’t apply: the agent has no rational grounds for a change in behavior in virtue of the expectation of a difference in discernible consequences generated by that behavior. So wholesale skeptical hypotheses represent a manner in which the perceptual system is unreliable that is not amenable to that approach.

In his  paper Wright indicates that he initially discarded the strategic entitlement approach, although not in light of the critique above (which does not apply to the Reichenbachian defense of trust in the uniformity of nature). Rather, he concedes that the attitude underwritten by the strategy is, at best, that of treating the proposition that nature is uniform, for example, as an assumption or working hypothesis. But then the results of particular instances of inductive reasoning will be viewed similarly: we only behave as if they’re true, since doing so is the dominant strategy. But so behaving is consistent with being entirely open-minded about whether those results are true: I can behave as if P is true without believing that it is.

Of course, an egoistic approach won’t lead to, for example, rampant theft, since I’m well aware that I will still suffer the same experiential consequences: BIV-jail will be as unpleasant as the real thing.
The entitlement strategy, however, is supposed to underwrite trust in those results; and trust, as Wright understands it, is incompatible with open-mindedness.

But he now thinks that rejection was premature: “all that is required is that a state of trust be appropriately written into the decision-theoretic matrices.” Instead of evaluating whether it is rational to engage in inductive practice, we ask instead whether it is rational to trust that nature is uniform. If nature is indeed uniform and we trust it, then we will acquire “many true and useful beliefs.” If nature is uniform and we don’t trust it then either we won’t acquire many true and useful beliefs or we will acquire many true and useful beliefs but “at the cost of the rational incoherence of combining them with lack of trust in the methods whereby they were acquired,” since treating “nature is uniform” as no more than a working hypothesis concerning which one is open-minded doesn’t underwrite belief in the results of inductive reasoning. Finally, if nature isn’t uniform, then we will acquire few true and useful beliefs, whether or not we trust that nature is uniform. Since it is better to have many true and useful beliefs than it is to have either few of them or many but with rational incoherence, trust – and not merely an instrumental attitude – is a dominant strategy.

We’ve seen that, as applied to the BIV hypothesis, the usefulness of the beliefs is irrelevant. Since the perceptual system is inconsequentially unreliable when a skeptical hypothesis is true, trust in the hypothesis’s denial is no more nor less useful than lack of trust. However, perhaps the argument can run solely by appeal to the truth of the resulting beliefs. Truth is a kind of benefit too. So, if I acquire more true beliefs by trusting that I’m not a BIV than I would by not so trusting when I’m actually not a BIV, and no fewer true beliefs by trusting that I’m not a BIV than I would by not so trusting when I actually am a BIV, then trust is a dominant strategy.

But this reasoning doesn’t succeed. It’s true that if I’m not a BIV then it will be better to trust that I’m not, thereby acquiring many true beliefs, than it would be to fail to trust that I’m not a BIV, thereby acquiring few true beliefs. But, if I am a BIV, then it will presumably be worse to trust that I’m not a BIV, thereby acquiring few true beliefs and many false beliefs, than it will be to not trust that I’m not a BIV, thereby acquiring few true beliefs but also not acquiring many false ones. In a calculation over the epistemic goods, the cost of false belief must surely be factored in along with the benefit of true belief. But, when it is, I would be better advised to abandon trust in my not being a BIV when I am a BIV; I will thereby avoid a dramatically false vision of the world and my place in it. So, even if we consider only the epistemic benefits and harms of truth and falsity of belief, trust that I’m not a BIV is still not a dominant strategy. It doesn’t deliver the greatest benefit whether or not I am a BIV.

Pritchard  presses this objection.
Wright , .
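The epistemic-goods argument of the last two paragraphs can be checked mechanically. The sketch below uses illustrative numbers of my own (many true beliefs = +10, many false beliefs = −10, few beliefs either way = 0) to test whether trusting that one is not a BIV weakly dominates not trusting, first when only true beliefs are counted, then when false beliefs are counted as costs:

```python
# A minimal weak-dominance check over illustrative epistemic payoffs.
# The payoff numbers are stand-ins, not drawn from the text.

def weakly_dominates(payoffs, a, b, states):
    """Action a weakly dominates b: at least as good in every state,
    strictly better in at least one."""
    at_least_as_good = all(payoffs[(a, s)] >= payoffs[(b, s)] for s in states)
    strictly_better = any(payoffs[(a, s)] > payoffs[(b, s)] for s in states)
    return at_least_as_good and strictly_better

states = ["not a BIV", "BIV"]

# Counting only true beliefs as a benefit (ignoring the cost of false belief):
truth_only = {
    ("trust", "not a BIV"): 10, ("trust", "BIV"): 0,
    ("no trust", "not a BIV"): 0, ("no trust", "BIV"): 0,
}
print(weakly_dominates(truth_only, "trust", "no trust", states))  # True

# Counting the many false beliefs acquired when one is in fact a BIV as a cost:
with_falsity_cost = {
    ("trust", "not a BIV"): 10, ("trust", "BIV"): -10,
    ("no trust", "not a BIV"): 0, ("no trust", "BIV"): 0,
}
print(weakly_dominates(with_falsity_cost, "trust", "no trust", states))  # False
```

Once the BIV column carries the cost of many false beliefs, trust no longer dominates, which is the point of the objection: the dominance argument goes through only if false belief is left out of the calculation.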





Wright’s decision-theoretic matrix on p.  of his  paper references the true (and useful) beliefs resulting from trust in the uniformity of nature when nature is uniform, but not the false beliefs resulting from trust in the uniformity of nature when nature is haphazard. The same applies to induction. If nature is haphazard then indeed no beliefs will be useful whether or not one trusts that nature is uniform. But if it is haphazard and one does so trust, one will acquire many false beliefs about the future. Whereas, if one does not so trust – either because one neither trusts nor employs any method at all or because one employs induction and its products as merely working hypotheses concerning which one is open-minded – then one will avoid many false beliefs about the future. Application of the strategy to induction only succeeds with respect to the utility of belief, irrespective of its truth. But, we have seen, that application fails when it comes to skeptical hypotheses. And it serves, in the case of induction, only to underwrite treating its products as working hypotheses; if Wright’s critique of his earlier self is correct, that will still not deliver an entitlement to trust.


.. Strategic Entitlement and Piecemeal Skeptical Hypotheses

As unintuitive as it is to think that we can be in a position to know the denials of wholesale skeptical hypotheses merely in virtue of a strategic entitlement to trust those denials, it is dramatically more so with respect to the denials of piecemeal skeptical hypotheses. The suggestion that we could come to know that it’s not a disguised mule, that one’s car wasn’t stolen, that the gas gauge’s needle isn’t stuck, that the restaurant didn’t burn down, and so on merely in virtue of the practical benefits of trusting that these are true looks desperate as an attempt to preserve warrant closure. While the presumption that wholesale skeptical hypotheses are false does arguably underwrite much of our intellectual lives, the same can hardly be said of the denials of piecemeal hypotheses. And, while it is difficult to explain how we could have evidence for the denials of wholesale hypotheses, there’s no such difficulty with respect to piecemeal hypotheses: one need only conduct a DNA test, look in the parking space, insert a dipstick in the gas tank, walk to the restaurant, and so on. The suggestion that none of this is needed in order to come to know the denials of these hypotheses is downright bizarre.

Putting this aside once again, piecemeal skeptical hypotheses are also not amenable to the strategic entitlement approach. Consider, for example, G: “the tank is empty but the gauge is broken and reading ¼ tank,” which is a piecemeal skeptical hypothesis when consulting the gas gauge to determine how much gas is left in the tank. Do we have a strategic entitlement to ~G? If so, trusting ~G must be a dominant strategy: we are no worse off trusting it when it’s false than not trusting it (and, in particular, not trusting it when it’s false). But trusting it when it’s false can have disastrous consequences: skipping the next station, I might end up stranded in the middle of nowhere with a non-functional car.
Whereas, if I don’t trust it and it is false, then I will presumably gas up at the next station, thereby avoiding being stranded. Trusting ~G is clearly not a dominant strategy.

In general, trusting the denial of a piecemeal skeptical hypothesis will very typically generate better outcomes than not trusting it when the denial is true; it’s hard to see how reasoning from a true proposition (namely, that the hypothesis is false) could generate worse outcomes than abstaining from reasoning from a true proposition. Assuming so, trusting the denial of a piecemeal skeptical hypothesis will be dominant just in case trust in that denial when it is false (and so the hypothesis true) is no worse than not trusting that denial when it is false (and so the hypothesis true).

This will vary from case to case. Suppose, for example, that you happen to have a full gas can in the car. Suppose also that ~G is false: the tank is in fact empty, although the gauge indicates a quarter-tank left. Now suppose that you trust the gauge. You then give the next gas station a miss, run out of gas, fill the tank using the can, and drive on. Suppose instead that you don’t trust the gauge: you fill up at the next station and drive on. The disruption of your drive in either case might well be about the same: misplaced trust generates no worse consequences than well-placed distrust.

But the circumstances won’t be this conveniently arranged with respect to the vast majority of piecemeal skeptical hypotheses. In general, circumstances in which it would be no worse to play the trusting game and lose than it would be to not play at all are presumably rare; typically, while taking the plunge promises greater benefits than not doing so, it also threatens greater harm. In many cases, indeed, there will be no consequences at all, beyond merely having a true versus false belief (versus no belief at all). In Zebra, for example, presumably nothing practically significant rides on whether S’s belief that it is a zebra is correct.

As per §., I don’t suggest that Wright claims this. Quite the contrary, he seems to suggest otherwise (as per the quotation in that section). However, the issue at hand is whether entitlement can deliver a P-warrant, whether or not that is Wright’s claim. Wright has, moreover, repeatedly referenced Dretske’s arguments against closure, and Dretske’s zebra case, in numerous papers, and suggested that closure can be preserved in such cases even while transmission fails. But that is only relevant to Dretske’s arguments if the warrant at issue is P-warrant, since Dretske’s arguments are specifically directed against knowledge closure.
So the only consequence is that, if S trusts that it isn’t a disguised mule when it is, she will falsely believe that it is a zebra, and if she doesn’t trust that it isn’t a disguised mule – and so discounts its appearance as irrelevant – when it is a disguised mule, she will not falsely believe that it is a zebra (nor truly believe that it isn’t). As when considering the epistemic consequences of trust in the BIV hypothesis, the denial of piecemeal hypotheses won’t emerge as entitlements. While having a true opinion is presumably better than no opinion, a false opinion is presumably worse. So trusting that it is not a disguised mule – and so acquiring the belief that it is a zebra – is, again, not a dominant strategy.
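The gas-gauge case can be given the same mechanical treatment as the BIV case. The numbers below are again my own illustrative stand-ins for the outcomes described above (being stranded, an unneeded stop, an uneventful drive), for the ordinary situation in which no spare gas can is on hand:

```python
# Weak-dominance check for trusting ~G ("the gauge is not broken and reading
# ¼ tank when the tank is empty"), with hypothetical payoffs.

def weakly_dominates(payoffs, a, b, states):
    """Action a weakly dominates b: at least as good in every state,
    strictly better in at least one."""
    at_least_as_good = all(payoffs[(a, s)] >= payoffs[(b, s)] for s in states)
    strictly_better = any(payoffs[(a, s)] > payoffs[(b, s)] for s in states)
    return at_least_as_good and strictly_better

states = ["~G true", "~G false"]
payoffs = {
    ("trust ~G", "~G true"): 0,         # skip the station, drive on uneventfully
    ("trust ~G", "~G false"): -100,     # stranded with a non-functional car
    ("don't trust ~G", "~G true"): -1,  # an unneeded stop at the station
    ("don't trust ~G", "~G false"): -1, # gas up at the station, drive on
}
print(weakly_dominates(payoffs, "trust ~G", "don't trust ~G", states))  # False
```

Trusting ~G wins only when ~G is true; when it is false, the downside dwarfs the small cost of an unneeded stop, so no dominance argument is available for the denials of piecemeal hypotheses either.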

 

This is not impossible, however; the true proposition could be misleading.
Of course, you’ll still have to refill the gas can. But you can do so at the next fill-up anyway, and you will have saved the time involved in getting to the gas station, filling up, paying, etc.




One might respond that there is a significant price to pay for failing to trust the denials of piecemeal skeptical hypotheses, even if trusting them is not a dominant strategy. There are innumerable such hypotheses: the oil gauge, speedometer, tire pressure gauge, air-bag indicator, etc. might also be malfunctioning, for example. In general, one couldn’t get on with any ordinary activity without trust that disaster won’t strike: one can’t eat for fear of being poisoned, drive for fear of car accidents, shower for fear of scalding, plug in the toaster for fear of electrocution, and so on. Failure to trust the denial of any piecemeal hypothesis will generate paralysis, resulting in a miserable, and short, existence. We have, moreover, background information concerning the likelihood that disaster does strike; gas gauges are typically reliable, poisonings are rare, and so on. Perhaps, as per Chapter , that fact doesn’t generate a warrant on its own. But perhaps it does suffice to generate a strategic entitlement to trust, even if doing so is not a dominant strategy.

As we’ll see in Chapter , I have some sympathy for this as relevant to the legitimacy of assuming that disaster will not strike. But it is quite another matter to view it as delivering a P-warrant for the denial of each piecemeal skeptical hypothesis. If it did, and if my belief is true, then I know that disaster won’t strike. But that is surely too optimistic. If I know that I won’t get into a car accident I need not buckle up; if I know that I won’t get salmonella then I needn’t bother to make sure that the chicken is sufficiently grilled; if I know that my car won’t be stolen I needn’t lock the doors; and so on. Aside from the fact that it is highly unintuitive to view strategic entitlement as delivering knowledge of the denials of piecemeal hypotheses, the sanguinity that this would engender would be disastrous. An excess of optimism can generate harm just as easily as can an excess of pessimism.
Moreover, the most that this can deliver is warranted trust in those piecemeal hypotheses for which failure to trust would stymie our ordinary activities (and for which we have background evidence to the effect that they are unlikely). But that is not so in many cases. As pointed out above, nothing of practical significance hinges on whether S trusts or fails to trust that the animal is a disguised mule. The same goes for much pedestrian knowledge; we take ourselves to know many things, the knowing of which has little to no practical bearing on our lives. No practical paralysis results from our failing to trust that skeptical hypotheses relative to those propositions are false. 

I’m grateful to a blind reviewer for suggesting this response.


Perhaps we then have a strategic entitlement for some, but not all, of the denials of piecemeal hypotheses. But that does not suffice for the closure advocate’s purposes; she needs warrants for the denials of all skeptical hypotheses, both wholesale and piecemeal.

In sum, the strategic entitlement approach won’t generate entitlements to trust the denials of wholesale or piecemeal skeptical hypotheses as the closure advocate requires, at least not if that entitlement is understood to deliver a P-warrant.

. Entitlement of Cognitive Project Another entitlement strategy that Wright offers that might be pertinent is entitlement of cognitive project. A cognitive project is identified by a question (paradigmatically of the form “whether R”) and a method employed to answer it. Such projects have presuppositions, where “P is a presupposition of a particular cognitive project if to doubt P (in advance) would rationally commit one to doubting the significance or competence of the project.” Such presuppositions include “the proper functioning of the relevant cognitive capacities, the suitability of the occasion and circumstances for their effective function, and indeed the integrity of the very concepts involved in the formulation of the issue in question.” Wright points out that, in the context of any particular project, the agent will very typically not have conducted any investigation to assure herself that these presuppositions are satisfied. More importantly, if she were to do so, any such investigation would itself have its own presuppositions, as would an investigation of those presuppositions, and so on. Skepticism might seem to result: it’s impossible to investigate every proposition directly or indirectly presupposed by a particular cognitive project; but, since the viability of the project depends on those presuppositions’ being satisfied, we are in no position to treat that project as delivering a warrant for its outcome. Wright suggests instead that, precisely because a demand that every presupposition of a project be independently established is incapable of being satisfied, it can’t reasonably be taken to constitute a condition on a cognitive project’s delivering warrant that they be so established. “If there is no such thing as a process of warrant acquisition for each of whose  

Wright , . In Wright  he substitutes the title “authenticity condition” for “presupposition.” Nothing hinges on the terminology; I will continue to use the original term. Wright , .

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:48:41, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.009

Denying Premise : Warrant by Entitlement



specific presuppositions warrant has already been earned, it should not be reckoned to be part of the proper concept of an acquired warrant that it somehow aspire to this – incoherent – ideal.” This is not to say that independent support for some of a project’s presuppositions is never required. But there can’t be a blanket obligation to provide that support for every presupposition. At some point, trust in some presuppositions must be legitimate, despite the absence of independent evidential support for them, and the project for which they are presuppositions must count as warrant-delivering. We, therefore, enjoy an entitlement to trust those presuppositions.

.. Entitlement of Cognitive Project and Wholesale Skeptical Hypotheses

It’s not obvious why that ideal’s being incoherent implies that the concept of an acquired warrant does not aspire to it. There is no guarantee, after all, that our concepts are inevitably in good order, so that their satisfaction does not impose unrealizable demands. Of course, one can stipulatively define “warrant” as one likes, and so in such a way that its realization does not impose such demands. And Wright’s use of the concept might be fluid enough to incorporate such a stipulative aspect. But the present issue concerns P-warrant, namely, that which makes for the difference between true belief and knowledge. The skeptic could not unreasonably insist that there is no guarantee that our concept of knowledge – and so of P-warrant – is free of such recognizably unsatisfiable demands. Indeed, the suggestion that it isn’t is nothing new in the history of skeptical thought. And there is presumably much less room here to stipulate one’s way out without simply changing topic from that of knowledge, as that term is ordinarily applied, to something else entirely. But put that aside and assume that Wright has shown that there are entitlements of cognitive project. He is well aware, however, that we can’t reasonably take ourselves to be entitled to every presupposition of every project. So we need a principled division between those presuppositions to which we are entitled and those that require evidential support. Wright offers the following:

[T]he relevant kind of entitlement – an entitlement of cognitive project – may be proposed to be any presupposition of a cognitive project meeting the following additional two conditions: (i) [w]e have no sufficient reason to

Wright , .




believe that P is untrue [and] (ii) [t]he attempt to justify P would involve further presuppositions in turn of no more secure a prior standing . . . and so on without limit; so that someone pursuing the relevant enquiry who accepted that there is nevertheless an onus to justify P would implicitly undertake a commitment to an infinite regress of justificatory projects, each concerned to vindicate the presuppositions of its predecessor.

The application of clause (ii) to wholesale skepticism is reasonably clear. That I am not dreaming, for example, is a presupposition of any project involving perception; and any project aimed at delivering empirical evidence in support of that presupposition will involve that same presupposition over again, since initial doubt concerning it will undermine appeal to any such evidence. Since no presupposition can be more secure than itself, clause (ii) is automatically satisfied. This is a general feature of wholesale skeptical hypotheses: their denial is a presupposition of the range of propositions (very typically, all “external world” empirical propositions) they target, and any project aimed at establishing their denial will have among its presuppositions the denial of that very hypothesis. For that reason, they satisfy clause (ii) of Wright’s entitlement template, and so we are entitled to trust them without evidence.

.. Entitlement of Cognitive Project and Piecemeal Skeptical Hypotheses

But the application of clause (ii) to Q propositions that encode the denials of piecemeal skeptical hypotheses is much less straightforward. Unlike the denials of wholesale skeptical hypotheses, it is not inevitable that Q itself is a presupposition of any project that could be mobilized to investigate it; so clause (ii) is not automatically satisfied. S could, for example, investigate whether the animal is a disguised mule by application of a DNA test. Its not being a



Wright , –. He also restricts such entitlements to presuppositions of projects that are “indispensable, or anyway sufficiently valuable to us – in particular that . . . [their] failure would at least be no worse than the costs of not executing [them], and [their] success would be better” (Wright , ). But these are precisely those conditions that determine the applicability of the strategic entitlement approach. Presuppositions that satisfy this condition would then count as a subset of strategic entitlements generally. But then Q propositions will no more count as presuppositions by the entitlement-of-cognitive-project approach than by the strategic entitlement approach since, we’ve seen, they don’t satisfy that condition. But it’s not clear why Wright imposes this condition. If it really is no part of our concept of an acquired warrant that its every presupposition must be evidentially supported – if all inquiry is “local” – then this will be so for projects that have no practical consequences whatsoever as much as for those that do have such consequences. The strategic dimension seems beside the point. They also satisfy clause (i): we certainly don’t have evidence (let alone “sufficient reason”) that we are BIVs.


Denying Premise : Warrant by Entitlement



disguised mule is not a presupposition of that project: if I initially doubt that it isn’t a disguised mule – I suspect that it might be a disguised mule – the DNA test can mitigate that doubt. So which such Q propositions count as entitlements depends crucially on what it is to be “of no more secure a prior standing.” Unfortunately, Wright provides little clarification of the kind of security he has in mind. Here is his most extensive discussion of the issue:

[S]uppose I undertake a project is [sic] to predict the winners in tomorrow’s card at Newmarket by rolling a pair of dice for each runner in the afternoon’s races and seeing which get the highest scores. Clearly it is a presupposition of this project that the method in question has some effectiveness. What prevents that presupposition becoming an entitlement? . . . [I]t would be straightforward to gather no end of empirical evidence to discredit the dice-rolling method. And this would not be possible if the various presuppositions of such evidence-gathering in turn were of “no more secure a prior standing” than the dice-rolling method. If they were of no more secure a prior standing, we’d have to admit to a standoff and suspend judgement. So the very discreditability of the method entails that clause (ii) is unsatisfied.

But it seems straightforward to gather empirical evidence for the disguised-mule hypothesis (and so against its denial): DNA analysis would surely do the trick. The same goes for the other standard cases: we can find out how much gas is in the tank by using a dipstick and comparing it against the gauge, drive over to see if the restaurant burned down, ask the reporter whether the newspaper report described the game she saw, and so on. However, in the examples as described – and in innumerable real-life cases – we’ve done nothing of the sort. So, although they are presuppositions for which we seem to have “no specific, earned evidence” – and so for which warrant by entitlement is required – they fail to satisfy clause (ii) and so are not in fact entitlements; they are, therefore, not warranted. Wright might respond by denying that the presuppositions of those projects really are more secure. He recognizes that “if one chose,



Admittedly, S could be so antecedently convinced, for whatever reason, that it is a disguised mule that she would discount even DNA evidence indicating that it is a zebra, and so discount the DNA project for that reason. In his  – but not in his  – paper, Wright appends “irrespective of the outcome” to the definition of a presupposition (for otherwise, with that antecedent conviction, “it’s a disguised mule” would count as its own presupposition). Initial doubt concerning “it’s not a disguised mule” does not undermine the DNA project irrespective of its outcome: if the outcome were that it is indeed a mule (and so a disguised one) she need not discount the DNA project that confirms her initial suspicion. So “it’s not a disguised mule” is not a presupposition so defined. Wright , –.




Against Knowledge Closure

one could investigate (at least some of) the presuppositions involved in a particular case.” I might go and have my eyesight checked, for example. But the point is that in proceeding to such an investigation, one would then be forced to make further presuppositions of the same general kinds (for instance, that my eyes are functioning properly now, when I read the oculist’s report, perhaps with my new glasses on).

He could insist that the presuppositions of the DNA project, for example – which include much biochemical theory, the assumption that the sample wasn’t surreptitiously switched with one from another animal, and a variety of others – are no more secure than the presupposition that the animal is not a disguised mule. Without a detailed account of the relevant kind of security in hand, it’s difficult to evaluate this. But we can start with his claim that “the very discreditability of the method implies that clause (ii) is unsatisfied.” Suppose that Q is a presupposition of project Y investigating some matter, that project X investigates whether Q is true, and that P is a presupposition of project X. Suppose also that the outcome of X is that Q is false. Then P is more secure than Q if, noting this outcome of X, we would reasonably believe that Q is false. That is, instead of being forced to admit a “standoff” and suspend judgment between trusting P and doubting Q versus trusting Q and doubting P, we rationally do the former, and so decide that project Y’s presupposition is false and so its outcome unreliable. So trust in presupposition Q of project Y is an entitlement if at least one of the presuppositions of any project investigating Q – or those of any project investigating the presuppositions of any project investigating Q, or . . . – is no more secure than Q in this sense. It would be bizarre, however, to suggest that the denials of piecemeal skeptical hypotheses are entitlements so understood. This would be to claim that there is no point in, for example, doing a DNA analysis on the animal because, even if we got the result that the animal is a mule, the presuppositions of the DNA test are no more secure than the presupposition that it’s not a disguised mule, and so we’d have to admit a standoff and suspend judgment between trusting the DNA test and repudiating appeal to the

 Wright , . See the quotation presenting Wright’s Newmarket example above. “At least one,” because presumably the security of the set of presuppositions overall for the project investigating Q (or investigating a presupposition of the project investigating Q or . . .) is determined by the least secure: even if all but one are more secure than Q, if one is less secure than Q then it will need investigation, since in the contest between it and Q itself the result will again be a stand-off.





animal’s appearance versus doubting the DNA test and trusting appeal to the animal’s appearance. But that’s ridiculous; a DNA test can obviously rationally override appeal to the animal’s appearance. And the same goes for virtually every other denial of a piecemeal skeptical hypothesis: we can describe a method to investigate whether that denial is correct which, if it delivered the result that the hypothesis is in fact true, we would reasonably take to undermine the project threatened by that hypothesis rather than concede a stand-off. So, by the only measure of security that Wright has provided, the denials of piecemeal skeptical hypotheses are not entitlements.

. Conclusion

Wright recognizes that he has not resolved the question of how to tell which presuppositions of which projects are entitlements. And it’s unclear whether he intends to count the denials of piecemeal skeptical hypotheses among them. They don’t seem to count as the “hinges” or “cornerstones” represented by such sweeping claims as “there is an external world” or “I am not a BIV” to which he is most concerned to apply the entitlement strategy, after all. But if they don’t, there remains the concern that we seem to have no “specific, earned evidence” that the denials of piecemeal hypotheses are true; Wright’s conservatism requires that we have antecedent warrant for the presuppositions of our everyday cognitive projects too. He could







It’s also ridiculous to suggest that it is pointless to have your eyesight checked because you would have to rely on your eyesight to read the oculist’s report. If you have your new eyeglasses on, then you are no longer relying on your unaided eyesight; the presupposition that your unaided eyesight is reliable enough to warrant some particular perceptual belief is no longer in play. (And, of course, if you don’t yet have new glasses – you are, after all, only now reading the report – you can always have someone else read it.) “No less important than trying to delimit by what principles we may be rationally entitled to certain trustings is the project of determining when we are not, that is, when absence of evidence does indeed defeat rational acceptance. This is, of course, an absolutely crucial issue. It presents, in my judgement, perhaps the most major challenge remaining to the theorist of entitlement.” (Wright , ). Wright does, however, explicitly contrast some entitlements that are context-specific with hinges: “These presuppositions are not just one more kind of Wittgensteinian ‘hinge’ proposition as that term has come generally to be understood. Hinges, broadly speaking, are standing certainties, exportable from context to context. Whereas the present range of cases are particular to the investigative occasion: they are propositions like that my eyes are functioning properly now, that the things that I am currently perceiving have not been extensively disguised so as to conceal their true nature, etc.” (Wright , ). He draws a similar distinction between “contextual” and “absolute” strategic entitlements on p. . This is complicated by the fact that Wright has increasingly come to view the issues with which he is concerned – transmission failure, conservatism, and entitlement – as involving claims to warrant rather than warrant per se. 
In his  as well as his  paper, indeed, he suggests that entitlement provides the basis for a rapprochement between liberals (AKA dogmatists) and conservatives:





insist that we do have that warrant, not by entitlement but instead by appeal to background information. Although, for example, we haven’t performed a DNA test on the animal, we do seem to possess background information about zoos in the face of which it is unlikely that this animal would be disguised (and so a disguised mule). Given what Wright means by “warrant” – being beyond rational reproach for trusting – perhaps such background knowledge does indeed suffice. But, as we saw in Chapter , it doesn’t suffice for P-warrant. It’s one thing to claim that background evidence ensures that we are beyond rational reproach for trusting that it’s not a disguised mule and quite another to claim that we have what it takes to know this. A number of other criticisms have been raised against Wright’s entitlement strategy. These include the problems of “leaching” and “alchemy,” as well as the concern that the strategy is too pragmatic to deliver a truly epistemic warrant. There is not the space here to explore those concerns. But, even if they are misplaced, neither the strategic entitlement





liberals are right that nothing is required of the agent in addition to her bare response to experience in the way of acquiring warranted perceptual belief (because warrants for the presuppositions are entitlements, requiring nothing from the agent herself), but conservatives are right that a claim to perceptual warrant would require appeal to the entitlement for its presuppositions. It’s not clear that this works. After all, it is doubtful that a typical agent has the wherewithal to identify and describe her Wright-style entitlement when challenged on a presupposition of her everyday cognitive project; and yet, surely many ordinary claims to know are entirely reasonable. At any rate, the entitlement proposal is only relevant to the present issue – whether closure can be saved by appeal to warrant by entitlement – if the proposal is applied at the level of warrants rather than claims to warrant. Wright identifies the leaching problem in ; he updates his response to it in . The alchemy problem was originally presented in Davies  and pressed further in McGlynn . Jenkins  and Pritchard b and  press the objection that the entitlements are not epistemic. Wright’s recent response to the problem of alchemy, however, deserves mention here. The problem is that, for a deductive consequence of a proposition delivered by a project for which that consequence is an entitlement – as in Dretske cases – closure of evidential warrant would require that a proposition initially only warranted as an entitlement must acquire an evidential warrant; but Wright’s view was precisely that we cannot have an evidential warrant for the denials of skeptical hypotheses. Wright’s earlier response was to deny closure of evidential warrant.
But, in the face of a critique by McGlynn  – one that essentially mobilizes Hawthorne’s appeal to closure over disjunction and equivalence in opposition to Dretske in Hawthorne’s  – Wright endorses transmission of evidential warrant to such consequences. He is “aware that this is a significant reconception of the way that I have tended to characterise transmission failure in some previous work . . .” (Wright , ). It certainly is, since it involves denying that transmission fails in the various cases in which he has long suggested that it does fail, at least if transmission requires that S acquire an evidential warrant for Q in virtue of S’s having a warranted belief in P and recognizing that P implies Q. Wright now suggests instead that transmission fails when one’s initial level of confidence in Q sets an upper bound on the rational support afforded to Q by inference from P, so that, although the inference does deliver an evidential warrant, it cannot enhance that prior confidence (Wright , –). For what it’s worth, this strikes me as an odd characterization of transmission failure; what is it, after all, that isn’t transmitted?





approach nor the entitlement-of-cognitive-project approach will deliver the Q propositions of Dretske cases as entitlements – especially not those that constitute denials of piecemeal skeptical hypotheses – as would be required in order to save closure. The upshot is that Wright’s entitlement proposal won’t save closure. This does not, of course, demonstrate that no similarly Wittgenstein-inspired proposal could do so. But neither of the two most recent alternative views that invoke similar themes – Pritchard’s account of our commitment to hinge propositions, and Coliva’s “moderatist” account of presuppositions as assumptions – will do the trick. Pritchard explicitly excludes “it’s not a disguised mule” from the hinge-proposition category, insisting that we do need evidence for it. Intent on saving closure, he appeals to the very sort of general background information concerning the likelihood that zoos would engage in deception that we considered in Chapter  and found wanting. And Coliva concedes that, since the legitimacy of an assumption doesn’t constitute a warrant for it (that being the difference between her moderatist view and that of the conservative), her view implies the failure of warrant closure. Overall, the prospects for the suggestion that we have a default warrant for Q propositions of Dretske cases of the sort that would save P-warrant closure are very dim.

This completes our review of the responses that the advocate of closure might offer against the argument by counterexample. We’ve seen that none are tenable. The overall result is that the argument by counterexample succeeds: Dretske cases stand as the counterexamples to closure that Dretske suggested they are. In Chapter , we will consider the two most common arguments against closure denial (as opposed to arguments for closure): the abominable conjunction problem and the spreading problem.

See the references in fn. .



See Pritchard , .



See Coliva  and .


 

Abominable Conjunctions, Contextualism, and the Spreading Problem

. Arguing against Closure Denial

We noted in §. that knowledge (or warrant) closure stands out from most other closure principles in that it is not derivable from widely endorsed principles: there is no uncontroversial inference from truisms concerning knowledge and deductive inference to closure. The typical reaction is to claim that, unlike many other closure principles, knowledge closure is an epistemic axiom supported by an unmediated intuition. But we’ve seen good reason to doubt that our intuitions really do support closure in the end. There are, however, other arguments that are not so much in favor of closure as against its denial. Some are directed against specific closure-denying theories of warrant (those of Dretske and Nozick in particular). But such criticisms don’t touch the arguments of the previous chapters, which don’t depend on any particular theory of warrant. However, two popular arguments apply against any closure-denying view, namely, the abominable conjunction problem and the spreading problem. In §§.–. we’ll discuss the former and in §. the latter.

. Abominable Conjunctions

The abominable conjunction problem concerns the infelicity of affirming, for example, “I know that I have hands but I don’t know that I’m not a BIV.” This is an unquestionably odd thing to say, as is any such conjunction involving a piecemeal skeptical hypothesis (such as “I know that the Broncos won but I don’t know that the newspaper report from which I learned this isn’t a misprint”). DeRose, who coined the term “abominable conjunction,” directed the problem against Nozick’s view in

See Chapter  and passim.



Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:47:18, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.010

particular, but it applies with equal force against any closure-denying view. On any such view, it can – and does – happen that S knows P but doesn’t know that a skeptical hypothesis, whose denial is implied by P, isn’t true. If such conjunctions are true in some cases, why does it seem infelicitous to affirm that they are?

Appeal to this infelicity is taken to be an argument for closure because our resistance to affirming such conjunctions is predicted if our knowledge attributions are, in part, guided by a commitment to closure. S never knows P without knowing Q, so long as she recognizes that Q follows from P. So, if we are disposed to regulate our knowledge attributions in such a way as to conform to closure, then we will find abominable conjunctions to be infelicitous, as we do. This is an inference to the best explanation: the best explanation for our sense that abominable conjunctions are infelicitous appeals to a commitment to closure.

But we do not merely find such conjunctions to be infelicitous; we tend to resolve the infelicity in one particular way. There are two reactions to such cases that would preserve closure: affirming knowledge of both P and Q, and denying knowledge of both P and Q. As is widely recognized, when confronted by the skeptical scenario that Q denies, we often find ourselves inclined to retract our initial claim to know P rather than affirm knowledge of Q. There are three features of such retraction. First, we are disinclined to assert the proposition “I know that P” that we were inclined to assert before the skeptical hypothesis was salient.
Second, we are inclined to assert “I don’t know P.” And, third, we take the proposition that we now assert to be the denial of the proposition that we earlier asserted; that is, we take ourselves to be asserting that our earlier claim to know P is false. A full explanation of our reaction to abominable conjunctions needs to explain, not merely our sense that abominable conjunctions are infelicitous, but these tendencies as well. We’ll first consider the abominable conjunction problem from the perspective of contextualism and related views, and then from the  



See DeRose . Or so it is claimed. But see §.. and §§..–... There is, in fact, a third such reaction: affirming knowledge of Q while denying knowledge of P (I know that it’s not a disguised mule, for example, but not that it’s a zebra). But that would be a very odd reaction to Dretske cases. This is why contextualists need an error theory to explain our purported “semantic blindness,” that is, our failure to recognize that the content of the proposition earlier affirmed is different from that of the later proposition denied in virtue of a contextually generated shift in the semantic value of “know.”





perspective of classical moderate (i.e., non-skeptical) invariantism. We’ll see that the problem provides no motivation to affirm closure from either perspective.

. Abominable Conjunctions and Contextualism

Most contextualists suggest that, in ordinary contexts wherein ordinary standards are in play, S knows both P and Q, while in the high-standards contexts typically triggered by consideration of skeptical hypotheses, S knows neither P nor Q. Closure itself, as a result, is preserved in every context. But why believe that closure is preserved in every context? Contextualism doesn’t require it: it could be that, although S knows P in ordinary-standards contexts, S doesn’t know Q even in those contexts (and knows neither in high-standards contexts). Contextualism does, however, allow for closure preservation. And closure is supported by the intuitive infelicity of abominable conjunctions. It is thought to be an advantage of the contextualist position that it can accommodate that intuition.

.. DeRose versus Heller

Nevertheless, one contextualist does deny closure, namely, Mark Heller. It will be instructive to compare Heller’s view to DeRose’s, and then generalize the discussion to the relationship between abominable conjunctions and contextualism in general. Heller’s view is a contextualized version of the hybrid safety-sensitivity account that we considered in §..:

Expanded Relevant Alternatives (ERA) S knows P only if S does not believe P in any of the closest not-P worlds or any more distant not-P worlds that are still close enough.



 

Contextualists rightly note the danger in sticking to the object language, wherein one asks whether S knows, rather than employing semantic ascent and asking whether “S knows” is true instead. A number of misplaced criticisms of the view are due to a failure to recognize the need to ascend, particularly when dealing with such semantic theories as contextualism (see DeRose , esp. chapter ). I suspect – or, at least, hope – that the danger is sufficiently recognized nowadays that it’s not a problem to indulge in object-language expressions, which is less cumbersome, except when semantic ascent is required to avoid now familiar mistakes. Heller . Heller , . (The consequent should be read as a negated disjunction, not a disjunction of negations.)


Ordinary knowledge claims such as “I have hands” (H) will often be false in “close enough” worlds. The second disjunct of ERA then requires that S not believe H in any such world. And, indeed, she typically doesn’t (as when, for example, she loses her hands in a chainsaw accident). Skeptical claims such as “I am a BIV” (BIV) are false in any “close enough” world (we assume), so the second disjunct doesn’t apply. But the first disjunct does: S can only know ~BIV if she doesn’t believe it in the closest (yet distant) BIV worlds. But she does; so she doesn’t know ~BIV.

Contextualism enters Heller’s account through the phrase “close enough”: what ~P worlds count as close enough so as to require that S not believe P in those worlds varies with conversational context. In particular, salience of such a skeptical hypothesis as BIV extends the boundary of nearby worlds so that it encompasses the nearest world in which S is a BIV. S still believes ~BIV in that world, and so she doesn’t know it. But she also believes H in such a world; and in that world H is false as well. So the second disjunct of ERA kicks in: she doesn’t know H either. In ordinary contexts, however – wherein skeptical hypotheses are not salient – the boundary of near-enough worlds is much further in, so that the nearest BIV world is not among them. So S knows H: she doesn’t believe H in either the nearest ~H world or any ~H world that is further out but near enough. Nevertheless, she still doesn’t know ~BIV, since she still believes it in the nearest BIV world; the first disjunct rules that knowledge out.

What’s striking about Heller’s proposal is that his explanation for why abominable conjunctions are infelicitous in high-standards contexts is identical to DeRose’s.
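The ERA verdicts just derived can be sketched in a toy possible-worlds model. This is a hypothetical illustration, not anything in Heller’s text: the world set, the distance values, and S’s belief assignments are all stipulated for the example, and the contextually variable “close enough” boundary is represented as a simple numeric threshold.

```python
# Toy model of Heller's ERA necessary condition: S knows P only if S does
# not believe P in any closest not-P world, nor in any more distant not-P
# world that is still "close enough" (a contextually set boundary).
# Worlds, distances, and S's beliefs are stipulated for illustration only.
worlds = [
    # H = "S has hands"; notBIV = "S is not a brain in a vat"
    dict(name="actual",   dist=0, H=True,  notBIV=True,  believes_H=True,  believes_notBIV=True),
    dict(name="handless", dist=1, H=False, notBIV=True,  believes_H=False, believes_notBIV=True),
    dict(name="biv",      dist=9, H=False, notBIV=False, believes_H=True,  believes_notBIV=True),
]

def era_knows(p, believes_p, close_enough):
    """ERA condition on S's knowing p, relative to a context that sets
    the 'close enough' boundary on epistemically relevant worlds."""
    not_p = [w for w in worlds if not w[p]]
    nearest = min(w["dist"] for w in not_p)
    # S must not believe p in any closest not-p world, nor in any not-p
    # world inside the contextually set boundary.
    return not any(
        w[believes_p]
        for w in not_p
        if w["dist"] == nearest or w["dist"] <= close_enough
    )

# Ordinary context: the BIV world (dist 9) lies outside the boundary.
ordinary = 3
print(era_knows("H", "believes_H", ordinary))            # True: S knows H
print(era_knows("notBIV", "believes_notBIV", ordinary))  # False: S doesn't know ~BIV
# Closure fails in this context: S knows H, H implies ~BIV, yet ~BIV is unknown.

# High-standards context: salience of BIV pushes the boundary past dist 9.
high = 10
print(era_knows("H", "believes_H", high))                # False
print(era_knows("notBIV", "believes_notBIV", high))      # False
```

The model reproduces the two verdicts in the text: in the ordinary context S knows H but not ~BIV (closure failure), while in the high-standards context the enlarged boundary sweeps in the BIV world and S knows neither.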
DeRose offers the “Rule of Sensitivity”: When it’s asserted that S knows (or doesn’t know) that P, then, if necessary, enlarge the sphere of epistemically relevant worlds so that it at least includes the closest worlds in which P is false. (DeRose , )

In high-standards contexts the abominable conjunction “S knows H but doesn’t know ~BIV” is false because introduction of the second conjunct tends to raise the standards in such a way that S knows neither H nor ~BIV, so that the first conjunct is false. So it’s no surprise that the conjunction is infelicitous in such contexts. This is precisely the same explanation delivered by Heller’s view. The difference is that, for Heller, the abominable conjunction, although typically false in contexts in which it (and, in particular, the second conjunct) is contemplated, is true in ordinary contexts: so long as ordinary


standards are in place (because, in part, the skeptical hypothesis is not salient), both conjuncts are true. As a result, closure is false in those contexts. Our tendency to judge that the conjunction is false is due only to the fact that its contemplation very typically generates a context wherein the first conjunct, and so the conjunction, is false. We don’t notice that it is true in ordinary contexts simply because we’re not contemplating it in such contexts.

For DeRose, however, the conjunction is false in all contexts: in ordinary contexts, S knows both H and ~BIV. DeRose is, of course, well aware of our reluctance to resolve the conjunction this way; rather than affirming knowledge of both, we tend to deny knowledge of either. The explanation for this is the same as Heller’s: contemplation of the (second conjunct of the) conjunction typically sets standards wherein S knows neither H nor ~BIV. Moreover, when we contemplate the conjunction, we erroneously extrapolate S’s ignorance of both H and ~BIV to ordinary contexts in which we are not contemplating the conjunction (but in which the conjunction remains false).

So the infelicity of abominable conjunctions in the contexts in which we contemplate them – and, indeed, treating that infelicity as reflecting the truth in those contexts – doesn’t discriminate between these two views. So it doesn’t support closure-affirming over closure-denying contextualism.

10.3.2 Extrapolating from High- to Low-Standards Contexts

DeRose might point out that his view takes the intuition that abominable conjunctions are abominable at maximum face value: they are infelicitous because they are false (and closure preserved) in all contexts. Whereas, for Heller, they are infelicitous only because they are false, and closure preserved, in the contexts in which they are contemplated; they are nevertheless true, and closure fails, in ordinary contexts in which they are not contemplated. But Heller’s view also takes an intuition at maximum face value, namely, that we don’t know ~BIV (in any context). DeRose will claim that this is an erroneous extrapolation from the high- to the ordinary-standards contexts. But Heller can reply that DeRose erroneously extrapolates the denial of abominable conjunctions – and so closure – from the high- to the ordinary-standards contexts. So there seem to be no grounds here for preferring DeRose’s approach over Heller’s.

Note that the “we” in question are the attributors, not the subject (although we could be both).


Recall that DeRose doesn’t offer the Rule of Sensitivity as expressing a condition on knowledge; it is instead a mechanism governing a change in context and corresponding increase in standards, so that the nearest BIV world counts as relevantly nearby. DeRose advocates instead a “double-safety” account: S’s knowing that P requires that S doesn’t believe that P when P is false, and doesn’t disbelieve that P when P is true, across nearby worlds, where the range of “nearby” is contextually determined. It’s because we don’t notice the operation of the Rule of Sensitivity – we don’t notice the contextual shift that it generates – that we tend to extrapolate our ignorance from high-standards to ordinary-standards contexts.

This tendency is seriously misleading. On DeRose’s account (as on all safety views), not only is the “strength of epistemic position” of our belief in ~BIV at least as strong as it is for our belief in H in ordinary contexts, it is much stronger. H is typically false in nearby worlds, even by ordinary standards; it’s only safe because we are responsive to evidence – it doesn’t look as though we have hands – as a result of which we don’t believe it in those worlds. But we need no such responsiveness for our belief in ~BIV: it’s not false in any nearby worlds at all, and so the safety of that belief is insulated from any vagaries in our responsiveness to evidence. And yet, we not only judge that that belief is unknown – and so unsafe, if our knowledge judgments are responsive to the double-safety condition

DeRose , chapter . The second conjunct of double-safety is analogous to Nozick’s “adherence” condition, and faces some of the same objections that have been raised against it. Here’s one that is not, so far as I know, yet in the literature. Suppose that I use a gauge to determine whether the temperature of the liquid in a beaker is higher than  C. This gauge doesn’t report degrees; it simply reads “yes” or “no” (and always reads one or the other). As it happens, it only reads “yes” when the temperature is over  C, and so delivers false negatives whenever the actual temperature is between  C and  C, although it’s correct when the temperature is below  C. I know that the gauge either delivers false negatives within that range or is accurate within that range, but I don’t know which, and have no evidence either way. But, in either case, it is accurate outside that range. I nevertheless believe that it is accurate within that range as well, albeit with no good reason. I put the gauge in a beaker of liquid; it reads “yes.” The temperature is actually  C, so that answer is correct. Presumably I know that the temperature is over  C. No matter which of the two kinds of gauge it is, it will be accurate whenever it reads “yes,” and I know that it is one of those two kinds. But there are very nearby worlds in which the temperature is  C; if I had put the gauge in the liquid one second later, the liquid would have cooled to that point and the gauge would read “no.” I would then believe that the temperature is below  C because I believe (with no good reason) that it is accurate in that case as well. So there are nearby worlds wherein I believe that the temperature is not over  C but wherein it is over  C; my actual belief that the temperature is over  C doesn’t satisfy Nozick’s (and DeRose’s) second condition. 
And yet, intuitively, I know that the temperature is over  C when the gauge reads “yes.” Since we can make the relevant world in which the temperature is  C as near as we like without changing the judgment that a “yes” answer delivers knowledge, no contextual relativity can change that result.


on knowledge – in high-standards contexts in which it is in fact unsafe, we also extend that judgment to ordinary contexts wherein the belief is actually extremely safe. That’s a pretty dramatic mistake: we judge to be unknown – and so unsafe – beliefs that are, in fact, as safe as one could possibly want.

On Heller’s view, by contrast, our judgment that we don’t know that skeptical hypotheses are false, even in ordinary contexts, is correct. Prima facie, at least, it seems better to explain our tendency to judge that we are ignorant of the denials of skeptical hypotheses in ordinary as well as in skeptical contexts as a result of our grasp of knowledge conditions that apply in all contexts than it is to explain that tendency as arising from a widespread but erroneous extension of the standards of one context, wherein that tendency is correct, to another wherein it is seriously misleading.

At any rate, DeRose’s explanation of the infelicity of abominable conjunctions has no apparent advantage over Heller’s. Both provide precisely the same explanation in the high-standards contexts in which it’s contemplated, namely, that it’s false. There is no apparent advantage, so far as that explanation goes, that accrues to extending the falsehood of such conjunctions into ordinary contexts wherein it isn’t contemplated, as per DeRose, as opposed to extending our ignorance of the denials of skeptical hypotheses into those contexts, as per Heller.







For what it’s worth, the suggestion that there are high-standards contexts of the sort which the contextualist posits strikes me as odd. The standard triggered by consideration of wholesale skeptical hypotheses is extremely demanding, so much so that just about every knowledge claim is false. That’s an utterly useless standard: we can’t use “know” in that context to convey information to others, to indicate that any issue is settled for us, to use what we know as a basis for action, etc. Why in the world would our expression “know” tolerate such pointless truth conditions? And why would they be so easily triggered that merely contemplating the skeptical hypotheses tends to put such pointless truth conditions into place? And why would we be so easily fooled into thinking that those pointless truth conditions apply to our use of the term in ordinary contexts?

To be fair, Heller does have to concede that we erroneously conflate the two standards when we judge that, even in ordinary contexts, we don’t know H (after the skeptical hypothesis is salient). Although H is safe by ordinary standards, however, it’s not dramatically so; the nearest world in which it is false is still typically nearby. Whereas the nearest world in which BIV is true is far, far away.

Pritchard  suggests that it’s a disadvantage of Heller’s view relative to closure-affirming contextualism that it requires denying closure. But if the reason for endorsing closure is that it explains the infelicity of abominable conjunctions, then the fact that Heller’s view explains this just as well vitiates that advantage. (And we’ve already seen in Chapter  and passim that straightforward appeal to closure as intuitive doesn’t succeed.)


10.3.3 The Felicity of the Denials of Skeptical Hypotheses in Ordinary Contexts

DeRose might respond that it’s possible to resist the standards-raising tendency that consideration of skeptical hypotheses triggers, and that it seems felicitous to affirm knowledge of both the ordinary claim and the denial of the skeptical hypothesis in such circumstances. But it’s far from obvious that it is felicitous to affirm the latter claim in such circumstances. Were someone to raise the concern that the newspaper report that the Broncos won the game might have been a misprint, I might well say something like “oh, come on; that’s so unlikely!” and proceed to respond to someone who hasn’t read the report and asks me whether I know who won by declaring “yes, I do: the Broncos won!,” raise a toast to the Broncos’ win, and so on. If, in mid-celebration, the persistent misprint-skeptic interrupts with “well, but you don’t know that the report isn’t a misprint, do you?,” it still seems inappropriate for me to declare outright “yes, I do know that” and more appropriate to say instead “well, no, I don’t know that it isn’t; but no worries, it probably isn’t. Now, relax and have a beer; we’re celebrating their victory!”

In general, it’s far from obvious to me that, in a context in which I’m happy to speak and behave as though I know the relevant ordinary proposition despite the skeptical possibility’s having been brought to my attention, I’ll be just as happy to claim that I know that that possibility doesn’t obtain. Instead, I’d be inclined to concede that I don’t know that it doesn’t, but that it’s perfectly fine to dismiss that possibility for some reason or other (such as its being improbable), and then behave as though the issue had never arisen, including claiming to know the ordinary proposition.

10.3.4 Comparative Judgments

Apparently, DeRose himself doesn’t take the abominable conjunction to be as decisive an objection to closure denial as some do; he rests at least as much weight on the force of comparative judgments of the agent’s strength of epistemic position. Just as the comparative judgment that Wilt is at least as tall as Mugsy can explain both “if Mugsy is tall then so is Wilt” and

I do concede, however, that it remains discomforting to affirm both “I know that the Broncos won” and “I don’t know that the report isn’t a misprint” in the same breath. See §.. See DeRose , §. See also DeRose , chapter .


“if Wilt isn’t tall then neither is Mugsy,” so the comparative judgment “I am in no better position to know H than I am in to know ~BIV” can explain why “if I know H then I know ~BIV” and “if I don’t know ~BIV then I don’t know H” are true, and so why closure holds, regardless of context. The argument, I take it, is that we find such comparative judgments to be intuitively correct, and our doing so supports closure for any context.

But I simply don’t share that intuition. Prima facie, it seems to me harder to know that I’m not a BIV than that I have hands. For one thing, evidence that seems relevant for the latter seems irrelevant for the former. Moreover, DeRose himself argues forcefully that our intuitive judgments concerning what someone knows correlate closely, albeit not perfectly, with judgments about whether their belief is sensitive. That would suggest that, rightly or not, we judge sensitivity to be highly relevant to whether one is in a position to know. But belief in H has that property and belief in ~BIV doesn’t. Moreover – and more significant to me – the belief in H doesn’t violate NIFN and belief in ~BIV does (under the circumstances), and my intuition that violation of NIFN seriously undermines one’s being in a position to know is quite forceful. So I just don’t share the intuition that the comparative judgments are so obviously correct as to provide independent support for closure.

But suppose that the comparative judgments are intuitive. The contextualist – of all people – should exercise caution when extrapolating our intuitive judgments concerning specific claims about knowing or being in a position to know – including specific comparative judgments – from contexts in which we contemplate them to those in which we don’t. Contemplation of the comparative judgment inevitably involves consideration of the skeptical hypothesis, and so will tend to occur in high-standards contexts.
Perhaps we find it intuitive only because it is true in those contexts. They are, after all, contexts wherein the standards are such 

 

DeRose , §. The comparative claim also explains our reaction to abominable conjunctions: if I am in no better position to know H than I am in to know ~BIV, then I won’t know (or be in a position to know) H without knowing (or being in a position to know) ~BIV. See, for example, DeRose . See also DeRose , chapter . DeRose initially says that a belief’s being (double-) safe is “[a]n important component of being in a strong epistemic position with respect to P” (DeRose , , emphasis added). Since the nearest worlds in which BIV is true are distant, one’s belief in ~BIV can be insensitive while being safe, whereas one’s belief in H can only be safe if it is also sensitive. But to claim that this shows that an insensitive belief can be in a stronger epistemic position than a sensitive belief is to assume that safety is the only component of one’s position’s strength. But sensitivity could be another such component, as suggested by the correlation between our judgments concerning when someone knows and the sensitivity of their belief. If so, one’s beliefs in H and ~BIV are both strong in one respect and weak in another.


that one’s belief in H only counts as knowledge (on DeRose’s view) if one’s belief as to whether H matches the truth all the way out to the nearest world in which BIV is true; and the same goes for one’s belief in ~BIV. So it’s no surprise that one would judge that one is in no better position to know H than one is in to know ~BIV; one can only be in a position to know H in that context if one is in a position to know ~BIV. That says nothing about one’s comparative position to know these propositions in ordinary contexts wherein one is not contemplating the comparative claim.

So a closure-denying contextualist like Heller can say the same thing about the comparative judgment that he says about abominable conjunctions: we erroneously extrapolate our intuition concerning it from the high-standards context generated by our consideration of it to ordinary contexts wherein we don’t consider it. So, even if the comparative judgment is intuitive, that provides no basis to discriminate between closure-affirming and closure-denying contextualism.

10.3.5 Generalizing

All of this generalizes to contextualism per se. The outline of DeRose’s (and Heller’s) explanation of the infelicity of abominable conjunctions in high-standards contexts is the same for virtually all other contextualists: we find them infelicitous in those contexts because they are false in those contexts. That explanation underdetermines the choice between closure-affirming versions of contextualism, according to which such conjunctions are also false in ordinary contexts – wherein we don’t contemplate them, and so don’t form judgments about their (in)felicity – and closure-denying versions, according to which such conjunctions are true in ordinary contexts because, although we know the quotidian proposition by ordinary standards, we don’t know the denial of the skeptical hypothesis by any standard at all. The infelicity of abominable conjunctions provides no support for closure, at least not among contextualists.



Appeal to pairs of judgments in distinct contexts, as per Cohen’s “airport” case (Cohen ) and DeRose’s “bank” case (DeRose ) and “Thelma, Louise, and Lena” case (DeRose ), won’t help. In none of these cases does the knowledge claim under consideration concern a skeptical hypothesis. And, if they were to do so, the claim to know in the ordinary context is much less obviously intuitively appropriate (or correct) in the low-standards case. Perhaps, when little hangs on whether the check gets deposited in the low-standards version of the bank case, for example, it is appropriate for me to say that I know that the bank will be open on Saturday; but is it really appropriate for me to also say that I know that the bank hasn’t changed its hours since my previous visit two weeks ago? Of course, the salience of that possibility might raise the standards


independently of the low stakes involved, so that I don’t know this by the new, high standards. But the closure-denying contextualist can happily agree.

10.4 Contextualism and Anti-Skeptical Sources of Warrant

It’s intuitive – for most people, most of the time, at least – that we don’t know the denials of skeptical hypotheses. Closure-affirming contextualists accept this: although, they suggest, this answer is wrong (as applied to people in ordinary contexts), it seems intuitively correct as a result of our misguided tendency to fail to take note of the contextual shifts typically introduced when skeptical hypotheses are contemplated. One might complain that this is unfairly immune to disconfirmation: we can never appeal to our inclination to deny that we know that skeptical hypotheses are false because, on this hypothesis, they are only entertained in contexts wherein we don’t know that they are false, and only because we’ve entertained them. Since we can’t entertain them when we’re not entertaining them, we’re never in a position to appeal to our intuitions in order to evaluate the claim that we know that they’re false in those contexts; the contextualist’s hypothesis is unfalsifiable by appeal to intuition.

The contextualist might note in response that, in some circumstances, it doesn’t seem to be such a bad thing to claim that we do know them (in “oh, come on, you know you’re not a BIV” scenarios, for example). But appeal to this by way of supporting the claim that we know that skeptical hypotheses are false in ordinary contexts still seems unfair. The contextualist takes seriously the inclination to affirm such knowledge in those special cases, but discounts the tendency to deny it in others as resulting from semantic blindness. Why not instead discount the tendency to affirm such knowledge in the special cases as resulting from semantic blindness (or some other error) and take seriously the tendency in the other cases? The latter are, after all, the typical cases.

But put that aside. The most significant problem for the closure-preserving contextualist is that appeal to contextualism doesn’t help at all with the arguments against warrant for piecemeal and wholesale skeptical hypotheses canvassed in previous chapters. Since those arguments are already on the table, I’ll only briefly review them here with contextualism in mind.

10.4.1 Contextualism and Transmission

If the contextualist claims that knowledge of the denials of skeptical hypotheses in ordinary contexts is acquired by inference from ordinary


knowledge claims, then she faces an uphill battle. For one thing, even if ordinary folk do believe that they are not BIVs, it’s doubtful, to put it mildly, that they do so because they’ve inferred this from ordinary knowledge. This is not doubtful merely because ordinary folk don’t consciously contemplate the inference relation. It’s plausible that we have beliefs that are “inferential” – our believing them is somehow dependent on other beliefs that imply them – but where it would be strained to suggest that we explicitly entertain that implication relation, as we do when, for example, we’re working through a mathematical proof. The doubtfulness is, rather, a result of the fact that it is bizarre to suggest that the inference from, for example, “the Broncos won” to “the report is not a misprint,” when the former is believed on the basis of the report itself, is to any extent probative.

The contextualist could, I suppose, suggest that the seeming absurdity of such claims, as applied to ordinary contexts, is another manifestation of semantic blindness: it’s absurd to suggest that such knowledge could be acquired that way in high-standards contexts, but only because we don’t know “the Broncos won” in such contexts, from which knowledge of “the report is not a misprint” could be acquired by inference. But the absurdity doesn’t seem to depend on whether or not S knows that the Broncos won. It seems ridiculous to suggest that she could learn that the report is not a misprint by inference from their winning even if she knew that they won on the basis of the report.

The argument against transmission by appeal to NIFN offered in Chapter , moreover, applies whether or not contextualism is taken on board. A violation of NIFN, recall, occurs when S uses a method of investigation into whether P that is such that the method itself guarantees that it will deliver the result “~P” whenever P is true.
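The NIFN condition just recalled can be put schematically; the rendering below is my gloss on that definition, and the predicate names are mine, not the book’s:

```latex
% A method M of investigating whether P violates NIFN iff M is
% guaranteed to deliver the verdict "~P" whenever P is in fact true:
\[
\mathrm{ViolatesNIFN}(M, P) \;\equiv\;
  \Box\bigl(P \rightarrow \mathrm{Verdict}(M) = \neg P\bigr)
\]
% With respect to P, such a method can deliver only false negatives,
% which is why it seems unable to warrant belief in ~P.
```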
It’s intuitively ridiculous to suggest that such a method could deliver warrant for, and so knowledge of, ~P. And there’s no reason to think that our sense that this is ridiculous is the result of our contemplating NIFN in high-standards contexts in which it is true and erroneously extrapolating to low-standards 

Claiming this would not run afoul of the effect that the ordinary folk’s contemplation of the skeptical hypothesis – which would inevitably result from their inferring its denial from ordinary knowledge claims – would put them in a high-standards context. That’s the subject’s context, not the attributor’s. Of course, when the attributor claims that such knowledge is the result of inference, she will tend to find herself in a high-standards context as well, for the same reason. But the contextualist can claim that, when she attributes such inferential knowledge, she has resisted the standards-raising tendency.


contexts in which it is false. Violation of NIFN seems problematic no matter what P is, or whether it matters to us whether P is true.

The arguments considered in Chapter  to the effect that transmission fails on safety, reliabilist, and evidentialist views are also untouched by positing contextual variability. We saw that transmission fails on the safety account because inferring from ordinary P to anti-skeptical Q can never contribute to the safety of Q given S’s actual basis for believing P. It doesn’t matter how wide the sphere of nearby worlds might be. And belief-forming processes that violate NIFN are guaranteed to be unreliable by any reasonable measure of reliability, since they inevitably deliver false negatives. Finally, the argument against evidential transmission – essentially that the evidence available to S counts, if anything, in favor of rather than against the skeptical hypothesis – applies no matter how much evidence is required.

In sum, contextualists should deny that transmission succeeds in Dretske cases, just as should anyone else. So far as I am aware, most contextualists agree and so posit instead alternative sources of warrant for Q. As we’ve noted in §. and passim, however, this seriously undermines the intuitive support for closure, insofar as that support hinges on the thought that deductive inference inevitably extends knowledge; to concede that transmission fails just is to concede that this is not inevitable.

Surprisingly, perhaps, the contextualist (or any other closure advocate) could reply that it is in fact consistent with closure to attribute ordinary knowledge claims to ordinary folk while denying them knowledge of the denials of skeptical hypotheses. The antecedent of knowledge closure requires not only that S knows ordinary P, but also that she believes Q because it follows from P. So, if she doesn’t do that, then the conjunction of closure and knowledge of P is consistent with ignorance of Q.
And, since it is doubtful that most ordinary folk do believe, for example, that they’re not BIVs because it follows from their having hands, it is in fact consistent with closure to affirm the intuitive judgment that they don’t know that they’re not BIVs even by ordinary standards. Indeed, that should be our judgment for most people most of the time, as it seems to be. It’s interesting that contextualists don’t exploit this. So far as I’m aware, every closure-affirming contextualist insists that the ordinary folk know both that they have hands and that they are not BIVs, even though closure doesn’t require that they do so. 

This assumes that the inference is the only available source of warrant for Q. But we’re now considering whether this would suffice on its own, so that assumption is appropriate.


But there is a good reason for their not exploiting this. The abominable conjunction isn’t “S knows H, and believes ~BIV because it follows from H, but doesn’t know ~BIV”; it’s just “S knows H but doesn’t know ~BIV.” That infelicity doesn’t seem at all mitigated if – as is, surely, almost always the case – she doesn’t believe ~BIV because it follows from H. So the contextualist would have to accept that abominable conjunctions, despite their infelicity, are true almost all of the time.

The contextualist could, I suppose, suggest that this is another example of semantic blindness: in high-standards contexts we know neither H nor ~BIV by the relevant standard, so that abominable conjunctions seem false (because they are, in our context), and we erroneously extrapolate that to ordinary contexts. But then the contextualist can hardly appeal to the infelicity of abominable conjunctions as supportive of closure in all contexts: although closure is preserved in ordinary contexts (because its antecedent goes unsatisfied), abominable conjunctions are still true in those contexts.

10.4.2 Contextualism and Front-Loading

The closure-affirming contextualist who concedes that warrant transmission fails in Dretske cases will need some reason to think that closure nevertheless succeeds in these cases. The obvious suggestion is front-loading: S needs a warrant for Q in place already in order to acquire her warrant for P. That would explain why abominable conjunctions are infelicitous in a way that, we saw, transmission would not: whether or not S recognizes that Q follows from P, she has to have a warrant for Q anyway. So, for example, “S knows that she has hands but doesn’t know (or isn’t in a position to know) that she is not a BIV” will remain infelicitous.

But, as noted in §., although front-loading does preserve closure in the cases to which it applies, it is a much stronger, and more contentious, principle. Moreover, the closure-denying contextualist can plausibly counter that front-loading is only required in high-standards contexts. If the standards are sufficiently high that the disguised-mule possibility is relevant, then it’s no surprise that S can only know that it’s a zebra if she already has a warrant against that possibility. But this provides no reason to think that this is so in ordinary contexts. And it is more intuitive that it is not so: it is far from obvious that, for example, an ordinary zoo-visitor needs an antecedent warrant against “it’s a disguised mule” (and against every other conceivable skeptical hypothesis) in order to learn that it’s a zebra on the basis of its appearance. And, insofar as we do tend to think

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:47:18, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.010



Against Knowledge Closure

that we’d need these antecedent warrants even in such ordinary contexts, that is easily explicable as an erroneous extrapolation from the highstandards context in which that possibility is salient, and so relevant, to the low-standards context in which it isn’t salient. The front-loading (and, more generally, the WP-advocating) contextualist also faces a substantial headwind in light of the buck-passing argument of Chapter . That argument relies only on the claims that transmission does fail in Dretske cases and that, whenever transmission fails from P to Q, S inevitably has a (non-inferential) warrant for Q, so that closure – WC – is nevertheless preserved. And its conclusion is that S would have to have a basis-infallible warrant for P. That’s the highest standard for warrant that one could postulate, and one that we rarely if ever meet. Nothing in that argument turns on how strong an epistemic position it is initially claimed that S must be in with respect to Q, however that is measured. So it applies to ordinary knowledge as much as to knowledge by any other standard. ..

10.4.3 Contextualism and Safety

In §. we also considered the question whether a safety account could escape the buck-passing argument. I won’t rehearse the details of my response here, except to emphasize one point. We noted in §. that S’s belief that a skeptical hypothesis is false can only be safe if it is far-safe, that is, only if the nearest world in which it is false is beyond the boundary between near and distant worlds. This applies to piecemeal as much as to wholesale skeptical hypotheses. But far-safe beliefs are safe simply in virtue of their modal profile; no responsiveness to evidence is required of the agent in order to ensure their safety. DeRose, who advocates a (double-) safety condition as at least a necessary condition on knowledge, must agree. But then this condition is met regardless whether S has any evidence for (for example) “it’s not a disguised mule”; she could believe it solely because that’s what her horoscope says, and the belief will still be safe. DeRose suggests that we need no empirical evidence for “I’m not a BIV”; that belief is safe simply because the nearest world in which I am a 



WP, recall, is the claim that, whenever transmission fails, some alternative source of warrant for Q will inevitably be available to S. Front-loading implies WP (in Dretske cases), but it is at least logically possible for front-loading to be false while WP is true. (It’s just much harder to see why WP would be true if so.) And, so long as she believes it firmly enough that there are no nearby worlds in which it is true and she disbelieves it, she’ll satisfy the second conjunct of double-safety.

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:47:18, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.010

Abominable Conjunctions, Contextualism, & Spreading Problem  BIV is very distant. But he doesn’t say this about such piecemeal skeptical hypotheses as “that’s not a disguised mule,” claiming instead that we know such things because, at least in part, we have background evidence for them. However, in those same contexts, the nearest world in which it is a disguised mule must count as distant (by ordinary standards). So S’s belief will remain safe (by ordinary standards) even if she believed it because her horoscope told her so. DeRose might suggest that another condition – an evidence requirement, for example – applies to S’s belief that it’s not a disguised mule, even in ordinary contexts. But it’s hard to see why; so far as I can tell, the reason why no evidential requirement is imposed on our knowledge of the denials of skeptical hypotheses is that the nearest world in which they are false is distant, so their safety is assured. But the safety of “it’s not a disguised mule” is equally assured in ordinary contexts, and for the same reason; no responsiveness to evidence is required. DeRose might respond that, although the nearest disguised-mule world is distant by ordinary standards, it’s still much nearer than is the nearest BIV world. So the “threat” of nearness, as it were, is more pressing for the former than it is for the latter hypothesis. But that’s not inevitable; the very idea of perpetrating the disguised-mule deception on the trusting public might be so repulsive to the zoo proprietors that the nearest world in which they do so is almost as far away as the nearest world in which a mad scientist envats a brain. The sense that S couldn’t know that it’s not a disguised mule (or that this newspaper report isn’t a misprint, etc.) solely on the basis of her horoscope’s saying so doesn’t diminish as a result. 

 



DeRose , chapter . Cohen  advocates the related view that it is a priori reasonable to deny wholesale skeptical hypotheses although, for Cohen, that reasonability doesn’t suffice for knowledge. See DeRose , chapter , §. Indeed, even “I win the lottery” must count as distant by ordinary standards, since its denial is (the relevant part of ) the Q proposition “it’s not the case that I can afford a cruise vacation because I will win the lottery,” where P is “I can’t afford a cruise vacation.” This is a significant departure from the usual safety theorist’s claim that our intuition that we don’t know that we won the lottery is due to the fact that the nearest world in which I win is (very) nearby. (As we saw in §.., however, those safety theorists have a difficult time explaining how we could nevertheless know “I can’t afford a cruise vacation” and similar ordinary knowledge claims.) It’s not actually clear to me what DeRose’s view is. §§– of chapter  of DeRose  seem to suggest that his double-safety account is a complete “picture” of knowledge (although not a complete theory; minor modifications might be required). But, when defending our knowing ~BIV by ordinary standards in §§–, he suggests that such knowledge is had, not only because the belief is safe, but also because we have no undermining (but misleading) evidence to the effect that it isn’t. But the belief that I am not a BIV will still be double-safe, even in the face of significant undermining evidence, simply because it is not false in any nearby world, so long as we don’t disbelieve it in any nearby world. This also goes for belief in the denial of piecemeal skeptical

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:47:18, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.010



Against Knowledge Closure .. Contextualism and Direct Warrant

The arguments of Chapters – against postulated sources of warrant are also unaffected by contextual variability. Chapter  considered the suggestion that S’s warrant for P, based on B, suffices as a direct warrant for Q. That suggestion confronts the facts that: (i) the closure principle this would support is untenable by anybody’s lights; (ii) B isn’t probable on Q at all – in fact, ~B is – whereas B is probable (indeed, certain) on ~Q; and (iii) such a warrant would violate NIFN. Variation in the standards of warrant for Q undermines none of these points, since they indicate that B can’t provide a basis for a direct warrant for Q to any extent at all. ..

Contextualism and Warrant Infallibilism

Introduction of contextual variability also leaves untouched the arguments for warrant infallibilism in Chapter . I indicated that, when we learn that S’s belief that P, based on B – which would, in normal circumstances, deliver knowledge of P – is in fact false, we expect a “how-possibly“ explanation of how P could be false notwithstanding B’s being true. This expectation arises in the most ordinary of cases, such as our putative knowledge that the Broncos won on the basis of the newspaper report; there’s nothing to suggest that we have that expectation only in highstandards contexts wherein skeptical hypotheses are salient or the stakes are high, and don’t have it in ordinary cases. My claim was that this expectation is explicable if warrant is infallible. Since some departure from the normal circumstances, wherein S would come to know that P is true by appeal to B, explains how P could be false notwithstanding B’s being true, this suggests that, in normal circumstances wherein S’s appeal to B does deliver knowledge of P, P couldn’t be false unless B was as well. That explanation applies perfectly well to ordinary cases in which skeptical hypotheses aren’t salient and the stakes are not particularly high. Merricks’ arguments for warrant infallibilism aren’t affected by contextual variability either. Merricks argues that, if warrant is fallible, there are hypotheses: our belief in them must be far-safe in order for the ordinary claims that imply them to be safe at all. So they will be (double-) safe whether or not I have any background evidence for them and/or undermining evidence against them, so long as I don’t disbelieve them in nearby worlds. So I don’t understand why DeRose suggests that knowing the denials of skeptical hypotheses of either stripe requires the absence of undermining evidence, or why he suggests that we need evidence against piecemeal hypotheses in ordinary contexts.


cases of accidentally true belief – in other words, Gettier cases – that would have to count as warranted, and so as knowledge. So his arguments ultimately turn on the intuition that Gettier cases are not cases of knowledge. To resist them, the contextualist would have to claim that Gettier cases do count as knowledge, at least by ordinary standards. But there’s nothing about the relevant Gettier cases that suggests that our intuition that they are not cases of knowledge is due to our contemplating them in high-standards contexts. The backstory to any Gettier case is that, if the agent didn’t suffer the “bad luck” as a result of which his belief would have ended up false if not for the compensating “good luck,” they would know the relevant proposition. That knowledge would be of the very ordinary sort that the contextualist is concerned to preserve. And the Gettier case doesn’t introduce any change that would generate a shift to a higher context. It’s still intuitive, after considering the Gettier scenario, that, had the bad luck not been present, the agent would have that ordinary knowledge. So the arguments of Chapter  in favor of warrant infallibilism are unaffected by the contextualist’s proposal. The upshot of those arguments is that S can’t be warranted on basis B unless, given S’s circumstances, B couldn’t be false unless P is true. But that’s not the case for the denial of most piecemeal skeptical hypotheses. The basis available to S for “it’s not the case that I can afford a cruise vacation as a result of winning the lottery,” for example – namely, the improbability of winning – doesn’t ensure that this is correct, and neither does anything else in her circumstances. So background information does not deliver a warrant for such denials, whether or not one is a contextualist.
10.4.6 Contextualism and Warrant by Entitlement

The arguments against warrant by entitlement in Chapter  are also untouched by contextual variation. Strategic entitlement requires that believing (or trusting) P is a dominant strategy: we will be better off if we believe it when it is true than if we don’t believe it when it is true, and we will be no worse off believing it when it is false than not believing it when it is false. I argued that the very design of wholesale skeptical hypotheses ensures that the first conjunct is false, and there is simply no guarantee that the second conjunct is inevitably true vis-à-vis piecemeal skeptical hypotheses. Appeal to contextual variability doesn’t undermine either point.

I also argued that there’s no reason to think that we enjoy an entitlement of cognitive project for the denial of piecemeal skeptical hypotheses. This is essentially because there’s no assurance that any project directed toward assessing them will involve presuppositions that are no more secure than the denial of the skeptical hypothesis (which itself is a presupposition of the ordinary proposition against which the skeptical hypothesis is aimed). This is true no matter what context we’re in: whatever standards might be in play, we could, for example, conduct a DNA test, the results of which would certainly be relevant to the question whether it’s a disguised mule.

I did concede that we could be thought to have an entitlement of cognitive project for the denial of wholesale skeptical hypotheses. For any investigation into those hypotheses – whose denials are also presuppositions of our ordinary beliefs – would invoke presuppositions that are no more secure, since any such investigation will require appeal to empirical claims that also presuppose that the skeptical hypothesis itself is false. But I also pointed out that it is one thing to suggest that this licenses our assuming (or, as Wright has it, trusting) that these claims are false and quite another to suggest that it ensures that we know that they are false. The basic idea behind entitlement of cognitive project, as I understand it, is that we are entitled to assume that the relevant claims are true if we are entitled to engage in any cognitive projects at all, since otherwise we will be endlessly investigating their presuppositions, and presuppositions of their presuppositions, and so on, and so never get started.

John Greco  – a contextualist – concedes that contextualism doesn’t provide the resources to explain standard Gettier-case intuitions, although he does suggest that it has those resources vis-à-vis fake-barn style cases. That’s irrelevant here, however, since the sort of Gettier cases to which Merricks appeals aren’t fake-barn style cases.
That’s an argument, at best, for assuming (trusting) that they are true; but I see no reason to treat it as an argument that we know that they are true, even in ordinary contexts. Both DeRose and Cohen propose, not merely that we can reasonably assume that wholesale skeptical hypotheses are false, but that we know that they are false in ordinary contexts, notwithstanding our having no evidence to that effect. They are essentially forced to make that stronger claim as a result of their commitment to closure. But, as we’ve seen, there is nothing in the intuitions to which the contextualist appeals – including the infelicity of abominable conjunctions – that supports that commitment.

As noted in §., Wright himself doesn’t claim that the kinds of non-evidential rationality that he posits suffice (in any context) for knowledge. See Cohen , , and DeRose , chapter .

10.5 Abominable Conjunctions and Interest-Relative Invariantism

The upshot is that there is little if anything to be said in favor of closure-affirming over closure-denying contextualism, and much to be said against the former’s claim that we know the denial of skeptical hypotheses in ordinary contexts. The discussion to this point has exclusively concerned the relationship between contextualism, abominable conjunctions, and ordinary knowledge of the denials of skeptical hypotheses. But the considerations above – or analogous considerations – apply equally well to any other view that introduces some kind of variability due to “non-epistemic” factors, and so to interest-relative invariantism (IRI), which locates the relevant variability at the subject rather than the attributor, as well. And it also applies to relativist views that locate the relevant variability at the context of assessment rather than that of attribution. Running through the arguments over again with these views in mind would tax the reader’s patience too far, so I’ll leave that as an exercise. But I’ll give one quick example.

IRI advocates tend to emphasize the effect that a change in the “stakes” can have on judgments as to whether S knows P, that is, the practical implications of P’s being true versus false. But, in order to deal with piecemeal and wholesale skeptical hypotheses, the IRI advocate needs to claim that salience of such hypotheses also tends to influence such judgments, and to do so correctly, since there is very often little to no difference in practical consequences that accrue depending on whether they are true. Consider a closure-denying version of IRI akin to Heller’s view: when, for example, the BIV hypothesis is salient to the subject, the standards that she must meet in order to know rise to the point that she doesn’t know either ~BIV or ordinary claim H, but when it is not salient to her and the standards are lower, she does know H. Nevertheless, she doesn’t meet even those standards for knowledge of ~BIV. Can the advocate of closure-affirming IRI appeal to the infelicity of abominable conjunctions as counting in favor of that view against this closure-denying alternative?



Other names for the relevant family of views include “sensitive moderate invariantism,” “subject-sensitive invariantism,” and “pragmatic encroachment.” See esp. Hawthorne , Stanley , and Fantl & McGrath . See MacFarlane  and .





No. When I consider “I know H but not ~BIV,” ~BIV is salient to me. So I am in high-standards circumstances wherein I know neither H nor ~BIV (because I don’t meet those standards vis-à-vis either proposition). So the infelicity of “I know H but don’t know ~BIV” is explicable: this conjunction is false because the first conjunct is false under the circumstances, but that’s only because those circumstances involve my consideration of it. This is the same explanation provided by the contextualist, and it similarly indicates nothing about whether I know ~BIV when I’m in low-standards circumstances that permit knowledge of H, and so wherein I’m not considering the conjunction. Admittedly, this doesn’t explain why “S knows H but doesn’t know ~BIV” will be infelicitous to me when I’m not S. However, I can’t felicitously ascribe knowledge of H to S unless I also self-ascribe that knowledge; “S knows that she has hands but I don’t know that she has hands” is infelicitous. This is a pragmatic contradiction à la Moore’s paradox: it can be true, but it seems absurd to claim that it is. So the attribution by me of an abominable conjunction to S will be infelicitous as well, since I can’t felicitously assert its first conjunct given the standards that apply to me. But that tells us nothing about whether the abominable conjunction is, in fact, true of S. So the conjunction’s seeming infelicitous to those of us who consider it is entirely compatible with its being true of subjects who are not considering it.

10.6 Abominable Conjunctions and Classical Moderate Invariantism

The classical invariantist makes two claims. First, there is no attributor (or assessor) variability: if “S knows H” is true (or false) in my mouth, it’s true (false) in everybody else’s. Second, only “epistemic factors” – and so neither salience nor stakes – are relevant to whether S knows H. There are two kinds of such invariantism: the skeptical, according to which most (or all) knowledge claims made in ordinary life are false; and the moderate, according to which most (albeit not all) such claims made in ordinary life are true. We’ll only consider the moderate variety here.

 

I also think that third-person abominable conjunctions are not nearly as infelicitous as their first-person cousins when considered in the right frame of mind. See §...

Why? Because space is limited, the typical reader will not be a skeptic, and skepticism is a very unattractive alternative to closure denial (see §.).


Closure-preserving classical moderate invariantists (CPMI) claim that we know the denials of skeptical hypotheses as well as most ordinary claims, whereas closure-denying classical moderate invariantists (CDMI) claim that, while we know (most) of the ordinary things we claim to know, we don’t know the denials of skeptical hypotheses. In what follows, we’ll consider the dispute between CPMI and CDMI. The infelicity of abominable conjunctions seems to put an effective weapon in the hands of the CPMI advocate in that dispute. The CDMI advocate can’t explain away that infelicity in the way that, we saw, the closure-denying contextualist (or closure-denying IRI advocate) can. If “S knows that H but doesn’t know that ~BIV” is false in my mouth, it’s false in everyone else’s, and its being false can’t be written down to the extraordinarily high standards in place (or to the infelicity of “S knows H but I don’t”). However, the CPMI advocate has a hard time accounting for our tendency to deny that S knows either H or ~BIV when the BIV scenario is salient, and so our tendency to respond to abominable conjunctions by denying the first conjunct rather than the second. In at least this sense, CDMI more closely tracks that tendency: on CPMI, we know both H and ~BIV despite our tendency to deny that we know either; whereas, on CDMI, although we do know H despite our tendency to deny this, we don’t know ~BIV as we’re inclined to claim. Moreover, against CPMI are ranged the arguments of Chapters – in opposition to the claim that we can know the denials of skeptical hypotheses. It seems to me that these should at least give one pause before taking the infelicity of abominable conjunctions to be decisive: it’s very difficult to do so without being a skeptic (and so not a moderate invariantist).

10.7 Abominable Conjunctions and the Knowledge Rule

10.7.1 The Knowledge-Rule Explanation

Nevertheless, it’s fair to ask why we should find such conjunctions infelicitous if CDMI is true. There is, in fact, a relatively easy answer to this, at least when the abominable conjunction is rendered in the first person (“I know that I have hands but not that I’m not a BIV”), so long as the knowledge rule of assertion is correct. 

Or, for that matter, so long as the rule of assertion, whatever it is, implies that “P, but I don’t know that P” is infelicitous. Advocates of other rules – including the truth rule (Weiner ) and the reasonable belief rule (Lackey ) – typically attempt to show that this infelicity follows from the rule they favor. Nevertheless, I’ll continue to appeal to the knowledge rule here. But everything below applies given any rule of assertion that implies that “P but I don’t know that P” is infelicitous (as, surely, any plausible such rule must, since it clearly is infelicitous).

I initially take myself to know that I have hands. Once the BIV hypothesis becomes salient to me, however, I realize that I can only know that I have hands if I’m not a BIV. But I also realize that I don’t know that I’m not a BIV. I can’t simply conjoin these two claims and affirm “I’m not a BIV, but I don’t know that I’m not a BIV.” That violates the knowledge rule: I should only assert what I know. So, under the circumstances, I can conform to the knowledge rule only if I either: claim that I know that I’m not a BIV; claim that I am a BIV or refuse to express an opinion on the matter; or retract my claim to know that I have hands. But it’s obvious to me that I don’t know that I’m not a BIV. And it would be bizarre to claim that I am a BIV – I certainly don’t know that – and also bizarre to refuse to express an opinion on the matter while fully aware that I must not be a BIV if, as I have claimed, I know that I have hands. So I retract my claim to know that I have hands.

That doesn’t mean that I don’t know that I have hands. I can felicitously claim of someone else that they have hands, and so are not a BIV, but don’t know that they are not a BIV; I just can’t felicitously claim this of myself. Failure to conform to the knowledge rule generates a pragmatic paradox, not a logical one: it could well be true of me – so far as that rule goes, at least – that I have hands, and so am not a BIV, but don’t know that I’m not a BIV, just as it might be true of anyone else. I just can’t felicitously say (or think) that. And it could also be true – so far as that rule goes – that I know that I have hands, and so am not a BIV, but don’t know that I’m not. I just can’t felicitously assert or think that either, and for the same reason. So I can’t affirm the abominable conjunction while conforming to the knowledge rule, even though it might be true of me.

Note that I’m not invoking closure. What I realize is that I can’t know that I have hands if I’m a BIV, not that I can’t know that I have hands if I don’t know that I’m not a BIV. Recall from §. that my recognizing that P implies Q requires that I am disposed to affirm Q if I am disposed to affirm P.

Of course, I might not say that I’m not a BIV, while nevertheless thinking it. But I assume that the knowledge rule applies to thought as well as speech. Many of the same phenomena that count in favor of the rule occur in thought as well as in speech: it is, for example, as awkward to think “P but I don’t know P” as it is to say it. Accordingly, advocates of the knowledge rule typically extend it this way. So this is a safe assumption.

The explanation that appeals to closure renders that abominable conjunction logically paradoxical: given that closure is true, the conjunction must be false. (However, see §...)

One might reply that the abominable conjunction is “I know that I have hands but not that I’m not a BIV,” not “I know that I have hands, so I’m not a BIV, but I don’t know that I’m not a BIV.” But if someone doesn’t recognize that they can’t know that they have hands if they’re a BIV, they won’t find the conjunction infelicitous. They’re just conceding that they don’t know something that, so far as they are aware, is entirely unrelated to their knowing that they have hands. So it’s only infelicitous if, upon considering the BIV hypothesis, they realize that it must be false if they know that they have hands.

Hawthorne famously appeals to the knowledge rule in defense of closure: he suggests that a closure denier who conforms to the rule would behave like Lewis Carroll’s tortoise, happily affirming that they have hands, cheerfully agreeing that “I am not a BIV” follows, and yet refusing to assert “I am not a BIV.” The absurdity of such behavior indicates, he suggests, that we at least behave as though closure is true. But the explanation of the infelicity of first-person abominable conjunctions offered above doesn’t appeal to closure at all, although it does appeal to the very rule that Hawthorne cites. It’s not because I recognize that closure is true, and that I don’t know that I’m not a BIV (or so I think), that I retract my initial claim to know that I have hands because, I realize, nobody could know the one without knowing the other. Rather, it’s because I recognize that, if I do know that I have hands, then I must not be a BIV, but that I can’t affirm this while denying that I know it. I’m put in the same position if I merely affirm “I am not a BIV” when I take myself not to know that. But presumably it could be true of me that I’m not a BIV but don’t know that I’m not. The infelicity of first-person abominable conjunctions is explicable by appeal to the knowledge rule on its own; appealing to closure in addition is entirely superfluous.
But, unlike the closure-based explanation, it is entirely compatible with the knowledge-rule explanation that, in fact, the abominable conjunction is true.

10.7.2 Gettier Versions of Abominable Conjunctions

There are, moreover, two reasons to favor the knowledge-rule explanation over the closure explanation. First, there are abominable conjunctions wherein the first proposition doesn’t entail the second. “The tank is empty”   

Hawthorne ,  (referencing Carroll ). See also Veber  for a variety of ways of pressing this concern. If you insist that everyone that is not a BIV knows that they’re not, substitute “the report isn’t a misprint.” It’s obviously possible for the report not to be a misprint even though I don’t know that it isn’t.

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:47:18, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.010



Against Knowledge Closure

does imply “it’s not the case that the gauge is stuck and the tank isn’t empty.” But it doesn’t imply just “the gauge is stuck”; it could be stuck when the tank is, coincidentally, empty. Nevertheless, “I know that the tank is empty but don’t know that the gauge isn’t stuck” is also infelicitous, at least when I realize that I can’t know that the tank is empty on the basis of the gauge’s reading when the gauge is stuck. So appeal to closure doesn’t explain this infelicity. But appeal to the knowledge rule does. I recognize that the gauge must not be broken if I know that the tank is empty by appeal to it. But I also recognize that I don’t know that the gauge isn’t broken. And I can’t claim that it’s not broken and that I don’t know that it’s not broken; that violates the knowledge rule. Closure does, however, explain that infelicity if we endorse the KK rule, according to which S’s knowing that P requires that S knows that she knows that P. Then closure applies from “S knows that P” to “Q”: if she knows the former, she knows the latter (so long as she recognizes that the one follows from the other). But the KK rule is contentious; many closure advocates deny it. So, presumably, the closure-based explanation of abominable conjunctions should not depend upon it. Note also that “the gauge reads empty and the tank is coincidentally empty, but the gauge is broken and stuck on empty” describes a Gettier case vis-à-vis my belief that the tank is empty. That I’m not so Gettiered follows from my knowing that the tank is empty by reading the gauge (although it doesn’t follow from “the tank is empty” itself ). In general, “S knows that P” implies “S’s belief that P isn’t Gettiered.” If abominable conjunctions of the sort we are now considering – where the first proposition doesn’t imply the second, but the knowing of it does – were infelicitous because they are never true, then S could never know P without also knowing that her belief in P wasn’t Gettiered. 
But that is, at least, contentious. It’s virtually universally agreed that I can’t be Gettiered if I know; but it’s hardly universally agreed that I must know that I’m not Gettiered. Moreover, if I did have to know that, then I would have to know that every condition of my knowing is satisfied. In order to know that the tank

As per §.., if I don’t realize that my knowing that the tank is empty requires that the gauge isn’t stuck (whether or not the tank is empty), then I won’t find the conjunction infelicitous. It is infelicitous to claim “I know that P but I don’t know that I know that P.” But the knowledge rule explains that too: it’s just an instance of “X, but I don’t know that X,” with “I know that P” substituting for X. This is compatible with its being true that I know that P but don’t know that I know that P. I suspect that this is the source of the intuitive appeal of the KK rule, insofar as it exists.

Downloaded from https://www.cambridge.org/core. Access paid by the UCSF Library, on 06 Oct 2019 at 09:47:18, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.010

is empty on the basis of the gauge’s reading, for example, the gauge must not be stuck. But if the gauge is stuck, it is either so when the tank is not empty or when it is, coincidentally, empty. The first is a standard piecemeal skeptical hypothesis; the second is a Gettier-style version of that hypothesis. So, if I need to know that both are false, I must know that the gauge isn’t stuck, period. And the same goes for every other condition of my knowing that the tank is empty on the basis of the gauge’s reading, as well as for the conditions of my knowing that those conditions are satisfied, and so on. The buck-passing argument of Chapter  only relies on the claim that I need to know that those conditions of my knowing P that follow from P are met; the threat of skepticism is all the more pressing if I also need to know that every such condition holds whether or not it follows from P.

10.7.3 Transmission and Retraction

The second reason for favoring the knowledge-rule explanation of the infelicity of abominable conjunctions is that closure doesn’t explain why I would retract my previous claim to know P, at least not if closure is true because warrant always transmits. We do sometimes retract a claim to know a proposition when we realize that another proposition follows from it: if the second proposition is contradictory, or obviously false, then the first must be false as well, and we don’t know false propositions. But that’s not what’s going on in Dretske cases. “It’s not a disguised mule” is neither contradictory nor obviously false. In fact, background information, if there is any, favors it. So if I start out taking myself to know that it’s a zebra, and recognize that “it’s not a disguised mule” follows from it, then why in the world would I not happily affirm the latter? The obvious answer is that I realize (or take myself to realize) that I have no way of knowing it. But what I actually (take myself to) realize is that I have no other way of knowing it: I haven’t washed the animal, conducted a DNA test, and so on. But so what? Suppose I infer from “it’s a zebra” to “it’s a mammal,” but I have no other way to know that it’s a mammal than by means of that inference. That hardly undermines my claim to know that it’s a mammal. It’s ridiculous to suggest that I can’t learn that it’s a mammal because I have no non-inferential way to know this; the inference itself suffices for that purpose. Retraction only makes sense (given closure) if I also realize (or take myself to realize) that I can’t learn the conclusion by inference from the premise. But then I realize that transmission fails: even if I do know that





it’s a zebra, I can’t learn that it’s not a disguised mule by inference from it. And since, I think, I have no other way to learn this, I think I don’t know it at all. Closure then implies that I don’t know that it’s a zebra. But if we think that transmission fails, why do we think that closure succeeds? One might appeal to front-loading: I need to know – or, at least, have a warrant for – “it’s not a disguised mule” in order to acquire knowledge of “it’s a zebra” on the basis of its appearance. Recognizing that I have no transmission-independent warrant for “it’s not a disguised mule,” I then retract my initial claim to know that it’s a zebra. But front-loading is a stronger – and far more contentious – principle than is closure. While front-loading preserves closure in the cases to which it applies, closure doesn’t imply front-loading. And it’s far from obvious that, for example, an ordinary visitor to a zoo can only learn that it’s a zebra if they already have a warrant for “it’s not a disguised mule” (and for the denials of every other skeptical hypothesis). Front-loading also runs up against the buck-passing argument of Chapter . So, if this is why we find abominable conjunctions infelicitous, that’s unfortunate; we’re responding to a principle that is philosophically contentious, far from obviously correct and, if true, implies that we know next to nothing at all. At any rate, it wouldn’t be closure that explains that infelicity. That F explains A and implies C doesn’t mean that C explains A.

The knowledge-rule explanation, however, does explain retraction. I realize that it must not be a disguised mule if I know that it’s a zebra. But I can’t felicitously affirm “it’s not a disguised mule but I don’t know that it isn’t.” So I can only felicitously either claim to know that it’s not a disguised mule or retract my claim to know that it’s a zebra. But I’m very committed to the claim that I don’t know that it’s not a disguised mule. So, I retract.

10.7.4 Third-Person Abominable Conjunctions

There are, however, three problems with the knowledge-rule explanation. First, it doesn’t explain why we would find third-person abominable conjunctions to be infelicitous. Second, it doesn’t explain why we would not just retract our earlier claim to know P but positively affirm that we 

The sun’s shining from a certain position in the sky explains the length of the flagpole’s shadow, and “the sun is shining from such-and-such position” implies “something is shining.” But that something is shining doesn’t explain the shadow’s length.


don’t know P. Third, it still threatens a kind of skepticism concerning, not whether we ever know P, but whether we can ever claim to know P. We’ll examine these in order.

“Q is true but S (who isn’t me) doesn’t know that Q is true” doesn’t violate the knowledge rule. So “S knows P; knowing P requires that Q is true; so Q is true; but S doesn’t know Q” doesn’t violate the knowledge rule either. But, one might claim, this – as well as the more succinct “S knows P but doesn’t know Q” – is still infelicitous (when P and Q are as per Dretske cases). Closure would explain this, since it applies to S as much as to me. So the closure explanation is better.

But third-person conjunctions are not, or at least not very, infelicitous when considered in the right frame of mind. My wife and I go to the train station to catch the train to Kalamazoo. We consult the timetable posted on the wall, which indicates that the train leaves in half an hour. My wife goes to get coffee. While she’s gone, it occurs to me that the schedule might have changed since the timetable was posted. I have no particular reason to suspect this, but I’ve got nothing else to do; I might as well check. I go to the ticket agent and ask; he confirms that it’s up to date. Reassured, I sit back down. My wife returns with the coffee. I don’t bother to mention the schedule change possibility or my consultation with the ticket agent, and that possibility doesn’t occur to her at all. I would not only take myself to know when the train leaves; I would take my wife to know that as well. But I feel no inclination to think that she knows that the schedule hasn’t changed. She didn’t ask the ticket agent, I did; that possibility didn’t even occur to her.
If, before she returned with the coffee but after I’ve queried the ticket agent, another passenger sitting across from me were to ask me whether she knows when the train is leaving – they’re worried that she won’t get back in time – I wouldn’t hesitate to say “yes, she does.” And if they worry aloud that the schedule might have changed and ask me whether either of us knows that it hasn’t, I’d reply “my wife doesn’t; she just looked at the timetable and went to get coffee. But I checked with the ticket agent, who assured me that it hasn’t changed. So, no worries; she knows when it’s leaving, and will get back in time.” I don’t feel any tension in these responses. What’s significant about this example is that I have been assured that the possibility that occurred to me – that the schedule has changed – is not 

Add, if you like, that the agent also indicates that the schedule hasn’t changed in decades, that the transit authority isn’t considering any such change, and so on, so that it is highly improbable that it has changed and the nearest world in which it has is distant.





realized. If, after wondering whether the schedule has changed but before consulting the ticket agent, I consider the conjunction “my wife knows when the train leaves but not that the schedule hasn’t changed,” I will unsurprisingly be unwilling to affirm the first conjunct. I’m now wondering whether the schedule changed; and if it has, neither of us knows when the train leaves. But the assurance provided by the ticket agent makes all the difference.

In general, the salience of third-person abominable conjunctions of the form “S knows P but doesn’t know Q” naturally leads me to wonder whether Q is true. Since, I realize, Q must be true in order for the first conjunct to be true – S can’t know P if Q is false – I’m also wondering whether S knows P. But if I’m wondering whether S knows P, then I won’t affirm that S knows P. So it would be infelicitous for me to affirm the first conjunct, and so the conjunction. But if I’m then assured that Q is true, then I can affirm that S knows P; the threat to her (and my) knowing posed by the possibility that Q is false is allayed. But none of this gives me any reason to claim that S knows Q; I’m the one who investigated the matter, not her.




Note that “she knows that the train leaves in half an hour but I don’t” is infelicitous.

The reader might notice the similarity between this and Cohen’s airport case (Cohen ). I think the same reactions would be appropriate in that case, as well as in the other standard cases to which contextualists appeal, if similarly modified. If the relevant concern is considered by one of the relevant parties, and that person is then reassured that the concern is unfounded, then that person will be content to ascribe knowledge to the other party, even while conceding that the other party doesn’t know that the concern is allayed. Moreover, it doesn’t seem to matter what the stakes are. If, in the train scenario, it’s very important that we get to Kalamazoo, the same pattern of thoughts and responses seems appropriate. The contextualist might try to respond that, when I tell the other passenger that my wife knows when the train leaves, all I’m really conveying is that her opinion is correct. But that would amount to a (very) low knowledge-ascription standard, which is precisely not the standard that should govern the conversation since the passenger and I are both explicitly considering, and taking seriously, the possibility that the schedule has changed. This, I think, significantly undermines the force of the contextualist’s appeal to such examples.

This is another place where the widespread focus on wholesale skeptical hypotheses is problematic. In a sense, I can assure myself that my wife isn’t a BIV and knows, of course, that she has hands; I saw her go for coffee, after all, eyes and hands intact. But if I wonder whether she is a BIV, I’ll naturally also wonder whether I am as well, and realize that, if I am, then what (I thought) I saw is irrelevant. (I won’t even have a wife.) But there is no cosmic agent who can assure me that I’m not a BIV.
I’m stuck at the same point that I was at in the train example after the schedule-change possibility occurred to me but before I consulted the ticket agent, perennially wondering whether the skeptical hypothesis is true. (At least, I’m stuck there so long as I consider the issue; see §..) It’s no surprise that I won’t affirm the abominable conjunction “my wife knows that she has hands but not that she’s not a BIV”; in wondering whether I’m a BIV, I’m wondering whether my senses tell me anything, including anything about her, at all. Nevertheless, it may still be true that I’m not a BIV, that my wife (as I can see) isn’t either, and that my wife knows that she has hands, even though neither of us know that we’re not BIVs.


If, unlike me, you still sense a tension in the pattern of ascriptions in the train example, I hope you will concede this much: it’s less infelicitous than in first-person cases. Moreover, the alternatives seem to be more infelicitous. It would be odd for me to respond to the other passenger by saying “my wife doesn’t know when the train is leaving.” That seems an unduly harsh verdict given that, as I am fully aware, her epistemic circumstances are as favorable as they are (she consulted an up-to-date timetable). And it would be odder still to respond by saying “she knows that the schedule hasn’t changed”; that seems an unduly generous verdict given that, as I am fully aware, her epistemic circumstances are as unfavorable as they are (she didn’t ask the ticket agent; it didn’t even occur to her to do so).

Closure does a terrible job of explaining this. On the closure explanation, it’s logically (or conceptually) impossible for her to know that the train departs in half an hour without knowing that the schedule hasn’t changed. So we should find one or the other of these two claims – that she knows when the train departs and that she doesn’t know that the schedule hasn’t changed – to be utterly infelicitous; one of them must be wrong. But that’s not at all how the intuitions go: the conjunction seems less infelicitous than does either claim. Nor can this be put down to uncertainty as to which claim is false, since there’s no obvious explanation for that uncertainty. Both I and the other passenger are fully apprised of the relevant epistemic features of the situation, so there’s no obvious reason why either of us should be unsettled as to which it is.
Moreover, my responding to the other passenger by saying “she must either not know when the train departs or know that the schedule hasn’t changed, but I’m not sure which” seems more infelicitous – or, at least, no more felicitous – than does my saying “she doesn’t know that the schedule hasn’t changed; she just checked the timetable and left for coffee. But no worries, I checked; it hasn’t changed. So she knows when it’ll leave.” So any residual infelicity felt when reading through the train example is a very slim reed upon which to rest one’s argument for closure, particularly 



That is, it’s impossible so long as she – bizarrely – comes to believe that the schedule hasn’t changed as a result of inferring this from “the train leaves in half an hour,” since that’s required by knowledge closure’s antecedent. In the scenario described above, she doesn’t do that. So, as per §.., it’s compatible with closure that the abominable conjunction is true of her. But then we shouldn’t find the conjunction to be infelicitous. Insofar as there is a closure-based explanation for the supposed infelicity of third-person abominable conjunctions, it must be (somehow) claimed that she can’t know that the train leaves in half an hour unless she knows that the schedule hasn’t changed. I assume that the closure advocate won’t attempt to explain this by claiming that, although closure is true, we are uncertain that it is.





given the weight of the arguments against it.

In general, third-person “abominable” conjunctions seem infelicitous, when they do, because, when considering them, we wonder whether Q is true, and we can’t wonder whether it’s true, and so whether the other person knows P, while taking it to be true. But that infelicity dissipates when we do take ourselves to know that Q is true; we’re content to view S as knowing P. Q’s being true is a condition of S’s knowing P, and she doesn’t know that that condition is satisfied. So she doesn’t know that every condition of her knowing P is satisfied. But who does? What matters is that they are.

10.7.5 Asserting “I Don’t Know”

The second objection to the knowledge-rule explanation is that we’re not merely unwilling to assert, for example, “I know that I have hands” when the BIV hypothesis is salient; we’re willing to positively assert “I don’t know that I have hands.” The knowledge-rule explanation doesn’t explain that. I take myself not to know that I’m not a BIV, and I recognize that I must not be in order to know that I have hands. So I’m unwilling to claim that I know that I have hands; my knowing that depends on my not being a BIV, I realize, but I’m not willing to claim that I know that. But that doesn’t explain why I would claim that I don’t know that I have hands; that would suggest that I think that I am a BIV (or that some other condition of my knowing has failed). But I have no reason to think that.

Much is made of this in the literature, in various ways. DeRose, for example, appeals to it in defense of contextualism, since contextualism predicts that one doesn’t know that one has hands in high-standards contexts and so can felicitously say that one doesn’t. But I think that much more is made of this than should be made.

To see why, suppose that I’m not convinced either that I do or that I don’t know P, for whatever reason. If you ask me whether I know P, I could say “I’m sorry, I can’t say that I do.” But it also seems acceptable to say “I’m sorry, I don’t.” On the face of it, that’s odd. If I’m undecided as to whether there are aliens, that obviously doesn’t license my saying that there



As per §.., the answer had better be “nobody” if we’re not going to end up skeptics. Of course, if closure is true, my wife’s not knowing that the schedule hasn’t changed implies that she doesn’t satisfy all the conditions required for knowing when the train leaves; for then one of those conditions is that she knows that it hasn’t changed. But this claim only has force if, even while emphasizing that the schedule hasn’t changed, the conjunction still seems abominable. But it doesn’t. See DeRose , .


aren’t any. And yet, it seems acceptable to express my indecision as to whether I know P by saying that I don’t know it. Why is that?

Here’s an hypothesis. We sometimes use “I don’t know P” to indicate, not that we don’t actually know P, but only that we’re not convinced that we do know P. When we consider who knows what, after all, we are typically concerned with what we should treat as settled – or indicate to others what we treat as settled – and so mobilize in subsequent thought and action. If I either take myself to not know P or am unconvinced that I do know P, P will be unsettled for me; in either case, I don’t take myself to know it. I suspect that we often use “I don’t know P” to indicate, not that it is settled for us that we don’t know P, but rather that it is not settled for us that P is true.

But it doesn’t matter what the explanation actually is. All that matters is that it’s acceptable to say “I don’t know P,” not only when one is convinced that one doesn’t know P, but also when one is not convinced that one knows P. In fact, this seems to me to happen all the time. In the train example, after I wonder whether the schedule has changed but before I ask the ticket agent, I’m wondering whether my wife and I do know when the train leaves: if the schedule hasn’t changed then we do know this, but if it has, we don’t. If the passenger across from me asked me at that point whether I know when the train leaves, I could respond by saying “I’m not sure; I do know when it leaves if the timetable I consulted is up to date. But it might not be; I’m going to check with the ticket agent.” But I could also say “No, I don’t. I did check the timetable on that wall. But it might not be up to date.
I’m going to check with the ticket agent.” In either case, I’ve indicated to them that, at the moment, it’s an open question for me whether the timetable is up to date, so I can’t assure them that the train does leave in half an hour. Moreover, it seems somewhat more appropriate to respond in the former manner. Of course, if I view my available responses to their query as to whether I know when the train leaves to be only “yes” or “no,” I should obviously choose the latter; the former would inaccurately indicate that the departure time is settled for me. But if I allow myself the opportunity to indicate that I’m unsure as to whether I do know this – because my knowing it depends on the timetable’s being up to date, which I’m now wondering about – then it seems a more accurate representation of the situation to just say that. It’s not as if I know that the timetable isn’t up to date, so that I definitely don’t know when the train leaves.

Note that the timetable’s not being up to date doesn’t mean that the train doesn’t leave in half an hour. The departure time of that particular train might be the same on the old and new timetables.





It might be thought that my second-order lack of conviction concerning whether I know that the train leaves in half an hour carries over to my first-order opinion as to whether the train leaves in half an hour, so I don’t know this simply because I don’t believe it with sufficient strength. But that doesn’t apply to my wife, who hasn’t considered the schedule-change possibility and still believes that the train leaves in half an hour. Nevertheless, the response “I’m not sure that she does know when the train leaves; that depends on whether the timetable is up to date” seems a more appropriate, because more accurate, representation of my situation when reflecting on what she knows than does “she doesn’t know when it leaves.”

So, to return to the objection: when I say, or think, “I don’t know P” when the skeptical possibility is salient, I may not be registering a conviction that I don’t in fact know P, but only an unwillingness to claim (or think) that I do, since I’m wondering about precisely that. And, if I’m given the option, it would be more appropriate to affirm instead that I am not in a position to claim that I do know P. Indeed, this seems to me the better interpretation of my response. Contemplation of skeptical hypotheses seems to me to initially trigger hesitation with respect to the question whether I know the ordinary claim: I was comfortable claiming that knowledge before, but I’m now uncomfortable doing so. It seems a natural interpretation of this reaction, not that I’ve suddenly realized that a condition of my knowing isn’t satisfied – that the schedule has changed, for example – but rather that I’m uncertain as to whether a condition of my knowing is satisfied – namely, that it hasn’t changed.





In the same way, in Cohen’s airport case it seems at least as appropriate, if not more so, for Mary (or John) to say “I’m not sure that Smith does know that there is a layover in Chicago; if the itinerary he consulted is correct then he does, but it could contain a misprint; we should check with the ticket agent” as it would be for her to say “Smith doesn’t know that there is a layover in Chicago.” Suppose that Mary checked the printed itinerary against an online schedule before arriving at the airport, so that she is assured that the printed itinerary is correct, and that she never bothered to mention this to John. John overhears Smith saying that there is a layover in Chicago but worries that Smith’s itinerary contains a misprint. After a surreptitious glance at Smith’s itinerary, Mary notes that it’s identical to hers. It would be strange for her to say to John “I checked yesterday; the itinerary that both Smith and I have is correct. So I know that there’s a stopover in Chicago. But he doesn’t.” Analogous comments apply to the other cases to which contextualists appeal. It’s also worth noting that the outcomes of some recent experiments testing contextualist predictions suggest that people tend to be uncertain as to whether the high-stakes subject knows rather than confident that she doesn’t know (Buckwalter, unpublished).


Abominable Conjunctions, Contextualism, & Spreading Problem 

10.8 Assumptions and Skepticism

10.8.1 Second-Order Skepticism

Thanks to the knowledge rule, it turns out that we can only claim ordinary knowledge so long as we ignore skeptical hypotheses. For, as soon as we attend to those hypotheses, we recognize that they must be false in order for those ordinary claims to be true, but also that we don’t know that those hypotheses are false. So we have to stop claiming ordinary knowledge on pain of violating that rule, even if those claims are true.

One might worry that this means that we are never in a position to claim ordinary knowledge. It might seem odd that we can only make such claims if we steadfastly ignore skeptical possibilities; putting one’s head in the sand is a strange way to earn the right to claim such knowledge. But if we can’t earn that right this way then the skeptic will have won, in a sense: perhaps we do know, he might concede; but we’ll never be in a position to legitimately claim that knowledge. And that’s skepticism enough.

Call this second-order skepticism, as opposed to first-order skepticism. First-order skepticism denies us knowledge of the propositions that we ordinarily claim to know (whether or not we can reasonably claim to know them), whereas second-order skepticism denies us the right to claim to know those propositions (whether or not we do in fact know them).

Advocates of CPMI might think that they don’t confront the threat of second-order skepticism, since they claim that we do know the denial of skeptical hypotheses. But they’d be wrong. Everyone recognizes a tendency to deny that we have ordinary knowledge when we contemplate skeptical hypotheses. CPMI advocates typically put this down to our being unwarranted in asserting that we have ordinary knowledge – and, moreover, being warranted in asserting that we don’t – when we contemplate skeptical hypotheses. But then we can’t properly claim ordinary knowledge whenever we do contemplate those hypotheses.
So, if we can only properly claim ordinary knowledge if we can also do so when contemplating skeptical hypotheses, as the second-order skeptic suggests, then we could never properly claim ordinary knowledge. If putting one’s head in the sand is a problem, it is so for the CPMI advocate as well.

But it isn’t a problem at all; sometimes, putting our heads in the sand is a perfectly reasonable thing to do. Our epistemic relation to propositions we don’t know isn’t uniform. Even though we don’t know that they are true, we can reasonably assume that some of them are, and conduct our





cognitive, dialectical, and practical lives accordingly. That is, I suggest, our situation with respect to at least the vast majority of skeptical hypotheses.

10.8.2 What Are Assumptions?

But doesn’t assuming that P involve thinking that P? But there are any number of skeptical hypotheses – plausibly an infinite number of them – for any ordinary claim to know; we obviously haven’t considered, and then assumed, that all of them are false.

This depends on what an assumption is. Despite the ubiquity of references to assumptions in philosophy, logic, mathematics, and so on, there is, remarkably, very little direct discussion of this issue in the philosophical literature. One of the rare exceptions is a short but, I think, insightful article by P. S. Delin, P. Chittleborough, and C. R. Delin (DCD hereafter). DCD point out that it is assumed (!) by many that an assumption is a mental entity – something akin (or equivalent) to an implicit belief – coding a proposition that then functions as a kind of implicit premise in the thinking of an agent who “makes” that assumption. But, they argue, this proposal is difficult to reconcile with what we are willing to call assumptions.

You are boiling an egg for your breakfast. You may be said to assume that the egg will not dissolve or explode, and that the stove will not fly away, and, more prosaically, that the egg will be ready within the time allotted to the meal, and that the stove will not catch fire . . . [T]he list of assumptions may in fact be, not merely large, but functionally infinite. We don’t just assume that the egg will not melt or explode. We assume it will not turn into a wombat or a crow, and that the stove will not go on strike, or stop heating while it engages us in conversation . . . Clearly one could go on listing such assumptions as long as one’s imagination, and the patience of one’s auditor, held out.



There is, however, an immense literature concerning the related notion, "presupposition." But, while there are points of contact between them, assumptions and presuppositions are not the same thing, at least not as the latter are referenced in the literature. For example, assumptions fail the "negation test." When I tell someone that they can get to South Haven by taking , I'm assuming that  hasn't been closed due to a traffic accident. But I don't need to assume this if I were to assert instead that they can't get to South Haven by taking . (Indeed, I might affirm the latter because I come to believe that that assumption is false.) Whereas "I caused that traffic accident" – a presupposition of "I regret causing that traffic accident" – passes the test: "I don't regret causing that traffic accident" still requires that I caused it. Insofar as assumptions are analogous to presuppositions, they are closer to Stalnaker's pragmatic presuppositions than to semantic presuppositions (see Stalnaker  and ).
Delin, Chittleborough, and Delin .
Delin, Chittleborough, and Delin , –.


Abominable Conjunctions, Contextualism, & Spreading Problem

It is, DCD suggest, highly implausible that these assumptions are stored propositions that are then activated in one's thought, speech, and action. They propose instead that it is better to think of an assumption as an absence rather than a presence of something in one's cognition.

   The stove-user does not usually think 'The stove will not turn into a wombat'. He or she merely fails to consider the possibility that it might . . . Whenever we think, reason or argue certain boundaries obtain, limiting the scope of the thinking, or the solution set we will consider, or the universe of things we will regard as relevant. These boundaries or limits cannot be directly observed. They are not in any sense 'things', but are aspects, more or less complex, of the framework within which our thinking is confined.

In short, to assume P is not to think that P is true, not even subconsciously; it is to fail to consider the possibility that P might be false. An assumption describes a limit on one's thought rather than something that one thinks.

Assumptions, so understood, need not be avoidable, or even conceivable by the agent who makes them. Our visual processing is plausibly constrained by a variety of assumptions about light (it travels in a straight line), about space (it is Euclidean at the relevant scale), and so on. These assumptions might well be hard-wired, structural characteristics of our visual system: try as we might, we can't "see" in a way that is not so constrained. We can, however, now think in a way that isn't so constrained. But many conceptual developments were needed in mathematics before non-Euclidean geometries became so much as available to thought. Before that, our manner of thinking about geometric matters assumed that the only geometry is Euclidean geometry, notwithstanding the fact that we were in no position to conceive of an alternative.

Assumptions, so understood, are also essentially subterranean: to explicitly consider whether an assumption is true is to no longer assume it.

Delin, Chittleborough, and Delin , –.
This is why identifying one's assumptions is so hard, and requires "lateral" rather than linear thought. A good example is the nine-dot problem, which is the origin of the phrase "thinking outside the box." Present an array of nine dots arranged in three rows of three, and ask people to draw four straight lines through every dot without lifting the pen from the paper. People often find it difficult to do this because they assume that the lines can't extend beyond the box delimited by the outside dots, and there is no solution on that assumption. It's not as though they think "I mustn't draw outside the box"; that possibility simply doesn't occur to them. Their reasoning is, rather, constrained in such a way that only solutions that conform to that assumption do occur to them. To solve the problem, they need to recognize that their reasoning has been so constrained, realize that it need not be so constrained, and thereby widen the scope of admissible solutions.





So to declare "I'm assuming P" is, in a way, self-defeating; insofar as one considers whether P is true to the point that one sees fit to declare this, one is no longer assuming it. Such a declaration indicates, rather, that one was assuming it and, perhaps, that one intends to continue to do so. But acting on that intention isn't easy; like a genie let out of its bottle, once an assumption is out in the open it can be difficult to get it back into place. Doing so requires that one disregard a possibility that one has explicitly considered, and restrict one's subsequent thought, speech, and action in the way that, one has realized, they were restricted before the possibility arose.

10.8.3 Dismissing versus Answering the Skeptic

It's difficult to do this, but not impossible. When the misprint skeptic interrupted my celebration of the Broncos' win in §.., I was forced to, as it were, step outside of my own behavior and consider whether it's appropriate for me to continue to behave that way: should I continue to claim that I know that they won, celebrate their win, etc.? The skeptic has changed the "conversational score": so long as I interact with him, I'm no longer assuming that the report isn't a misprint, so I can't now speak and behave as would be appropriate under that assumption. But I also can't respond by simply asserting that I know that the Broncos won. He will remind me that I only know this if it isn't in fact a misprint and that I don't know that. I can, however, change the score back. Upon reflection, I decide that it's perfectly reasonable to assume that the report is not a misprint; it's very unlikely that it is, after all, so it's a safe assumption to make. I then dismiss that possibility and continue with the celebration. I haven't really answered the skeptic, at least not to his satisfaction.
To attempt to do that would be to concede that I can’t reasonably ignore the possibility that he presses upon me, so that I can only either accept the challenge to show that I do know that the report isn’t a misprint or stop claiming that I know that the Broncos won. What I do, instead, is decide that it is reasonable for me to 



We do, of course, say "assume P" in the course, for example, of a conditional proof. But what follows is the inferential behavior expected of one who assumes that P is true in the sense described here; it's as though we've signaled that we're going to proceed, during the segment of the proof constrained by that assumption, as we would if we did assume it in that sense, and then stop doing so when the assumption is discharged.
This conversational interaction with a real misprint skeptic can be modeled in one's own mind. In thinking about the skeptical possibility one has become, as it were, one's own skeptical interlocutor; one is thinking in a way that is no longer governed by the assumption that the skeptical possibility is false.


ignore that possibility, even though I don't know that it isn't realized, because it's very unlikely that it is. That licenses my assuming that it isn't realized, as I did before I was accosted by the skeptic. I then think and behave as before, and so as though the misprint possibility never came up; I've stuck my head back in the sand. But there's nothing wrong with my doing so. I'm just assuming that it's false once again – and reasonably so – and getting on with my life.

Before I leave the skeptic to stew in his own juices, I can say this much to him: "When I said that I knew that the Broncos won before you brought up the misprint possibility, I was assuming that the report isn't a misprint. You're right; I didn't know that it isn't. But it was – and still is – a reasonable thing to assume; it is, after all, very unlikely that it is a misprint. And if that assumption is correct – it isn't a misprint – then I did know that the Broncos won." That seems to be an entirely felicitous thing to say. But it wouldn't be if I was guided by closure. My claim that I knew that the Broncos won couldn't be true even if that assumption is true, since – I think – I didn't know that it is.

10.8.4 Reasonable Assumptions

The skeptic might try one last shot: why is it reasonable to assume that skeptical hypotheses are false? I very much doubt that there is one answer applying to every skeptical hypothesis. In some cases, we may need background evidence to the effect that the assumption is probably true (as, perhaps, with the misprint case). In other cases, we might not even need that evidence. We of necessity conduct our cognitive and practical lives against the background of a vast body of assumptions that we never consider or investigate; we don’t even possess the conceptual resources to so much as contemplate all of them. If those assumptions are not reasonable – so that we are in no position to claim to know the many things that we claim to know, the knowing of which requires that those assumptions are true – then we can claim to know very little if anything at all. But the disastrous consequences for our cognitive, dialectical, and practical lives that would result from our taking ourselves to be in no position to claim to know anything are obvious. We are particularly poorly positioned to investigate wholesale skeptical hypotheses. As Wright suggests, any such investigation would require assuming once again that they are false (see §..). But, as Wright also 

To assume does not, therefore, inevitably make an ass out of you and me.





suggests, it may be reasonable to assume that they are false for that very reason; unless we are allowed those assumptions, we wouldn't be able to investigate anything at all. Wholesale skeptical hypotheses are also, by design, such that there would be no discernible difference in the course of our experiential lives if they were true. So, in that sense at least, no practical cost could accrue to our assuming that they're false when in fact they're true; as we saw in §.., we'd have no reason to "do" anything differently if they were true.

Of course, those assumptions might actually be false; and, even if not, there might be a much greater threat of their being so than we think. So our ordinary claims to know might well also be false. But the goal here isn't to demonstrate that skeptical hypotheses are false; it's to show that our not knowing that they are false need pose no threat to our ordinary knowledge.

The skeptic might reply that no assumption is reasonable unless we know – or, at least, are in a position to know – that it is true. But I see no reason to agree. We distinguish between knowledge and reasonable assumption all the time, and happily affirm that we often don't know what we nevertheless reasonably assume. The skeptic would need a principled argument to the effect that we are always mistaken when we do that, and there is, so far as I know, no such argument on the table.

I won't attempt to identify the grounds for the reasonability of our assuming the falsehood of various skeptical hypotheses further here. That's an enormous task, and one that others have already undertaken. But, if such assumptions are reasonable when, at least, most ordinary knowledge



 

As the latter possibility indicates, its being reasonable to assume that the skeptical hypothesis is false, and the hypothesis' being false, might not suffice for knowledge. It might be reasonable for me to assume that the skeptical hypothesis is false because it probably is false given my evidence – I have good testimonial evidence that this paper rarely produces misprints, for example – but that evidence is misleading: in fact, this paper (or this edition of it) is riddled with misprints. Then, even if the specific report that the Broncos won is not a misprint, I still might not count as knowing this. (This amounts to a fake-barn-style Gettier case.) If not, then knowing that the Broncos won requires, not only that the report isn't a misprint, but also that it is objectively improbable that the report is a misprint (and, perhaps, that it is improbable given my evidence that it is a misprint). But none of this suffices for my knowing that the report is not a misprint. I need, however, take no stand here on the question whether it must be objectively improbable that the report isn't a misprint or, more generally, whether fake-barn-style Gettier cases really are cases of unknown belief. See §..
Notwithstanding Chapter , the view sketched here is analogous to that of Crispin Wright if one's reasonably assuming P is equivalent to one's having a warrant to trust that P in his sense. It is also akin to Annalisa Coliva's "moderatist" position (which also incorporates closure failure). See the references in Chapter , fn. .


claims are made, then we are within our rights to make those claims. And, if those assumptions are also true, then those claims are correct.

10.8.5 Summary

The view that I'm presenting on behalf of the CDMI advocate is, then, as follows. The negations of skeptical hypotheses – including the Gettier versions of those hypotheses – express necessary conditions of our knowledge of ordinary propositions. In the course of our ordinary thought, speech, and action, we assume that those conditions hold, but don't know that they do. Some of those conditions are also deductive consequences of the ordinary propositions themselves, although some – the Gettier versions – are not. If we explicitly consider whether those conditions hold, we no longer assume that they do. We then run up against the knowledge rule: we can't felicitously affirm that they hold while conceding that we don't know that they do. So, in contexts wherein we are considering whether the skeptical hypothesis is false, we can't felicitously claim the corresponding ordinary knowledge. But that doesn't imply that we don't in fact have that knowledge. And it doesn't imply that we can't reasonably assume that the skeptical hypothesis is false in the course of our ordinary lives wherein we assert, and act on, ordinary claims to know.

This is reminiscent of closure-denying contextualism: I can't felicitously claim to know either P or Q when Q is salient – and, in particular, when I take the skeptical possibility seriously and so wonder whether it's true – but I can felicitously claim to know P (but not Q) when Q isn't salient. And it's reasonable for me to claim to know P – so long as Q isn't salient – when my assuming that Q is true is itself reasonable. It's entirely plausible, moreover, that what it is reasonable for me to assume depends, in part, on



It could, however, be claimed that, if those assumptions are unreasonable, then I don’t know the ordinary proposition even if those assumptions are true. If, for example, it’s probable given my evidence that the report is a misprint, then it’s plausible that I don’t know that the Broncos won on the basis of the report even if that report itself is not a misprint. If, on the contrary, I do know so long as the report isn’t a misprint – and, perhaps, it is objectively improbable that it is a misprint – even if it probably is a misprint given my (perhaps misleading) evidence, then I can know that a proposition is true even though it’s not reasonable for me to claim that knowledge. I need take no stand on this issue here. It is also reminiscent of the position defended by Harman & Sherman  and , and Di Bello . The difference is that these authors claim that assumptions are required in order to know ordinary claims. My suggestion, instead, is that assumptions are required in order to claim to know them (and those assumptions must be reasonable in order for those claims to be reasonable). Whether one knows them in fact turns only on whether those assumptions are true. (See, however, fn. ).





what's at stake: if it matters a lot whether P is true, then the standards that determine whether it is reasonable to assume that Q is true, and so to claim to know P, may well be much higher. The difference is that, on the present view, this variation isn't a result of the fact that I don't know P when Q is salient; it's only infelicitous to claim that I do, although I might nevertheless know it. And there is at least this much advantage in the present view: it's not much of a surprise that salience of various possibilities – and so what we are and are not assuming in a context of speech or thought – would have an impact on what we can reasonably say or think, including what we can reasonably say or think we know, in that context. But it's unclear why it should have an impact on the truth of the claims themselves, particularly when the posited impact is to render the resulting truth conditions unsatisfiable by just about everyone.

The suggestion that closure is required to explain the infelicity of abominable conjunctions is, on this view, ultimately a result of conflating what one does know in fact with what one claims to know. From the first-person perspective, these are the same: to me, what I know just is what I claim to know. But they come apart from the third-person standpoint: I can coherently think that someone else doesn't know what they claim to know and that they do know what they don't, or can't, claim to know. I suspect that a lot of erroneous epistemological theorizing has resulted from a failure to keep these two perspectives separated. The suggestion that only closure can explain abominable conjunctions – and, more generally, that closure is true – is an example.

10.9 The Spreading Problem

My response to the spreading problem is, mercifully, much shorter. The problem originates from the fact that there is more than one available inferential path from the ordinary claim to the denial of the skeptical hypothesis. Up to now, we've been concerned with the direct inference from, for example, "it's a zebra" to "it's not a disguised mule." But, as Hawthorne and others have pointed out, there are other ways to get from the former to the latter that are less direct but that utilize inference rules such that it is unintuitive that closure would fail for any agent employing them.

 

See fn. . Hawthorne's objections are directed specifically at Dretske's view. But they apply to any closure-denying view, including mine.


Two such routes are prominent in the literature. One – the disjunction path – runs as follows:

(1) It's a zebra.
(2) So, it's not a mule.
(3) So, it's not a mule or it's not disguised to look like a zebra. (2, Disj. Intro.)
(4) So, it's not a disguised mule. (3, Equiv.)

The other – the conjunction path – runs as follows:

(1) It's a zebra.
(2) So, it's a zebra and it's not a disguised mule. (1, Equiv.)
(3) So, it's not a disguised mule. (2, Conj. Elim.)

Whichever path is referenced, the argument against closure based on it is essentially the same, running as follows. "It's crazy to suggest that knowledge isn't closed over equivalence, and just as crazy to suggest that it's not closed over disjunction introduction (or conjunction elimination). But successive use of these inferences gets us from 'it's a zebra' to 'it's not a disguised mule'. So, if S does know that it's a zebra, then she will know that it's not a disguised mule by these paths, as long as she recognizes that these inferences are valid; so closure is preserved."

Notice that the issue here isn't really closure; it's transmission. Nobody resting their commitment to closure on this argument thinks that, although these paths don't transmit, nevertheless some other source of warrant for "it's not a disguised mule" steps in to save the day for closure. The claim being made is precisely that it is unintuitive to think that instantiations of equivalence and disjunction introduction or conjunction elimination don't inevitably transmit. If they do transmit, that would not demonstrate on its own that the original, direct inference from "it's a zebra" to "it's not a disguised mule" also transmits. But it won't suffice to simply point that out. If S does infer along either of these paths, then her doing so will still violate NIFN: if the conclusion is false, then she will have employed a method – routing, now, through multiple inferences rather than one – that is guaranteed to deliver "it's not a disguised mule" whenever it is a disguised mule. And I've claimed that violations of NIFN don't confer warrant. So I can't concede

"It's not a disguised mule" is equivalent to "It's not the case that it's a mule and disguised to look like a zebra," which is equivalent to (3) by DeMorgan's Rule.
"It's a zebra" implies "it's not a disguised mule" and, of course, "it's a zebra"; so (1) implies both conjuncts of (2). And (2) obviously implies (1). So they're equivalent.
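The two equivalences these footnotes invoke can be set out schematically. The letters are my own shorthand, not the author's notation: Z for "it's a zebra," M for "it's a mule," D for "it's disguised to look like a zebra," so that "it's not a disguised mule" is ¬(M ∧ D).

```latex
% Disjunction path: the final step is equivalent to the disjunctive
% step by De Morgan's Rule:
\neg(M \wedge D) \;\equiv\; \neg M \vee \neg D

% Conjunction path: Z entails \neg M, and hence \neg(M \wedge D);
% conjoining a proposition with one of its own consequences is trivial:
Z \;\equiv\; Z \wedge \neg(M \wedge D)
```

Read this way, the two footnotes amount to nothing more than De Morgan's Rule and the observation that adding an entailed conjunct preserves logical equivalence.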





that these inferential paths (or any other paths that get from "it's a zebra" to "it's not a disguised mule") do transmit.

Some closure deniers respond to the spreading problem by trying to find formal ways to separate inference patterns that transmit from those that don't. However attractive that approach might be, I think the prospects are dim. This is because whether an agent's trip down an inferential path violates NIFN depends on more than just the inference rules employed, or the content of the relevant propositions; it also depends on S's basis for the initial premise. And that's not a formal matter. So I won't respond to the problem by attempting to find a formal method to delimit inference patterns that transmit from those that don't. Instead, I'll just concede that it is intuitive that the disjunction and conjunction paths transmit when they are considered in the abstract. The rejection of those intuitions – when applied to Dretske cases – is a price that the closure denier has to pay.

The closure advocate, however, has their own price to pay. The argument for closure from the spreading problem runs as follows:

(1) Considered in the abstract, it is intuitive that the disjunction and conjunction paths transmit warrant.
(2) So they do transmit warrant, despite the fact that it is unintuitive that any agent could acquire a warrant from "it's a zebra" to "it's not a disguised mule" by any path, given how the agent acquires her warrant for "it's a zebra."
(3) So the direct inference from "it's a zebra" to "it's not a disguised mule" transmits as well, despite the fact that it is unintuitive that it does so.

But one can argue equally well in reverse:

(1) It is unintuitive that any agent could acquire a warrant from "it's a zebra" to "it's not a disguised mule" by any path, given how the agent acquires her warrant for "it's a zebra."
(2) So the inferences from "it's a zebra" to "it's not a disguised mule" don't transmit warrant by any direct or indirect path.
(3) So the inference from "it's a zebra" to "it's not a disguised mule" doesn't transmit warrant by the disjunction or conjunction paths, despite the fact that it is intuitive that they do when they are considered in the abstract.

See, for example, Hawke .


We have here the same clash of intuitions that we already noted in §., between those elicited by considering certain inference patterns in the abstract, which favor transmission, and those elicited by considering a particular range of instantiations of those inference patterns, which favor transmission failure. There is no obvious reason why this conflict should be resolved in favor of transmission. Moreover, we've seen good reason to favor the transmission-failure resolution. To countenance transmission in Dretske cases is not only highly unintuitive; it also conflicts with another general principle concerning warrant transmission, namely, NIFN, which is itself highly intuitive. Moreover, as we've also seen, transmission fails for a variety of conditions of warrant; it doesn't matter to the arguments for this whether the agent performs a direct or indirect inference from "it's a zebra" to "it's not a disguised mule." So, insofar as there's a position to favor here, it's the denial of closure.


 

Bootstrapping, Epistemic Circularity, and Justification Closure

In this chapter we’ll consider the bootstrapping problem, epistemic circularity, and justification closure in light of the results of the previous chapters.

11.1 Bootstrapping

11.1.1 BS Reasoning

Jonathan Vogel presented a well-known example of what he calls "bootstrapping" that, he thinks, undermines reliabilist accounts of knowledge. I'll present an analogous case. Electrician Roxanne has a tester that determines whether the hot and neutral wires in a household AC circuit are connected to an outlet properly, that is, that the hot wire is connected to the narrow-prong slot and the neutral wire is connected to the wide-prong slot, rather than vice versa. If the wires are connected properly, the tester reads PROPER and, if they are connected improperly, it reads IMPROPER. She inserts the tester into an outlet concerning which she has no prior information as to whether the wires are connected properly. If the tester reads PROPER, she reasons as follows:

Proper
(1) The tester reads PROPER. So,
(2) The outlet is properly wired. So,
(3) The tester reads PROPER and the outlet is properly wired. So,
(4) The tester's reading is correct this time.

If the tester reads IMPROPER, she reasons as follows:

Improper
(1) The tester reads IMPROPER. So,
(2) The outlet is improperly wired. So,

Vogel . See also Fumerton .







(3) The tester reads IMPROPER and the outlet is improperly wired. So,
(4) The tester's reading is correct this time.

Roxanne then inserts the tester into  other outlets. She has no prior information as to whether those outlets are properly wired either. She reasons from (1)–(4) each time. She then counts the number of times she has arrived at conclusion (4) and continues:

(5) The tester's reading has been correct  times. So,
(6) The tester is reliable.

Call this "BS" (bootstrapping) reasoning. Intuitively, this is no way to determine that a tester is reliable (so it's BS in another sense) even if, as a matter of fact, the tester is working properly and so gave a correct reading in each case. But it's difficult to see why. Roxanne can presumably know what the tester's reading is by looking at it, so premise (1) seems unproblematic. And presumably an electrician can use a tester to learn whether an outlet was wired properly, so (2) also seems fine. (3) follows from (1) and (2) by conjunction introduction. The tester's reading is correct if and only if either the reading is PROPER and the outlet is wired properly or the reading is IMPROPER and the outlet is wired improperly. Since (3) in Proper is the first disjunct and in Improper the second disjunct of the latter disjunction, (4) follows from both by disjunction introduction. The  instances of (4) together imply (5) by addition. And (5) seems to provide very good inductive evidence for (6). So why does Roxanne's reasoning to (6) seem to be utterly irrelevant to whether (6) is true?
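The correctness condition this paragraph appeals to can be written as a biconditional. The abbreviations are mine, not the text's: P for "the tester reads PROPER," I for "the tester reads IMPROPER," and W for "the outlet is properly wired."

```latex
% The tester's reading is correct iff it reads PROPER and the outlet
% is properly wired, or it reads IMPROPER and the outlet is not:
\text{Correct} \;\leftrightarrow\; (P \wedge W) \vee (I \wedge \neg W)
```

The conjunctive step in Proper is the first disjunct and the conjunctive step in Improper the second, so in either case the "reading is correct this time" conclusion follows by disjunction introduction together with this biconditional.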

11.1.2 A Problem for Everyone

Vogel claimed that this is a problem for reliabilism in particular. Roxanne’s belief concerning the tester’s reading is surely reliably produced. And we can assume that the tester is in fact reliable, so her belief that the outlet is 

Butzer's version of (4) is "the reading is accurate." He argues that this either indicates more than merely the correctness of that reading – such as that Roxanne's reading is apt, in Ernest Sosa's sense of that term – or it is just trivially equivalent to (3). In the former case, (4) doesn't follow from (3); that the tester reads PROPER and the outlet is properly wired doesn't imply that Roxanne's reading is apt. In the latter case, (4) is as warranted as is (3), since they are equivalent. But, as the reading of (4) above indicates, they are not equivalent. (3) implies (4), but (4) doesn't imply (3): the reading could be correct because the reading is IMPROPER and the outlet is improperly wired, in which case (3) is false. As we'll see in §., this makes for quite a difference in the epistemic status of (3) versus (4).





wired properly or improperly is reliably produced as well. The inferences up to (5) are deductive and so are, presumably, conditionally reliable. And the inference to (6) looks like a straightforward inductive inference, which is also conditionally reliable. So (6) should count as reliably produced.

But Stewart Cohen suggests that this is a problem, not only for reliabilists, but also for anyone who endorses "basic knowledge," where basic knowledge is knowledge acquired by consultation of a source that the agent doesn't (yet) know to be reliable. That is, it's a problem for anyone who denies this principle:

Knowledge of Reliability (KR) A potential knowledge source K can yield knowledge for S only if S knows that K is reliable.

It is so, suggests Cohen, because those who endorse basic knowledge will, presumably, take Roxanne to acquire knowledge of () by appeal to () and will also, presumably, endorse the subsequent deductive and inductive inferences to ().

Many philosophers who aren’t reliabilists nevertheless posit the existence of basic knowledge, and so deny KR. This includes both externalists (sensitivity and safety theorists, for example) and internalists (foundationalists and dogmatists). So the bootstrapping problem is a problem for those philosophers too.

But it’s also a problem for those who endorse KR. A competent electrician like Roxanne presumably does know that the instruments she uses are




A conditionally reliable process is a belief-dependent process – it takes beliefs as inputs – where the outputs are typically true in the worlds relevant for assessment of process reliability when the inputs are true. See Goldman . As per §., whether a process is conditionally reliable is not a straightforward function of the validity of the inference. But, as we did in that section, we can assume that Roxanne’s reasoning is, in fact, conditionally reliable.

At least, it should count as reliably produced if the output of a conditionally reliable process with reliably produced inputs counts as unconditionally reliable. See §. for a reason to think otherwise (but also a reason to deny that reliabilism is compatible with closure).

Cohen . Cohen , .

Or, if they don’t view () as an instance of basic knowledge, an analogous BS argument can be constructed utilizing whatever basic knowledge they do countenance.

Vogel  suggests that this is less of a problem for internalists, since they can insist that Roxanne needs justification for the tester’s reliability in order to learn () from (). But an internalist could think that Roxanne’s awareness of the tester’s reading suffices on its own for (internalist) justification; bootstrapping is then still a problem. And an internalist who insists that Roxanne must have prior justification for the tester’s reliability runs the risk of vicious regress: whatever the source of that justification might be, Roxanne would presumably need justification for the claim that that source is reliable. And the same goes for the source of that justification. And so on. If that regress stops somewhere – so that some propositions are known by appeal to an internally accessible source without the agent’s having background knowledge of that source’s reliability – a BS argument can then be mounted by appeal to that knowledge in the corresponding second premise.


Bootstrapping, Epistemic Circularity, & Justification Closure



reliable. But that doesn’t make her inference to () any less ridiculous. True, she can’t come to know that the tester is reliable by BS reasoning since she already knows that it is. But she also intuitively can’t acquire a second warrant for – or, for that matter, a second reason to believe in – the tester’s reliability that way either, despite the fact that her prior knowledge wouldn’t prevent her doing so.

Suppose that Roxanne already knew which of the  outlets were properly wired before applying the tester. She then reasons to () as before, except that she believes the various () premises on the basis of that prior knowledge rather than on the basis of the tester’s reading. Her inference to () is now unobjectionable; she has plausibly acquired a warrant for the tester’s reliability that she did not have before. And she does so even if she already knew – and so was warranted in believing – that the tester is reliable by other means before she started. Her reasoning to () then delivers a second such warrant, one that confirms what she knew at the start.

In the original bootstrapping case, however, she will only intuitively end up with a warrant for () if she does have a prior warrant for it, precisely because she can’t get one from BS reasoning. KR only describes a constraint on Roxanne’s acquisition of warrant for () from (): she must know that the tester is reliable in order to acquire that warrant. But, even if so, that can contribute nothing to the resolution of the bootstrapping problem unless that constraint can’t ever be satisfied; but if it can’t, the resolution is a skeptical one. The problem was never that it is unintuitive that Roxanne can acquire a warrant for () (or ()); the problem is instead that it’s unintuitive that she can acquire a


As a result, Michael Bergmann’s response to the bootstrapping problem won’t work (Bergmann ). He suggests that, although the reasoning to () is unacceptable in a “questioned source context” wherein the agent antecedently doubts the instrument’s reliability, it is acceptable in an “unquestioned source context” wherein she doesn’t antecedently doubt its reliability. But, since Roxanne already knows that the tester is reliable, she believes that it is, and so doesn’t doubt that it is. So this is a context in which such reasoning should be acceptable on Bergmann’s account. But it is intuitively no less objectionable than it would be if Roxanne initially doubted that the tester is reliable. Insofar as there’s a problem to solve here – namely, the intuitive sense that such reasoning is absurd – Bergmann’s response is no solution to it. (Markie  and Pryor  offer explanations that are similar to Bergmann’s; Cohen  responds to Markie in a manner analogous to the above.)

Kallestrup  makes a similar point.

The bootstrapping problem is usually expressed as a problem concerning the acquisition of justification for, or a reason to believe, the conclusion rather than in terms of warrant transmission. But, since the focus of this book is warrant acquisition, I have formulated it as per above. Analogous conclusions to those drawn here concerning putative warrant acquisition by BS reasoning, however, apply to formulations of the problem in terms of justification or reasons acquisition.


warrant for () by inference from () and (). Imposing constraints on her warrant for () can provide no explanation for that intuition. So no solution to the bootstrapping problem is delivered by insisting that she know that the tester is reliable to start with; it’s a problem for those who advocate as well as those who repudiate KR. That is, it’s a problem for everyone. .. Bootstrapping and Deduction Every step after () and () except that from () to () is a deductive inference. Only a skeptic would deny that () is warranted; surely Roxanne can know what the tester’s reading is by looking at it. And only a skeptic would deny that () is warranted. As per above, we are allowing that she knows beforehand that the tester is reliable. If she can’t know whether the outlet is properly wired by appeal to the tester’s reading even when she knows that she is appealing to a reliable source of information – one that is as reliable as you like – then we will know precious little that we take ourselves to know. So those who wish to preserve closure but aren’t skeptics have no choice but to insist that the reasoning fails from () to (): while she does know that the tester has been correct in  cases, this doesn’t provide any support for the reliability of the tester. On its face, this is difficult to swallow: why in the world would knowing that the tester is correct  times be irrelevant to its reliability? After all, the tester was not just correct in most of these  cases; it got the right answer every single time. Nor can the problem be that  times doesn’t suffice; it could be ,, or  million, without rendering the inference to () any less objectionable. How could such an enormous, perfectly accurate track-record not be relevant to the tester’s reliability? 
Nevertheless, a number of philosophers have proposed – in the interests of solving the bootstrapping problem without denying closure – that the inference from () to () is the point at which the reasoning breaks down. Various constraints on inductive reasoning are on offer that are designed to achieve this result. Rather than considering each such proposal, I’ll instead indicate why restricting Roxanne to purely deductive reasoning won’t solve the problem.



The inference from () to () is inductive. So, the relevant warrant acquisition principle is broader than that with which we have been concerned up to this point. But we’ll soon see that the bootstrapping problem remains if we limit the principle to deductive inferences. See, for example, Vogel , Briesen , and Butzer .




Suppose that the tester has a battery. If the battery is fresh it’s good for , applications. But if its charge falls below  percent, its reading will be incorrect: it will read IMPROPER when the outlet is wired properly and PROPER when the outlet is wired improperly. (If the battery is completely out of charge, it will produce no reading at all.) Roxanne knows all this, and had put a fully charged battery in the tester before inserting it into the  outlets.

Another electrician working with Roxanne, who also knows this about the tester’s battery but didn’t see her put the charged battery in, asks her whether she knows that the battery’s charge is above  percent and so not producing incorrect readings. Roxanne could reasonably respond that she inserted a fully charged battery just before she started. But it would be bizarre for her to respond by saying instead “I just discovered that the tester has worked correctly  times, and it wouldn’t do so if the battery were low; each reading would be wrong.” (Imagine what her coworker’s reaction would be if he knew how Roxanne arrived at this conclusion.)

But Roxanne’s reasoning is purely deductive. She knows that, if the battery is low, every reading will be incorrect. Since she has taken herself to learn that every reading is correct, it follows from what she already knows and what she takes herself to have learned that the battery isn’t low. Nevertheless, it would be ridiculous for her to attempt to reassure her coworker this way that the battery is not low.

It might be countered that the fact that the coworker can’t be reassured this way doesn’t imply that Roxanne didn’t acquire a warrant for the battery’s not being low.
It could be that one can acquire a warrant for a proposition even though one can’t appeal to that warrant in order to resolve someone else’s antecedent doubt concerning whether the proposition is true.

But suppose that the electrical code requires that the electrician verify that the tester’s batteries are sufficiently charged each day, whether or not she takes herself to know this at the start of that day. The standard method is to insert the tester into some outlets concerning which it is known beforehand which ones are wired properly and which ones are not, and determine whether the tester’s readings align with that background knowledge (which they would not do if the battery were low).

This morning, Roxanne knew that the battery was sufficiently charged; she had fully charged it overnight and put it in the tester herself. But, being a competent electrician, she intends to verify this as per code.

See Bergmann , Pryor , and Markie .


She was just about to use the standard method until it occurred to her that doing so is utterly unnecessary. She can infer that the tester’s reading is correct by BS reasoning from () to () each time she uses it. Since (we are assuming) that provides her with the information that each such reading is correct, every time she uses the tester to determine whether an outlet is wired properly she simultaneously verifies that the battery is sufficiently charged, thereby satisfying the code requirement. There is no need for an independent test as per the standard method.

Indeed, she realizes, there’s no need to actually apply the tester at all. She is fully aware that, no matter what the tester’s readings will be, she will be in a position to infer that those readings are correct, and so that the battery is sufficiently charged, by performing BS reasoning from () to () each time she applies the tester. But if she recognizes that she will acquire the information that the tester’s readings are correct, then she has the information that the tester’s readings will be correct now in virtue of that recognition. But if they will be correct, then the battery must be sufficiently charged. So she infers that the battery is sufficiently charged, thereby verifying that it is. She has already satisfied the code requirement without having inserted the tester into a single outlet.

Needless to say (I hope), Roxanne’s attempt to satisfy the code requirement this way is ludicrous. But this can’t be explained away by appeal to her (or anyone else’s) antecedent doubt concerning whether the battery is sufficiently charged, since she started the day fully confident that the battery was sufficiently – indeed, fully – charged.

One might reply that Roxanne is relying on her antecedent knowledge that the battery is sufficiently charged when taking herself to learn () on the basis of (), so that the “verification” by BS reasoning assumes something that she is not entitled to assume.
But there’s no obvious reason why she is not so entitled. The code only requires that she verify that the battery is sufficiently charged that day, whether or not she knew this at the 



In general, if S knows at time t that she will know that P at time t+n (for some n), then S knows at t that P. This is an intrapersonal version of the interpersonal principle that if S knows that R knows that P then S knows that P; in this case, R is S’s future self. (Ideally the relevant P would not contain temporal indexicals; the principle fails for “I am not now in Kalamazoo,” since S might be in Kalamazoo at t but not at t+n. In the case above, the relevant P can be written as the eternal sentence “the tester works correctly on the first (and second and third . . .) occasion of use on March , .”)

Not even the writers of the code (nor the code itself?) need to be viewed as somehow entertaining that antecedent doubt. They may have included that requirement for purely legal reasons, despite being fully confident that licensed electricians like Roxanne would only use instruments whose batteries they know to be sufficiently charged.




start of the day. She did know that the battery was fully charged to start with. As a result, we are assuming, she can acquire a new warrant to believe that the tester’s readings are correct each time by BS reasoning to (). Each () does imply that the battery is sufficiently charged (given S’s background information concerning what happens when the battery’s charge is below  percent). And she is performing those tests today, and so acquiring her new warrant for the battery’s sufficient charge on the requisite day.

Intuitively, of course, Roxanne doesn’t acquire a warrant for the claim that the tester was correct  times, and so can’t acquire a second warrant for “the battery is sufficiently charged” by deductive inference from that claim. But the view under consideration is precisely that she does acquire a warrant for the claim that the tester was correct  times. And that does imply (given her background knowledge) that the battery is sufficiently charged. It could be denied that she acquires a warrant for the latter claim by inference from the former. But that’s a deductive inference, so this would require denying that warrant transmits over a (very simple) deductive inference. And that’s just what the view under consideration is trying to avoid. So, on that view, she does verify that the battery is sufficiently charged as per code. And that’s ridiculous; if it were discovered that this is how she takes herself to satisfy the code requirement, she’d be appropriately found guilty of violating it.

So no solution to the bootstrapping problem results from restricting Roxanne to purely deductive reasoning. Nor can the blame be placed on the inference from the various ()s to ().
Some closure advocates who are willing to concede the failure of multi-premise closure might be tempted by this, citing accumulation-of-risk considerations: the risk, for each of the  premises, that it is false might be low enough that each counts as warranted, while the accumulated risk that the conclusion is false exceeds whatever risk threshold warrant permits.

But Roxanne’s reasoning above doesn’t require  applications at all; it only requires one. If the battery is low on the first occasion of use, then that reading will be incorrect. But she supposedly learns that it is correct on that occasion. So she can infer that the battery is sufficiently charged on the basis of that one occasion. And she can recognize that this will be the result before she uses it that first time.



Or, if she performs the reasoning of two paragraphs above, she does so on the requisite day. (Warrant acquisition by transmission through a piece of reasoning occurs when the agent performs that reasoning.)

Roxanne’s inference to () can, moreover, be reconstructed as a sequence of two-premise inferences. She can infer from “the tester is correct the first time” and “the tester is correct the second time” to “the tester was correct two times”; from the latter and “the tester was correct the third time” to “the


There’s no reason to think that accumulation-of-risk considerations will prevent warrant transmission in this one inference. Nevertheless, it’s still ridiculous to think that Roxanne acquires an additional warrant for the claim that the battery is sufficiently charged – and so that she satisfies the code requirement – this way. The problem with bootstrapping occurs in the inference from () to ().
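The accumulation-of-risk point can be illustrated with some back-of-the-envelope arithmetic (the per-premise risk figure below is a hypothetical assumption of mine, introduced purely for illustration; the text supplies no such figure):

```python
# Illustrative only: epsilon is a hypothetical per-premise risk of
# falsehood, chosen for the sake of the arithmetic.

def accumulated_risk(epsilon: float, n: int) -> float:
    """Risk that at least one of n independent premises is false,
    each premise carrying risk epsilon of being false."""
    return 1 - (1 - epsilon) ** n

# Over many conjoined premises, risk can accumulate substantially...
print(round(accumulated_risk(0.001, 1000), 3))  # 0.632

# ...but the battery inference needs only the single premise
# "the tester's reading was correct on this first occasion":
print(round(accumulated_risk(0.001, 1), 3))     # 0.001
```

Whatever threshold warrant requires, the single-premise inference carries no more risk than that one premise itself, so accumulation-of-risk considerations get no grip on it.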

. Bootstrapping and NIFN

So either (), (), (), or () is not warranted by BS reasoning. We’ve seen that only the skeptic will deny that () and () are warranted. So transmission either fails from () and () to () or from () to (). In either case, the primary point I need to make is made: solving the bootstrapping problem involves positing transmission failure.

But there is more to say: the considerations in Chapter  suggest that transmission fails from () to (). For that inference violates NIFN. () is (equivalent to) a disjunction: either the tester reads PROPER and the outlet is wired properly or the tester reads IMPROPER and the outlet is wired improperly. Its denial implies two conditionals: if the outlet is wired properly then the tester will read IMPROPER, and if the outlet is wired improperly then the tester will read PROPER. If either consequent is true, BS reasoning inevitably delivers the false conclusion that the tester’s reading is correct. So, if () is false, BS reasoning will inevitably lead Roxanne to the false conclusion that () is true. If we understand the



tester was correct three times”; and so on, until reaching “the tester was correct  times.” An accumulation-of-risk response requires that one of these two-premise inferences fails to transmit warrant. That is, presumably, somewhat more disconcerting for those trying to salvage as much of closure as possible in the face of accumulation-of-risk considerations than is the suggestion that an inference with  premises fails to transmit. Moreover, the sources of risk ultimately trace to the risk that Roxanne might identify the tester’s reading incorrectly and the risk that the reading itself might be incorrect. But both risks can be minimized to whatever extent is required in order to ensure that the risk for the ultimate conclusion “the tester’s reading was correct  times” falls below whatever threshold is required for warrant transmission.

It will also require closure failure unless a principle analogous to KR is true, namely, that Roxanne can only acquire a warrant for () from () if she is already warranted in believing that the tester’s reading will be correct (call this “knowledge of correctness” or KoC). But KoC provides no more a solution to the bootstrapping problem than does KR. So the attempt to solve the problem provides no motivation to endorse either principle. And KoC presents the same threat of vicious regress as does KR: whatever the source might be for Roxanne’s prior warrant for the claim that the tester’s reading will be correct, she will need a warrant for the claim that the outcome of that source will be correct; and so on. So a non-skeptical solution requires that, at some point, Roxanne can acquire a warrant for a proposition analogous to () from some source without having a prior warrant for the claim that that source’s reading will be correct. And yet, the analogous inference to () will still not transmit warrant. So closure, as well as transmission, fails.




proposition she is evaluating as “the tester’s reading is incorrect,” then her method for evaluating this proposition guarantees false negatives – “the tester’s reading is correct (so not incorrect)” – in every positive case.

BS reasoning to (), however, doesn’t violate NIFN. If () is “the tester reads PROPER and the outlet is properly wired” and () is false, then either the tester reads IMPROPER or the outlet is improperly wired or both (and correspondingly if () is “the tester reads IMPROPER”). If () is false because the tester reads IMPROPER (whether or not the outlet is properly wired) then Roxanne will arrive at the correct conclusion that () is false. So her method does not violate NIFN; she is not guaranteed to arrive at () whenever () is false.

Nor does BS reasoning to () violate NIFN. That the outlet was improperly wired doesn’t imply that the tester will read PROPER, so that she will believe that the outlet was properly wired by appeal to the tester. (Indeed, assuming that she does acquire a warrant for () on the basis of (), the tester will read IMPROPER when the outlet is improperly wired.) And Roxanne’s method for evaluating () doesn’t violate NIFN either: the tester’s reading PROPER doesn’t imply that she will believe that it reads IMPROPER by looking at it. (Indeed, assuming that she does acquire a warrant for () on this basis, she will believe that it reads PROPER when it does read PROPER.)

So, although BS reasoning to (), (), or () doesn’t violate NIFN, such reasoning to () does. As with other violations of NIFN, the fact that BS reasoning inevitably delivers the conclusion that the reading is correct whenever it isn’t is recognizable a priori: Roxanne can recognize that BS reasoning will inevitably lead to the conclusion that the reading is correct whether or not it is.
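The structure just described can be put schematically (the symbols R, for “the tester reads PROPER,” and W, for “the outlet is wired properly,” are mine, introduced only for illustration):

```latex
% R: "the tester reads PROPER"; W: "the outlet is wired properly".
% These symbols are illustrative abbreviations, not from the text.
\[
\text{(reading correct)} \;\equiv\; (R \wedge W) \vee (\neg R \wedge \neg W)
\]
\[
\neg\,\text{(reading correct)} \;\Rightarrow\; (W \rightarrow \neg R) \wedge (\neg W \rightarrow R)
\]
```

Since BS reasoning endorses W whenever R holds and ¬W whenever ¬R does, either conditional’s truth yields the false verdict that the reading is correct – the inevitable false negative.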
There is, then, hardly a point in her going through the motions. But, by the same token, it’s absurd for her to take that inevitable result as providing any information as to whether the reading is correct.

In fact, BS reasoning to () also violates NIFN: if the tester’s reading is incorrect in one or more of the  applications, BS reasoning will still arrive at the result that they are all correct. BS reasoning to () violates NIFN too: if the tester is unreliable, then BS reasoning will inevitably



One might take the proposition evaluated to be instead “the tester is correct.” If so, Roxanne’s method violates “NIFP”: no inevitable false positives. But violation of NIFP is as objectionable as is violation of NIFN (see Chapter , fn. ). Indeed, they amount to the same principle, since nothing substantial hinges on taking the proposition being evaluated to be “the tester is correct” versus “the tester is incorrect.” I’ll characterize BS reasoning as violating NIFN hereafter; those who wish to do so are welcome to substitute NIFP.

White  and Cohen  emphasize this aspect of BS reasoning.


arrive at the result that it is reliable. But these latter inferences violate NIFN because the inference from () to () does so. If it doesn’t – as when Roxanne believes () on the basis of prior background knowledge concerning which outlets are properly wired instead of on the basis of the tester’s reading – then neither do these latter inferences. The inference from () to () is where the rot sets in.

This does not amount to a prohibition against “one-sided” methods, that is, methods that can only determine that a proposition is true but not that it is false (or vice versa). Some have suggested that this is the problem with BS reasoning, since Roxanne can’t arrive at the conclusion that the tester is unreliable (or that it’s incorrect in a particular application). But one-sided methods can deliver warrant. Suppose that, whenever the tester reads PROPER, the outlet is properly wired but that it sometimes reads IMPROPER when the outlet is properly wired (although it also sometimes reads IMPROPER when the outlet is improperly wired). And suppose Roxanne knows all this. As a result, she can’t learn from this tester that the outlet is improperly wired, but she can learn from it that it is properly wired. The latter doesn’t violate NIFN: her method – believing that the outlet is properly wired whenever the tester reads PROPER – won’t inevitably deliver the result that the outlet is properly wired when it isn’t.

The problem with BS reasoning isn’t that it can’t deliver a warrant for “the outlet is improperly wired” (or for “the tester is unreliable”); the problem is that such reasoning is guaranteed to deliver “the tester’s reading is correct” in every case, and so whenever it isn’t correct.
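As a mechanical check on this diagnosis, one can enumerate the four possible reading/wiring combinations and compare BS-style evaluation of “the reading is correct” with evaluation grounded in independent knowledge of the wiring. The sketch below is a toy model of mine; all names in it are illustrative, not from the text:

```python
# Toy model: a "world" fixes the tester's reading and the outlet's
# actual wiring. All names here are illustrative.
from itertools import product

worlds = list(product(["PROPER", "IMPROPER"],   # the tester's reading
                      ["proper", "improper"]))  # the actual wiring

def actually_correct(reading, wiring):
    """Whether the reading really matches the wiring."""
    return (reading == "PROPER") == (wiring == "proper")

def bs_verdict(reading, actual_wiring):
    """BS reasoning: believe the wiring claim on the basis of the reading
    itself, then check the reading against that belief. The actual wiring
    plays no role, so the verdict is 'correct' no matter what."""
    believed_wiring = "proper" if reading == "PROPER" else "improper"
    return (reading == "PROPER") == (believed_wiring == "proper")

def independent_verdict(reading, known_wiring):
    """Believe the wiring claim on prior background knowledge instead;
    the verdict now tracks whether the reading really is correct."""
    return (reading == "PROPER") == (known_wiring == "proper")

for reading, wiring in worlds:
    assert bs_verdict(reading, wiring)  # 'correct' even when it isn't
    assert independent_verdict(reading, wiring) == actually_correct(reading, wiring)

false_negatives = [w for w in worlds if not actually_correct(*w)]
print(f"BS verdict is 'correct' in all {len(worlds)} worlds, "
      f"including the {len(false_negatives)} where the reading is incorrect")
```

When the wiring premise is instead held fixed by background knowledge, the method can return a negative verdict, which is just the NIFN-respecting contrast drawn above.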

.

Bootstrapping and Epistemic Circularity

The bootstrapping cases are instances of what is often called, following William Alston (), “epistemic circularity.” These are cases of putative warrant acquisition for a proposition endorsing the reliability of a source where the acquisition of that warrant depends on that very source. Some have suggested that all epistemically circular arguments are illegitimate. But the above analysis of bootstrapping doesn’t imply this, since not all epistemically circular arguments violate NIFN. So not all instances of epistemic circularity are instances of bootstrapping.


See Douven & Kelp  and Titelbaum .

See, for example, Fumerton  and Vogel  and .




Recall the reliability-of-the-senses example from §..: we have acquired a lot of physical, psychological, neurophysiological, and biological information concerning how the senses deliver information about our proximate physical environment. That information, taken together, may well allow the construction of an argument to the conclusion that our senses are reliable in the very circumstances in which we acquired that information. Since the premises of such an argument reference empirical information acquired in the various relevant sciences, and that information ultimately derives from the senses, such an argument will be epistemically circular. But it won’t violate NIFN. That our senses are unreliable doesn’t imply that we would end up with the same putative scientific information and so come to the conclusion that our senses are reliable. There is no reason to think that we would do so even in the nearest world in which they’re not reliable; our sensory experience may well be simply chaotic in that world. But, even if we do come to that conclusion in the nearest such world – it’s a BIV world, say – we certainly won’t do so in every such world; our experiences are chaotic in at least one such world.

So appeal to NIFN rules out bootstrapping while permitting some epistemically circular reasoning to go through. This is a good thing, and not only because it is hardly intuitive that we can’t learn about the reliability of our senses by means of scientific investigation. For it will also be difficult to explain why epistemically circular reasoning is illegitimate while avoiding skepticism. The obvious such explanation appeals to KR: we need to know that a source of information is reliable before we can rely on it. For then we can’t learn that a source is reliable if we can only do so by relying on that source; we’d have to know that it is reliable at the start. But this generates skepticism.
Some of our sources are primary and others derivative: the latter sources process information delivered by other sources and the former don’t. If we can’t rely on primary sources to deliver information, then we can’t rely on derivative sources either, since they presuppose that the primary sources do deliver information to them. So we can’t appeal to putative information delivered by derivative sources in order to evaluate the reliability of primary sources. So we can only appeal to the primary sources themselves for that information. But such an appeal would violate 

Moreover, it’s likely impossible to cordon off scientific information relevant to the reliability of the senses from scientific information in general (see §..). So a prohibition on epistemically circular reasoning would threaten most, or all, putative scientific information.


KR: we have to know that they are reliable before relying on them. So we can’t rely on any source at all. But if KR is false then it’s difficult to see what the problem with epistemically circular arguments would be. The premises would be warranted solely by the information delivered from primary sources, and so without prior knowledge of their reliability. If they imply that those sources are reliable – and the inferences don’t violate NIFN – then what’s wrong with those inferences? Epistemic circularity is very often illustrated with the sort of “trackrecord” argument exemplified by Roxanne’s BS reasoning. Alston’s seminal  paper, for example, does so, as do many who have commented on it. That’s unfortunate; it conflates bootstrapping with epistemic circularity, so that opposition to the former bleeds into opposition to the latter, and so needlessly threatens skepticism. The problem with track-record arguments is violation of NIFN, which is, I think, the source of the intuition that those arguments are unacceptable. That doesn’t presuppose that KR is true, and so doesn’t rule out all epistemically circular arguments. It, therefore, doesn’t imply skepticism.






One might suggest that, if there are two primary sources A and B, we can learn that A is reliable by using B and that B is reliable by using A. But if we can only use B if we know already that B is reliable as required by KR, and learn that from A, then A will be used to determine the reliability of B, which will then be used to determine the reliability of A; and similarly for B. So at some point either A or B will have to be relied on before knowing that it is reliable, which violates KR. The same goes no matter how many primary sources there are.

Another possible worry about epistemic circularity is that it evinces an arbitrary air. That a magic 8-ball says, for example, "you can rely on it" when asked whether it is reliable is hardly an indication that it is reliable. (Note that this doesn't violate NIFN; possible magic 8-ball answers include "my sources say no.") There is not the space here to properly address this concern, except to note that it also generates a vicious regress in the same way as does appeal to KR: if appealing to a source is illegitimate because doing so is arbitrary unless we have independent information concerning that source's reliability, then we would also have to have independent information concerning that source of information, and so on. (The arbitrariness concern might indeed underlie endorsement of KR.) It is, I think, correspondingly intuitive that the problem occurs in the inference from () to (), which is, as we have seen, where appeal to NIFN locates it.

Vogel suggests that the problem with BS arguments is that they are rule-circular: their conclusions affirm the reliability of the rule of inference (or warrant-acquisition) employed to arrive at them. Whether or not this is a plausible reading of BS reasoning, it will be very difficult to explain why rule-circular arguments are illegitimate in a way that doesn't imply that other epistemically circular arguments are also illegitimate. Indeed, since the conclusion of epistemically circular arguments is "source S is reliable" and is arrived at by appealing to S, one might think that epistemically circular arguments just are rule-circular arguments, where the relevant rule is (or includes) "believe what S says." So a prohibition against rule-circular arguments will generate skepticism as per above.


Bootstrapping, Epistemic Circularity, & Justification Closure



. More Easy Knowledge

Cohen presents the bootstrapping problem as an example of "easy knowledge," since it seems much too easy to acquire knowledge of – or, better, warrant for – reliability this way. His second example of easy knowledge is a standard Dretske case (namely, Red Table of §.). In the example, Cohen is in a store looking at a red table and believes that it is red because it looks that way. He infers from this that it's not a white table illuminated with red lights in such a way that its being so illuminated is undetectable (or, at least, undetectable by the typical furniture-shopper). As Cohen notes, his learning that it is not a white table so illuminated seems much too easy.

It will be no surprise what my explanation for this will be: as with other Dretske cases, that inference violates NIFN. A nice result is that we have the same explanation for why easy knowledge is too easy in this case as well as in the bootstrapping cases. Cohen was right to see them as illustrations of the same basic problem. But he was wrong to think that the problem is avoidable by endorsement of KR.

The issue is again whether one can acquire a warrant by the relevant inference, that is, whether warrant transmits. Even if Cohen knew that the table is not white and illuminated by red lights to start with – he installed standard white bulbs over the table himself, say – it would still be too easy for him to acquire a second warrant by inferring as above. If his son asks whether he knows that the table isn't white and illuminated by red lights, it would be sensible for him to respond "yes, I do; I installed white bulbs over it myself." But it would be absurd for him to respond "yes, I do; you see, it looks red, so it is red, so it isn't a white table illuminated by red lights." The latter answer is absurd whether or not he is in a position to provide the former answer as well.
But there’s no obvious reason why it would be absurd if that inference transmits warrant. The problem isn’t rejection of KR; it’s violation of NIFN.

Cohen . As Cohen  points out, this isn't absurd only because his son's doubt concerning the table's actual color can't be alleviated this way. It would be absurd for Cohen to take himself to have acquired a warrant for the conclusion, even if he had no doubts about it at the start.

. Justification Closure

Our topic to this point has been knowledge closure and, in particular, warrant closure. But one might wonder whether the overall conclusion – that warrant closure fails at least when NIFN is violated – applies also to justification closure.

Warrant is not justification. Indeed, justification may not be a condition of warrant at all. And, even if it is, Gettier cases demonstrate that justification does not suffice for warrant. So justification closure is consistent with failure of warrant closure. One might, moreover, point out that, in at least Dretske cases whose conclusions deny piecemeal skeptical hypotheses, it's not implausible that the agent is justified in believing the conclusion. Dretske himself concedes that, in Zebra, we do plausibly have background information about zoos suggesting that they are unlikely to disguise their animals. Even if such information doesn't ensure that "it's not a disguised mule" is warranted, one might think that it does ensure that it is justified.

I will take justification closure to be the following claim:

Justification Closure (JC): If S has a justified belief in P and recognizes that P implies Q, then S has a justification for Q.

This is modeled on the corresponding principle for warrant, WC, of §., for reasons analogous to those motivating the formulation of that principle.

.. Justification Closure and Transmission

If JC is preserved as per Zebra above it is so, not because S acquires justification by inference from "it's a zebra," but because she has an independent source of justification (namely, in virtue of background information). This is a good thing. For it is as implausible that a putative method of investigation that violates NIFN – as do the inferences involved in Dretske cases like Zebra – delivers justification as it is that such a method delivers warrant.

Unlike knowledge (or, as we saw in Chapter , warrant), it is widely accepted that justification is fallible: one can have a justified yet false belief. Justification is, nevertheless, thought by many to be a guide to the truth, in some sense of that phrase. There isn't room here to explore different

See Dretske , . It is, however, more difficult to make this claim about the denials of wholesale skeptical hypotheses in light of the fact that we would have – or, at least, seem to have – the same background information that we take ourselves to have if such hypotheses were true. See §..

See §§.–.. Nothing significant in what follows will be affected by the choice of formulation.

The relevant principle here is justification transmission: If S has a justified belief in P and recognizes that P implies Q, then S has a justification for Q in virtue of that belief and recognition.



interpretations of that phrase. But, if the putative method of justification acquisition is constitutively blind to the falsehood of the proposition it affirms – use of that method is guaranteed to deliver the result that that proposition is true whenever it is false – then it's hard to believe that such a method would count as a guide to the truth of that proposition in any plausible sense.

Some – in particular, some internalists – might reject the claim that justification is essentially a guide to the truth. They might claim instead that justifications are reasons to believe and that, in some circumstances at least, one can have a reason to believe a proposition even though those circumstances are such that one's reasons are in general a poor guide to the truth (as when, for example, one is a BIV). But not only is a method that violates NIFN blind to the falsehood of the proposition it affirms, it is recognizable from S's internal standpoint that it is so; she need only reflect on her method to realize this. And there's nothing to prevent her doing so; S can, for example, recognize by reflection that she believes that it's a zebra because her perceptual experience suggests that it is, and that she infers from this that it's not a disguised mule. Assuming that the method also delivers the result that the proposition is true whenever it is true – which is the best-case scenario – it is then constitutively guaranteed to deliver a putative justification for that proposition in every possible scenario, and so no matter whether it is true or false. And S can recognize that this is so from her internal standpoint. It is one thing for the method by which one comes by one's putative justification to be a poor guide to the truth in some (unusual) circumstances. But it is quite another for it to be recognizable from the agent's internal standpoint that it constitutes no guide at all in any possible circumstances.

Moreover, we saw that transmission fails for safety, reliability, and evidentialist conditions of warrant. If justification is fallible then safety is not justification; no belief can be safe and yet false. But reliabilism and evidentialism – which can be, and typically are, fallibilist views – are specifically advertised as theories of justification. If reliability and (internalist-style) evidence don't transmit, then neither does justification on these views. So it's highly implausible that justification transmits in Dretske cases. If JC is, nevertheless, true of those cases, then S must have a transmission-independent justification for the falsehood of every skeptical hypothesis relevant to P in order to have a justification for P.

The nearest world to the actual is the actual world itself, and so is unquestionably nearby. So, if the belief is false in the actual world, it is believed in a nearby world in which it is false. So the belief is unsafe.

.. Justification and Buck-Passing

Unfortunately, that claim is as susceptible to the buck-passing argument of Chapter  as is the corresponding claim for warrant. Assuming that S can't acquire a justification for "it's not a disguised mule" by inference from "it's a zebra," justification closure requires that she have an independent justification for "it's not a disguised mule." But however that justification is acquired, if the justifier providing that justification, J1, is fallible – there is a possible world in which J1 exists and yet P is false – then S also needs a justification for ~(J1 & ~P). Although this follows from P, S can't acquire that justification by inference from P; that inference also violates NIFN. So she needs an independent justification for ~(J1 & ~P). If that justifier, J2, is also fallible, then S also needs a justification for ~(J1 & J2 & ~P). She can't acquire that justification by inference from ~(J1 & ~P); that's another NIFN violation. So she needs an independent justification for ~(J1 & J2 & ~P). And so on.

This can only terminate in a body of justifiers that together imply the denials of all skeptical hypotheses pertaining to P, and so imply P. But it is as implausible that we possess infallible justifications for our beliefs as it is that we possess basis-infallible warrants for them. So, assuming that justification skepticism is false, it is no more plausible that justification is closed than it is that warrant is closed.

Justification skepticism is, moreover, worse than warrant (and so knowledge) skepticism, at least on the assumption that one's knowing that P implies that one is justified in believing that P. For one can mitigate the bad news that we know nothing by the good news that we are justified in believing many things. But if we are not justified in believing anything then, on that assumption, it's bad news all round: we don't know anything either.
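The regress just described can be set out schematically (my reconstruction; the subscripted J1, J2, … label the successive justifiers in the text):

```latex
% Buck-passing regress schema (a reconstruction; requires amsmath).
% J_1, J_2, \dots are S's successive fallible justifiers; at each step the
% inference from the previous denial would violate NIFN, so each new denial
% needs independent justification.
\begin{align*}
&\text{Step 1: } J_1 \text{ is fallible, so } S \text{ needs independent justification for } \neg(J_1 \wedge \neg P).\\
&\text{Step 2: } J_2 \text{ is fallible, so } S \text{ needs independent justification for } \neg(J_1 \wedge J_2 \wedge \neg P).\\
&\quad\vdots\\
&\text{Step } n\text{: } S \text{ needs independent justification for } \neg(J_1 \wedge \cdots \wedge J_n \wedge \neg P).
\end{align*}
```

The series terminates only if some finite set J1, …, Jn leaves no world in which all of them obtain and P is false, that is, only if the justifiers jointly strictly imply P.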

This claim is WP of §..

A justifier is that, whatever it is, which occupies one side of the justification relation when the proposition (or belief) justified occupies the other.

Skeptical hypotheses just are the various ways in which P is false and yet the justifier remains. If there are no worlds in which any such hypothesis is true compatibly with the justifiers that S brings to bear, then there is no world in which those justifiers are present and P false. So any world in which those justifiers are present is one in which P is true. So those justifiers together strictly imply P.





. Justification, Skepticism, and Assumptions

In Chapter  I suggested that any claim to know P involves assuming that the skeptical hypotheses pertaining to our knowledge of P are false and, moreover, that the reasonability of such a claim requires that so assuming is itself reasonable. One might insist that you can only reasonably assume that a proposition is true if you are justified in believing it. If so, my own view implies that you are justified in believing that the skeptical hypotheses relevant to one's knowledge of P are false whenever you can reasonably claim to know P. This doesn't imply that you can know P only if you are justified in believing that those skeptical hypotheses are false; on the view sketched in Chapter , knowing P might require only that those skeptical hypotheses are in fact false and not, in addition, that you can reasonably assume that they are false.

Nevertheless, the suggestion that we can't reasonably claim to know P, for any P, is itself a disturbing brand of skepticism, namely, the second-order skepticism of §... If you can only reasonably assume that which you are justified in believing, then such skepticism can only be avoided if you are justified in believing that the skeptical hypotheses relevant to your knowledge of P are false. So the consequent of the justification closure conditional – S has a justification for the denial of the skeptical hypothesis – is true. That alone ensures that the conditional itself is true, and so that justification is closed, at least with respect to skeptical hypotheses.

Suppose, moreover, that knowing P does require that it is reasonable for you to claim to know P. This could be because the former directly implies the latter. Or it could be because knowing P implies being justified in believing P, and being justified in believing P implies that it is reasonable for you to claim to know P. In any case, if that supposition is true, and you can only reasonably assume that skeptical hypotheses are false if you are justified in believing that they are, and the view of Chapter  – that you can only reasonably claim to know P if you can reasonably assume that skeptical hypotheses are false – is correct, then first-order skepticism can only be avoided if justification closure is true.

See, however, two paragraphs below.

Here's a representation of the argument, where "SK" is a skeptical hypothesis relevant to your knowledge of P:

(1) You know that P. (Assumption for conditional proof)
(2) Your knowing P implies that you can reasonably claim to know P. (Assumption)
(3) You can reasonably claim to know P. (1 and 2)
(4) You can only reasonably claim to know P if you can reasonably assume ~SK. (From Chapter )
(5) You can reasonably assume ~SK. (3 and 4)
(6) You can only reasonably assume ~SK if you are justified in believing ~SK. (Assumption)
(7) You are justified in believing ~SK. (5 and 6)
(8) So, if you are justified in believing P and recognize that P implies ~SK, then you are justified in believing ~SK. (7; truth of consequent implies conditional)
(9) So justification closure is true with respect to P and ~SK. (8)
(10) So, if you know P, then JC is true with respect to P and ~SK. (1–9, conditional proof)
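Compressed into propositional form, the same derivation runs as follows (a sketch; the letters K, R, A, and J are my abbreviations, not the text's):

```latex
% A-J argument, schematic form (a reconstruction; requires amsmath).
% K: you know P.  R: you can reasonably claim to know P.
% A: you can reasonably assume ~SK.  J: you are justified in believing ~SK.
\begin{align*}
&1.\ K            &&\text{assumption for conditional proof}\\
&2.\ K \to R      &&\text{assumption}\\
&3.\ R            &&\text{1, 2, modus ponens}\\
&4.\ R \to A      &&\text{the earlier chapter's claim}\\
&5.\ A            &&\text{3, 4, modus ponens}\\
&6.\ A \to J      &&\text{assumption}\\
&7.\ J            &&\text{5, 6, modus ponens}\\
&8.\ K \to J      &&\text{1--7, conditional proof}
\end{align*}
```

Line 7 is the consequent of the JC conditional for P and ~SK, which is why its truth alone suffices for that instance of JC.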

Call this the A-J (assumption-justification) argument. One might resist A-J by denying (2): you can know P without reasonably claiming that knowledge. I won't, however, pursue that option because I remain officially neutral on this issue – for my purposes here, I need not take a stand.

The upshot is that, if reasonable assumption requires justification, then my own view sketched in Chapter  can only avoid second-order skepticism if justification closure is true. And, given a not-implausible assumption (namely, (2) of A-J), that view can only avoid first-order skepticism if JC is true (at least with respect to skeptical hypotheses). The linchpin in both cases is the claim that reasonable assumption does require justification (that is, (6) of A-J). I think that claim is false: you can reasonably assume ~SK without being justified in believing ~SK. If that is correct, then the view of Chapter  is compatible with the failure of JC without implying either first- or second-order skepticism.

There is no space here to offer a full defense of this position. Instead, I'll indicate that a certain class of views about justification – some member of which I believe to be correct – supports it. Some epistemologists have recently proposed views according to which, roughly, justified belief is potential knowledge: one is justified just in case one knows in cooperative external environments. DeRose, for example, has recently suggested that S is justified in believing P when there is a possible world in which an internal duplicate of S knows P. Such views are encouraged by the thought that knowledge involves both a cooperative

DeRose , chapter , fn. . An internal duplicate of you has whatever characteristics are shared between you and a BIV version of you.





external environment (one must not be Gettiered, for example) and an appropriate internal state of the agent. If we understand the latter state as being justified, then to be justified is to be in an internal state compatible with one's knowing, as per DeRose's suggestion.

Suppose that a view along these lines is correct. Then there is much that we can reasonably assume but are not justified in believing. For example, S of Chapter  doesn't know that she won't win the lottery (as per intuition and the conclusion of that chapter). Her circumstances are, however, as conducive to her so knowing as they can be compatibly with her internal state. She estimates the probability that she will win on the basis of the evidence that she has available to her – concerning how the lottery is structured – correctly. That evidence is not misleading. She is holding a real ticket (and not a clever fake). And so on. So if anybody knows that they won't win given the same internal state – wherein the belief is based on an assessment of the improbability of winning – then she does. But she doesn't. So it's not possible for an internal duplicate of her to know that she won't win; she'd have to be in an internally different state – by seeing what seems to be the drawing of the winning ticket on what seems to be her TV, for example – in order for that to be possible. So, she is not justified in believing that she won't win. She is, at best, only justified in believing that she probably won't win.

Nevertheless, it is presumably reasonable for her to assume that she will not win the lottery in light of the fact that her winning is so improbable. So, as per Chapter , it is reasonable for her to claim to know that she won't be able to afford an around-the-world cruise on the basis of her modest bank account balance. Its being so reasonable requires that it is reasonable for her to assume that all relevant skeptical hypotheses are false, including the hypothesis that she can afford such a vacation because she will win the lottery. But that is reasonable. She can, moreover, know that she can't afford such a vacation on the basis of her modest bank account balance. For there are circumstances in

For analogous views see Bird , Ichikawa , Littlejohn , Reynolds , and Smith , , and forthcoming. Such views fall within the “knowledge-first” tradition initiated by Williamson b: justification is characterized by appeal to knowledge rather than vice versa. It does follow from such views that everything known is justified. But knowledge can’t be defined as justified true belief (with or without an additional anti-Gettier condition) since justification is defined in terms of knowledge. (Williamson’s own recent view – that to be justified in believing P just is to know P – also falls within this category of views: since actuality implies possibility, one is justified only if one can be in the same internal state while knowing P. See Williamson, forthcoming.)





which she does know this on that basis, namely, those in which she doesn't win the lottery (and doesn't unexpectedly inherit a fortune, doesn't find buried treasure, and so on, all of which are improbable). So she is justified in believing that she can't afford a cruise vacation, despite not being justified in believing that she won't win the lottery. Premise (6) of A-J is false, and JC fails.

There is no room to explore this approach to justification further here, let alone defend it. But if it – or a view along similar lines – is correct, then one can reasonably assume that a proposition is true notwithstanding not being justified in believing that it is. So it's at least compatible with the view described in Chapter  that justification is not closed. Since justification closure can only come at the price of skepticism thanks to the buck-passing argument, that's a good thing. And, at any rate, even if justification is closed, that doesn't imply that warrant is closed as well. Given the myriad difficulties that result from maintaining that warrant, or knowledge, is closed, that's also a good thing.


References

Alspector-Kelly, Marc , "Why Safety Does Not Save Closure," Synthese (): –.
, "Wright Back to Dretske, or Why You Might as Well Deny Knowledge Closure," Philosophy and Phenomenological Research (): –.
Alston, William , "Epistemic Circularity," Philosophy and Phenomenological Research (): –.
Baumann, Peter , "Reliabilism—Modal, Probabilistic, or Contextualist," Grazer Philosophische Studien (): –.
, "Nozick's Defense of Closure," in The Sensitivity Principle in Epistemology, edited by Becker, Kelly & Black, Tim (Cambridge: Cambridge University Press), –.
Becker, Kelly , "Epistemic Luck and the Generality Problem," Philosophical Studies (): –.
Bergmann, Michael , "Epistemic Circularity, Malignant and Benign," Philosophy and Phenomenological Research (): –.
Bird, Alexander , "Justified Judging," Philosophy and Phenomenological Research (): –.
Black, Tim a, "Defending a Sensitive Neo-Moorean Invariantism," in New Waves in Epistemology, edited by Hendricks, Vincent & Pritchard, Duncan (Basingstoke: Palgrave Macmillan), –.
b, "Solving the Problem of Easy Knowledge," The Philosophical Quarterly (): –.
BonJour, Laurence , "Externalist Theories of Knowledge," Midwest Studies in Philosophy : –.
, In Defense of Pure Reason (Cambridge: Cambridge University Press).
Briesen, Jochen , "Reliabilism, Bootstrapping, and Epistemic Circularity," Synthese : –.
Brueckner, Anthony , "Klein on Closure and Skepticism," Philosophical Studies : –.
, "Strategies for Refuting Closure for Knowledge," Analysis (): –.
, "Fallibilism, Underdetermination and Skepticism," Philosophy and Phenomenological Research (): –.
Buckwalter, Wesley (unpublished), "Error Possibility, Contextualism and Bias."





Butzer, Tim , "Bootstrapping and Dogmatism," Philosophical Studies : –.
Carroll, Lewis , "What the Tortoise Said to Achilles," Mind : –.
Cartwright, John , "Cosmologist Claims the Universe May Not Be Expanding," Nature: International Weekly Journal of Science,  July , www.nature.com/news/cosmologist-claims-universe-may-not-be-expanding..
Chandler, Jake , "The Transmission of Support: A Bayesian Re-Analysis," Synthese : –.
Clark, Michael , "Knowledge and Grounds: A Comment on Mr. Gettier's Paper," Analysis (): –.
Coffman, E. J. , "Warrant Without Truth?," Synthese : –.
Cohen, Stewart , "Two Kinds of Skeptical Argument," Philosophy and Phenomenological Research (): –.
, "Contextualism, Skepticism, and the Structure of Reasons," Nous : –.
, "Replies," Philosophical Issues : –.
, "Basic Knowledge and the Problem of Easy Knowledge," Philosophy and Phenomenological Research (): –.
, "Why Basic Knowledge Is Easy Knowledge," Philosophy and Phenomenological Research (): –.
, "Bootstrapping, Defeasible Reasoning, and A Priori Justification," Philosophical Perspectives : –.
Coliva, Annalisa , "Moderatism, Transmission Failures, Closure, and Humean Scepticism," in Scepticism and Perceptual Justification, edited by Dodd, Dylan & Zardini, Elia (Oxford: Oxford University Press), –.
, Extended Rationality: A Hinge Epistemology (Basingstoke: Palgrave Macmillan).
Comesaña, Juan , "A Well-Founded Solution to the Generality Problem," Philosophical Studies (): –.
Conee, Earl & Feldman, Richard , Evidentialism: Essays in Epistemology (Oxford: Oxford University Press).
Davies, Martin , "Externalism, Architecturalism, and Epistemic Warrant," in Knowing Our Own Minds, edited by Wright, Crispin, Smith, Barry & Macdonald, Cynthia (Oxford: Oxford University Press), –.
, "Externalism and Armchair Knowledge," in New Essays on the A Priori, edited by Boghossian, Paul & Peacocke, Christopher (Oxford: Oxford University Press), –.
, "The Problem of Armchair Knowledge," in New Essays on Semantic Externalism and Self-Knowledge, edited by Nuccetelli, Susana (Cambridge, MA: MIT Press), –.
, "Epistemic Entitlement, Warrant Transmission and Easy Knowledge," Aristotelian Society, Supplementary Volume (): –.
Delin, P. S., Chittleborough, P., & Delin, C. R. , "What Is an Assumption?," Informal Logic (), –.





DeRose, Keith , “Contextualism and Knowledge Attributions,” Philosophy and Phenomenological Research (): –. , “Solving the Skeptical Problem,” The Philosophical Review (): –. , “Knowledge, Assertion and Lotteries,” Australasian Journal of Philosophy (): –. , The Case for Contextualism (Oxford: Oxford University Press). , “Insensitivity Is Back, Baby!,” Philosophical Perspectives : –. , The Appearance of Ignorance (Oxford: Oxford University Press). Di Bello, Marco , “Epistemic Closure, and Topics of Inquiry,” Synthese : –. Dodd, Dylan , “Evidentialism and Skeptical Arguments,” Synthese : –. Douven, Igor & Kelp, Christof , “Proper Bootstrapping,” Synthese : –. Dretske, Fred , “Epistemic Operators,” Journal of Philosophy (): –. , “Conclusive Reasons,” Australasian Journal of Philosophy : –. , Knowledge and the Flow of Information (Cambridge, MA: MIT Press). , “The Case against Closure,” in Contemporary Debates in Epistemology, edited by Steup, Matthias, Turri, John & Sosa, Ernest (Malden, MA: Blackwell), –. Ebert, Philip , “Transmission of Warrant-Failure and the Notion of Epistemic Analyticity,” Australasian Journal of Philosophy : –. Fantl, Jeremy & McGrath, Matthew , Knowledge in an Uncertain World (Oxford: Oxford University Press). Feldman, Richard , “In Defense of Closure,” Philosophical Quarterly : –. Fumerton, Richard , Metaepistemology and Skepticism (Lanham: Rowman & Littlefield). Gettier, Edmund , “Is Justified True Belief Knowledge?,” Analysis (): –. Goldman, Alvin , “What Is Justified Belief?,” in Justification and Knowledge, edited by Pappas, George (Dordrecht: Reidel), –. , Epistemology and Cognition (Cambridge, MA: Harvard University Press). Greco, John , “Agent Reliabilism,” Philosophical Perspectives : –. 
, “Knowledge and Success from Ability,” Philosophical Studies : –.
, Achieving Knowledge: A Virtue-Theoretic Account of Epistemic Normativity (Cambridge: Cambridge University Press).
, “Contextualism and Gettier Cases,” in The Routledge Handbook of Epistemic Contextualism, edited by Ichikawa, Jonathan (London: Routledge), –.
Harman, Gilbert & Sherman, Bret , “Knowledge, Assumptions, Lotteries,” Philosophical Issues : –.
Hawke, Peter , “Questions, Topics, and Restricted Closure,” Philosophical Studies (): –.


Hawthorne, John , Knowledge and Lotteries (Oxford: Oxford University Press).
, “The Case for Closure,” in Contemporary Debates in Epistemology, edited by Steup, Matthias, Turri, John & Sosa, Ernest (Malden, MA: Blackwell), –.
Heller, Mark , “Relevant Alternatives and Closure,” Australasian Journal of Philosophy (): –.
Hetherington, Stephen , “Understanding Fallible Warrant and Fallible Knowledge: Three Proposals,” Pacific Philosophical Quarterly (): –.
Howard-Snyder, Daniel & Howard-Snyder, Frances , “Infallibilism and Gettier’s Legacy,” Philosophy and Phenomenological Research (): –.
Ichikawa, Jonathan , “Justification Is Potential Knowledge,” Canadian Journal of Philosophy (): –.
Jenkins, Carrie , “Entitlement and Rationality,” Synthese : –.
Kallestrup, Jesper , “Bootstrap and Rollback: Generalizing Epistemic Circularity,” Synthese : –.
Klein, Peter , “Skepticism and Closure: Why the Evil Genius Argument Fails,” Philosophical Topics (): –.
, “Foundationalism and the Infinite Regress of Reasons,” Philosophy and Phenomenological Research : –.
, “Closure Matters: Academic Skepticism and Easy Knowledge,” Philosophical Issues : –.
Kornblith, Hilary , “A Reliabilist Solution to the Problem of Promiscuous Bootstrapping,” Analysis : –.
Kripke, Saul , “Nozick on Knowledge,” in Philosophical Troubles (Oxford: Oxford University Press), –.
Lackey, Jennifer , “Norms of Assertion,” Noûs (): –.
Lasonen-Aarnio, Maria , “Single-Premise Deduction and Risk,” Philosophical Studies (): –.
Lehrer, Keith , Theory of Knowledge (Boulder, CO: Westview Press).
Lehrer, Keith & Paxson, Thomas, Jr. , “Knowledge: Undefeated Justified True Belief,” The Journal of Philosophy (): –.
Lewis, David , Counterfactuals (Cambridge, MA: Harvard University Press).
Littlejohn, Clayton , “Lotteries, Probabilities and Permissions,” Logos and Episteme (): –.
Lockhart, Thomas , “Why Warrant Transmits Across Epistemological Disjunctivist Moorean-Style Arguments,” Synthese (): –.
Luper, Stephen , “The Epistemic Predicament: Knowledge, Nozickian Tracking, and Scepticism,” Australasian Journal of Philosophy (): –.
, “Dretske on Knowledge Closure,” Australasian Journal of Philosophy (): –.
MacFarlane, John , “The Assessment Sensitivity of Knowledge Attributions,” Oxford Studies in Epistemology (Oxford: Oxford University Press), –.




, “Relativism and Knowledge Attributions,” in Routledge Companion to Epistemology (London: Routledge), –.
Markie, Peter , “Easy Knowledge,” Philosophy and Phenomenological Research (): –.
McDowell, John , “Singular Thought and the Extent of Inner Space,” in Subject, Thought, and Context, edited by Pettit, Philip & McDowell, John (Oxford: Clarendon Press), –.
, “Knowledge and the Internal,” Philosophy and Phenomenological Research (): –.
, “The Disjunctive Conception of Experience as Material for a Transcendental Argument,” in Disjunctivism: Perception, Action, Knowledge, edited by Haddock, Adrian & MacPherson, Fiona (Oxford: Oxford University Press), –.
McEvoy, Mark , “Belief-Independent Processes and the Generality Problem for Reliabilism,” Dialectica (): –.
McGlynn, Aidan , “On Epistemic Alchemy,” in Scepticism and Perceptual Justification, edited by Dodd, Dylan & Zardini, Elia (Oxford: Oxford University Press), –.
McKinsey, Michael , “Transmission of Warrant and Closure of Apriority,” in New Essays on Semantic Externalism and Self-Knowledge, edited by Nuccetelli, Susana (Cambridge, MA: MIT Press), –.
McLaughlin, Brian , “Skepticism, Externalism, and Self-Knowledge,” The Aristotelian Society Supplementary Volume : –.
, “McKinsey’s Challenge, Warrant Transmission, and Skepticism,” in New Essays on Semantic Externalism and Self-Knowledge, edited by Nuccetelli, Susana (Cambridge, MA: MIT Press), –.
Merricks, Trenton , “Warrant Entails Truth,” Philosophy and Phenomenological Research (): –.
, “More on Warrant’s Entailing Truth,” Philosophy and Phenomenological Research (): –.
Moore, G. E. a, “Four Forms of Skepticism,” in Philosophical Papers, edited by Moore, G. E. (New York, NY: Collier), –.
b, “Proof of an External World,” in Philosophical Papers, edited by Moore, G. E. (New York, NY: Collier), –.
Moretti, Luca , “Wright, Okasha and Chandler on Transmission Failure,” Synthese : –.
Murphy, Peter , “Closure Failures for Safety,” Philosophia : –.
, “A Strategy for Assessing Closure,” Erkenntnis : –.
Murphy, Peter & Black, Tim , “Sensitivity Meets Explanation: An Improved Counterfactual Condition on Knowledge,” in The Sensitivity Principle in Epistemology, edited by Becker, Kelly & Black, Tim (Cambridge: Cambridge University Press), –.
Nelkin, Dana , “The Lottery Paradox, Knowledge, and Rationality,” The Philosophical Review (): –.


Nozick, Robert , Philosophical Explanations (Cambridge, MA: Harvard University Press).
Okasha, Samir , “Wright on the Transmission of Support: A Bayesian Analysis,” Analysis : –.
Plantinga, Alvin a, Warrant: The Current Debate (Oxford: Oxford University Press).
b, Warrant and Proper Function (Oxford: Oxford University Press).
Pritchard, Duncan , “Closure and Context,” Australasian Journal of Philosophy (): –.
a, Epistemic Luck (Oxford: Oxford University Press).
b, “Wittgenstein’s On Certainty and Contemporary Anti-Scepticism,” in Readings of Wittgenstein’s On Certainty, edited by Moyal-Sharrock, Daniele & Brenner, William (Basingstoke: Palgrave-Macmillan), –.
, “Anti-Luck Epistemology,” Synthese : –.
, “Sensitivity, Safety, and Anti-Luck Epistemology,” in The Oxford Handbook of Scepticism, edited by Greco, John (Oxford: Oxford University Press), –.
, “Safety-Based Epistemology: Whither Now?,” Journal of Philosophical Research : –.
, “Anti-Luck Virtue Epistemology,” Journal of Philosophy (): –.
, “Entitlement and the Groundlessness of our Believing,” in Scepticism and Perceptual Justification, edited by Dodd, Dylan & Zardini, Elia (Oxford: Oxford University Press), –.
, Epistemic Angst: Radical Scepticism and the Groundlessness of Our Believing (Princeton, NJ: Princeton University Press).
, “Epistemic Angst,” Philosophy and Phenomenological Research (): –.
Pryor, James , “The Skeptic and the Dogmatist,” Noûs (): –.
, “What’s Wrong with Moore’s Argument?,” Philosophical Issues : –.
Reed, Baron , “How to Think about Fallibilism,” Philosophical Studies : –.
Reynolds, Steven , “Justification as the Appearance of Knowledge,” Philosophical Studies : –.
Roush, Sherrilyn , Tracking Truth: Knowledge, Evidence and Science (Oxford: Oxford University Press).
, “Closure on Skepticism,” Journal of Philosophy (): –.
, “Sensitivity and Closure,” in The Sensitivity Principle in Epistemology, edited by Becker, Kelly & Black, Tim (Cambridge: Cambridge University Press), –.
Ryan, Sharon , “Does Warrant Entail Truth?,” Philosophy and Phenomenological Research (): –.
Sharon, Assaf & Spectre, Levi , “Evidence and the Openness of Knowledge,” Philosophical Studies (): –.
Sherman, Bret & Harman, Gilbert , “Knowledge and Assumptions,” Philosophical Studies : –.




Silins, Nicholas , “Transmission Failure Failure,” Philosophical Studies : –.
Smith, Martin , “Transmission Failure Explained,” Philosophy and Phenomenological Research : –.
, “Knowledge, Justification and Normative Coincidence,” Philosophy and Phenomenological Research (): –.
, Between Probability and Certainty: What Justifies Belief (Oxford: Oxford University Press).
forthcoming, “Four Arguments for Denying That Lottery Beliefs Are Justified,” in Lotteries, Knowledge and Rational Belief: Essays on the Lottery Paradox, edited by Douven, Igor (Cambridge: Cambridge University Press).
Sosa, Ernest a, “How to Defeat Opposition to Moore,” Philosophical Perspectives : –.
b, “How Must Knowledge Be Modally Related to What Is Known?,” Philosophical Topics : –.
, A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume (Oxford: Clarendon Press).
, Judgment and Agency (Oxford: Oxford University Press).
Stalnaker, Robert , “A Theory of Conditionals,” American Philosophical Quarterly (Monograph Series): –.
, “Presuppositions,” The Journal of Philosophical Logic : –.
, “Pragmatic Presuppositions,” in Semantics and Philosophy, edited by Munitz, Milton & Unger, Peter (New York, NY: New York University Press), –.
Stanley, Jason , Knowledge and Practical Interests (Oxford: Clarendon Press).
Titelbaum, Michael , “Tell Me You Love Me: Bootstrapping, Externalism, and No-Lose Epistemology,” Philosophical Studies : –.
Turri, John , “Justification,” Philosophy and Phenomenological Research (): –.
, “Manifest Failure: The Gettier Problem Solved,” Philosophers’ Imprint (): –.
Veber, Michael , “The Argument from Abomination,” Erkenntnis : –.
Vogel, Jonathan , “Tracking, Closure, and Inductive Knowledge,” in The Possibility of Knowledge: Nozick and His Critics, edited by Luper-Foy, Stephen (Totowa, NJ: Rowman & Littlefield), –.
a, “Are There Counterexamples to the Closure Principle?,” in Doubting: Contemporary Perspectives on Skepticism, edited by Roth, Michael & Ross, Glenn (Dordrecht: Kluwer Academic Publishers), –.
b, “Cartesian Skepticism and Inference to the Best Explanation,” Journal of Philosophy : –.
, “Reliabilism Leveled,” Journal of Philosophy (): –.
, “Subjunctivitis,” Philosophical Studies : –.
, “Epistemic Bootstrapping,” Journal of Philosophy (): –.


, “E & ~H,” in Scepticism and Perceptual Justification, edited by Dodd, Dylan & Zardini, Elia (Oxford: Oxford University Press), –.
Wallbridge, Kevin , “Solving the Current Generality Problem,” Logos and Episteme (): –.
Warfield, Ted , “When Closure Does and Does Not Fail: A Lesson from the History of Epistemology,” Analysis (): –.
Warfield, Ted & David, Marian , “Knowledge-Closure and Skepticism,” in Epistemology: New Essays, edited by Smith, Quentin (Oxford: Oxford University Press), –.
Weiner, Matthew , “Must We Know What We Say?,” Philosophical Review (): –.
White, Roger , “Problems for Dogmatism,” Philosophical Studies : –.
Williamson, Timothy a, “Skepticism and Evidence,” Philosophy and Phenomenological Research (): –.
b, Knowledge and Its Limits (Oxford: Oxford University Press).
, “Very Improbable Knowing,” Erkenntnis : –.
forthcoming, “Justifications, Excuses, and Sceptical Scenarios,” in The New Evil Demon, edited by Dutant, Julien & Dorsch, Fabian (Oxford: Oxford University Press).
Wittgenstein, Ludwig , On Certainty (Oxford: Basil Blackwell).
Wright, Crispin , “Facts and Certainty,” Proceedings of the British Academy, –. Reprinted in Skepticism, edited by Williams, Michael (Aldershot: Dartmouth Publishing), –.
, “Cogency and Question-Begging: Some Reflections on McKinsey’s Paradox and Putnam’s Proof,” Philosophical Issues : –.
, “(Anti-)Sceptics Simple and Subtle: G. E. Moore and John McDowell,” Philosophy and Phenomenological Research (): –.
, “Some Reflections on the Acquisition of Warrant by Inference,” in New Essays on Semantic Externalism and Self-Knowledge, edited by Nuccetelli, Susana (Cambridge, MA: MIT Press), –.
, “On Epistemic Entitlement: Warrant for Nothing (and Foundations for Free?),” Aristotelian Society, Supplementary Vol. (): –.
, “The Perils of Dogmatism,” in Themes from G. E. Moore: New Essays in Epistemology and Ethics, edited by Nuccetelli, Susana & Seay, Gary (Oxford: Oxford University Press), –.
, “Comment on John McDowell’s ‘The Disjunctive Conception of Experience as Material for a Transcendental Argument’,” in Disjunctivism: Perception, Action, Knowledge, edited by Haddock, Adrian & MacPherson, Fiona (Oxford: Oxford University Press), –.
, “McKinsey One More Time,” in Self-Knowledge, edited by Hatzimoysis, Anthony (Oxford: Oxford University Press), –.
, “Replies,” in Mind, Meaning and Knowledge: Themes from the Philosophy of Crispin Wright, edited by Coliva, Annalisa (Oxford: Oxford University Press), –.




, “On Epistemic Entitlement (II): Welfare State Epistemology,” in Scepticism and Perceptual Justification, edited by Dodd, Dylan & Zardini, Elia (Oxford: Oxford University Press), –.
Wunderlich, Mark , “Vector Reliability: A New Approach to Epistemic Justification,” Synthese (): –.
Zagzebski, Linda , “The Inescapability of Gettier Problems,” The Philosophical Quarterly (): –.
Zalabardo, José , “Inference and Skepticism,” in Scepticism and Perceptual Justification, edited by Dodd, Dylan & Zardini, Elia (Oxford: Oxford University Press), –.


Index

abominable conjunctions
  and classical moderate invariantism
  closure explanation of
  and comparative judgments
  and contextualism
  and denials of knowledge of skeptical hypotheses
  Gettier versions of
  and interest-relative invariantism
  knowledge rule explanation of
  and retraction
  and skepticism
  third-person
airport case (Cohen)
alchemy, problem of
Alspector-Kelly, Marc
anti-Gettier condition
  defeasibility
  no false lemmas
argument by counterexample
assumption-justification (AJ) argument
assumptions
  and claims to know
  and justification
  nature of
  reasonability of
  and skepticism
bank case (DeRose)
basic knowledge
basing relation
basis internalism vs. externalism
Baumann, Peter
Bergmann, Michael
Black, Tim
Bonjour, Laurence
bootstrapping
  and basic knowledge
  bootstrapping (BS) reasoning
  and deduction

  and epistemic circularity
  and knowledge of reliability (KR)
  and NIFN
  and reliabilism
  and skepticism
bootstrapping (BS) reasoning
Brueckner, Tony
BS reasoning. See bootstrapping; bootstrapping reasoning
buck-passing argument
  and evidence
  and justification
  and safety
  and warrant preservation (WP)
Buckwalter, Wesley
Butzer, Tim
Carroll, Lewis
circularity
  epistemic. See bootstrapping; and epistemic circularity
  path
  premise
  structural
  warrant
classical moderate invariantism
  closure-preserving (CPMI) vs. closure-denying (CDMI). See abominable conjunctions; and classical moderate invariantism
closure of justification. See justification closure
closure of knowledge. See knowledge closure
closure of warrant. See warrant; closure of (WC)
closure vs. transmission
Coffman, E. J.
Cohen, Stewart
Coliva, Annalisa
conditional reliability. See reliability; conditional



Downloaded from https://www.cambridge.org/core. The Librarian-Seeley Historical Library, on 12 Jan 2020 at 12:47:41, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108604093.013

Conee, Earl & Feldman, Richard
contextualism
  and abominable conjunctions. See abominable conjunctions; and contextualism
  closure-affirming vs. closure-denying. See abominable conjunctions; and contextualism
  and direct warrant
  disconfirmation of
  and front-loading
  and safety
  and skeptical hypotheses
  and transmission
  and warrant by entitlement
  and warrant infallibilism
Davies, Martin
defeasibility. See anti-Gettier condition; defeasibility
Delin, P. S., Chittleborough, P., & Delin, C. R.
DeRose, Keith
Di Bello, Marco
disjunctivism
Dodd, Dylan
dogmatism
double-safety account (DeRose)
downgrading
doxastic vs. propositional justification
Dretske cases
  BIV
  Car
  Cruise
  Directions
  Gas Gauge
  generalization of
  introduction
  Misprint
  President
  Red Table
  Restaurant
  Zebra
  Zoo-Testing-R-Us
Dretske, Fred
E=K
easy knowledge
electrician case



enabling conditions
entitlement, warrant by
  entitlement of cognitive project
    and piecemeal skeptical hypotheses
    and wholesale skeptical hypotheses
    nature of
  general strategy
  strategic entitlement
    nature of
    and piecemeal skeptical hypotheses
    and wholesale skeptical hypotheses
ERA. See expanded relevant alternatives
evidence and probability
evidential externalism. See externalism; evidential
evidentialism
expanded relevant alternatives (ERA)
externalism
  evidential
  justification. See also justification internalism
  method
fallibilism and infallibilism
  basis
  evidential
  justification
  warrant
far-safe vs. near-safe belief
Feldman, Michael
first-order skepticism. See skepticism; first-order vs. second-order
front-loading argument (FLA)
front-loading explanation of transmission failure. See transmission failure; front-loading explanation of
front-loading strategy
Fumerton, Richard
garbage chute case (Sosa)
generality problem
Gettier cases
  and abominable conjunctions. See abominable conjunctions; Gettier versions of
  clock case
  and contextualism
  and entitlement
  and front-loading
  and luck
  and reliabilism
  sheep-in-the-field case
  and warrant infallibilism
Goldman, Alvin
Greco, John


Harman, Gilbert & Sherman, Bret
Hawthorne, John
Heller, Mark
hinge proposition
Howard-Snyder, Daniel & Howard-Snyder, Frances
how-possibly explanation
ice cube case (Vogel)
incremental confirmation
indistinguishability argument
infallibilism. See fallibilism and infallibilism
interest-relative invariantism (IRI)
internalism
  evidential. See externalism; evidential
  justification. See also externalism; justification
  method. See externalism; method
justification
justification closure
  and buck-passing. See buck-passing; and justification
  definition
  and transmission
KC. See knowledge closure
KK rule
knowledge closure
  Classical Formulation
  Hawthorne’s Formulation
  KC
  multi-premise closure (MPC)
Knowledge of Reliability (KR)
KR. See Knowledge of Reliability (KR)
Kripke, Saul
Lasonen-Aarnio, Maria
leaching problem
lead paint test case
Lehrer, Keith
Lehrer, Keith & Paxson, Thomas
Lewis, David
lottery intuition
  and justification
  method-relative SCA (SCA-M) explanation of
  parity reasoning explanation of
  safety explanation of
  subjunctive conditionals account (SCA) explanation of
  warrant infallibilist explanation of

lottery proposition. See also Vogel propositions
Luper, Stephen
Markie, Peter
McDowell, John
McGlynn, Aidan
Merricks, Trenton
Merricks’ arguments for warrant infallibilism
  supervenience argument
  warrant transfer argument
method externalism. See externalism; method
method individuation. See No Inevitable False Negatives (NIFN); and method individuation
missing missing explanation
moderate invariantism. See classical moderate invariantism
  sensitive. See interest-relative invariantism
moderatism
Moore, G. E.
Moore’s paradox
Moorean response to skepticism
Murphy, Peter
near-safe belief. See far-safe vs. near-safe belief
Nelkin, Dana
NIFN. See No Inevitable False Negatives (NIFN)
NIFN-C
NIFN-S
NIFP. See No Inevitable False Positives (NIFP)
No Accident (NA)
no false lemmas. See anti-Gettier condition; no false lemmas
No Inevitable False Negatives (NIFN)
  and background information
  and basis fallibilism
  and bootstrapping. See bootstrapping; and NIFN
  and comparative judgments
  and contextualism
  definition
  and direct warrant
  and Dretske cases
  and easy knowledge
  vs. front-loading
  and generality problem
  and justification
  and the lottery intuition
  and method externalism
  and reliability
  and safety
  and sensitivity


  and the spreading problem
  and warrant circularity
  definition
  vs. front-loading
No Inevitable False Positives (NIFP). See also No Inevitable False Negatives
Norman case (Bonjour)
Nozick, Robert
Omar’s new shoes case (Vogel)
parity reasoning. See lottery intuition; parity reasoning explanation of
penetration vs. transmission. See transmission; vs. penetration
Plantinga, Alvin
Plantinga warrant (P-warrant)
pragmatic encroachment. See interest-relative invariantism
Pritchard, Duncan
probability
  and Dretske cases
  and evidence. See evidence and probability
  and underdetermination
propositional justification. See doxastic vs. propositional justification
Pryor, James
red barn case (Kripke)
Reichenbach’s vindication of induction
reliabilism. See transmission; and reliabilism
reliability of the senses case
reliability, conditional
reliability, consequential vs. inconsequential
retraction
Roush, Sherrilyn
Rule of Sensitivity
Ryan, Sharon
safety
  and buck-passing. See buck-passing; and safety
  double-safety (DeRose). See double-safety account (DeRose)
  and the lottery intuition. See lottery intuition; safety explanation of
safety/sensitivity hybrid. See also expanded relevant alternatives
salmon case (Hawthorne)
SCA. See lottery intuition; subjunctive conditionals account explanation of



SCA-M. See lottery intuition; method-relative SCA explanation of
second-order skepticism. See skepticism; first-order vs. second-order
sensitive moderate invariantism. See interest-relative invariantism
sensitivity
  and closure
  and NIFN. See No Inevitable False Negatives; and sensitivity
  and the lottery intuition. See lottery intuition; subjunctive conditionals account of
Sharon, Assaf & Spectre, Levi
silver detector case
simple skepticism
skeptical arguments
  skeptical closure argument
  skeptical front-loading argument
  skeptical underdetermination argument
skeptical hypotheses
  Gettier versions
  piecemeal vs. wholesale
  retraction of. See retraction
skepticism
  and abominable conjunctions. See abominable conjunctions; and skepticism
  and assumptions. See assumptions; and skepticism
  and background information
  and classical moderate invariantism. See abominable conjunctions; and classical moderate invariantism
  vs. closure denial
  and contextualism. See contextualism; and skeptical hypotheses
  and downgrading
  and entitlement. See also entitlement; warrant by
  and epistemic circularity
  first-order vs. second-order
  and front-loading. See buck-passing
  and interest-relative invariantism (IRI). See abominable conjunctions; and interest-relative invariantism
  and justification
  and NIFN
  and parity reasoning
  and safety
  and transmission
  and warrant infallibilism


skepticism (cont.)
  and warrant preservation (WP). See buck-passing; and warrant preservation
Sosa, Ernest
spreading problem
Stalnaker, Robert
strategic entitlement. See entitlement; strategic
subject-sensitive invariantism. See interest-relative invariantism (IRI)
subjunctive conditionals account. See lottery intuition; subjunctive conditionals account (SCA) explanation of
Thelma, Louise, and Lena case (DeRose)
transmission
  vs. closure. See closure vs. transmission
  and contextualism. See contextualism; and transmission
  and evidentialism
  of knowledge
  and reliabilism
  and retraction
  vs. penetration
  and safety
  of warrant. See warrant; transmission of (WT)
transmission failure
  front-loading explanation of
  NIFN explanation of
truetemp case (Lehrer)
underdetermination
  and front-loading

  and the skeptical closure argument
  skeptical argument. See skeptical arguments; skeptical underdetermination argument
Vogel proposition. See also lottery proposition
Vogel, Jonathan
Warfield, Ted & David, Marian
warrant
  circularity. See circularity; warrant
  closure of (WC)
  by default. See also entitlement; warrant by
  definition
  direct warrant
  enabling conditions of
  by entitlement. See entitlement; warrant by
  Plantinga warrant (P-warrant). See warrant; definition
  transmission of (WT)
WC. See warrant; closure of (WC)
Williamson, Timothy
Williamson’s insight
Wittgenstein, Ludwig
Wright, Crispin
WT. See warrant; transmission of (WT)
