A trivalent theory of indicative conditionals automatically enforces Stalnaker’s thesis: the equation between probabilities of conditionals and conditional probabilities. This result holds because the trivalent semantics requires, for principled reasons, a modification of the ratio definition of conditional probability in order to accommodate the possibility of undefinedness. I analyze precisely how this modification allows the trivalent semantics to avoid a number of well-known triviality results, in the process clarifying why these results hold for many bivalent theories. I suggest that the slew of triviality results published in the last 40-odd years need not be viewed as an argument against Stalnaker’s thesis: it can be construed instead as an argument for abandoning the bivalent requirement that conditionals somehow be assigned a truth-value in worlds in which their antecedents are false.
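As a sketch of the two quantities the thesis equates, and of the modification at issue (the paper’s own definitions may differ in detail):

```latex
% Stalnaker's thesis: the probability of an indicative conditional
% equals the corresponding conditional probability (when P(A) > 0):
P(A \rightarrow B) \;=\; P(B \mid A)
% Bivalent ratio definition of conditional probability:
P(B \mid A) \;=\; \frac{P(A \wedge B)}{P(A)}
% On a trivalent (de Finetti-style) semantics, A -> B is true at
% A-and-B worlds, false at A-and-not-B worlds, and undefined where A is
% false; its probability is then naturally computed over the worlds
% where it has a truth-value at all:
P(A \rightarrow B)
  \;=\; \frac{P(A \rightarrow B \text{ is true})}
             {P(A \rightarrow B \text{ is defined})}
  \;=\; \frac{P(A \wedge B)}{P(A)}
```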
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Over the past three days, I’ve discussed the epistemic role of consciousness in perception, cognition, and introspection. …
Following Reichenbach, it is widely held that in making a direct inference, one should base one’s conclusion on a relevant frequency statement concerning the most specific reference class for which one is able to make a warranted and relatively precise-valued frequency judgment. In cases where one has accurate and precise-valued frequency information for two relevant reference classes, R1 and R2, and one lacks accurate and precise-valued frequency information concerning their intersection, R1 ∩ R2, it is widely held, following Reichenbach, that no inference may be drawn. In contradiction to Reichenbach and the common wisdom, I argue for the view that it is often possible to draw a reasonable, informative conclusion in such circumstances. As a basis for drawing such a conclusion, I show that one is generally in a position to formulate a reasonable direct inference for a reference class that is more specific than either of R1 and R2.
Easy-road mathematical fictionalists grant for the sake of argument that quantification over mathematical entities is indispensable to some of our best scientific theories and explanations. Even so, they maintain, we can accept those theories and explanations without believing their mathematical components, provided we believe the concrete world is intrinsically as it needs to be for those components to be true. Those I refer to as “mathematical surrealists” by contrast appeal to facts about the intrinsic character of the concrete world, not to explain why our best mathematically imbued scientific theories and explanations are acceptable in spite of having false components, but in order to replace those theories and explanations with parasitic, nominalistically acceptable alternatives. I argue that easy-road fictionalism is viable only if mathematical surrealism is, and that the latter constitutes a superior nominalist strategy. Two advantages of mathematical surrealism are that it neither begs the question concerning the explanatory role of mathematics in science nor requires rejecting the cogency of inference to the best explanation.
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Today, I’ll discuss the epistemic role of consciousness in introspection. What is introspection? …
The philosopher wrote:
The big move in the statistics wars these days is to fight irreplication by making it harder to reject, and find evidence against, a null hypothesis. Mayo is referring to, among other things, the proposal to “redefine statistical significance” as p less than 0.005. …
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Today, I’ll discuss the epistemic role of consciousness in cognition. Could there be a cognitive zombie – that is, an unconscious creature with the capacity for cognition? …
It is widely recognized that the process used to make observations often has a significant effect on how hypotheses should be evaluated in light of those observations. Arthur Stanley Eddington (1939, Ch. II) provides a classic example. You’re at a lake and are interested in the size of the fish it contains. You know, from testimony, that at least some of the fish in the lake are big (i.e., at least 10 inches long), but beyond that you’re in the dark. You devise a plan of attack: get a net and use it to draw a sample of fish from the lake. You carry out your plan and observe: O: 100% of the fish in the net are big.
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Today, I’ll discuss the epistemic role of consciousness in perception. Human perception is normally conscious: there is something it is like for us to perceive the world around us. …
To say that evidence is normative is to say that what evidence one possesses, and how this evidence relates to any proposition, determines which attitude among believing, disbelieving and withholding one ought to take toward this proposition if one deliberates about whether to believe it. It has been suggested by McHugh that this view can be vindicated by resting on the premise that truth is epistemically valuable. In this paper, I modify the strategy sketched by McHugh so as to overcome the initial difficulty that it is unable to vindicate the claim that on counterbalanced evidence with respect to P one ought to conclude deliberation by withholding on P. However, I describe the more serious difficulty that this strategy rests on principles whose acceptance commits one to acknowledging non-evidential reasons for believing. One way to overcome this second difficulty, against the evidentialists who deny this, is to show that we sometimes manage to believe on the basis of non-epistemic considerations. If this is so, one fundamental motivation behind the evidentialist idea that non-epistemic considerations could not enter as reasons in deliberation would lose its force. In the second part of this paper I address several strategies proposed in the attempt to show that we sometimes manage to believe on the basis of non-epistemic considerations, and show that they all fail. I conclude that the strategy inspired by McHugh to ground the normativity of evidence in the value of truth ultimately fails.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers & Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
About 30 years ago, William Alston (1988) penned the locus classicus for a puzzle that is at the heart of contemporary debates on epistemic normativity. Alston’s puzzle, in short, comes from realizing that the most natural way of understanding talk of epistemic justification seems to be in tension with the limited control we have over our belief formation. In this paper, I want to clarify and expand this puzzle, as well as examine the nature and full consequences of a deflationary approach to its resolution.
The study of iterated belief change has principally focused on revision, with the other main operator of AGM belief change theory, namely contraction, receiving comparatively little attention. In this paper we show how principles of iterated revision can be carried over to iterated contraction by generalising a principle known as the ‘Harper Identity’. The Harper Identity provides a recipe for defining the belief set resulting from contraction by a sentence A in terms of (i) the initial belief set and (ii) the belief set resulting from revision by ¬A.
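The recipe described here is standardly written as follows (with K the initial belief set, ÷ contraction, and ∗ revision):

```latex
% Harper Identity: the result of contracting by A is what is common to
% the initial belief set and the result of revising by the negation of A.
K \div A \;=\; K \cap (K \ast \neg A)
```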
A venerable view holds that a border between perception and cognition is built into our cognitive architecture, and that this imposes limits on the way information can flow between them. While the deliverances of perception are freely available for use in reasoning and inference, there are strict constraints on information flow in the opposite direction. Despite its plausibility, this approach to the perception-cognition border has faced criticism in recent years. This paper develops an updated version of the architectural approach, which I call the dimension restriction hypothesis (DRH).
It is generally assumed that relations of necessity cannot be known by induction on experience. In this paper, I propose a notion of situated possibilities, weaker than nomic possibilities, that is compatible with an inductivist epistemology for modalities. I show that, assuming this notion, not only can relations of necessity be known by induction on our experience, but such relations cannot be any more underdetermined by experience than universal regularities. This means that anyone who believes in a universal regularity is equally warranted in believing the corresponding relation of necessity.
There are two notions in the philosophy of probability that are often used interchangeably: that of subjective probabilities and that of epistemic probabilities. This paper suggests they should be kept apart. Specifically, it suggests that the distinction between subjective and objective probabilities refers to what probabilities are, while the distinction between epistemic and ontic probabilities refers to what probabilities are about. After arguing that there are bona fide examples of subjective ontic probabilities and of epistemic objective probabilities, I propose a systematic way of drawing these distinctions in order to take this into account. In doing so, I modify Lewis’s notion of chances, and extend his Principal Principle in what I argue is a very natural way (which in fact makes chances fundamentally conditional). I conclude with some remarks on time symmetry, on the quantum state, and with some more general remarks about how this proposal fits into an overall Humean (but not quite neo-Humean) framework.
In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this paper is to investigate the extent to which classicists and Bayesians can (in some suitable sense of the word) agree. My conclusion is that, in certain situations, they can’t. The upshot is that, if we assume that the classicist isn’t allowed to have a higher degree of belief (credence) in a null hypothesis after he has rejected it than before, then (in certain situations), either he has to have trivial or incoherent credences to begin with, or fail to update his credences by conditionalization.
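A minimal numerical sketch of the situation at issue (the numbers are hypothetical, chosen so that the test’s power falls below its size): a Bayesian who conditionalizes on the event “the test rejected” can end up with a higher credence in the null than before.

```python
# Hypothetical illustration: a classicist "rejects H0" whenever the data
# land in a rejection region R. An agent who conditionalizes on the event
# "data landed in R" ends up with a HIGHER credence in H0 than before,
# whenever R is less probable under the alternative than under the null.

prior_h0 = 0.5                 # initial credence in the null hypothesis
p_reject_given_h0 = 0.05       # size of the test (Type I error rate)
p_reject_given_h1 = 0.03       # power of the test (here, below the size)

# Conditionalization: P(H0 | reject) = P(reject | H0) P(H0) / P(reject)
p_reject = (p_reject_given_h0 * prior_h0
            + p_reject_given_h1 * (1 - prior_h0))
posterior_h0 = p_reject_given_h0 * prior_h0 / p_reject

print(round(posterior_h0, 3))  # 0.625: credence in H0 rose on rejection
```

On these (contrived) numbers, rejecting the null raises the agent’s credence in it from 0.5 to 0.625, which is exactly the kind of combination the paper’s assumption rules out.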
Debates in philosophy of probability over the nature and ontology of objective chance by and large remain inconclusive. No reductive account of chance has ultimately prospered. This article proposes a change of focus towards the functions and roles that chance plays in our cognitive practices. Its starting philosophical point is pluralism about objective probability. The complex nexus of chance is the interlinked set of roles in modelling practice of i) parametrized probabilistic dispositions (“propensities”); ii) distribution functions (“probabilities”); and iii) statistical finite data (“frequencies”). It is argued that the modelling literature contains sophisticated applications of the chance nexus to both deterministic and indeterministic phenomena. These applications may be described as lying on a spectrum between what I call ‘pure probabilistic’ and ‘pure stochastic’ models. The former may be found in the tradition of the method of arbitrary functions; the latter in present-day techniques for stochastic modelling in the complex sciences, as well as some orthodox approaches to quantum mechanics. These modelling practices provide positive arguments for the irreducible complexity of the chance nexus.
We seem to be responsible for our beliefs in a distinctively epistemic way. We often hold each other to account for the beliefs that we hold. We do this by criticising other believers as ‘gullible’ or ‘biased’, and trying to persuade others to revise their beliefs. But responsibility for belief looks hard to understand because we seem to lack control over our beliefs. This paper argues that we can make progress in our understanding of responsibility for belief by thinking about it in parallel with another kind of responsibility: legal responsibility for criminal negligence.
Julie chose b over a, even though she knew b was more expensive than a. There is nothing puzzling about Julie’s choice. Perhaps Julie was choosing among vacation options, and b was a week’s vacation in Paris, while a was a week’s vacation in Peoria. In any event, Julie evidently took the overall merits of b to outweigh those of a, even if b was inferior from a financial standpoint.

(2) Jimmy opted for d over c, despite his judging c to be a healthier choice than d. Again, we find Jimmy’s decision unremarkable.
We examine the following consequentialist view of virtue: a trait is a virtue if and only if it has good consequences in some relevant way. We highlight some motivations for this basic account, and offer twelve choice points for filling it out. Next, we explicate Julia Driver’s consequentialist view of virtue in reference to these choice points, and we canvass its merits and demerits. Subsequently, we consider three suggestions that aim to increase the plausibility of her position, and critically analyze them. We conclude that one of those proposed revisions would improve her account.
Elgin takes scientific understanding to be non-factive and maintains that there are epistemically useful falsehoods that figure ineliminably in scientific understanding and whose falsehood is no epistemic defect. Veritism, she argues, cannot account for these facts. This paper argues that while Elgin rightly draws attention to several features of epistemic practices frequently neglected by veritists, veritists have numerous plausible ways of responding to her arguments. In particular, it is not clear that false propositional commitments figure ineliminably in understanding in the manner supposed by Elgin. Moreover, even if scientific understanding were non-factive and false propositional commitments did figure ineliminably in understanding, the veritist can account for this in several ways without thereby abandoning veritism.
Fitch’s Paradox shows that if every truth is knowable, then every truth is known. Standard diagnoses identify the factivity/negative infallibility of the knowledge operator and Moorean contradictions as the root source of the result. This paper generalises Fitch’s result to show that such diagnoses are mistaken. In place of factivity/negative infallibility, the weaker assumption of any ‘level-bridging principle’ suffices. A consequence is that the result holds for some logics in which the “Moorean contradiction” commonly thought to underlie the result is in fact consistent. This generalised result improves on the current understanding of Fitch’s result and widens the range of modalities of philosophical interest to which the result might be fruitfully applied. Along the way, we also consider a semantic explanation for Fitch’s result which answers a challenge raised by Kvanvig (2006).
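For reference, the standard derivation behind the result runs roughly as follows (K for “it is known that”, ◊ for possibility; the steps use only distribution of K over conjunction, factivity of K, and necessitation):

```latex
% Knowability principle: every truth is knowable.
\forall p\,(p \rightarrow \Diamond K p)
% Instantiate with the "Moorean" truth  p \wedge \neg K p:
(p \wedge \neg K p) \rightarrow \Diamond K (p \wedge \neg K p)
% But  K(p \wedge \neg K p)  entails  Kp \wedge K\neg K p  (distribution),
% and  K\neg K p  entails  \neg K p  (factivity), contradicting  Kp.
% So  K(p \wedge \neg K p)  is impossible:  \neg\Diamond K(p \wedge \neg K p),
% whence  \neg(p \wedge \neg K p),  i.e.
p \rightarrow K p
% every truth is known.
```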
[warning: it’s proving hard to avoid typos in the formulas here. I’ve caught as many as I can, but please exercise charity in reading the various subscripts]. In the Lewisian setting I’ve been examining in the last series of posts, I’ve been using the following definition of indicates-to-x (I use the same notation as in previous posts, but add a w-subscript to distinguish it from an alternative I will shortly introduce):
The arrow on the right is the counterfactual conditional, and the intended interpretation of the B-operator is “has a reason to believe”. …
A nerdocratic oath
Recently, my Facebook wall was full of discussion about instituting an oath for STEM workers, analogous to the Hippocratic oath for doctors. Perhaps some of the motivation for this comes from a worldview I can’t get behind—one that holds STEM nerds almost uniquely responsible for the world’s evils. …
Suppose that it’s public information/common belief/common ground among a group G that the government has fallen. What does this require about what members of G know about each other? Here are three possible situations:
One knows who each of the other group members is, attributing (de re) to each whatever beliefs (etc.) are required for it to be public information that p.
One has a conception corresponding to each member of the group. …
All standard epistemic logics legitimate something akin to the principle of closure, according to which knowledge is closed under competent deductive inference. And yet the principle of closure, particularly in its multiple premise guise, has a somewhat ambivalent status within epistemology. One might think that serious concerns about closure point us away from epistemic logic altogether—away from the very idea that the knowledge relation could be fruitfully treated as a kind of modal operator. This, however, need not be so. The abandonment of closure may yet leave in place plenty of formal structure amenable to systematic logical treatment. In this paper we describe a family of weak epistemic logics in which closure fails, and describe two alternative semantic frameworks in which these logics can be modelled. One of these—which we term plurality semantics—is relatively unfamiliar. We explore under what conditions plurality frames validate certain much-discussed principles of epistemic logic. It turns out that plurality frames can be interpreted in a very natural way in light of one motivation for rejecting closure, adding to the significance of our technical work. The second framework that we employ—neighbourhood semantics—is much better known. But we show that it too can be interpreted in a way that comports with a certain motivation for rejecting closure.
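For orientation, the better-known of the two frameworks can be stated in a line (the plurality semantics is the paper’s own contribution):

```latex
% Neighbourhood frame: each world w is assigned a family N(w) of sets of
% worlds; a knowledge claim holds at w iff the proposition expressed is
% one of w's neighbourhoods:
\mathcal{M}, w \Vdash K\varphi
  \iff \llbracket \varphi \rrbracket^{\mathcal{M}} \in N(w)
% Unless each N(w) is closed under intersection, multiple-premise
% closure fails:  K\varphi \wedge K\psi  need not yield  K(\varphi \wedge \psi).
```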
Probability is the most important concept in modern science,
especially as nobody has the slightest notion what it means.
—Bertrand Russell, 1929 Lecture
(cited in Bell 1945, 587)
‘The Democrats will probably win the next election.’
‘The coin is just as likely to land heads as tails.’
‘There’s a 30% chance of rain tomorrow.’
‘The probability that a radium atom decays in one year is …’
One regularly reads and hears probabilistic claims like these. But
what do they mean? This may be understood as a metaphysical question
about what kinds of things are probabilities, or more generally as a
question about what makes probability statements true or false.
Until recently, epistemology—the study of knowledge and
justified belief—was heavily individualistic in focus. The
emphasis was on evaluating doxastic attitudes (beliefs and disbeliefs)
of individuals in abstraction from their social environment. Social
epistemology seeks to redress this imbalance by investigating the
epistemic effects of social interactions and social systems. After
giving an introduction, and reviewing the history of the field in
sections 1 and 2, we move on to discuss central topics in social
epistemology in section 3. These include testimony, peer disagreement,
and judgment aggregation, among others.