Various theorists have endorsed the “communication argument”: communicative capacities are necessary for morally responsible agency because blame aims at a distinctive kind of moral communication. I contend that existing versions of the argument, including those defended by Gary Watson and Coleen Macnamara, face a “pluralist challenge”: they do not seem to sit well with the plausible view that blame has multiple aims. I then examine three possible rejoinders to the challenge, suggesting that a context-specific function-based approach constitutes the most promising modification of the communication argument. Keywords: Blame; moral responsibility; communicative theory of responsibility; function of blame.
A counterpossible is a counterfactual with an impossible antecedent. Counterpossibles present a puzzle for standard theories of counterfactuals, which predict that all counterpossibles are semantically vacuous. Moreover, counterpossibles play an important role in many debates within metaphysics and epistemology, including debates over grounding, causation, modality, mathematics, science, and even God. In this article, we will explore various positions on counterpossibles as well as their potential philosophical consequences.
Dynamic Causal Decision Theory (EDC, chs. 7 and 8)
Posted on Thursday, 23 Sep 2021. Pages 201–211 and 226–233 of Evidence, Decision and Causality present two great puzzles showing that CDT appears to invalidate some attractive principles of dynamic rationality. …
The desirable gambles framework offers the most comprehensive foundations for the theory of lower previsions, which in turn affords the most general account of imprecise probabilities. Nevertheless, for all its generality, the theory of lower previsions rests on the notion of linear utility. This commitment to linearity is clearest in the coherence axioms for sets of desirable gambles. This paper considers two routes to relaxing this commitment. The first preserves the additive structure of the desirable gambles framework and the machinery for coherent inference but detaches the interpretation of desirability from the multiplicative scale invariance axiom. The second strays from the additive combination axiom to accommodate repeated gambles that return rewards by a non-stationary process that is not necessarily additive. Unlike the first approach, which is a conservative amendment to the desirable gambles framework, the second is a radical departure. Yet, common to both is a method for describing rewards called discounted utility.
We’re always reading about how the pandemic has created a new emphasis on preprints, so it stands to reason that non-reviewed preposts would now have a place in blogs. Maybe then I’ll “publish” some of the half-baked posts languishing in draft on errorstatistics.com. …
When is it legitimate for a government to ‘nudge’ its citizens, in the sense described by Richard Thaler and Cass Sunstein (2008)? In their original work on the topic, Thaler and Sunstein developed the ‘as judged by themselves’ (or AJBT) test to answer this question (Thaler & Sunstein, 2008, 5). In a recent paper, L. A. Paul and Sunstein (ms) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a personally transformative experience, that is, one that results in our values changing (Paul, 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my aggregate utility account of how to choose in the face of what Edna Ullmann-Margalit (2006) calls big decisions, that is, decisions that lead to these personally transformative experiences (Pettigrew, 2019, Chapters 6 and 7).
In Fischer and Sytsma (2021) we put forward a bold hypothesis: the zombie argument against materialism is built on zombie intuitions – intuitions that are ‘killed’ (cancelled) by the context provided but kept cognitively alive by linguistic salience bias. We then provided evidence from corpus studies as well as surveys and experiments with typicality, plausibility, and agreement ratings to support this hypothesis. The four commentators have provided helpful and thought-provoking objections, in particular to our main experiment, that point to new hypotheses. Here, we’ll respond to the principal points our commentators raise, focusing on the new hypotheses and how they might be tested. We briefly summarise the target article in Sect. 1, with a focus on the aspects targeted by commentators. Sect. 2 discusses the primary objections Chalmers and Liu raised, namely, to the experimental materials we used, and spells out the competing hypotheses their objections motivate. Sect. 3 reports a follow-up study that examined these hypotheses. In Sect. 4, we turn to further concerns about the main experiment’s materials and procedure, raised by Frankish and Machery. In conversation with these two commentators, the final Sect. 5 brings out the need for empirical investigation of laypeople’s intuitions about philosophical zombies (and other ‘problem intuitions’ motivating the ‘hard problem of consciousness’) and highlights what is new and important about our ambitious ‘aetiological strategy’ that seeks to develop and assess debunking explanations of intuitions.
The Precautionary Principle is typically construed as a conservative decision rule aimed at preventing harm. But Martin Peterson (JME 33: 5–10, 2007; The ethics of technology: A geometric analysis of five moral principles, Oxford University Press, Oxford, 2017) has argued that the principle is better understood as an epistemic rule, guiding decision-makers in forming beliefs rather than choosing among possible acts. On the epistemic view, he claims there is a principle concerning expert disagreement underlying precautionary-based reasoning called the ecumenical principle: all expert views should be considered in a precautionary appraisal, not just those that are the most prominent or influential. In articulating the doxastic commitments of decision-makers under this constraint, Peterson precludes any probabilistic rule that might result in combining expert opinions. For combined or consensus probabilities are likely to provide decision-makers with information that is more precise than warranted. Contra Peterson, I argue that upon adopting a broader conception of probability, there is a probabilistic rule, under which expert opinions are combined, that is immune to his criticism and better represents the ecumenical principle.
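The contrast between combined probabilities and a more cautious, set-valued representation can be illustrated with a toy example. This is only a sketch of standard linear opinion pooling versus keeping the full spread of expert opinion; the expert probabilities and equal weights are made-up assumptions, not Peterson's rule or the rule the paper defends.

```python
# Illustrative only: hypothetical expert probabilities for some harm occurring.
experts = [0.2, 0.5, 0.9]
weights = [1 / 3, 1 / 3, 1 / 3]   # equal weights, an assumption

# Linear pooling collapses disagreement into a single sharp number,
# which may be more precise than the evidence warrants.
pooled = sum(w * p for w, p in zip(weights, experts))

# A set-valued (imprecise) representation keeps the full spread instead,
# one way of avoiding spurious precision.
imprecise = (min(experts), max(experts))

print(pooled)      # a single sharp probability near the middle
print(imprecise)   # the full range of expert opinion
```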
Decision theory requires agents to assign probabilities to states of the world and utilities to the possible outcomes of different actions. When agents commit to having the probabilities and/or utilities in a decision problem defined by objective features of the world, they may find themselves unable to decide which actions maximize expected utility. Decision theory has long recognized that workaround strategies are available in special cases; this is where dominance reasoning, minimax, and maximin play a role. Here we describe a different workaround, wherein a rational decision about one decision problem can be reached by “interpolating” information from another problem that the agent believes has already been rationally solved.
Preference Reflection (EDC, ch.7, part 2)
Posted on Monday, 20 Sep 2021. Why should you take both boxes in Newcomb's Problem? The simplest argument is that you are then guaranteed to get $1000 more than what you would get if you took one box. …
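The dominance claim behind that argument can be checked with a toy payoff table. This sketch assumes the standard amounts ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent box); it simply verifies that two-boxing beats one-boxing by the same margin in each predictor state.

```python
# Toy payoff table for Newcomb's Problem (standard amounts assumed).
OPAQUE = {"predicted_one_box": 1_000_000, "predicted_two_box": 0}
TRANSPARENT = 1_000

differences = {}
for state, contents in OPAQUE.items():
    one_box = contents                # take only the opaque box
    two_box = contents + TRANSPARENT  # take both boxes
    differences[state] = two_box - one_box

print(differences)  # two-boxing is better by exactly $1000 in each state
```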
Evidence E is misleading with regard to a hypothesis H provided that Bayesian update on E changes one’s credence in H in the direction opposed to truth. It is known that pretty much any evidence is misleading with regard to some hypothesis or other. …
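The definition can be made concrete with a toy Bayesian update (the numbers here are made up for illustration): suppose H is in fact true, yet E is more likely given not-H, so conditioning on E lowers one's credence in the true hypothesis.

```python
# Toy illustration of misleading evidence. Assume H is in fact true.
prior_H = 0.5
p_E_given_H = 0.2      # E is unlikely if H is true
p_E_given_notH = 0.8   # E is likely if H is false

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
posterior_H = (p_E_given_H * prior_H) / (
    p_E_given_H * prior_H + p_E_given_notH * (1 - prior_H)
)

print(posterior_H)  # credence in the true hypothesis H has gone down,
                    # so E is misleading with regard to H
```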
Ephecticism is the tendency towards suspension of belief. Epistemology often focuses on the error of believing when one ought to doubt. The converse error—doubting when one ought to believe—is relatively underexplored. This essay examines the errors of undue doubt. I draw on the relevant alternatives framework to diagnose and remedy undue doubts about rape accusations. Doubters tend to invoke standards for belief that are too demanding, for example, and underestimate how farfetched uneliminated error possibilities are. They mistake seeing how incriminating evidence is compatible with innocence for a reason to withhold judgement. Rape accusations help illuminate the causes and normativity of doubt. I propose a novel kind of epistemic injustice, for example, wherein patterns of unwarranted attention to farfetched error possibilities can cause those error possibilities to become relevant. Widespread unreasonable doubt thus renders doubt reasonable and makes it harder to know that rape accusations are true. Finally, I emphasise that doubt is often a conservative force and I argue that the relevant alternatives framework helps defend against pernicious doubt-mongers.
We consider a learning agent in a partially observable environment, with which the agent has never interacted before, and about which it learns both what it can observe and how its actions affect the environment. The agent can learn about this domain from experience gathered by taking actions in the domain and observing their results. We present learning algorithms capable of learning as much as possible (in a well-defined sense) both about what is directly observable and about what actions do in the domain, given the learner’s observational constraints. We differentiate the level of domain knowledge attained by each algorithm, and characterize the type of observations required to reach it. The algorithms use dynamic epistemic logic (DEL) to represent the learned domain information symbolically. Our work continues that of Bolander and Gierasimczuk (2015), which developed DEL-based algorithms for learning domain information in fully observable domains.
The historically-influential perceptual analogy states that intuitions and perceptual experiences are alike in many important respects. Phenomenalists defend a particular reading of this analogy according to which intuitions and perceptual experiences share a common phenomenal character. Call this the 'phenomenalist thesis'. The phenomenalist thesis has proven highly influential in recent years. However, insufficient attention has been given to the challenges it raises for theories of intuition. In this paper, I first develop one such challenge. I argue that if we take the idea that intuitions and perceptual experiences have a common phenomenal character seriously, then a version of the familiar problem of perceptual presence arises for intuitions. I call this the 'problem of intuitive presence'. In the second part of the paper I sketch a novel enactivist solution to this problem.
Why ain'cha rich? (EDC, ch.7, part 1)
Posted on Friday, 17 Sep 2021. Topic: decision theory
Chapter 7 of Evidence, Decision and Causality looks at arguments for one-boxing or two-boxing in Newcomb's Problem. …
Traditionally, logic has been the dominant formal method within philosophy. Are logical methods still dominant today, or have the types of formal methods used in philosophy changed in recent times? To address this question, we coded a sample of philosophy papers from the late 2000s and from the late 2010s for the formal methods they used. The results indicate that (a) the proportion of papers using logical methods remained more or less constant over that time period but (b) the proportion of papers using probabilistic methods was approximately three times higher in the late 2010s than it was in the late 2000s. Further analyses explored this change by looking more closely at specific methods, specific levels of technical engagement, and specific subdisciplines within philosophy. These analyses indicate that the increasing proportion of papers using probabilistic methods was pervasive, not confined to particular probabilistic methods, levels of sophistication, or subdisciplines.
As with most topics in philosophy, there is no consensus about what experimental philosophy is. Most broadly, experimental philosophy involves using scientific methods to collect empirical data for the purpose of casting light on philosophical issues. Such a definition threatens to be too broad, however: Taking the nature of matter to be a philosophical issue, research at the Large Hadron Collider would count as experimental philosophy. Others have suggested more narrow definitions, characterizing experimental philosophy in terms of the use of scientific methods to investigate intuitions. This threatens to be too narrow, however, excluding such work as Eric Schwitzgebel’s comparison of the rates of theft of ethics books to similar volumes from other areas of philosophy for the purpose of finding out whether philosophical training in ethics promotes moral behavior. While restricting experimental philosophy to the study of intuitions is too narrow, this nonetheless covers most of the research in this area. Focusing on this research, we begin by discussing some of the methods that have been used by experimental philosophers. We then distinguish between three types of goals that have guided experimental philosophers, illustrating these goals with some examples.
Eugen Fischer and colleagues expand on a body of empirical work offering a debunking explanation of a key assumption involved in the argument from illusion. Following Snowden (1992), we can distinguish between the base case and the spreading step in the argument. Fischer et al. target the base case. In the most prominent current versions of the argument, the key move in the base case involves the phenomenal principle (Robinson, 1994, 32): “If there sensibly appears to a subject to be something which possesses a particular sensible quality then there is something of which the subject is aware which does possess that sensible quality.” In brief, Fischer et al. contend that the move here from a seemingly uncontroversial claim such as “the coin appears elliptical to me” to there being something of which the subject is aware that is elliptical requires that the initial claim be given a “literal interpretation” such that something elliptical has appeared to the subject. But they contend that under such an interpretation the claim should no longer be taken to be uncontroversial, assuming too much of what the argument needs to establish. And they argue that much of the intuitive appeal of this move can be explained in terms of accepting the claim based on the dominant usage of appearance verbs (e.g., I think the coin is elliptical), then shifting to the less salient phenomenal usage required for the conclusion. Fischer et al. then present the results of a series of nifty new studies in cross-cultural psycholinguistics to support the conclusion that people make stereotypical inferences warranted by the dominant sense of appearance verbs, even in contexts where this dominant sense is inappropriate.
Eugen Fischer and John Collins have brought together an impressive, and important, series of essays concerning the methodological debates between rationalists and naturalists, and how these debates have been impacted by work in experimental philosophy. The work at issue concerns the evidential value of intuitions, and as such is only a small part of the experimental philosophy corpus as I understand it. In fact, Fischer and Collins define experimental philosophy in this narrow sense in their introduction. On their view, experimental philosophy “builds on the assumption that, for better or worse, intuitions are crucially involved in philosophical work” (3). The parenthetical serves to emphasize that such work could either be pursued from a positive perspective aiming to vindicate the use of intuitions in philosophy or from a negative perspective aiming to undermine that use. Noting these two perspectives, it might then seem that experimental philosophy is neutral with regard to methodological debate: “experimental philosophy is not a party to the dispute between methodological rationalism and naturalism, but offers a new framework for settling it” (23).
In the long run, the development of artificial intelligence (AI) is likely to be one of the biggest technological revolutions in human history. Even in the short run it will present tremendous challenges as well as tremendous opportunities. The more we do now to think through these complex challenges and opportunities, the better the prospects for the kind of outcomes we all hope for, for ourselves, our children, and our planet.
Writing comments on a post about adversarial collaboration feels like a place where I should be adversarial (if in a collaborative spirit). But I agree with basically everything Eric says here. Frankly, this is all spot on. You probably don’t want to read 500 words from me just saying “yep, this” and agreeing with his excellent, sensible advice, though. So, let me attempt to be provocative: Eric doesn’t go far enough! (Not that he was trying to, of course.) All philosophers should be asking themselves what empirical evidence would actually test their views. Collaboration should be the rule, not the exception. And we should expect collaborations to have an adversarial element, treating this as a feature, not a bug.
Sometimes, learning about the origins of a belief can make it irrational to continue to hold that belief—a phenomenon we call ‘genealogical defeat’. According to explanationist accounts, genealogical defeat occurs when one learns that there is no appropriate explanatory connection between one’s belief and the truth. Flatfooted versions of explanationism have been widely and rightly rejected on the grounds that they would disallow beliefs about the future and other inductively-formed beliefs. After motivating the need for some explanationist account, we raise some problems for recent versions of explanationism. Learning from their failures, we then produce and defend a more resilient explanationism.
This paper defends the view, put roughly, that to think that p is to guess that p is the answer to the question at hand, and that to think that p rationally is for one’s guess to that question to be in a certain sense non-arbitrary. Some theses that will be argued for along the way include: that thinking is question-sensitive and, correspondingly, that ‘thinks’ is context-sensitive; that it can be rational to think that p while having arbitrarily low credence that p; that, nonetheless, rational thinking is closed under entailment; that thinking does not supervene on credence; and that in many cases what one thinks on certain matters is, in a very literal sense, a choice. Finally, since there are strong reasons to believe that thinking just is believing, there are strong reasons to think that all this goes for belief as well.
Betting on collapse (EDC, ch.6)
Posted on Wednesday, 15 Sep 2021. Topic: decision theory
Chapter 6 of Evidence, Decision and Causality presents another alleged counterexample to CDT, involving a bet on the measurement of entangled particles. …
This paper is about two requirements on wish reports whose interaction motivates a novel semantics for these ascriptions. The first requirement concerns the ambiguities that arise when determiner phrases, e.g. definite descriptions, interact with ‘wish’. More specifically, several theorists have recently argued that attitude ascriptions featuring counterfactual attitude verbs license interpretations on which the determiner phrase is interpreted relative to the subject’s beliefs. The second requirement involves the fact that desire reports in general require decision-theoretic notions for their analysis. The current study is motivated by the fact that no existing account captures both of these aspects of wishing. I develop a semantics for wish reports that makes available belief-relative readings but also allows decision-theoretic notions to play a role in shaping the truth conditions of these ascriptions. The general idea is that we can analyze wishing in terms of a two-dimensional notion of expected utility.
Most authors who discuss willpower assume that everyone knows what it is, but our assumptions differ to such an extent that we talk past each other. We agree that willpower is the psychological function that resists temptations – variously known as impulses, addictions, or bad habits; that it operates simultaneously with temptations, without prior commitment; and that use of it is limited by its cost, commonly called effort, as well as by the person’s skill at executive functioning. However, accounts are usually not clear about how motivation functions during the application of willpower, or how motivation is related to effort. Some accounts depict willpower as the perceiving or formation of motivational contingencies that outweigh the temptation, and some depict it as a continuous use of mechanisms that interfere with reweighing the temptation. Some others now suggest that impulse control can bypass motivation altogether, although they refer to this route as habit rather than willpower.
The pattern of implicatures of modified numeral ‘more than n’ depends on the roundness of n. Cummins, Sauerland, and Solt (2012) present experimental evidence for the relation between roundness and implicature patterns, and propose a pragmatic account of the phenomenon. More recently, Hesse and Benz (2020) present more extensive evidence showing that implicatures also depend on the magnitude of n and propose a novel explanation based on the Approximate Number System (Dehaene, 1999). Despite the wealth of experimental data, no formal account has yet been proposed to characterize the full posterior distribution over numbers of a listener after hearing ‘more than n’. We develop one such account within the Rational Speech Act framework, quantitatively reconstructing the pragmatic reasoning of a rational listener. We show that our pragmatic account correctly predicts various features of the experimental data.
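The generic RSA pipeline the authors build on can be sketched in a few lines. This is a minimal toy model, not the paper's actual account: the uniform prior over states 1–10, the set of alternative utterances ‘more than n’, and the speaker rationality parameter alpha are all illustrative assumptions.

```python
import math

states = list(range(1, 11))      # hypothetical domain: the numbers 1..10
utterances = [5, 6, 7, 8, 9]     # 'more than n' for these n (assumed alternatives)
prior = {s: 1 / len(states) for s in states}
alpha = 1.0                      # speaker rationality parameter (assumed)

def meaning(n, s):
    # literal semantics of 'more than n'
    return s > n

def L0(u):
    # literal listener: prior restricted to states where the utterance is true
    mass = {s: prior[s] * meaning(u, s) for s in states}
    Z = sum(mass.values())
    return {s: m / Z for s, m in mass.items()}

def S1(s):
    # pragmatic speaker: softmax over utterances, scored by literal-listener accuracy
    scores = {}
    for u in utterances:
        l0 = L0(u).get(s, 0.0)
        scores[u] = math.exp(alpha * math.log(l0)) if l0 > 0 else 0.0
    Z = sum(scores.values())
    return {u: v / Z for u, v in scores.items()} if Z else {}

def L1(u):
    # pragmatic listener: Bayesian inversion of the speaker model
    mass = {s: prior[s] * S1(s).get(u, 0.0) for s in states}
    Z = sum(mass.values())
    return {s: m / Z for s, m in mass.items()}

post = L1(5)  # posterior over numbers after hearing 'more than 5'
```

In this toy model the pragmatic listener's posterior concentrates just above n (here, on 6), since larger numbers would have licensed a stronger alternative utterance; the paper's model, fit to the experimental data, is of course richer than this.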
We were slightly concerned, upon having read Eric Winsberg, Jason Brennan and Chris Surprenant’s reply to our paper “Were Lockdowns Justified? A Return to the Facts and Evidence”, that they may have fundamentally misunderstood the nature of our argument, so we issue the following clarification, along with a comment on our motivations for writing such a piece, for the interested reader.
Jacinta hears the doorbell ring and, as a result, she comes to believe that Amit is home. There are different ways for this belief to come about. It might be the result of sheer habit, or even a direct causal effect of the stimulation of Jacinta’s auditory nerves (Jacinta might have some strange brain wiring). Alternatively, however, it might be that Jacinta infers or reasons that Amit is home. In that case, Jacinta’s belief that Amit is home is not (or not merely) a causal effect of her belief that the doorbell is ringing, but rather it is rationally based on it: Jacinta now believes that Amit is home on the grounds that, or for the reason that, the doorbell is ringing. Furthermore, on natural ways of filling in the story, this is a way for Jacinta to justifiably believe, and even know, that Amit is home.
Fixing the Past and the Laws (EDC, ch.5)
Posted on Monday, 13 Sep 2021
Chapter 5 of Evidence, Decision and Causality presents a powerful challenge to CDT (drawing on Ahmed (2013) and Ahmed (2014)). Imagine you have strong evidence that a certain deterministic system S is the true system of laws in our universe. …