While the American pragmatist C. S. Peirce and the twelfth-century Confucian thinker Zhu Xi (朱熹) lived and worked in radically different contexts, there are nevertheless striking parallels in their views of knowledge and inquiry. Both reject the strict separation of theoretical and practical knowledge, conceiving of theoretical inquiry in a way that closely parallels practical reasoning, and both appeal to the fundamental nature of reality in order to draw conclusions about how inquiry can be a component of the path towards moral perfection. Yet they prominently diverge not only in their accounts of the fundamental nature of reality, but also in their accounts of the way in which we have epistemic access to it. These connections between metaphysical fundamentality or structure and epistemology, I propose, have the potential to illuminate current discussions about fundamentality in metaphysics. Contemporary approaches that appeal either to grounding relations or to joint-carving ideology in characterizing metaphysical structure implicitly rest on distinct sets of epistemological presuppositions that resemble the respective views of Zhu Xi and Peirce.
Various theorists have endorsed the “communication argument”: communicative capacities are necessary for morally responsible agency because blame aims at a distinctive kind of moral communication. I contend that existing versions of the argument, including those defended by Gary Watson and Coleen Macnamara, face a “pluralist challenge”: they do not seem to sit well with the plausible view that blame has multiple aims. I then examine three possible rejoinders to the challenge, suggesting that a context-specific function-based approach constitutes the most promising modification of the communication argument. Keywords: Blame; moral responsibility; communicative theory of responsibility; function of blame.
A counterpossible is a counterfactual with an impossible antecedent. Counterpossibles present a puzzle for standard theories of counterfactuals, which predict that all counterpossibles are semantically vacuous. Moreover, counterpossibles play an important role in many debates within metaphysics and epistemology, including debates over grounding, causation, modality, mathematics, science, and even God. In this article, we will explore various positions on counterpossibles as well as their potential philosophical consequences.
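The vacuity prediction can be made explicit. On the standard Lewis–Stalnaker semantics (a textbook statement, not drawn from the article itself), a counterfactual is true at a world just in case all the closest antecedent-worlds are consequent-worlds:

```latex
% Lewis–Stalnaker truth conditions (\llbracket needs the stmaryrd package):
\[
  w \Vdash A \mathbin{\Box\!\!\to} C
  \quad\text{iff}\quad
  \min\nolimits_{\le_w} \llbracket A \rrbracket \subseteq \llbracket C \rrbracket .
\]
% When A is impossible, \llbracket A \rrbracket = \emptyset, so the
% inclusion holds vacuously: every counterpossible is predicted true.
```

Since an impossible antecedent is true at no worlds, the inclusion is trivially satisfied, which is why these theories predict that every counterpossible is vacuously true.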
Sentences about logic are often used to show that certain embedding expressions (attitude verbs, conditionals, etc.) are hyperintensional. Yet it is not clear how to regiment “logic talk” in the object language so that it can be compositionally embedded under such expressions. In this paper, I develop a formal system called hyperlogic that is designed to do just that. I provide a hyperintensional semantics for hyperlogic that doesn’t appeal to logically impossible worlds, as traditionally understood, but instead uses a shiftable parameter that determines the interpretation of the logical connectives. I argue this semantics compares favorably to the more common impossible worlds semantics, which faces difficulties interpreting propositionally quantified logic talk.
Dynamic Causal Decision Theory (EDC, chs. 7 and 8)
Posted on Thursday, 23 Sep 2021. Pages 201–211 and 226–233 of Evidence, Decision and Causality present two great puzzles showing that CDT appears to invalidate some attractive principles of dynamic rationality. …
Nietzsche characterizes the Third Essay of On the Genealogy of Morality as “offer[ing] the answer to the question whence the ascetic ideal […] derives its tremendous power although it is the harmful ideal par excellence” (EH GM). What draws people to ideals of self-denial and self-punishment? In short, I will argue, according to Nietzsche, the same as what draws many to physical self-harm: to stop feeling like you’re going to burst out of your skin. “The ascetic ideal,” in Nietzsche’s sense, is an ideal of categorically denying certain desires (instincts, impulses, etc.). What distinguishes this type of ideal is a certain “valuation[al]” (GM III:11) stance — a stance of condemnation (demonization, mistrust) of certain of one’s desires, and correspondingly of oneself for having them or “giving in” to them (cf. III:8, 10). Perhaps one feels a pang of guilt at the first glimpse of ill-will or unforgiveness in oneself. Or one is moved in confession to include simply that one “felt lust,” jealousy, anger. Merely having the desire is treated as problematic, something to feel bad about, reason for punishment.
Yet we know from syntax and crosslinguistic work that conditionals can also be formed with ‘if’-clauses that modify the verb (‘V if S’), as in (2), or a noun (‘N if S’), as in (4). Tests such as the VP-ellipsis and Condition C data in (3), and the coordination and island data in (5), confirm that the ‘if’-clause is a constituent of the verb phrase and noun phrase, respectively, rather than scoping over the rest of the sentence (e.g., Lasersohn 1996, Bhatt & Pancheva 2006).
The desirable gambles framework offers the most comprehensive foundations for the theory of lower previsions, which in turn affords the most general account of imprecise probabilities. Nevertheless, for all its generality, the theory of lower previsions rests on the notion of linear utility. This commitment to linearity is clearest in the coherence axioms for sets of desirable gambles. This paper considers two routes to relaxing this commitment. The first preserves the additive structure of the desirable gambles framework and the machinery for coherent inference but detaches the interpretation of desirability from the multiplicative scale invariance axiom. The second strays from the additive combination axiom to accommodate repeated gambles that return rewards by a non-stationary process that is not necessarily additive. Unlike the first approach, which is a conservative amendment to the desirable gambles framework, the second is a radical departure. Yet, common to both is a method for describing rewards called discounted utility.
We’re always reading about how the pandemic has created a new emphasis on preprints, so it stands to reason that non-reviewed preposts would now have a place in blogs. Maybe then I’ll “publish” some of the half-baked posts languishing in draft on errorstatistics.com. …
When is it legitimate for a government to ‘nudge’ its citizens, in the sense described by Richard Thaler and Cass Sunstein (2008)? In their original work on the topic, Thaler and Sunstein developed the ‘as judged by themselves’ (or AJBT) test to answer this question (Thaler & Sunstein, 2008, 5). In a recent paper, L. A. Paul and Sunstein (ms) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a personally transformative experience, that is, one that results in our values changing (Paul, 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my aggregate utility account of how to choose in the face of what Edna Ullmann-Margalit (2006) calls big decisions, that is, decisions that lead to these personally transformative experiences (Pettigrew, 2019, Chapters 6 and 7).
Philippa Foot famously distinguishes between two senses in which a particular norm, request, or demand can be “categorical”. In the first sense, a categorical demand is one that applies to a person regardless of his or her aims or interests.1 In this sense, demands of morality are categorical. But so are the demands of etiquette, club rules, rules of feudal obedience, and so on.2 In the second sense, demands are categorical to the extent that they don’t just apply to someone, but generate normative reasons for action.3 In this second sense one might say that demands of morality, unlike those of ancillary domains, are categorical: moral demands are normative.
Here is a plausible thesis:
(1) Consciousness of one’s choice is necessary for moral responsibility.
I go back and forth on (1). Here is a closely related thesis that is false:
(2) Knowledge of one’s choice is necessary for moral responsibility. …
Suppose a digital computer can have phenomenal states in virtue of its computational states. Now, in a digital computer, many possible physical states can realize one computational state. Typically, removing a single atom from a computer will not change the computational state, so both the physical state with the atom and the one without the atom realize the same computational state, and in particular they both have the same precise phenomenal state. …
COVID-19 has substantially affected our lives during 2020. Since its beginning, several epidemiological models have been developed to investigate the specific dynamics of the disease. Early COVID-19 epidemiological models were purely statistical, based on a curve-fitting approach, and did not include causal knowledge about the disease. Yet, these models had predictive capacity; thus they were used to ground important political decisions, in virtue of the understanding of the dynamics of the pandemic that they offered. This raises a philosophical question about whether purely statistical models can yield understanding and, if so, what the relationship between prediction and understanding in these models is. Drawing on the model that was developed by the Institute for Health Metrics and Evaluation, we argue that early epidemiological models yielded a modality of understanding that we call descriptive understanding, which contrasts with the so-called explanatory understanding that is often assumed to be the only form of scientific understanding.
In Fischer and Sytsma (2021) we put forward a bold hypothesis: the zombie argument against materialism is built on zombie intuitions – intuitions that are ‘killed’ (cancelled) by the context provided but kept cognitively alive by linguistic salience bias. We then provided evidence from corpus studies as well as surveys and experiments with typicality, plausibility, and agreement ratings to support this hypothesis. The four commentators have provided helpful and thought-provoking objections, in particular to our main experiment, that point to new hypotheses. Here, we’ll respond to the principal points our commentators raise, focusing on the new hypotheses and how they might be tested. We briefly summarise the target article in Sect. 1, with a focus on the aspects targeted by commentators. Sect. 2 discusses the primary objections Chalmers and Liu raised, namely, to the experimental materials we used, and spells out the competing hypotheses their objections motivate. Sect. 3 reports a follow-up study that examined these hypotheses. In Sect. 4, we turn to further concerns about the main experiment’s materials and procedure, raised by Frankish and Machery. In conversation with these two commentators, the final Sect. 5 brings out the need for empirical investigation of laypeople’s intuitions about philosophical zombies (and other ‘problem intuitions’ motivating the ‘hard problem of consciousness’) and highlights what is new and important about our ambitious ‘aetiological strategy’ that seeks to develop and assess debunking explanations of intuitions.
The debate between ΛCDM and MOND is often cast in terms of competing gravitational theories. However, recent philosophical discussion suggests that the ΛCDM–MOND debate demonstrates the challenges of multiscale modeling in the context of cosmological scales. I extend this discussion and explore what happens when the debate is thought to be about modeling rather than about theory, offering a model-focused interpretation of the ΛCDM–MOND debate. This analysis shows how a model-focused interpretation of the debate provides a better understanding of challenges associated with extension to a different scale or domain, which are tied to commitments about explanatory fit.
This is terrible journalism: While [donating $1 billion to protect forests] is certainly notable, Bezos’s commitment to protecting the environment serves as a stark reminder that much of his legacy and largely untaxed fortune was built by companies that have staggering carbon footprints. …
Say that a functional property F is pain-like provided that a human is in pain if and only if the human has F.
Assuming functionalism, there is a functional property F0 which is pain. Property F0 will be pain-like, but it won’t be the only pain-like property. …
When people combine concepts, the results are often characterised as “hybrid”, “impossible”, or “humorous”. However, when simply considered in terms of extensional logic, the novel concepts, understood as conjunctive concepts, will often lack meaning, having an empty extension (consider “a tooth that is a chair”, “a pet flower”, etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior ‘asymmetric’ control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
I am one of those people who do not have vivid memories of pains. Suppose I stub my toe. While the toe is hurting, I know what the toe’s hurting feels like. After it stops hurting, for a while I still know what that felt like. …
Anicius Manlius Severinus Boethius (born: circa 475–7 C.E.,
died: 526? C.E.) has long been recognized as one of the most important
intermediaries between ancient philosophy and the Latin Middle Ages
and, through his Consolation of Philosophy, as a talented
literary writer, with a gift for making philosophical ideas dramatic
and accessible to a wider public. He had previously translated
Aristotle’s logical works into Latin, written commentaries on
them as well as logical textbooks, and used his logical training to
contribute to the theological discussions of the time. All these
writings, which would be enormously influential in the Middle Ages,
drew extensively on the thinking of Greek Neoplatonists such as
Porphyry and Iamblichus.
Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious. Then we'll need to decide what to do with those robots -- what kind of rights, if any, to give them. …
This paper aims to shed light on the relation between Boltzmannian statistical mechanics and Gibbsian statistical mechanics by studying the Mechanical Averaging Principle, which says that, under certain conditions, Boltzmannian equilibrium values and Gibbsian phase averages are approximately equal. What are these conditions? We identify three conditions each of which is individually sufficient (but not necessary) for Boltzmannian equilibrium values to be approximately equal to Gibbsian phase averages: the Khinchin condition, and two conditions that result from two new theorems, the Average Equivalence Theorem and the Cancelling Out Theorem. These conditions are not trivially satisfied, and there are core models of statistical mechanics, the six-vertex model and the Ising model, in which they fail.
The relational interpretation (or RQM, for Relational Quantum Mechanics) solves the measurement problem by considering an ontology of sparse relative facts. Facts are realized in interactions between any two physical systems and are relative to these systems. RQM’s technical core is the realisation that quantum transition amplitudes determine physical probabilities only when their arguments are facts relative to the same system. The relativity of facts can be neglected in the approximation where decoherence hides interference, thus making facts approximately stable.
Dupré and Nicholson (2018) defend the metaphysical thesis that the ‘living world’ is not composed of things or substances, as traditionally believed, but of processes. They advocate a process (as opposed to a substance) metaphysics and ontology, which they argue is more empirically adequate to what contemporary biology suggests.
The Precautionary Principle is typically construed as a conservative decision rule aimed at preventing harm. But Martin Peterson (JME 33: 5–10, 2007; The ethics of technology: A geometric analysis of five moral principles, Oxford University Press, Oxford, 2017) has argued that the principle is better understood as an epistemic rule, guiding decision-makers in forming beliefs rather than choosing among possible acts. On the epistemic view, he claims, precautionary reasoning rests on a principle concerning expert disagreement, the ecumenical principle: all expert views should be considered in a precautionary appraisal, not just those that are the most prominent or influential. In articulating the doxastic commitments of decision-makers under this constraint, Peterson precludes any probabilistic rule that might result in combining expert opinions. For combined or consensus probabilities are likely to provide decision-makers with information that is more precise than warranted. Contra Peterson, I argue that upon adopting a broader conception of probability, there is a probabilistic rule, under which expert opinions are combined, that is immune to his criticism and better represents the ecumenical principle.
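As a concrete illustration (not taken from Peterson or the article; the function names are hypothetical): the kind of combination rule Peterson worries about is exemplified by linear opinion pooling, which collapses expert opinions into one precise probability, whereas a set-valued summary in the spirit of imprecise probability keeps the whole spread of expert views in play.

```python
# Illustrative sketch: two ways to summarise expert probabilities
# for some hypothesis of harm. Names and setup are hypothetical.

def linear_pool(probs, weights=None):
    """Linear opinion pooling: a weighted average of expert probabilities.
    Returns a single precise number -- the kind of over-precise consensus
    that Peterson's criticism targets."""
    if weights is None:
        weights = [1 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))

def interval_pool(probs):
    """A set-valued alternative: report the envelope (lower and upper
    probability) of the expert opinions, keeping every expert view in
    play, in the spirit of the ecumenical principle."""
    return (min(probs), max(probs))

experts = [0.2, 0.5, 0.9]           # three experts' probabilities of a harm
precise = linear_pool(experts)      # one precise consensus value
imprecise = interval_pool(experts)  # a lower/upper probability pair
```

On a broader, imprecise conception of probability, the interval summary reports exactly the spread of expert opinion, and so cannot be more precise than the evidence warrants.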
We propose that measures of information integration can be more straightforwardly interpreted as measures of agency rather than of consciousness. This may be useful to the goals of consciousness research, given how agency and consciousness are “duals” in many (though not all) respects.
The notion of growth is one of the most studied notions within economic theory and, traditionally, it is accounted for on the basis of a positivist thesis according to which assumptions are not relevant, as long as economic models have acceptable predictive power. Following this view, it does not matter whether assumptions are realistic or not. Arguments against this principle may involve a defense of realistic assumptions over highly idealized or false ones. This article aims in a different direction. Instead of demanding more realism, we can accept the spirit of the mentioned thesis while criticizing the circularity that may arise from combining different assumptions that are necessary for the explanation of economic growth in mainstream economics. Such circularity is a key aspect of the well-known problem of providing microfoundations for macroeconomic properties. It is here suggested that the notion of emergence could be appropriate to arrive at a better understanding of growth, clarifying the issues related to circularity, but without totally rejecting the usefulness of unrealistic assumptions.
We’re pleased to introduce our latest symposium discussing “Zombie intuitions”, by Eugen Fischer (University of East Anglia) and Justin Sytsma (Victoria University of Wellington), with commentaries by David Chalmers (NYU), Keith Frankish (Sheffield), Michelle Liu (Hertfordshire), and Edouard Machery (Pittsburgh). …
In this essay, I suggest that Spinoza acknowledges a distinction between formal reality that is infinite and timelessly eternal and formal reality that is non-infinite (i.e., finite or indefinite) and non-eternal (i.e., enduring). I also argue that if, in Spinoza’s system, only intelligible causation is genuine causation, then infinite, timelessly eternal formal reality cannot cause non-infinite, non-eternal formal reality. A denial of eternal-durational causation generates a puzzle, however: if no enduring thing – not even the sempiternal, indefinite individual composed of all finite, enduring things – is caused by the infinite, eternal substance, then how can Spinoza consistently hold that the one infinite, eternal substance is the cause of all things and that all things are modes of that substance? At the end of this essay, I sketch how Spinoza could deny eternal-durational causation while still holding that an infinite, eternal God is the cause of all things and that all things are modes. I develop the interpretation more in the companion essay.1