-
I argue that moral dialogue concerning an agent’s standing to blame facilitates moral understanding about the purported wrongdoing that her blame targets. Challenges to a blamer’s standing serve a communicative function: they initiate dialogue or reflection meant to align the moral understanding of the blamer and challenger. On standard accounts of standing to blame, challenges to standing facilitate shared moral understanding about the blamer herself: it matters per se whether the blamer has a stake in the purported wrongdoing at issue, is blaming hypocritically, or is complicit in the wrongdoing at issue. In contrast, I argue that three widely recognized conditions on standing to blame—the business, non-hypocrisy, and non-complicity conditions—serve as epistemically tractable proxies through which we evaluate the accuracy and proportionality of blame. Standing matters because, and to the extent that, it indirectly informs our understanding of the purported wrongdoing that an act of blaming targets.
-
I present an argument that undermines the standardly held view that chemical substances are natural kinds. This argument is based on examining the properties required to pick out members of these purported kinds. In particular, for a sample to be identified as, say, a member of the kind water, it has to be stable in the chemical sense. However, the property of stability is artificially determined within chemical practice. This undermines the kindhood of substances, as they fail to satisfy one of two key requirements: namely, that they are picked out by (some) natural properties and that they are categorically distinct. This is a problem specifically for the natural realist interpretation of kinds. I discuss whether there are other ways to conceive of kinds in order to overcome it.
-
In his 1997 paper “Technology and Complexity”, Dasgupta draws a distinction between systematic and epistemic complexity. Entities are called systematically complex when they are composed of a large number of parts that interact in complicated ways. This means that even if one knows the properties of the parts, one may not be able to infer the behaviour of the system as a whole. In contrast, epistemic complexity refers to the knowledge that is used in, or generated by, the making of an artefact and is embodied in it. Interestingly, a high level of systematic complexity does not entail a high level of epistemic complexity, and vice versa.
-
What distinguishes genuine intelligence from sophisticated simulation? This paper argues that the answer lies in symbolic coherence—the structural capacity to interpret information, revise commitments, and maintain continuity of reasoning across contradiction. Current AI systems generate fluent outputs while lacking mechanisms to track their own symbolic commitments or resolve contradictions through norm-guided revision. The paper proposes F(S), a structural identity condition requiring interpretive embedding, reflexive situatedness, and internal normativity. This condition is substrate-neutral and applies to both biological and artificial systems. Unlike behavioral benchmarks, F(S) offers criteria for participation in symbolic reasoning rather than surface-level imitation. To demonstrate implementability, the paper presents a justification graph architecture that supports recursive coherence and transparent revision. A diagnostic scalar, symbolic density, tracks alignment over symbolic time. By uniting philosophical insights with concrete system design, this framework outlines foundations for machines that may one day understand rather than simulate understanding.
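As a rough, hypothetical sketch of what a justification graph of this kind might look like in code (the class, method names, and the toy “symbolic density” metric below are my own placeholders, not the paper’s formalism):

```python
# Hypothetical sketch: nodes are symbolic commitments, edges record which
# commitments justify which others; revision retracts a claim and whatever
# rests solely on it. Illustrative only.

class JustificationGraph:
    def __init__(self):
        self.commitments = set()   # propositions currently held
        self.supports = {}         # claim -> set of claims it rests on

    def assert_claim(self, claim, grounds=()):
        """Adopt a commitment together with the commitments that justify it."""
        self.commitments.add(claim)
        self.supports[claim] = set(grounds)

    def contradictions(self):
        """Claims p such that both p and 'not p' are currently held."""
        return {c for c in self.commitments if ("not " + c) in self.commitments}

    def retract(self, claim):
        """Transparent revision: drop a claim and everything resting on it."""
        if claim not in self.commitments:
            return
        self.commitments.discard(claim)
        for other in list(self.commitments):
            if claim in self.supports.get(other, set()):
                self.retract(other)

    def symbolic_density(self):
        """Toy diagnostic: share of commitments not implicated in a contradiction."""
        if not self.commitments:
            return 1.0
        bad = self.contradictions() | {"not " + c for c in self.contradictions()}
        return 1 - len(bad & self.commitments) / len(self.commitments)


g = JustificationGraph()
g.assert_claim("the light is on")
g.assert_claim("someone is home", grounds=["the light is on"])
g.assert_claim("not the light is on")            # a contradicting observation
print(g.contradictions(), g.symbolic_density())  # detect tension, score coherence
g.retract("the light is on")                     # revise; dependants go with it
print(g.commitments, g.symbolic_density())
```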
-
Several prominent scientists, philosophers, and scientific institutions have argued that science cannot test supernatural worldviews on the grounds that (1) science presupposes a naturalistic worldview (Naturalism) or that (2) claims involving supernatural phenomena are inherently beyond the scope of scientific investigation. The present paper argues that these assumptions are questionable and that indeed science can test supernatural claims. While scientific evidence may ultimately support a naturalistic worldview, science does not presuppose Naturalism as an a priori commitment, and supernatural claims are amenable to scientific evaluation. This conclusion challenges the rationale behind a recent judicial ruling in the United States concerning the teaching of “Intelligent Design” in public schools as an alternative to evolution and the official statements of two major scientific institutions that exert a substantial influence on science educational policies in the United States. Given that science does have implications concerning the probable truth of supernatural worldviews, claims should not be excluded a priori from science education simply because they might be characterized as supernatural, paranormal, or religious. Rather, claims should be excluded from science education when the evidence does not support them, regardless of whether they are designated as ‘natural’ or ‘supernatural’.
-
It has long been known that brain damage has negative effects on one’s mental states and alters (or even eliminates) one’s ability to have certain conscious experiences. Even centuries ago, a person would have much preferred to suffer trauma to the leg, for example, than to the head. It thus stands to reason that when all of one’s brain activity ceases upon death, consciousness is no longer possible and so neither is an afterlife. It seems clear from all the empirical evidence that human consciousness is dependent upon the functioning of individual brains, which we might call the “dependence thesis.” Having a functioning brain is, at minimum, necessary for having conscious experience, and thus conscious experience must end when the brain ceases to function.
-
It has long been considered a truism that we can learn more from a variety of sources than from highly correlated sources. This truism is captured by the Variety of Evidence Thesis. To the surprise of many, this thesis turned out to fail in a number of Bayesian settings. In other words, replication can trump variation. Translating the thesis into imprecise probabilities (IP), we obtain two distinct, a priori plausible formulations in terms of ‘increased confirmation’ and ‘uncertainty reduction’, respectively. We investigate both formulations; each fails, for different parameters and for different reasons that cannot be predicted prior to formal analysis. The emergence of two distinct formulations, distinguishing confirmation increase from uncertainty reduction, which are conflated in the Bayesian picture, highlights fundamental differences between IP and Bayesian reasoning.
-
Suppose infinitely many blindfolded people, including yourself, are uniformly randomly arranged on positions one meter apart numbered 1, 2, 3, 4, …. Intuition: The probability that you’re on an even-numbered position is 1/2 and that you’re on a position divisible by four is 1/4. …
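As a toy check on the stated intuition (a finite analogue of my own, not part of the post): if N people are uniformly randomly permuted over positions 1 through N, a fixed person lands on each position with probability 1/N, so the chance of an even position is floor(N/2)/N and of a position divisible by four is floor(N/4)/N.

```python
from fractions import Fraction

# Finite analogue (illustration only): N people uniformly permuted over 1..N.
for N in (8, 100, 1_000, 100_000):
    p_even = Fraction(N // 2, N)   # positions divisible by 2
    p_div4 = Fraction(N // 4, N)   # positions divisible by 4
    print(f"N={N}: P(even)={float(p_even):.4f}, P(divisible by 4)={float(p_div4):.4f}")
# These tend to 1/2 and 1/4 as N grows; in the genuinely infinite arrangement
# there is no uniform probability distribution over the positions, so the
# intuitive values are not straightforwardly defined.
```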
-
A general challenge in life is how to avoid being duped or exploited by clever-sounding but ultimately facile reasoning. One thing’s for sure: you don’t want to internalize the following norm:
(Easy Dupe): Whenever you hear an argument for doing X, and you can’t immediately refute it, you are thereby rationally committed to doing X. …
-
Many physicalists nowadays, and Bigelow for one, stand ready to carry metaphysical baggage when they find it worth the weight. This physicalist’s philosophy of mathematics is premised on selective, a posteriori realism about immanent universals. Bigelow’s universals, like D. M. Armstrong’s, are recurrent elements of the physical world; and mathematical objects are universals. The result is a thoroughgoing threefold realism: mathematical realism, scientific realism, and the realism that stands opposed to nominalism.
-
John Kay and Mervyn King (2020) propose a definition of radical uncertainty as a form of ontological uncertainty, rather than of epistemic uncertainty: the radical form is considered to be not resolvable. Their notion of radical uncertainty can be likened to the notion of 'unknown unknowns', which refers to the aspects of uncertainty that are not readily apparent or quantifiable. Thus, instead of seeking solutions through probabilistic methods, Kay and King invite us to embrace this form of uncertainty and develop forms of reasoning within a framework where we do not know what we do not know, drawing inspiration from modal approaches to open futures (and even pasts).
-
Edward Craig’s function-first methodology says we can illuminate the concept of knowledge by asking what functions the concept evolved to fulfil. To do this, Craig imagines a fictional state of nature in which humans lacked the concept. Hilary Kornblith rejects every part of Craig’s methodology. He instead develops a naturalistic epistemology, according to which we should study knowledge—not its concept—through the scientific study of animal cognition.
-
A University Occupation in The Netherlands (via de Volkskrant)
Here is my best effort to reconstruct the reasoning behind these occupations.
P1. The Israeli government is doing terrible things in Gaza and should stop.
P2. …
-
In Factual Difference-Making, Holger Andreas and Mario Günther propose a theory of model-relative actual causation which performs remarkably well on a number of known problematic cases. They take this to show that we should abandon our counterfactual way of thinking about causation in favour of their factual alternative. I cast doubt on this argument by offering two similar theories. First, I show that the theory of Factual Difference-Making is equivalent to a partly counterfactual theory. Second, I give a fully counterfactual theory that makes the same judgments in the scenarios discussed by Andreas and Günther.
-
Two recent, prominent theorems—the “no-go theorem for observer-independent facts” and the “Local Friendliness no-go theorem”—employ so-called extended Wigner’s friend scenarios to try to impose novel, non-trivial constraints on the possible nature of physical reality. While the former is argued to entail that there can be no theory in which the results of Wigner and his friend can both be considered objective, the latter is said to place on reality stronger constraints than the Bell and Kochen-Specker theorems. Here, I conduct a thorough analysis of these theorems and show that they suffer from a list of shortcomings that question their validity and limit their strength. I conclude that the “no-go theorem for observer-independent facts” and the “Local Friendliness no-go theorem” fail to impose significant constraints on the nature of physical reality.
-
My daughter, S, who is five, has a special stuffed unicorn who she received for her third Christmas. Once white, she is now gray: the color of love—and drool. Once replete with a magnificent mane of pale pink yarn, she now boasts a tangled, grizzled, dishwater-colored ‘do. …
-
Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgeb...
Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
-
Some of our conditional knowledge is counterepistemic: it is knowledge of an indicative conditional whose antecedent is false. Counterepistemic knowledge ascriptions give rise to puzzles—and in particular, to systematic violations of factivity. I critically examine propositionalist explanations, including contextualist and descriptivist accounts, and argue that they ultimately fail to accommodate the non-triviality and consistency of counterepistemic knowledge ascriptions. Instead, I propose a non-propositional theory drawing on ideas from the literatures on belief revision and on conversational update. On the positive account, counterepistemic knowledge is not reducible to knowledge of propositions, and knowledge states are more than relations to facts known. Besides distinguishing a live class of epistemic alternatives, a knowledge state also orders the possibilities it eliminates.
-
Here is a very odd question that occurred to me: Is it good for there to be moral norms? Imagine a world just like this one, except that there are no moral norms for its intelligent denizens—but nonetheless they behave as we do. …
-
This is a stand-alone essay, but if you’re curious you can read Part 1, Against Feet Revisited.
1. Timothy Steele’s book All the Fun’s in How You Say a Thing aims to offer “an explanation of English meter,” especially iambic pentameter. …
-
“If Anyone Builds It, Everyone Dies”
Eliezer Yudkowsky and Nate Soares are publishing a mass-market book, the rather self-explanatorily-titled If Anyone Builds It, Everyone Dies. (Yes, the “it” means “sufficiently powerful AI.”) The book is now available for preorder from Amazon:
I was graciously offered a chance to read a draft and offer, not a “review,” but some preliminary thoughts. …
-
It can be convenient to personify moral theories, attributing to them the attitudes that would be fitting if the theory in question were true: “(Token-monistic) utilitarianism treats individuals as fungible mere means to promoting the aggregate good.” “Kantianism cares more about avoiding white lies than about saving the life that’s under threat from the murderer at the door.”
If a theory has false implications about what attitudes of care or concern are actually morally fitting, then the theory is false. …
-
Very short summary: In this essay, I show how the so-called AI-value alignment problem can be analyzed as a signaling game with a deception equilibrium. The evolution of deception in the context of the relationship between AIs and humans is unavoidable. …
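To fix ideas, here is a minimal, hypothetical toy model of the kind of signaling game the essay gestures at. The labels, payoff numbers, and prior are my own placeholders, not the essay's; the point is only to exhibit a pooling equilibrium in which a misaligned sender's reassuring signal is deceptive yet believed.

```python
# Hypothetical toy signaling game: a sender (the AI) is either aligned or
# misaligned; both types can send the signal "looks-safe"; the receiver
# (the human) then deploys or restricts. Numbers are purely illustrative.

P_ALIGNED = 0.7  # human's prior that the AI is aligned (assumed)

# Sender payoffs by type and by the human's action: both types want deployment.
sender_payoff = {
    "aligned":    {"deploy": 1.0, "restrict": 0.0},
    "misaligned": {"deploy": 1.0, "restrict": 0.0},
}

# Receiver payoffs: deploying a misaligned AI is costly.
receiver_payoff = {
    "aligned":    {"deploy": 1.0, "restrict": 0.0},
    "misaligned": {"deploy": -2.0, "restrict": 0.0},
}

def receiver_best_response(p_aligned):
    """Best action given a belief that the sender is aligned with prob p_aligned."""
    eu_deploy = (p_aligned * receiver_payoff["aligned"]["deploy"]
                 + (1 - p_aligned) * receiver_payoff["misaligned"]["deploy"])
    eu_restrict = 0.0
    return "deploy" if eu_deploy >= eu_restrict else "restrict"

# Candidate pooling equilibrium: both types send "looks-safe", so the signal
# carries no information and the receiver's posterior equals the prior.
action_on_path = receiver_best_response(P_ALIGNED)
print("Action when both types pool on 'looks-safe':", action_on_path)

# Off-path signals can be met with the sceptical belief p_aligned = 0,
# which makes the receiver restrict; restriction is (weakly) worse for
# both sender types, so neither type gains by deviating.
action_off_path = receiver_best_response(0.0)
for t in sender_payoff:
    assert sender_payoff[t][action_on_path] >= sender_payoff[t][action_off_path]
print("Pooling is an equilibrium: the misaligned type's signal is deceptive, "
      "yet the human deploys.")
```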
-
Suppose that believing that your cancer will probably be cured would improve your chances of survival and your quality of life. Or suppose that believing that your son committed a violent crime would cause you and your relationship with him serious harm. Are practical considerations like these normative reasons for and against these beliefs? That is, do these considerations genuinely count in favor of and against having these respective beliefs in the sense that they bear on what you really ought to believe? That’s the question at the heart of the pragmatism-anti-pragmatism debate: pragmatists say, “yes,” while anti-pragmatists say, “no.” According to the anti-pragmatist, the only normative reasons for or against belief are epistemic considerations, which are those that have to do with believing the truth and avoiding error. For example, the anti-pragmatist insists that, if the evidence suggests that your cancer will probably not be cured and that your son committed a violent crime, these evidential considerations are reasons for believing these things, which bear on whether you ought to believe them; the fact that believing these things would be good or bad for you is entirely irrelevant to whether you ought to believe them.
-
The outstanding problem for common origin inferences (“COIs”) is to understand why they succeed when they do; and why they fail when they do. The material theory of induction provides a solution: COIs are warranted by background facts. Whether a COI succeeds or fails depends on the truth of its warranting propositions. Examples from matter theory and Newton’s Principia illustrate how COIs can fail; and an example from relativity theory illustrates a success. Hypotheses, according to the material theory, can be posited as a temporary expedient to initiate an inductive enterprise. This use of hypotheses enables COIs to serve as incentives for further research. It is illustrated with the example of the Copernican hypothesis.
-
Since Meyer and Dunn showed that the rule γ is admissible in E, relevantists have produced new proofs of the admissibility of γ for an ever more expansive list of relevant logics. We show in this paper that this is no reason to think admissibility is the norm; rather, γ fails to be admissible in a wide variety of relevant logics. As an upshot, we suggest that the proper view of γ-admissibility is as a coherence criterion, and thus as a selection criterion for logical theory choice.
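For reference, γ is the rule of material detachment (Ackermann's γ), standardly stated as:

```latex
% The rule gamma: if A and (not A or B) are both theorems, then so is B.
\[
  \frac{\vdash A \qquad \vdash \lnot A \lor B}{\vdash B} \quad (\gamma)
\]
```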
-
This is an introduction to the idea that our universe is just one of many universes: it is part of a multiverse. This idea is very topical. A multiverse of one kind or another is seriously advocated by many philosophers. And similarly for physics: many physicists advocate a multiverse, usually of a different kind than that of the philosophers. So the time seems ripe to assess the various versions of this idea. In this book, I will assess three versions of it. One is from philosophy: more specifically, from logic’s treatment of possibility. The other two are from physics: more specifically, the Everettian interpretation of quantum theory, and inflationary cosmology. I will discuss these in order; and then in a final Chapter, compare them and relate them to each other.
-
This paper argues that functionalism, a dominant theory in philosophy of mind, fails to adequately explain the emergence of conscious experience within the Everettian (Many-Worlds) interpretation of quantum mechanics. While the universal wavefunction admits many possible decompositions, functionalism cannot account for why consciousness appears only in decohered, classical-like branches and not in other parts of the wavefunction that are equally real. This limitation holds even if those other parts do not instantiate complex functional structure. We argue that consciousness, as it is observed in many worlds, defies the predictions and explanatory resources of functionalism. Therefore, functionalism must be supplemented or replaced in order to account for the observed phenomenology.
-
Relational Quantum Mechanics posits that facts about the properties of physical systems are relative to other systems. As pointed out by Adlam in a recent manuscript, this gives rise to the question of the relationship between the facts that obtain relative to complex systems and the facts that obtain relative to their constituents. In this paper, I respond to Adlam’s discussion of what she calls the Combination Problem. My starting point is a maximally permissive solution that I suggest should be our default view. Subsequently, I advance three main claims. First, I argue that Adlam’s arguments that a more restrictive approach is required are not compelling. Second, I argue that even if they were, she is wrong to claim that a ‘tamed’ version of RQM with postulated links between perspectives is in a better position to support such a restrictive approach. And third, I point out that possibly the most difficult aspect of the Combination Problem in fact pertains to the combination of quantum states and probabilities. While these issues do raise significant challenges for the permissive solution, I contend that they are likely to arise for any reasonable response to the Combination Problem. More tentatively, I propose a strategy, capitalising on the observer-dependence of relative quantum state assignments, to at least mitigate the difficulty. Along the way, I address crucial foundational issues in Relational Quantum Mechanics, from cross-perspective communication to the link between relative facts and experiences to empirical adequacy.
-
How ought scarce health research resources be allocated, where health research spans “basic”, translational, clinical, health systems and public health research? In this paper I first outline a previously suggested answer to this question: the “fair-share principle” stipulates that total health research funding ought to be allocated in direct proportion to the suffering caused by each disease. Second, I highlight a variety of problems the fair-share principle faces. The principle is inattentive to problems of aggregation and distribution of harms incurred from disease and benefits accrued from research, and neglects considerations of cost-effectiveness. Moreover, the principle fails to recognise that using Global Burden of Disease Study estimates as proxies for “suffering” underdetermines health research resource allocation. Importantly, in drawing on these estimates, which are disease-centric and only take “proximal” causes of health loss into account, the fair-share principle disregards the social determinants of health. Along with them, the principle ignores public health research, which often focusses on “distal” causes of health loss to improve population health and reduce health inequalities. Following the principle therefore leads to inequitable priority-setting. I conclude that despite relatively widespread appeals to it, the fair-share principle is not an ideal to aim for during priority-setting.
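As a toy illustration of what proportional allocation under the fair-share principle amounts to (with invented burden figures, not Global Burden of Disease data):

```python
# Toy illustration with made-up numbers: funding is split in direct
# proportion to each disease's share of total burden ("suffering"),
# here stood in for by hypothetical DALY figures.
total_budget = 100.0  # illustrative units, e.g. billions

burden = {               # hypothetical DALYs per disease
    "disease A": 50_000_000,
    "disease B": 30_000_000,
    "disease C": 20_000_000,
}

total_burden = sum(burden.values())
allocation = {d: total_budget * dalys / total_burden for d, dalys in burden.items()}
print(allocation)  # {'disease A': 50.0, 'disease B': 30.0, 'disease C': 20.0}
# Note what this calculation ignores, per the paper's critique:
# cost-effectiveness, the distribution of harms and benefits, and the
# social determinants of health.
```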