1. 552582.224667
    Philosophers of mind and philosophers of science have markedly different views on the relationship between explanation and understanding. Reflecting on these differences highlights two ways in which explaining consciousness might be uniquely difficult. First, scientific theories may fail to provide a psychologically satisfying sense of understanding—consciousness might still seem mysterious even after we develop a scientific theory of it. Second, our limited epistemic access to consciousness may make it difficult to adjudicate between competing theories. Of course, both challenges may apply. While the first has received extensive philosophical attention, in this paper I aim to draw greater attention to the second. In consciousness science, the two standard methods for advancing understanding—theory testing and refining measurement procedures through epistemic iteration—face serious challenges.
    Found 6 days, 9 hours ago on PhilSci Archive
  2. 555304.224766
    Assume Peano Arithmetic (PA) is consistent. Then it can’t prove its own consistency. Thus, there is a model M of PA according to which PA is inconsistent, and hence, according to M, there is a proof of a contradiction from a finite set of axioms of PA. …
    Found 6 days, 10 hours ago on Alexander Pruss's Blog
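The inference chain compressed into this excerpt combines Gödel's second incompleteness theorem with the completeness theorem; a sketch of those standard steps (not of the post's elided continuation):

```latex
\begin{align*}
&\text{Assume } \mathrm{Con}(\mathrm{PA}).\\
&\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
  && \text{(G\"odel's second incompleteness theorem)}\\
&\Rightarrow\; \mathrm{PA} + \lnot\mathrm{Con}(\mathrm{PA}) \text{ is consistent}
  && \text{(otherwise } \mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA})\text{)}\\
&\Rightarrow\; \exists M \text{ with } M \models \mathrm{PA} + \lnot\mathrm{Con}(\mathrm{PA})
  && \text{(completeness theorem)}
\end{align*}
```

Note that the "proof of a contradiction" existing according to such an M is coded by an element of M that may be nonstandard, so its "finite set of axioms" is finite only in M's internal sense.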
  3. 610249.22479
    One Approach to the Necessary Conditions of Free Will: Logical Paradox and the Essential Unpredictability of Physical Agents. Even today, there is no precise definition of free will – only hypotheses and intuitions. This paper therefore approaches the question of free will from a negative perspective: it depicts a scenario in which free will seemingly exists, and then attempts to refute that scenario (as a necessary condition for free will). The absence of free will might seem absolute if scientific determinism holds true. The goal of the study is thus to present a logical argument (paradox) that demonstrates the impossibility of an omniscient predictor (P) (scientific determinism) by exposing its inherent self-contradiction. The paradox reveals that the prediction (P = C) of a physical agent (P) by itself is objectively impossible. In other words, even a fully deterministic agent in a deterministic universe cannot predict its own future state, not even in a Platonic sense.
    Found 1 week ago on PhilSci Archive
  4. 610266.224811
    A nested interferometer experiment by Danan et al. (2013) is discussed and some ontological implications explored, primarily in the context of time-symmetric interpretations of quantum theory. It is pointed out that photons are supported by all components of their wavefunctions, not selectively truncated "first order" portions of them, and that figures representing both a gap in the photon's path and signals from the cut-off path are incorrect. It is also noted that the Transactional Formulation (traditionally known as the Transactional Interpretation) readily accounts for the observed phenomena.
    Found 1 week ago on PhilSci Archive
  5. 610302.224828
    This paper proposes a novel constraint on artificial consciousness. The central claim is that no artificial system can be genuinely conscious unless it instantiates a form of self-referential inference that is irreducibly perspectival and non-computable. Drawing on Quantum Bayesianism (QBism), I argue that consciousness should be understood as an anticipatory process grounded in subjective belief revision, not as an emergent product of computational complexity. Classical systems, however sophisticated, lack the architecture required to support this mode of updating. I conclude that artificial consciousness demands more than computation—it demands a subject.
    Found 1 week ago on PhilSci Archive
  6. 610340.224842
    I argue that we need to distinguish between three concepts of actual causation: total, path-changing, and contributing actual causation. I provide two lines of argument in support of this account. First, I address three thought experiments that have been troublesome for unified accounts of actual causation, and I show that my account provides a better explanation of corresponding causal intuitions. Second, I provide a functional argument: if we assume that a key purpose of causal concepts is to guide agency, we are better off making a distinction between three concepts of actual causation.
    Found 1 week ago on PhilSci Archive
  7. 610358.22486
    Quantum mechanics with a fundamental density matrix has been proposed and discussed recently. Moreover, it has been conjectured that in this theory the universe is not in a pure state but in a mixed state. In this paper, I argue that this mixed state conjecture faces two main problems: the redundancy problem and the underdetermination problem, neither of which arises in quantum mechanics with a definite initial wave function of the universe.
    Found 1 week ago on PhilSci Archive
  8. 610377.224874
    Technological understanding is not a singular concept but varies depending on the context. Building on De Jong and De Haro’s (2025) notion of technological understanding as the ability to realise an aim by using a technological artefact, this paper further refines the concept as an ability that varies by context and degree. We extend its original specification for a design context by introducing two additional contexts: operation and innovation. Each context represents a distinct way of realising an aim through technology, resulting in three types (specifications) of technological understanding. To further clarify the nature of technological understanding, we propose an assessment framework based on counterfactual reasoning. Each type of understanding is associated with the ability to answer a specific set of what-if questions, addressing changes in an artefact’s structure, performance, or appropriateness. Explicitly distinguishing these different types helps to focus efforts to improve technological understanding, clarifies the epistemic requirements for different forms of engagement with technology, and promotes a pluralistic perspective on expertise.
    Found 1 week ago on PhilSci Archive
  9. 670212.224894
    The philosopher Joseph S. Ullian died late last year. He is probably best known for an introduction to epistemology co-authored with W. V. Quine, one that is very much of its time. But what caught my eye in the obits was his reputation as a baseball fanatic. …
    Found 1 week ago on Under the Net
  10. 678017.224909
    This is a bit of a shaggy dog story, but I think it’s fun, and there’s a moral about the nature of mathematical research. Act 1 Once I was interested in the McGee graph, nicely animated here by Mamouka Jibladze: This is the unique (3,7)-cage, meaning a graph such that each vertex has 3 neighbors and the shortest cycle has length 7. …
    Found 1 week ago on Azimuth
  11. 706140.224923
    These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. …
    Found 1 week, 1 day ago on Scott Aaronson's blog
  12. 723419.224936
    Let T_0 be ZFC. Let T_n be T_{n−1} plus the claim Con(T_{n−1}) that T_{n−1} is consistent. Let T_ω be the union of all the T_n for finite n. Here’s a fun puzzle. It seems that T_ω should be able to prove its own consistency by the following reasoning: If T_ω is inconsistent, then for some finite n we have T_n inconsistent. …
    Found 1 week, 1 day ago on Alexander Pruss's Blog
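The setup and the first steps of the puzzle's reasoning, written out; this formalizes only what the excerpt states, since the post's resolution is elided:

```latex
\begin{align*}
&T_0 = \mathrm{ZFC}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n), \qquad
T_\omega = \bigcup_{n<\omega} T_n.\\
&\lnot\mathrm{Con}(T_\omega) \;\Rightarrow\; \lnot\mathrm{Con}(T_n)
  \text{ for some finite } n
  && \text{(a proof uses only finitely many axioms)}\\
&T_\omega \vdash \mathrm{Con}(T_n) \text{ for each fixed } n
  && \text{(since } \mathrm{Con}(T_n) \in T_{n+1} \subseteq T_\omega\text{)}
\end{align*}
```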
  13. 782268.224951
    We develop a theory of policy advice that focuses on the relationship between the competence of the advisor (e.g., an expert bureaucracy) and the quality of advice that the leader may expect. We describe important tensions between these features present in a wide class of substantively important circumstances. These tensions point to the presence of a trade-off between receiving advice more often and receiving more informative advice. The optimal realization of this trade-off for the leader sometimes induces her to prefer advisors of limited competence – a preference that, we show, is robust under different informational assumptions. We consider how institutional tools available to leaders affect preferences for advisor competence and the quality of advice they may expect to receive in equilibrium.
    Found 1 week, 2 days ago on Dimitri Landa's site
  14. 783355.224968
    There are two main strands of arguments regarding the value-free ideal (VFI): desirability and achievability (Reiss and Sprenger 2020). In this essay, I will argue for what I will call a compatibilist account of upholding the VFI, focusing on its desirability even if the VFI is unachievable. First, I will explain what the VFI is. Second, I will show that striving to uphold the VFI (desirability) is compatible with the rejection of its achievability. Third, I will demonstrate that the main arguments against the VFI do not refute its desirability. Finally, I will provide arguments for why it is desirable to strive to uphold the VFI even if the VFI is unachievable and show what role it can play in scientific inquiry. There is no single definition of the VFI, yet the most common way to interpret it is that non-epistemic values ought not to influence scientific reasoning (Brown 2024, 2). Non-epistemic values are understood as certain ethical, social, cultural or political considerations. Therefore, it is the role of epistemic values, such as accuracy, consistency, empirical adequacy and simplicity, to be part of and to ensure proper scientific reasoning.
    Found 1 week, 2 days ago on PhilSci Archive
  15. 783372.224983
    There is an overwhelming abundance of work in AI ethics. This growth is chaotic because of its suddenness, its volume, and its multidisciplinary nature. This makes it difficult to keep track of debates, and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between ‘AI’ and ‘ethics’ can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; ethics of AI. I elucidate the features of these three kinds of AI ethics, characterize their research questions, and identify the kind of expertise that each kind needs. I also show how certain criticisms of AI ethics are misplaced, since they are made from the point of view of one kind of AI ethics against another kind with different goals. All in all, this work sheds light on the nature of AI ethics, and sets the grounds for more informed discussions about the scope, methods, and training of AI ethicists.
    Found 1 week, 2 days ago on PhilSci Archive
  16. 796551.224999
    According to the principle of existential inertia: - If x exists at t1 and t2 > t1 and there is no cause of x’s not existing at t2, then x exists at t2. This sounds weird, and one way to get at the weirdness for me is to put it in terms of relativity theory. …
    Found 1 week, 2 days ago on Alexander Pruss's Blog
  17. 796551.225015
    A simplified version of Goedel’s first incompleteness theorem (it’s really just a special case of Tarski’s indefinability of truth) goes like this: - Given a sound semidecidable system of proof that is sufficiently rich for arithmetic, there is a true sentence g that is not provable. …
    Found 1 week, 2 days ago on Alexander Pruss's Blog
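The quoted statement can be unpacked via the standard computability-theoretic route; this is a sketch of well-known facts, not of the post's elided argument:

```latex
\begin{align*}
&\mathrm{Thm} = \{\varphi : {} \vdash \varphi\}
  && \text{computably enumerable (semidecidability of proof)}\\
&\mathrm{True} = \{\varphi : \mathbb{N} \models \varphi\}
  && \text{not computably enumerable (Tarski / diagonalization)}\\
&\text{Soundness gives } \mathrm{Thm} \subseteq \mathrm{True};
  \text{ the sets differ, so } \exists\, g \in \mathrm{True} \setminus \mathrm{Thm}.
\end{align*}
```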
  18. 811765.22503
    Critical theory arose as a response to perceived inadequacies in Marxist theory, and perceived changes in modern capitalism. Critical theorists emphasized the ability of capitalism to shape the thought and experience of individuals: it distorts how modern society and its products appear to us, and how we think about them. So, aesthetic experience – like all other experience – is moulded to and compromised by capitalism. For critical theory, if we seek to understand aesthetics we need to acknowledge this distorting effect. Critical theorists ask us to pay attention to how art, and aesthetic experience, suffer under capitalism, and become part of the way in which capitalism prevents the formation of a better life.
    Found 1 week, 2 days ago on Stanford Encyclopedia of Philosophy
  19. 818580.225044
    In this paper we provide an ontological analysis of so-called “artifactual functions” by deploying a realizable-centered approach to artifacts which we have recently developed within the framework of the upper ontology Basic Formal Ontology (BFO). We argue that, insofar as material artifacts are concerned, the term “artifactual function” can refer to at least two kinds of realizable entities: novel intentional dispositions and usefactual realized entities. They inhere, respectively, in what we previously called “canonical artifacts” and “usefacts”. We show how this approach can help to clarify functions in BFO, whose current elucidation includes reference to the term “artifact”. In our framework, having an artifactual function implies being an artifact, but not vice versa; in other words, there are artifacts that lack an artifactual function.
    Found 1 week, 2 days ago on Kathrin Koslicki's site
  20. 869437.225065
    Prioritarianism is generally understood as a kind of moral axiology. An axiology provides an account of what makes items, in this case outcomes, good or bad, better or worse. A moral axiology focuses on moral value: on what makes outcomes morally good or bad, morally better or worse. Prioritarianism, specifically, posits that the moral-betterness ranking of outcomes gives extra weight (“priority”) to well-being gains and losses affecting those at lower levels of well-being. It differs from utilitarianism, which is indifferent to the well-being levels of those affected by gains and losses. Although it is possible to construe prioritarianism as a non-axiological moral view, this entry follows the prevailing approach and trains its attention on axiological prioritarianism.
    Found 1 week, 3 days ago on Stanford Encyclopedia of Philosophy
  21. 869451.225079
    Dehumanization is widely thought to occur when someone is treated or regarded as less than human. However, there is an ongoing debate about how to develop this basic characterization. Proponents of the harms-based approach focus on the idea that to dehumanize someone is to treat them in a way that harms their humanity; whereas proponents of the psychological approach focus on the idea that to dehumanize someone is to think of them as less than human. Other theorists adopt a pluralistic view that combines elements of both approaches. In addition to explaining different views on what it means to dehumanize someone, this article focuses on related issues, such as how to resolve the so-called “paradox of dehumanization”; the causes and consequences of dehumanization; the sorts of contexts in which dehumanization typically occurs; and the relation between dehumanization and objectification.
    Found 1 week, 3 days ago on Stanford Encyclopedia of Philosophy
  22. 882354.225093
    This paper defends the view that logic gives norms for reasoning. This view is often thought to be problematic given that logic is not itself a theory of reasoning and that valid inferences can lead to silly or pointless beliefs. To defend it, I highlight an overlooked distinction between norms for reasoning and norms for belief. With this distinction in hand, I motivate and defend a straightforward account of how logic gives norms for reasoning, showing that it avoids standard objections. I also show that, given some substantive assumptions, we can offer an attractive account of why logic gives norms for reasoning in the way I propose, and of how it is (also) relevant to norms for belief.
    Found 1 week, 3 days ago on Conor McHugh's site
  23. 884953.225114
    Statistics play an essential role in an extremely wide range of human reasoning. From theorizing in the physical and social sciences to determining evidential standards in legal contexts, statistical methods are ubiquitous, and thus various questions about their application inevitably arise. As tools for making inferences that go beyond a given set of data, they are inherently a means of reasoning ampliatively, and so it is unsurprising that philosophers interested in the notions of evidence and inductive inference have been concerned to utilize statistical frameworks to further our understanding of these topics. However, the field of statistics has long been the subject of heated philosophical controversy. Given that a central goal for philosophers of science is to help resolve problems about evidence and inference in scientific practice, it is important that they be involved in current debates in statistics and data science. The purpose of this topical collection is to promote such philosophical interaction. We present a cross-section of these subjects, written by scholars from a variety of fields in order to explore issues in philosophy of statistics from different perspectives.
    Found 1 week, 3 days ago on Elay Shech's site
  24. 884970.22513
    Robert W. Batterman’s A Middle Way: A Non-Fundamental Approach to Many-Body Physics is an extraordinarily insightful book, far-reaching in its scope and significance, interdisciplinary in character due to connections made between physics, materials science and engineering, and biology, and groundbreaking in the sense that it reflects on important scientific domains that are mostly absent from current literature. The book presents a hydrodynamic methodology, which Batterman explains is pervasive in science, for studying many-body systems as diverse as gases, fluids, and composite materials like wood, steel, and bone. Following Batterman, I will call said methodology the middle-out strategy. Batterman’s main thesis is that the middle-out strategy is superior to alternatives, solves an important autonomy problem, and, consequently, implies that certain mesoscale structures (explained below) ought to be considered natural kinds. In what follows, I unpack and flesh out these claims, starting with a discussion of the levels of reality and its representation. Afterward, I briefly outline the contents of the book’s chapters and then identify issues that seem to me to merit further clarification.
    Found 1 week, 3 days ago on Elay Shech's site
  25. 896447.225154
    I will give an argument for causal finitism from a premise I don’t accept: - Necessary Arithmetical Alethic Incompleteness (NAAI): Necessarily, there is an arithmetical sentence that is neither true nor false. …
    Found 1 week, 3 days ago on Alexander Pruss's Blog
  26. 898740.22517
    I take a pragmatist perspective on quantum theory. This is not a view of the world described by quantum theory. In this view quantum theory itself does not describe the physical world (nor our observations, experiences or opinions of it). Instead, the theory offers reliable advice—on when to expect an event of one kind or another, and on how strongly to expect each possible outcome of that event. The event’s actual outcome is a perspectival fact—a fact relative to a physical context of assessment. Measurement outcomes and quantum states are both perspectival. By noticing that each must be relativized to an appropriate physical context one can resolve the measurement problem and the problem of nonlocal action. But if the outcome of a quantum measurement is not an absolute fact, then why should the statistics of such outcomes give us any objective reason to accept quantum theory? One can describe extensions of the scenario of Wigner’s friend in which a statement expressing the outcome of a quantum measurement would be true relative to one such context but not relative to another. However, physical conditions in our world prevent us from realizing such scenarios. Since the outcome of every actual quantum measurement is certified at what is essentially a single context of assessment, the outcome relative to that context is an objective fact in the only sense that matters for science. We should accept quantum theory because the statistics these outcomes display are just those it leads us to expect.
    Found 1 week, 3 days ago on PhilSci Archive
  27. 898758.225187
    Extrapolating causal effects is becoming an increasingly important kind of inference in Evidence-Based Policy, development economics, and microeconometrics more generally. While several strategies have been proposed to aid with extrapolation, the existing methodological literature has left our understanding of what extrapolation consists of and what constitutes successful extrapolation underdeveloped. This paper addresses this lack in understanding by offering a novel account of successful extrapolation. Building on existing contributions pertaining to the challenges involved in extrapolation, this more nuanced and comprehensive account seeks to provide tools that facilitate the scrutiny of specific extrapolative inferences and general strategies for extrapolation. Offering such resources is important especially in view of the increasing amounts of real-world decision-making in policy, development, and beyond that involve extrapolation.
    Found 1 week, 3 days ago on PhilSci Archive
  28. 898779.225205
    In a recent publication, Kukla (2014) has argued that we should abandon naturalistic and social constructivist considerations in attempts to define health, due to their alleged failure to account for its normativity, and instead define health purely in terms of ‘social justice’. Here, I shall argue that such a purely normativist project is self-defeating, and hence that health and disease cannot be defined through recourse to social justice alone.
    Found 1 week, 3 days ago on PhilSci Archive
  29. 898819.225253
    I dispute the conventional claim that the second law of thermodynamics is saved from a "Maxwell's Demon" by the entropy cost of information erasure, and show that instead it is measurement that incurs the entropy cost. Thus Brillouin, who identified measurement as savior of the second law, was essentially correct, and putative refutations of his view, such as Bennett's claim to measure without entropy cost, are seen to fail when the applicable physics is taken into account. I argue that the tradition of attributing the defeat of Maxwell's Demon to erasure rather than to measurement arose from unphysical classical idealizations that do not hold for real gas molecules, as well as a physically ungrounded recasting of physical thermodynamical processes into computational and information-theoretic conceptualizations. I argue that the fundamental principle that saves the second law is the quantum uncertainty principle applying to the need to localize physical states to precise values of observables in order to effect the desired disequilibria aimed at violating the second law. I obtain the specific entropy cost for localizing a molecule in the Szilard engine, which coincides with the quantity attributed to Landauer's principle. I also note that an experiment characterized as upholding an entropy cost of erasure in a "quantum Maxwell's Demon" actually demonstrates an entropy cost of measurement.
    Found 1 week, 3 days ago on PhilSci Archive
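For reference, the quantity standardly attributed to Landauer's principle, which the abstract says coincides with its derived localization cost, is the minimum entropy of one bit:

```latex
\Delta S_{\min} = k_B \ln 2,
\qquad
Q_{\min} = T\,\Delta S_{\min} = k_B T \ln 2
\quad \text{per bit, at temperature } T.
```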
  30. 979754.22527
    Political disagreement tends to display a “radical” nature that is partly related to the fact that political beliefs and judgments are generally firmly held. This makes people unlikely to revise and compromise on them. …
    Found 1 week, 4 days ago on The Archimedean Point