-
(This is the originally submitted version of the paper, a significantly revised version of which is forthcoming in the Australasian Journal of Philosophy. Please cite the final version!) Abstract. According to an influential line of argument, our beliefs about which material objects exist were influenced by selective pressures that are insensitive to the true ontology of material objects, and are therefore debunked (Merricks 2001, Korman 2014, 2015, Rose and Schaffer 2017). Extant responses to this line of reasoning presuppose controversial philosophical theses, such as anti-realism about material objects, theism, or a special faculty of apprehension. The present paper develops a novel strategy for responding to debunking arguments against belief in ordinary objects, which I call “semi-deflationism”: our beliefs about which material objects exist are the consequents of conditional statements that we are a priori entitled to believe and whose antecedents we have empirical justification to believe. Semi-deflationism offers an attractive epistemology of material objects. It also shares certain similarities with Amie Thomasson’s (2007, 2014) analytic deflationism, but is immune to several of the difficulties that afflict it. Most importantly, semi-deflationism doesn’t imply that seemingly difficult debates about the ontology of material objects can be trivially settled, and it leaves open the possibility that although our beliefs about which objects exist are rational, they are ultimately undermined by substantive arguments for revisionary views.
-
It is overwhelmingly plausible that part of what gives individuals their particular legal or institutional statuses is the fact that there are general laws or other policies in place that specify the conditions under which something is to have those statuses. For instance, particular acts are illegal partly in virtue of the existence and content of applicable law. But problems for this apparently plausible view have recently come to light. The problems afflict both attempts to ground legal statuses in general laws and an analogous view concerning the role of general moral principles in grounding moral statuses. Here I argue that these problems can be solved. The solution in the legal case is to recognize an element of self-reference in the law’s specification of what gives things their legal statuses. The relevant kind of self-reference is a familiar part of the legal world: it is immanent in at least some legal or broadly conventional procedural practices. The lessons of this discussion of legal statuses can then be applied to the meta-ethical debate over moral statuses, yielding a view on which moral principles also incorporate an element of self-reference.
-
We have beliefs about what is good or bad, about what reasons we have, about what we should do. These normative beliefs are connected to desire and motivation. To a rough approximation, most people desire what they believe is good, and not what they believe is bad; they are motivated to make the world better, not worse. This connection seems to contradict Humean ideas about the inertness of belief. David Lewis [1988; 1996] argued that it even clashes with elementary principles of decision theory. I’m going to explain how we can amend decision theory to make room for the connection between normative belief and desire. To do so, we need to focus on a form of desire that is often overlooked: intrinsic desire. Intrinsic desire can be linked to beliefs about intrinsic goodness. This not only avoids Lewis’s arguments, it also defuses the threat of “fetishism”, raised in [Smith 1994]: it explains why an agent who cares about normative matters needn’t have a single intrinsic desire to bring about what is good.
-
Abortion is the intentional termination of a pregnancy, either via
surgery or via the taking of medication. Ordinary people disagree
about abortion: many people think abortion is deeply morally wrong,
while many others think abortion is morally permissible. Philosophy
has much to contribute to this discussion, by distinguishing and
clarifying different arguments against abortion, distinguishing and
clarifying different responses to those arguments, offering novel
arguments against abortion, offering novel defenses of abortion, and
offering novel views about the relevant issues at stake. This entry’s central question is: is abortion morally wrong?
-
The article offers a novel reconstruction of Hilbert’s early metatheory of formal axiomatics. His foundational work from the turn of the last century is often regarded as a central contribution to a “model-theoretic” viewpoint in modern logic and mathematics. The article will re-assess Hilbert’s role in the development of model theory by focusing on two aspects of his contributions to the axiomatic foundations of geometry and analysis. First, we examine Hilbert’s conception of mathematical theories and their interpretations; in particular, we argue that his early semantic views can be understood in terms of a notion of translational isomorphism between models of an axiomatic theory. Second, we offer a logical reconstruction of his consistency and independence results in geometry in terms of the notion of interpretability between theories.
-
As anyone who has talked with a language-learner knows, syntactically incorrect sentences often succeed in expressing a proposition. This is true even in the case of formal languages. Formal semantics, say of the Tarski sort, has difficulties with syntactically incorrect sentences. …
-
The notion of shape space was introduced in the second half of the 20th Century as a useful analytical tool for tackling problems related to the intrinsic spatial configuration of material systems. In recent years, the geometrical properties of shape spaces have been investigated and exploited to construct a totally relational description of physics (classical, relativistic, and quantum). The main aim of this relational framework—originally championed by Julian Barbour and Bruno Bertotti—is to cast the dynamical description of material systems in dimensionless and scale-invariant terms only. As such, the Barbour-Bertotti approach to dynamics represents the technical implementation of the famous Leibnizian arguments against the reality of space and time as genuine substances. The question then arises about the status of shape space itself in this picture: Is it an actual physical space in which the fundamental relational dynamics unfolds, or is it just a useful mathematical construction? The present paper argues for the latter answer and, in doing so, explores the possibility that shape space is a peculiar case of a conceptual space.
-
The paper revisits Janssen’s seminal proposal of Common Origin Inferences (COIs), a powerful and scientifically fruitful inference pattern that (causally) traces striking coincidences back to a common origin. According to Janssen, COIs are a decisive engine for rational theory change across disciplines and eras. After a careful reconstruction of Janssen’s central tenets, we critically assess them, highlighting three key shortcomings of his account: its strong realist and ontological commitments, its restriction to (or strong penchant for) causal/ontic explanations, and its intended employment for conferring evidential-epistemic status. To remedy these shortcomings, we moot a natural generalisation and amelioration of Janssen’s original conception—COI*s: Constraint-Omnivorous Inferences. COI*s warrant inference to pursuit-worthy hypotheses: it’s rational to further study, work on, elaborate/refine or test hypotheses that account for multiple constraints in one fell swoop. As a demonstration of the utility of COI*-reasoning, we finally show how it sheds light on, and dovetails, the three most significant breakthroughs in recent cosmology: the Dark Matter hypothesis, the Dark Energy postulate, and the theory of cosmic inflation.
-
This paper outlines an approach to analysing minimal cognition that brings out its social and historical dimensions. It proposes a model, the coordinated systems approach (CSA), which understands cognition as a coordinated coalition of loosely autonomous processes responsible for goal-directedness in a system. On this view, even individual cognition has something of a social flavour to it. The central concept of the paper is stigmergy: a process where the material trace of actions of system elements in their environment is a sign that coordinates a group of semi-autonomous processes in future actions – this is the social dimension. The historical dimension refers to longer term processes which establish the coordinative power of the sign and endow it with normative force. According to this proposal, a full explanation of cognitive capabilities should reference both dimensions. In the second half of the paper the CSA is let loose on some puzzles in 4E cognition. Can the model deal with old problems such as that of cognitive bloat, or new problems such as the supposed external memory of the slime mould Physarum polycephalum? Potentially, the approach could be used to analyse minimal cognitive phenomena over a range of scales from bacteria to human beings.
-
Artificial Intelligence (AI) has become a topic of major interest to philosophers of science. Among the issues commonly discussed is AI’s opacity. To remedy opacity, scientists have provided methods commonly subsumed under the label ‘eXplainable Artificial Intelligence’ (XAI) that aim to make AI and its outputs ‘interpretable’ and ‘explainable’. However, there is little interaction between developments in XAI and philosophical debates on scientific explanation. We here improve on this situation and argue for a descriptive and a normative thesis: (i) When suitably embedded into scientific research processes, XAI methods’ outputs can facilitate genuine scientific understanding. (ii) In order for XAI outputs to fulfill this function, they should be made testable. We will support our theses by building on recent and long-standing ideas from philosophy of science, by comparing them to a recent framework from the XAI community, and by showcasing their applicability to case studies from the life sciences.
-
This paper challenges a false dichotomy between subjectivity and objectivity in understanding the nature of human social relationships. I argue that social relationships are composed of both subjective and objective components, which are inherently interdependent. They are influenced by biological properties and subject to evolutionary processes, yet they cannot be reduced to them. I use emerging research on kinship as an example that showcases the appeal of this integrated approach. This paper takes a step in the direction of a unified account of sociality, contributing to a more comprehensive understanding of human social behavior.
-
The debate over whether cognitive science is committed to the existence of neural representations is usually taken to hinge on the status of representations as theoretical posits: it depends on whether or not our best-supported scientific theories commit us to the existence of representations. Thomson and Piccinini (2018) and Nanay (2022) seek to reframe this debate to focus more on scientific experimentation than on scientific theorizing. They appeal to arguments from observation and manipulation to propose that experimental cognitive neuroscience gives us non-theoretical reasons to be ontologically committed to representations. In this paper, I challenge their claims about observation and manipulation, and I argue that the question of whether we are ontologically committed to representations is still best understood as a question about the level of support we have for our representation-positing scientific theories.
-
Clark (2006) proposes that a standard challenge to the hypothesis of extended cognition can be avoided in the case of linguistically structured cognition, because the role played by our public manipulation of linguistic artifacts is irreducible to the role played by the brain’s operations over internal representations. I demonstrate that Clark’s argument relies on a view of the brain’s cognitive architecture to which he no longer subscribes. I argue that on Clark’s later view of the brain as engaged in ‘predictive processing’, his earlier defense of extended cognition from this challenge is no longer an effective strategy. I explore the implications of this for Clark’s attempts to reconcile his previous arguments for extended cognition with his characterization of the predictive-processing brain.
-
On Wednesday, May 14, 2025, I’ll be giving a talk at 2 pm Pacific Time, or 10 pm UK time. The talk is for physics students at the Universidade de São Paulo in Brazil, organized by Artur Renato Baptista Boyago. …
-
The cognitive sciences, especially at the intersections with computer science, artificial intelligence, and neuroscience, propose ‘reverse engineering’ the mind or brain as a viable methodology. We show three important issues with this stance: 1) Reverse engineering proper is not a single method and follows a different path when uncovering an engineered substance versus a computer. 2) These two forms of reverse engineering are incompatible. We cannot safely reason from attempts to reverse engineer a substance to attempts to reverse engineer a computational system, and vice versa. Such flawed reasoning rears its head, for instance, when neurocognitive scientists reason about what artificial neural networks and brains have in common using correlations or structural similarity. 3) While neither type of reverse engineering can make sense of non-engineered entities, both are applied in incompatible and mix-and-matched ways in cognitive scientists’ thinking about computational models of cognition. This results in treating the mind as a substance, a methodological manoeuvre that is, in fact, incompatible with computationalism. We formalise how neurocognitive scientists reason (metatheoretical calculus) and show how this leads to serious errors. Finally, we discuss what this means for those who subscribe to computationalism, and those who do not.
-
In recent work, Nina Emery has defended the view that, in the context of naturalistic metaphysics, one should maintain the same epistemic attitude towards science and metaphysics. That is, naturalists who are scientific realists ought to be realists about metaphysics as well; and naturalists who are antirealists about science should also be antirealists about metaphysics. We call this the ‘parity thesis’. This paper suggests that the parity thesis is widely, albeit often implicitly, accepted among naturalistically inclined philosophers, and essentially for reasons similar to Emery’s. Then, reasons are provided for resisting Emery’s specific inference from scientific realism to realism about metaphysics. The resulting picture is a more nuanced view of the relationship between science and metaphysics within the naturalistic setting than the one which is currently most popular. Keywords: meta-metaphysics; metaphysics and science; naturalistic metaphysics; realism and antirealism.
-
Prominent phenomenological accounts of schizophrenia have long implicated disturbances in temporality as a central characteristic of the disorder (Minkowski, 1923; Stanghellini et al., 2016; Martin et al., 2019). Early clinical phenomenologists such as Minkowski posited that schizophrenia is defined by a fundamental alteration to the patient’s temporal experience, describing phenomena such as a “blocked future” and “fragmented time” that disrupt continuity between past, present, and future. Minkowski's notion of trouble générateur highlighted the incapacity to resonate with reality, marking a profound disconnect from shared temporal and existential structures. This view remains influential in contemporary research, where scholars continue to explore temporal disintegration as a core feature of schizophrenia (Fuchs, 2010; Stanghellini et al., 2016). However, the fundamental nature of this temporal disintegration has recently begun to be reinterpreted in clinical phenomenology.
-
Richard Dawkins is widely celebrated as a key figure in contemporary evolutionary biology, but his intellectual legacy resists simple classification. While he is often framed as a hardline defender of empirical science and naturalism, the structure of his contributions reveals a more ambivalent posture—one that is deeply philosophical, even as it disavows philosophy. This essay argues that Dawkins’ enduring influence derives not from experimental discoveries or novel data, but from his role as a conceptual architect: a theorist who reshapes how we think about genes, selection, and organismal design. Through close examination of his major works, public statements, and the epistemic frameworks he deploys, I suggest that Dawkins’ authority operates through what might be termed a “rhetorical empiricism”—a stance that foregrounds science while covertly engaging in metaphysical and conceptual argumentation. The central irony is that Dawkins embodies a form of philosophy he explicitly rejects: a speculative, systematizing, and normatively charged philosophy of biology.
-
We present a new ψ-ontology theorem demonstrating that the quantum wave function is ontic (real) rather than epistemic (representing knowledge) in single-world unitary quantum theories (SUQTs). By leveraging a protocol of repeated reversible measurements on a single quantum system, we show that any two distinct quantum states produce different statistical distributions of (erased) measurement outcomes. This theoretical distinguishability implies that different quantum states correspond to different physical realities, supporting the ontic nature of the quantum state. Unlike previous ψ-ontology theorems, such as the Pusey-Barrett-Rudolph theorem, our proof relies solely on the unitary evolution and Born rule of SUQTs, without additional assumptions like preparation independence. This strengthens its implications for quantum foundations, particularly in restricting non-ψ-ontic interpretations like QBism without assuming an underlying ontic state and its dynamics. The theorem applies to any pair of distinct states in a finite-dimensional Hilbert space, with extensions to infinite-dimensional systems, offering a robust and general argument for the reality of the quantum state.
-
In previous papers, we demonstrated that an ontology of quantum mechanics, described in terms of states and events with internal phenomenal aspects (a form of panprotopsychism), is well suited to explain consciousness. We showed that the combination problems of qualities, structures and subjects in panpsychism and panprotopsychism stem from implicit hypotheses based on classical physics regarding supervenience, which are not applicable at the quantum level. Within this view, consciousness arises in entangled quantum systems coupled to the neural network of the brain. In entangled systems, the properties of individual parts disappear, giving rise to an exponential number of emergent properties and states. Here, we analyze self-consciousness as the capacity to view oneself as a subject of experience. The causal openness of quantum systems provides self-conscious beings the ability to make independent choices and decisions, reflecting a sense of self-governance and autonomy. In this context, the issue of personal identity takes a new form free from the problems of the simple view or the reductive approaches.
-
Arithmetical truth-value realists hold that any proposition in the language of arithmetic has a fully determined truth value. Arithmetical truth-value necessitists add that this truth value is necessary rather than merely contingent. …
-
This paper argues that the lack of a shared evidence base in the policy debate around alcohol control, and the failure to acknowledge this fact, creates a tendency to dismiss key bodies of evidence as irrelevant, to the detriment of public health approaches. Using examples from three policy processes, it shows that proponents of opposed positions deploy rival conceptualizations of “problem alcohol use” as the object of policy intervention. Using analytic tools from the philosophy of science, it argues that these conceptualizations correspond to distinct bodies of evidence, which are treated as incompatible. Finally, it points to institutional mechanisms through which the problem can be mitigated.
-
Alzheimer’s disease emerged in the early 1900s as a rare disease that became synonymous with common dementia by the 1980s. In the 2010s, in vivo biomarkers of Alzheimer’s pathophysiology then led researchers to emphasize the presymptomatic biology of Alzheimer’s biomarkers, thus decentering dementia. Three consensus definitions were elaborated around biomarkers, and were rearticulated in 2024: biomarker-determined Alzheimer’s disease; biomarker-informed “clinical-biological” Alzheimer’s disease; and biomarker-independent, “all-cause” dementia. I consider their differences to hinge on the questionable legitimacy of the Alzheimer “biomarkerization” of aging. I encourage a focus on the actionable concept of brain health beyond Alzheimer’s to motivate equitable health promotion.
-
Summary: We humans are diverse. But how should we understand human diversity in the case of cognitive diversity? This Element discusses how to properly investigate human behavioural and cognitive diversity, how to represent it scientifically, and how to explain it. Since there are various methodological approaches and explanatory agendas across the cognitive and behavioural sciences, which can be more or less useful for understanding human diversity, a critical analysis is needed. And as the controversial study of sex and gender differences in cognition illustrates, the scientific representations and explanations put forward matter to society and impact public policy, including policies on mental health. But how to square the vision of human cognitive diversity with the assumption that we all share one human nature? Is cognitive diversity something to be positively valued? The author engages with these questions in connection with the issues of neurodiversity, cognitive disability, and essentialist construals of human nature.
-
The view that epistemic peers should conciliate in cases of disagreement—the Conciliatory View—was an important view in the early days of the peer disagreement debate. Over the years, however, the view has been the target of severe criticism; an “obituary” has already been written for the view, and, as a recent proclamation has it, there is “no hope” for it. In this paper, I will argue that we should keep the hope alive by defending the Conciliatory View of peer disagreement. The primary strategy of my defense will be to separate the claims made by the view specific to peer disagreement from claims that concern higher-order evidence more generally. This separation allows us to see which problems cannot be addressed in the context of peer disagreement alone. As I will argue, the upshot of making this distinction is that although the jury is still out on whether higher-order evidence should affect our first-order doxastic states, the Conciliatory View likely follows if it does.
-
This article examines the role of imagination and fiction in Otto Neurath’s work, particularly in his scientific utopianism. Using contemporary philosophical tools to understand different senses of the concept of imagination, this article argues that scientific utopianism proposes to employ scientific data and data analysis to construct imaginary social arrangements, and then to shift our attitude toward these constructions so that utopias can be compared as technological projects. This shift in attitude toward imaginary constructions is typical of utopia as a literary genre.
-
Probability is distinguished into two kinds: physical and epistemic, also, but less accurately, called objective and subjective. Simple postulates are given for physical probability, the only novel one being a locality condition. Translated into no-collapse quantum mechanics, without hidden variables, the postulates imply that the elements in any equiamplitude expansion of the quantum state are equiprobable. Such expansions therefore provide ensembles of microstates that can be used to define probabilities in the manner of frequentism, in von Mises’ sense (where the probability of an outcome is its frequency of occurrence in a suitable ensemble). The result is the Born rule. Since they satisfy our postulates, and in particular the locality condition (meaning no action-at-a-distance), these probabilities for no-collapse quantum mechanics are perfectly local, even though they violate Bell inequalities. The latter can be traced to a violation of outcome independence, used to derive the inequalities. But in no-collapse theory that is not a locality condition; it is a criterion for entanglement, not locality.
-
This paper explores zero and infinity as dual scalar operators that shape mathematical and physical structures across scales. From Cantorian set theory to black hole thermodynamics and fractal geometry, we argue that 0 and ∞ are not opposites but mirrors—reciprocally defining limits within a scalable universe.
-
Writing iambic pentameter is hard. Well maybe it’s easy for you, but we can at least agree that it’s not trivial: not just any ten-syllable line counts. There are rules! A theory of meter, whatever else it is, is an attempt to state those rules (for iambic and all other meters). …
-
Very short summary: In this essay, I discuss various recent controversial cases in Europe where political institutions have been criticized for making “undemocratic” decisions (Romania, Germany, France) to ask under which conditions the median voter’s views should rule. …