-
Ever since Carlo Rovelli introduced Relational Quantum Mechanics (RQM) to the public [1], it has attracted the interest and stimulated the imagination not only of physicists, but also, and in particular, of philosophers. There are several reasons why that is so. One of them is, quite simply, that a renowned and highly esteemed researcher had offered a new programmatic attempt to make sense of the longstanding puzzles at the foundations of quantum theory, which only happens every so often. What is more, the key to these puzzles was supposed to lie in an essentially conceptual move, in the exposure of an “incorrect notion” [1, p. 1637]. But the modern-day philosopher regards concepts as something like their natural hunting ground. If mention is made of the word, it makes them sit up as if somebody had yelled their name.
-
The idea that qualities can be had partly or to an intermediate degree is controversial among contemporary metaphysicians, but also has a considerable pedigree among philosophers and scientists. In this paper, we first aim to show that metaphysical sense can be made of this idea by proposing a partial taxonomy of metaphysical accounts of graded qualities, focusing on three particular approaches: one which explicates having a quality to a degree in terms of having a property with an in-built degree, another based on the idea that instantiation admits of degrees, and a third which derives the degree to which a quality is had from the aspects of multidimensional properties. Our second aim is to demonstrate that the choice between these accounts can make a substantial metaphysical difference. To make this point, we rely on two case studies (involving quantum observables and values) in which we apply the accounts in order to model apparent cases of metaphysical gradedness.
-
This paper investigates the epistemological problem of understanding the formative principles of living organisms, proposing that such knowledge requires a non-discursive mode of cognition. Revisiting the philosophies of Johann Wolfgang von Goethe and Rudolf Steiner, the study explores an alternative method of understanding life—not through mechanistic models or speculative vitalism, but through what is termed “intellectual intuition.” It is demonstrated how Goethe’s concept of the Urpflanze and Steiner’s interpretation enable a mental reconstruction of organic development as a lawful, self-generating process. Drawing parallels with Fichte’s notion of self-awareness through productive cognition, the paper argues that organisms can be known through a productive act of thinking in which the generative principle of life is intellectually intuited. This yields a scientifically grounded, though non-empirical, mode of “empirical vitalism,” in which the organism’s entelechy—its vital laws and force—can be observed through active, intuitive cognition. The study suggests that such a methodology could offer a viable epistemic and metaphysical framework to overcome the limitations of both reductionist biology and speculative vitalism.
-
A recent result from theoretical computer science provides for the verification of answers to the Halting Problem, even when there is no plausible means by which to derive those answers using a bottom-up approach. We argue that this result has profound implications for the existence of strongly emergent phenomena. In this work we develop a computer science-based framework for thinking about strong emergence and in doing so demonstrate the plausibility of strongly emergent phenomena existing in our universe. We identify six sufficient criteria for strong emergence and detail the actuality of five of the six criteria. Finally, we argue for the plausibility of the sixth criterion by analogy and a case study of Boltzmann brains (with additional case studies provided in the appendices).
-
I will look at Bohr’s contentious doctrine of classical concepts - the claim that measurement requires classical concepts to be understood - and argue that measurement theory supports a similar conclusion. I will argue that representing a property in terms of a metric scale, which marks a shift from the empirical process of measurement to the informational output, introduces the inherently classical assumption of definite states and precise values, thus fulfilling Bohr’s doctrine. I examine how realism about metric scales implies that Bohr’s doctrine is ontological, while more moderate coherentist or model-based approaches to realism render it epistemological. Regardless of one’s stance towards measurement realism, however, measurement cannot be entirely quantum and quantum mechanics can model only the empirical side of measurement, not its informational output. Finally, I discuss how this might influence our understanding of the measurement problem.
-
I argue that the epistemic aim of scientific theorizing (EAST) is producing theories with the highest possible number and degree of theoretical virtues (call this “TV-EAST”). I trace TV-EAST’s logical empiricist origins and discuss its close connections to Kuhn’s and Laudan’s problem-solving accounts of the aim of science. Despite TV-EAST’s antirealist roots, I argue that if one adopts the realist view that EAST is finding true theories, one should also endorse TV-EAST. I then defend TV-EAST by showing that it addresses the challenges raised against using the “aim of science” metaphor and offers significant advantages over the realist account.
-
This is Part II of my commentary on Stephen Senn’s guest post, Be Careful What You Wish For. In this follow-up, I take up two topics:
(1) A terminological point raised in the comments to Part I, and
(2) A broader concern about how a popular reform movement reinforces precisely the mistaken construal Senn warns against. …
-
Wouldn’t it be great if Democrats prioritized a drastic increase in American productivity, thereby deprioritizing safetyism, wokeness, and redistribution? That’s definitely my view, so I’m delighted that Ezra Klein and Derek Thompson (henceforth KT) have written a whole book — Abundance sans subtitle — defending that position. …
-
Advocates of the explanatory indispensability argument for platonism say two things. First, we should believe in the parts of our best scientific theories that are explanatory. Second, mathematical objects play an explanatory role within those theories. I give a two-part response. I start by using a Bayesian framework to argue that the standards that many say must be met to show that mathematical objects are dispensable are too demanding. In particular, nominalistic theories may be more probable than platonistic ones even if they are extremely complicated by comparison. This is true even if there are genuine cases of mathematical explanation in science. The point made here is a matter of principle, holding regardless of how one assesses nominalistic theories already on offer. I then examine my recent nominalization of second-order impure set theory in light of the correct, laxer standards. I make a tentative case that my nominalistic theory meets those standards, which would undermine the explanatory indispensability argument. While this case is provisional, I aim to bring attention to my nominalization and others in light of the revised standards for demonstrating dispensability.
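One schematic way to see the Bayesian point (an illustration of the general form of the argument, not necessarily the paper’s own derivation): write N for a nominalistic theory, Pl for its platonistic rival, and E for the evidence. Then

    \frac{P(N \mid E)}{P(Pl \mid E)} = \frac{P(E \mid N)}{P(E \mid Pl)} \cdot \frac{P(N)}{P(Pl)}

If both theories accommodate E about equally well, the likelihood ratio is close to 1 and the comparison is settled by the priors; even a heavy complexity penalty against N in the prior can be outweighed if the platonistic commitment to abstract objects is itself assigned a sufficiently low prior credence. That is why the point can hold regardless of how complicated the available nominalistic theories are.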
-
Scientific realism is often presupposed in the background assumptions of some relevant versions of anti-exceptionalism about logic. We argue that this is a sort of sociological contingency rather than a metaphilosophical necessity. Drawing parallels with the metaphysics of science (as applied to quantum foundations), we try to bring the realist assumptions of anti-exceptionalism to light and to demotivate the necessary connection between realism and anti-exceptionalism, briefly exploring the possibility of adopting antirealism as the background default view of science instead.
-
Recent philosophical literature on the epistemology of measurement has relegated measurement uncertainty to a secondary issue, concerned with characterizing the quality of a measurement process or its product. To reveal the deeper epistemological significance of uncertainty, we articulate the problem of usefulness, which is concerned with the tension between the specificity of the conditions under which particular measurements are performed and the broader range of conditions in which measurement results are intended to be – and are – used. This is simultaneously an epistemological and a practical problem.
-
The theory of quasi-truth was developed by Newton da Costa and collaborators as a more realistic account of truth, one that encompasses the incompleteness and inconsistency of scientific knowledge. Intuitively, the idea is that truth is reached when consensus is established at the end of inquiry; until then, we have something less than the whole truth, namely partial or quasi-truth. Formally, the view faces some challenges that have recently been addressed in the literature; they concern a mismatch between the formalism on offer and the claims it is expected to formalize. In this paper we draw inspiration from quasi-truth theory to develop an account of consensus in science encompassing the notion of quasi-truth. We not only present the formal system capturing the idea of a scientific consensus, but also show how quasi-truth may be represented within it. We compare the original quasi-truth approach to ours, and argue that the latter is able to face some of the difficulties that plagued the former.
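For orientation, the formal notion in the background (the standard da Costa–French definition of quasi-truth, stated here in its textbook form rather than as this paper’s own system):

    A partial (pragmatic) structure is \mathcal{A} = \langle D, \{R_k\}_{k \in K}, P \rangle, where each partial
    relation R_k = \langle R_k^{+}, R_k^{-}, R_k^{0} \rangle sorts the tuples over D into those known to satisfy R_k,
    those known not to satisfy it, and those left undefined, and P is a set of accepted sentences.

    A total structure \mathcal{B} on D is \mathcal{A}-normal iff it extends every R_k and satisfies every member of P.

    A sentence S is quasi-true in \mathcal{A} iff S is true in some \mathcal{A}-normal structure \mathcal{B}.

The mismatch worries mentioned above concern how faithfully this apparatus captures the informal gloss of truth reached at the end of inquiry; the consensus-based system developed in the paper is offered as a way to face those difficulties.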
-
Anyone engaging with the history and philosophy of pseudoscience, particularly the demarcation problem, will quickly land on Karl Popper and the campaign of the Vienna Circle of logical positivists against irrational metaphysics. The demarcation problem – how to identify the hallmarks of a serious and universal science-pseudoscience distinction – began with demarcating science from metaphysical fraud and dilettantism. Not much is known, however, about the Circle’s attitude towards typical pseudoscientific activities like parapsychology and psychic phenomena, spiritualism, psychoanalysis, and the social role and responsibility of scientific philosophy with regard to fringe and pseudoscientific endeavors. This paper provides the first systematic approach to the early history of the demarcation problem, with a special focus on logical positivism, which is supposed to be the standard-bearer of a rational, socially engaged but fallible scientific philosophy in demented times. As it turns out, most logical positivists were not just interested in pseudoscience as skeptical experimenters, but viewed it as holding various values, merits, and promises that they even imagined to be compatible with their empiricist and scientific world conception.
-
We argue that semiclassical gravity can be rendered consistent by assuming that quantum systems only emit a gravitational field when they interact with stable determination chains (SDCs), which are specific chains of interactions modeled via decoherence and test functions obeying a set of conditions. When systems are disconnected from SDCs, they do not emit a gravitational field. This denies the universality of gravity while upholding a version of the equivalence principle. We argue that this theory can be tested by experiments that probe the gravitational field emitted by isolated systems, as in gravcat experiments, or the gravitational interactions between entangled systems, as in the Bose-Marletto-Vedral (BMV) experiment. Our theory fits into a new framework which holds that, in the absence of certain conditions, quantum systems cannot emit a gravitational field. There are many possible conditions for systems to emit a gravitational field, and we adopt a subset of them. We show how this subset of conditions provides multiple benefits beyond rendering semiclassical gravity consistent, including deriving the value of the cosmological constant from first principles and explaining why the vacuum does not gravitate.
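Schematically (a reconstruction from the abstract above, not the authors’ own equations), the standard semiclassical sourcing of curvature by the expectation value of the stress-energy operator would be replaced by a source that is switched on only for systems connected to a stable determination chain:

    G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G \, \langle \hat{T}_{\mu\nu} \rangle
    \quad\longrightarrow\quad
    G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G \, \chi_{\mathrm{SDC}} \, \langle \hat{T}_{\mu\nu} \rangle,
    \qquad \chi_{\mathrm{SDC}} \in \{0, 1\}

Here the indicator \chi_{\mathrm{SDC}} is a placeholder for the decoherence and test-function conditions; systems with \chi_{\mathrm{SDC}} = 0 simply do not gravitate, which is how the proposal denies the universality of gravity.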
-
My point is simple: knowledge is knowledge. Where it comes from doesn’t matter to its epistemic status. What matters is whether it deserves to be believed. The scientific revolution has provided a general approach – systematic inquiry into the independent evidential basis of claims (e.g. …
-
Hempel’s dilemma is the name philosophers of mind give to an argument (Crane and Mellor, 1990; Melnyk, 1997) against metaphysical physicalism, the thesis that everything that exists is either ‘physical’ or ultimately depends on the ‘physical’. The argument is understood as a challenge to the idea of fixing what is ‘physical’ by appealing to a theory of physics. The dilemma, briefly, goes as follows. On the one hand, if we choose a current theory of physics to fix what is ‘physical’, then, since our current theories of physics are very likely incomplete, the so-articulated metaphysical physicalism is very likely false. On the other hand, if we choose a future theory of physics to fix what is ‘physical’, then, since future theories of physics are currently unknown, the so-articulated metaphysical physicalism has indeterminate meaning. Thus, it seems we can rely neither on current nor on future theories of physics to satisfactorily articulate metaphysical physicalism. Recently, Firt et al. (2022) argued that the dilemma extends to any theory that gives a deep-structure and changeable account of experience (including dualistic theories, although cf. Buzaglo, 2024).
-
David Suzuki is an 89-year-old Canadian geneticist, science broadcaster and environmental activist. In this interview he says some things that I’ve come to agree with. • ‘It’s too late’: David Suzuki says the fight against climate change is lost, iPolitics, 2 July 2025. …
-
In philosophy of science, the pseudosciences (like cryptozoology, homeopathy, Flat-Earth Theory, anti-vaccination activism, etc.) have been treated mainly negatively. They are viewed not simply as false but even as dangerous, since they try to mimic our best scientific theories, thus gaining respect and trust from the public without the appropriate credentials. As a result, philosophers have traditionally put considerable effort into demarcating genuine sciences and scientific theories from pseudoscience. Since these general attempts at demarcation have repeatedly been shown to break down, the present paper takes a different and somewhat more positive approach to the study of pseudoscience. My main point is not that we should embrace and accept the pseudosciences as they are, but rather that there are indeed valuable and important lessons inherent in the study of pseudoscience, and the different sections of the paper list at least six of them. By showing, through numerous examples, how (the study of) pseudoscience can teach us something about science, ourselves, and society, the paper makes the case that, as philosophers, we should devote more time and energy to engaging with such beliefs and theories to help remedy their harmful effects.
-
The belief that beauty leads to truth is prevalent among contemporary physicists. Far from being a private faith, it operates as a methodological guiding principle, especially when physicists have to develop theories without new empirical data.
-
Scenarios and pathways, as defined and used in the “SSP-RCP scenario framework”, have been key to the last decade’s climate change research and to the latest report of the Intergovernmental Panel on Climate Change (IPCC). In this framework, the Shared Socioeconomic Pathways (SSPs) consist of a limited set of alternative socioeconomic futures, represented both in short qualitative narratives and in quantitative projections of key drivers. One important use of the computationally derived SSP scenarios is to perform mitigation analysis and present a “manageable” set of options to decision-makers. However, all SSPs, and derivatively all SSP scenarios in this framework, assume a globally growing economy up to 2100. In practice, this amounts to a value-laden restriction of the space of solutions presented to decision-makers, falling short of the IPCC’s general mandate of being “policy-relevant and yet policy-neutral, never policy-prescriptive”. Yet the Global Economic Growth Assumption (GEGA) can be challenged, and in practice is challenged, by post-growth scholars.
-
Robustness of AI alignment is one of the safety issues of large language models. Can we predict how many mistakes a model will make when responding to a restricted request? We show that when access to the model is limited to in-context learning, the number of mistakes can be proved inapproximable, which can lead to unpredictability of the model’s alignment. Counterintuitively, this is not entirely bad news for AI safety. Attackers might not be able to easily misuse in-context learning to break the model’s alignment in a predictable manner, because the mistake bounds of the safe responses used for alignment can be proved inapproximable. This inapproximability can hide the safe responses from attackers and make the model’s alignment unpredictable. If it were possible to keep the safe responses from attackers, responsible users would still benefit from testing and repairing the model’s alignment despite its possible unpredictability. We also discuss challenges involved in ensuring democratic AI alignment with limited access to safe responses, which helps us to make the model’s alignment unpredictable for attackers.
-
Casajus (J Econ Theory 178, 2018, 105–123) provides a characterization of the class of positively weighted Shapley values for finite games from an infinite universe of players via three properties: efficiency, the null player out property, and superweak differential marginality. The latter requires two players’ payoffs to change in the same direction whenever only their joint productivity changes, that is, their individual productivities stay the same. Strengthening this property into (weak) differential marginality yields a characterization of the Shapley value. We suggest a relaxation of superweak differential marginality into two subproperties: (i) hyperweak differential marginality and (ii) superweak differential marginality for infinite subdomains. The former (i) only rules out changes in the opposite direction. The latter (ii) requires changes in the same direction for players within certain infinite subuniverses. Together with efficiency and the null player out property, these properties characterize the class of weighted Shapley values.
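A minimal numerical sketch of the object being axiomatized (a toy example, not taken from the paper): for positive weights w, a positively weighted Shapley value can be computed from Harsanyi dividends, splitting each coalition’s dividend \Delta_v(S) among its members in proportion to their weights, \varphi_i^w(v) = \sum_{S \ni i} \frac{w_i}{\sum_{j \in S} w_j} \Delta_v(S).

    from itertools import combinations

    def subsets(players):
        """All sub-coalitions of a tuple of players, including the empty one."""
        for r in range(len(players) + 1):
            yield from combinations(players, r)

    def dividend(v, S, cache):
        """Harsanyi dividend: Delta_v(S) = sum over T subseteq S of (-1)^(|S|-|T|) * v(T)."""
        key = tuple(sorted(S))
        if key not in cache:
            cache[key] = sum((-1) ** (len(key) - len(T)) * v(T) for T in subsets(key))
        return cache[key]

    def weighted_shapley(players, v, w):
        """Positively weighted Shapley value: each dividend is split in proportion to the weights."""
        cache, phi = {}, {i: 0.0 for i in players}
        for S in subsets(players):
            if not S:
                continue
            d = dividend(v, S, cache)
            total = sum(w[j] for j in S)
            for i in S:
                phi[i] += w[i] / total * d
        return phi

    # Toy game: the coalition {1, 2} (and any superset) is worth 1; player 3 is a null player.
    players = (1, 2, 3)
    v = lambda S: 1.0 if {1, 2} <= set(S) else 0.0

    print(weighted_shapley(players, v, {1: 1, 2: 1, 3: 1}))  # equal weights: 1 and 2 get 0.5 each, 3 gets 0
    print(weighted_shapley(players, v, {1: 2, 2: 1, 3: 1}))  # player 1 weighted twice: 1 gets 2/3, 2 gets 1/3, 3 gets 0

With equal weights this reduces to the Shapley value; in both runs the payoffs sum to v(N) = 1 (efficiency) and the null player 3 receives nothing, while the weights only affect how the productive pair {1, 2} splits its dividend.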
-
Despite their successes at prediction and classification, deep neural networks (DNNs) are often claimed to fail when it comes to providing any understanding of real-world phenomena. However, recently, some authors have argued that DNNs can provide such understanding. To resolve this controversy, I first examine under which conditions DNNs provide humans with explanatory understanding in a clearly defined sense that refers to a simple setting. I adopt a systematic approach that draws on theories of explanation and explanatory understanding, but avoid dependence on any specific account by developing broad conditions of explanatory understanding that leave space for filling in the details in several alternative ways. I argue that the conditions are difficult to satisfy however these details are filled in. The main problem is that, to provide explanatory understanding in the sense I have defined, a DNN has to contain an explanation, and scientists typically do not know whether it does. Accordingly, they cannot feel committed to the explanation or use it, which means that other conditions of explanatory understanding are not satisfied. Still, in some attenuated senses, the conditions can be fulfilled. To complete my conciliatory project, I further show that my results so far are compatible with using DNNs to infer explanatorily relevant information in a thorough investigation. This is what the more optimistic literature on DNNs has focused on. In sum, then, the significance of DNNs for understanding real-world systems depends on what it means to say that they provide understanding, and on how humans use them.
-
Behavioral innovativeness—the propensity of an individual organism or higher group to innovate—is frequently invoked as a measurable trait allowing for cross-species comparisons. Individuals or species are often regarded as more innovative or less innovative than others, implying that we can rank order the degree of innovativeness along a single dimension. This paper defends a novel multidimensional understanding of behavioral innovativeness in which innovativeness can be modulated with respect to the generation and capitalization of opportunities, as well as the effectiveness and depth of the innovative behaviors. Besides being multidimensional, innovativeness is also multilevel. Here we show how innovativeness at one level (such as the species level) does not automatically translate to innovativeness at another (such as the organism level) and discuss why this matters for cross-species comparisons.
-
Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: If we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay examines a troubling but underexplored feature of the ancestor-simulation hypothesis: the termination risk posed by both ancestor-simulation technology and experimental probes into whether our world is an ancestor simulation. This essay evaluates the termination risk by using extrapolations from current computing practices and simulation technology. The conclusions, while provisional, have great implications for debates concerning the fundamental nature of reality and the safety of contemporary physics.
-
I present, discuss, and critically evaluate Wallace’s account of functional (non-compositional) emergence as an explanation of macroscopic phenomena within Everettian quantum mechanics. In brief, my main argument against this view is that it provides an unsatisfactory explanation, since it employs ‘effective ontologies’ that are defined in virtue of their usefulness but do not possess any deeper justification.
-
A minimal realist thinks we are justified in believing in unobservable entities on explanatory grounds, but holds that we should be cautious in allowing non-empirically justified entities into our ontology. In this paper I argue that a minimalist would find my proposal for an ontology of fundamental entities without fundamental properties the best balance between empirical adequacy, explanatory power, and physical justification.
-
I discuss the distinction between extrinsic and intrinsic approaches to reformulating a theory with symmetries, and offer an account of the special value of intrinsic formalisms, drawing on a distinction between which mathematical expressions are meaningful within an extrinsic formalism and which are not.
-
This work explores the connection between logical independence and the algebraic structure of quantum mechanics. Building on results by Brukner et al., it introduces the notion of onto-epistemic ignorance: situations in which the truth of a proposition is not deducible due to an objective breakdown in the phenomenal chain that transmits information from a system A to a system B, rather than to any subjective lack of knowledge. It is shown that, under such conditions, the probabilities accessible to a real observer are necessarily conditioned by decidability and obey a non-commutative algebra, formally equivalent to the fundamental postulates of quantum mechanics.
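A concrete toy illustration of order-dependent, decidability-conditioned probabilities (standard qubit probability under the Lüders rule, not the paper’s own formalism):

    import numpy as np

    # Two propositions about a qubit that cannot be decided jointly:
    # A = "spin up along z", B = "spin up along x".
    A = np.array([[1, 0], [0, 0]], dtype=complex)         # projector onto |0>
    B = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # projector onto |+>

    rho = np.array([[1, 0], [0, 0]], dtype=complex)       # system prepared in |0>

    def prob_then(rho, first, second):
        """Probability that 'first' is found true and then 'second' is found true (Lüders rule)."""
        return np.real(np.trace(second @ first @ rho @ first @ second))

    print(prob_then(rho, A, B))       # 0.5  : decide A, then B
    print(prob_then(rho, B, A))       # 0.25 : decide B, then A
    print(np.allclose(A @ B, B @ A))  # False: the two projectors do not commute

The probability an observer can assign to the second question depends on whether the first has already been decided; this order sensitivity is the kind of conditioning by decidability that the abstract links to a non-commutative probability algebra.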
-
In the 1960s and 1970s a series of observations and theoretical developments highlighted the presence of several anomalies which could, in principle, be explained by postulating one of the following two working hypotheses: (i) the existence of dark matter, or (ii) the modification of standard gravitational dynamics at low accelerations. In the years that followed, the dark matter hypothesis as an explanation for dark matter phenomenology attracted far more attention than the hypothesis of modified gravity, and the latter is largely regarded today as a non-viable alternative. The present article takes an integrated history and philosophy of science approach in order to identify the reasons why the scientific community mainly pursued the dark matter hypothesis in the years that followed, as opposed to modified gravity. A plausible answer is given in terms of three epistemic criteria for the pursuitworthiness of a hypothesis: (a) its problem-solving potential, (b) its compatibility with established theories and the feasibility of incorporation, and (c) its independent testability. A further comparison between the problem of dark matter and the problem of dark energy is also presented, explaining why in the latter case the situation is different and modified gravity is still considered a viable possibility.