-
This article examines the epistemological and ontological consequences of ultra-specialization in contemporary science. We argue that the increasing fragmentation of knowledge undermines intersubjective intelligibility, producing a form of objectivity detached from shared meaning and ontological resistance. Drawing on Kantian and phenomenological traditions, particularly the works of Husserl and Bachelard, we show that ultra-specialization leads to a redefinition of the scientific object as a procedural artifact rather than a point of rational encounter. We introduce the distinction between the intentionality of the scientist and the systemic intention of science, highlighting the dissociation between epistemic agency and formalized knowledge production. This condition generates cognitive opacity, institutional technocracy, and political distrust. In response, we propose structural reforms: deep interdisciplinarity, reintroduction of philosophical reflection within scientific practice, and the creation of epistemic translation platforms. Ultimately, we advocate for a pluralistic and reflexive model of science grounded not in technocratic closure but in the intersubjective articulation of reality. Science must not only produce valid results—it must make them intelligible and meaningful.
-
tation of quantum mechanics and our world of experience, and begins to bridge it. §1 states the problem with Abner Shimony's "Phenomenological principle"; §2 briefly presents the interpretation in connection with standard quantum mechanics; §3 presents the measurement problem in connection with the Phenomenological principle, the standard way out of it, and why the "non-individuals" interpretation of quantum mechanics should not follow it; §4, finally, shows two closed avenues for such an interpretation (Bohmian mechanics and the Modal-Hamiltonian Interpretation), and two alternatives open to it (Everettian quantum mechanics and spontaneous collapse theories).
-
Scientific realism is typically associated with metaphysics. One current incarnation of this association concerns the requirement of a metaphysical characterization of the entities one is a realist about. This is sometimes called "Chakravartty's Challenge", and it codifies the claim that without a metaphysical characterization, one does not have a clear picture of the realist commitments one is undertaking. The required connection between metaphysics and science naturally raises the question of whether such a demand can be appropriately fulfilled, and of how metaphysics engages with science in order to produce what is called "scientific metaphysics". Here, we map some of the options available in the literature, generating a conceptual spectrum according to how each view brings metaphysics and science together. This is done with the purpose of illuminating the current debate on the possibility of the epistemic warrant that science could grant to such a metaphysics, and of how different positions address this thorny issue.
-
We report new findings from an empirical study of scientists from seven disciplines and scholars working in history and philosophy of science (HPS) regarding their views about scientific realism. We found that researchers’ general disposition to endorse or reject realism was better predicted by their views regarding scientific progress than their views about the mind-independence of scientific phenomena or other common theses in the realism debate. Age and gender also significantly predicted endorsement of scientific realism. Implications of these findings for philosophical debates about scientific realism and scientific progress are considered.
-
We present the case for a fixed, finite number of discrete, non-interacting, spatiotemporally finite, decohered spacetimes emerging from Everett’s Universal Wave Function, which we refer to as “Many Discrete Worlds” (MDW). No universes “split” in MDW. We argue that a Many Worlds Interpretation (MWI) branching structure that emerges after decoherence is equivalent to individual, weighted universes, each of which is divided into an immense number of discrete, identical copies, the number being proportional to the individual weighting. This ensures that repeated experiments within any such universe will demonstrate consistency with the Born rule. Each of these universes should be considered as complete, containing every decohered outcome over the entire extent of its spacetime, including every event/interaction occurring beyond any cosmological particle horizon for the entire duration of the given universe. We show that a countably infinite number of interactions needs an uncountably infinite number of universes, and show why measures such as the Lebesgue measure will fail in that case, with the result that the Born rule would not be demonstrable. This leads to the conclusion that the number of universes in the multiverse must be finite and, as a surprising corollary, that the universes themselves are finite, both in space and duration.
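The counting idea in this abstract can be illustrated with a toy simulation (ours, not the paper's): duplicate each decohered outcome into a number of identical universes proportional to its Born weight, then sample universes uniformly. The two-outcome weights and the resolution of 100 copies are arbitrary illustrative choices.

```python
import random
from collections import Counter

random.seed(0)

# Toy two-outcome measurement with Born weights |amplitude|^2.
weights = {"up": 0.64, "down": 0.36}

# Discretize: a finite number of identical "universes" per outcome,
# proportional to its weight (here a resolution of 100 copies in total).
resolution = 100
universes = [o for o, w in weights.items() for _ in range(round(w * resolution))]

# Uniform sampling over universes then recovers Born-rule frequencies.
n = 100_000
counts = Counter(random.choice(universes) for _ in range(n))
freq_up = counts["up"] / n
print(f"frequency of 'up': {freq_up:.3f}")  # close to 0.64
```

The key design point the abstract argues for appears here in miniature: the duplication only works because the weights admit a common finite resolution, which is why the number of copies, and hence of universes, must be finite.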
-
I. Abstract
II. Introduction
-
Existing characterizations of ‘trace’ in the philosophy of the historical sciences agree that traces need to be downstream of the long-past event under investigation. I argue that this misses an important type of trace used in historical reconstructions. Existing characterizations of traces focus on what I propose to call direct traces. What I call circumstantial traces (i) share a common cause with a past event and (ii) allow an inference to said event via an intermediate step. I illustrate the significance of checking the alignment between direct and circumstantial traces in historical reconstructions through a case study from (micro-)palaeontology.
-
We propose a framework for the analysis of choice behaviour when the latter is made explicitly in chronological order. We relate this framework to the traditional choice theoretic setting from which the chronological aspect is absent, and compare it to other frameworks that extend this traditional setting. Then, we use this framework to analyse various models of preference discovery. We characterise, via simple revealed preference tests, several models that differ in terms of (1) the priors that the decision-maker holds about alternatives and (2) whether the decision-maker chooses period by period or uses her knowledge about future menus to inform her present choices. These results provide novel testable implications for the preference discovery process of myopic and forward-looking agents.
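As a generic illustration of the kind of "simple revealed preference tests" mentioned here (this sketch is ours and far cruder than the paper's characterisations), the Weak Axiom of Revealed Preference can be checked directly on a list of observed menu-choice pairs:

```python
# Minimal revealed-preference consistency check (WARP): if x is chosen
# from a menu containing y, then y must never be chosen from a menu that
# also contains x.

def satisfies_warp(observations):
    """observations: list of (menu, choice) pairs, each menu a frozenset."""
    revealed = set()  # (x, y): x directly revealed preferred to y
    for menu, choice in observations:
        for other in menu - {choice}:
            revealed.add((choice, other))
    # A WARP violation is a pair revealed in both directions.
    return all((y, x) not in revealed for (x, y) in revealed)

obs_ok = [(frozenset({"a", "b"}), "a"), (frozenset({"a", "b", "c"}), "a")]
obs_bad = obs_ok + [(frozenset({"a", "b"}), "b")]
print(satisfies_warp(obs_ok), satisfies_warp(obs_bad))  # True False
```

In the paper's chronological setting the data would additionally carry period indices, so that early "exploratory" choices of a preference-discovering agent need not be held to this static standard.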
-
Consultant Statistician
Edinburgh
Relevant significance? Be careful what you wish for
Despised and Rejected
Scarcely a good word can be had for statistical significance these days. We are admonished (as if we did not know) that just because a null hypothesis has been ‘rejected’ by some statistical test, it does not mean it is not true and thus it does not follow that significance implies a genuine effect of treatment. …
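The commonplace being rehearsed above, that rejection does not prove a genuine treatment effect, is easy to see by simulation (our illustration, not Senn's): when the null is exactly true, a 5%-level test still rejects about 5% of the time.

```python
import random
import statistics

random.seed(1)

# When the null hypothesis is exactly true (zero treatment effect), a
# two-sided z-test at the 5% level still "rejects" in about 5% of
# experiments: rejection alone is no proof of a genuine effect.
def rejects_true_null(n=50, z_crit=1.96):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # true mean is 0
    z = statistics.fmean(sample) * n ** 0.5              # known sigma = 1
    return abs(z) > z_crit

trials = 10_000
rate = sum(rejects_true_null() for _ in range(trials)) / trials
print(f"rejection rate under a true null: {rate:.3f}")  # near 0.05
```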
-
We critically examine the revised general relativity (GR) framework proposed by Capozziello, De Bianchi, and Battista [1, 2]. The authors introduce a Lorentzian-Euclidean Schwarzschild metric with a signature change at the event horizon (r = 2M ), claiming that radial geodesics halt at r = 2M with infinite proper time, avoiding the singularity at r = 0. We argue that their framework lacks physical justification, producing unphysical dynamics in the Lorentzian region (r > 2M ), where the metric is identical to Schwarzschild. Their revisions violate fundamental GR principles—including the equivalence principle, energy conservation, geodesic well-definedness, and consistency with the metric’s geometry—without empirical or theoretical grounding. Notably, their modified energy definition and geodesic equation yield an infinite proper time, contradicting GR’s finite result. We address the potential defense that these violations are expected in a revised GR, demonstrating that their framework’s deviations are ad hoc and undermine its validity as a physically meaningful extension.
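For context, the finite proper time that the revised framework contradicts is a standard textbook computation, stated here independently of the paper under review. For radial free fall from rest at infinity in the Schwarzschild geometry (geodesic energy E = 1, geometric units G = c = 1):

```latex
\frac{dr}{d\tau} = -\sqrt{\frac{2M}{r}}
\quad\Longrightarrow\quad
\tau(r_1 \to 0) = \int_0^{r_1} \sqrt{\frac{r}{2M}}\,\mathrm{d}r
= \frac{2}{3}\,\frac{r_1^{3/2}}{\sqrt{2M}},
```

which is finite for any finite starting radius r_1: the infalling observer crosses r = 2M and reaches r = 0 in finite proper time. This is the GR result that the authors' modified energy definition and geodesic equation fail to reproduce.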
-
Jody Azzouni (2012b; 2010; 2009; 2004a; 2004b) defends a "deflationary nominalism": deflationary in that mathematical sentences are true in a non-correspondence sense, and nominalist in that mathematical terms—appearing in sentences of scientific theory or otherwise—refer to nothing at all. In this paper, I focus on Azzouni's positive account of what should be said to exist. The quaternary "sufficient condition" (Azzouni 2004b: 384) for posit existence, which Azzouni (2012b: 956) calls "thick epistemic access" (hereafter TEA), is my target: I argue that TEA surreptitiously reifies some mathematical entities. The mathematical entity that I argue TEA reifies is the Fourier harmonic, an infinite-duration sinusoid applied throughout contemporary engineering and physics. The Fourier harmonic exists for the deflationary nominalist, I claim, because the harmonic plays what Azzouni calls an "epistemic role" (see section 2) in the commonplace observation of macroscopic entities, for example in viewing a vase with the human eye. Thus, I present …

More precisely, Azzouni's deflationism interprets truth as nothing above and beyond the "generalization" expressed by the Tarski biconditional (e.g.): "Snow is white" is true iff snow is white (Azzouni 2010: 19). Hence what redeems that biconditional, in Azzouni's account, is neither strictly correspondence, nor coherence, nor indispensability of the truth idiom to language. On the other hand, Azzouni rejects truth pluralism (see Azzouni 2010: §§4.7-4.8). The best articulation of Azzouni's deflationary account of truth in science, mathematics, and applied mathematics may be Azzouni (2009), but see also Azzouni (2010: Chap. 4). The details will not concern me in this paper.
-
Reductive doxastic moral relativism is the view that an action type’s being morally wrong is nothing but an individual or society’s belief that the action type is morally wrong. But this is viciously circular, since we reduce wrongness to a belief about wrongness. …
-
This chapter begins by explaining why it is important to attend to duties when theorizing human rights. It then assesses four constraints on the duties associated with human rights: the constraints of correlativity, ability, agency, and demandingness. Finally, it compares two approaches to the duties associated with human rights: practice-based approaches and naturalistic approaches. It concludes that both approaches successfully produce duties, though neither abides by all four constraints.
-
One way to interpret the difference between presentism and eternalism is perspectivally. On this view, from a perspective outside of time we should adopt eternalism, while from a perspective embedded within time we should be presentists. I will use the perspectival view to make two central claims about the probabilities in statistical mechanics. First, the perspectival view can help us respond to the challenge that these probabilities are merely epistemic, subjective, or anthropocentric. Second, we should treat the future as metaphysically open, due to both probabilities in statistical mechanics and the localised nature of the present.
-
There are two main styles of interpreting the quantum state: either focusing on the fundamentality of the quantum state (a wavefunction or state realist view), or on how projection operators represent observable properties (an observable-first approach). Rather than being incompatible, I argue that these correspond to taking a 3rd person and 1st person perspective respectively. I further contend that the 1st person perspective - and the observable-first approach that goes with it - is better suited to explain measurement, based on the way that the metrology literature, as well as the work of Bohr, characterises measurement through the properties of a system. Finally, I show how the 1st person, observable-first approach can emerge in the world through the process of decoherence, hence showing the compatibility of the two approaches and resolving the need to choose absolutely between them.
-
Classical liberalism tends to respond to the criticism of any voluntary market contract by promoting a wider choice of options and increased information and bargaining power, so that no one would seem to be 'forced' or 'tricked' into an 'unconscionable' contract. Hence, at first glance, the strict logic of the classical liberal freedom-of-contract philosophy would seem to argue against ever abolishing any mutually voluntary contract between knowledgeable and consenting adults. Yet modern liberal democratic societies have abolished (i.e., treated as invalid) at least three types of historical contracts: the voluntary slavery or perpetual servitude contract, the coverture marriage contract, and an undemocratic constitution establishing an autocratic government. Thus, the rights associated with those contracts are considered inalienable. This paper analyzes these three contracts and shows that there is indeed a deeper democratic or Enlightenment classical liberal tradition of jurisprudence that rules out those contracts. The 'problem' is that the same principles imply the abolition of the employment contract, the contract for renting human beings, which is the foundation for the economic system that is often (but superficially) identified with classical liberalism itself. Frank Knight is taken throughout as the exemplary advocate of the economics of conventional classical liberalism.
-
Despite decades of research in philosophy and cognitive science, the nature of concepts and the mechanisms underlying their change remain unresolved. Competing frameworks—externalist, inferentialist, embodied, and geometric—offer important insights but lack a unified account of how different types of concepts form, stabilize, and evolve. We propose a Systematic and Dynamic (SD) approach that examines mental content before and after concept formation, leading to the identification of inferential connections as ontologically distinct elements of conceptual architecture. Building on this foundation, we introduce the Inferential-Connection Mediated (ICM) model, which reconceptualizes concepts as dynamically structured entities composed of referential anchors—core subsets of inferential connections that fix reference—and broader networks that support reasoning, explanation, and communication. We distinguish among three types of inferential connections (observational, intentional, indirect) and classify four major concept types (theoretical, observational, subjective, and utilitarian), showing how differences in internal structure predict divergent developmental and evolutionary trajectories. The ICM model resolves long-standing theoretical tensions—such as inferentialism vs. externalism, atomism vs. empiricism, and relativism vs. realism—by offering a unified, structurally grounded account of conceptual stability and change. We invite interdisciplinary commentary on the model's implications for concept acquisition, reference, revision, and conceptual engineering across philosophy, cognitive science, linguistics, and artificial intelligence.
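One possible data-structure rendering of the ICM picture sketched above (every class and field name here is our own guess at a minimal skeleton, not the authors' formalism): a concept as a network of typed inferential connections with a distinguished anchor subset that fixes reference.

```python
from dataclasses import dataclass, field
from enum import Enum

# The three connection types named in the abstract.
class ConnectionType(Enum):
    OBSERVATIONAL = "observational"
    INTENTIONAL = "intentional"
    INDIRECT = "indirect"

@dataclass(frozen=True)
class InferentialConnection:
    source: str
    target: str
    kind: ConnectionType

@dataclass
class Concept:
    name: str
    connections: set = field(default_factory=set)
    anchor: set = field(default_factory=set)  # referential anchor

    def is_well_formed(self):
        # The anchor must be a core subset of the full network.
        return self.anchor <= self.connections

water = Concept("water")
h2o = InferentialConnection("water", "H2O", ConnectionType.INDIRECT)
wet = InferentialConnection("water", "wet", ConnectionType.OBSERVATIONAL)
water.connections = {h2o, wet}
water.anchor = {h2o}
print(water.is_well_formed())  # True
```

On this rendering, conceptual change that preserves the anchor while rewiring the broader network would model stability of reference through theory change, one of the tensions the ICM model is meant to resolve.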
-
This article introduces the concept of authoritarian recursion to describe how artificial intelligence (AI) systems increasingly mediate control across education, warfare, and digital discourse. Drawing on critical discourse analysis and sociotechnical theory, the study reveals how AI-driven platforms delegate judgment to algorithmic processes, normalize opacity, and recursively reinforce behavioral norms under the guise of neutrality and optimization. Case studies include generative AI models in classroom surveillance, autonomous targeting in military AI systems, and content curation logics in platform governance. Rather than treating these domains as disparate, the paper maps their structural convergence within recursive architectures of abstraction, surveillance, and classification. These feedback systems do not simply automate tasks—they encode modes of epistemic authority that disperse accountability while intensifying political asymmetries. Through cultural and policy analysis, the article argues that authoritarian recursion operates as a hybrid logic, fusing technical abstraction with state and market imperatives. The paper concludes by outlining implications for democratic legitimacy, human oversight, and the political design of AI governance frameworks.
-
This article reviews a 1906 paper by J. Henri Poincaré on statistical mechanics, with background in his earlier work and notable connections to J. Willard Gibbs. Poincaré's paper presents important ideas that are still relevant for understanding the need for probability in statistical mechanics. Poincaré understands the foundations of statistical mechanics as a many-body problem in analytical mechanics (reflecting his 1890 monograph on The Three-Body Problem and the Equations of Dynamics), possibly influenced by Gibbs's independent development published in his 1902 book, Elementary Principles in Statistical Mechanics. This dynamical-systems approach of Poincaré and Gibbs provides great flexibility, including applications to many systems besides gases. This foundation benefits from close connections to Poincaré's earlier work. Notably, Poincaré had shown (e.g. in his study of nonlinear oscillators) that Hamiltonian dynamical systems display sensitivity to initial conditions separating stable and unstable trajectories. In the first context this precludes proving the stability of orbits in the solar system; here it compels the use of ensembles of systems, for which the probability is ontic and frequentist and does not have an a priori value. Poincaré's key concepts relating to uncertain initial conditions and fine- and coarse-grained entropy are presented for the readers' consideration. Poincaré and Gibbs clearly both wanted to say something about irreversibility, but came up short.
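Sensitivity to initial conditions of the kind Poincaré identified can be shown with the Chirikov standard map, a kicked-rotor Hamiltonian system; this toy example is ours, not drawn from Poincaré's 1906 paper.

```python
import math

# Chirikov standard map: in its chaotic regime (kick strength k well
# above 1), two nearby initial conditions separate rapidly, illustrating
# why ensembles rather than single trajectories become indispensable.
def standard_map(p, x, k=5.0, steps=50):
    for _ in range(steps):
        p = (p + k * math.sin(x)) % (2 * math.pi)
        x = (x + p) % (2 * math.pi)
    return p, x

a = standard_map(1.0, 1.0)
b = standard_map(1.0, 1.0 + 1e-9)  # perturb the angle by one part in a billion
separation = abs(a[1] - b[1])
print(f"separation in angle after 50 kicks: {separation:.3f}")
```

After 50 kicks the billionth-part perturbation has been amplified by many orders of magnitude, which is exactly why an a priori sharp initial condition is unavailable in practice and a probability distribution over initial conditions must be used instead.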
-
Of all philosophers of the twentieth century, Karl Popper stands out as the one who did most to build bridges between the diverse academic disciplines. His first major work, Logik der Forschung (1934), concerns scientific method. Popper’s ideas were formed in the intellectual climate dominated by the logical positivism of the Wiener Kreis; despite a great diversity in academic interests, the members of the Vienna Circle wanted to reaffirm the scientific ethos of the Enlightenment ideal. Excited by the revolutionary ideas of Einstein (whom they engaged in both scientific and philosophical discussions), they believed that philosophy must play an active role in this new era by drawing as close to science as possible. Although Popper shared these general ideals, he strictly rejected all the main pillars of the positivist philosophy of science: inductivist logic of discovery, the verifiability principle and the concern with meaning. In single-handed opposition to this influential philosophical movement, Popper offered new solutions: a hypothetico-deductive view of science, based on falsifiability as the demarcation criterion and a denial of the claim that scientific theories could be verified. It is fair to say that the radicalism of Popper’s proposals caused an upheaval among philosophers of science, especially after the publication of his work in English in 1959.
-
How do biologists pursue generalizations given the heterogeneity of biological systems? This paper addresses this question by examining an aspect of scientific generalization that has received little philosophical attention: how scientists express generalizations. Although it is commonly assumed that a scientific generalization takes the form of a representation referring to a property that is shared across a range of things, scientists sometimes express their ideas about generality by displaying multiple representations in certain configurations. Such configurations highlight commonalities between different target systems without eliminating system-specific differences. I analyze visual representations in review articles about collective cell migration as a case study. This illustrates that different types of visualizations, including single diagrams and configurations of multiple representations, function in a complementary way to promote understanding of, and reasoning about, generality, specificity, and diversity of biological mechanisms. I also discuss roles of generalizations in scientific investigations more broadly. I argue that an important role of generalizations in scientific research is to mediate and facilitate cross-fertilization among studies of different target systems. Multiple generalizations in research on collective cell migration together provide perspectives from which different biological systems are characterized and compared. They also provide heuristic hypotheses for studying less-explored systems as well as a basis for comparing developmental, pathological, and regenerative processes. This study sheds new light on how scientists pursue generalizations while embracing system-specific details. It also suggests that philosophical discussions should pay more attention to not only what representations scientists construct, but also how they present such representations.
-
On a mathematically foundational level, our most successful physical theories (gauge field theories and general-relativistic theories) are formulated in a framework based on the differential geometry of connections on principal bundles. After reviewing the essentials of this framework, we articulate the generalized hole and point-coincidence arguments, examining how they bear on a substantivalist position towards bundle spaces. This question is then considered in light of the Dressing Field Method, which allows a manifestly invariant reformulation of gauge field theories and general-relativistic theories, making their conceptual structure more transparent: it formally implements the point-coincidence argument and thus allows one to define (dressed) fields and (dressed) bundle spaces immune to hole-type arguments.
-
We offer a category-theoretic representation of the process theory of causality. The new formalism allows process theorists to (i) explicate their explanatory strategies (etiological and constitutive explanations) using the compositional features of string diagrams; (ii) probabilistically evaluate causal effects through the categorical notion of functor; (iii) address the problem of explanatory irrelevance via diagram surgery; and (iv) provide a theoretical explanation for the difference between conjunctive and interactive forks. We also claim that the fundamental building blocks of the process theory—namely processes, interactions, and events—can be modeled using three types of morphisms. Overall, categorical modeling demonstrates that the philosophical theory of process causality possesses scientific rigor and expressive power comparable to those of its event-based counterparts, such as causal Bayes nets.
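The compositional-evaluation idea, morphisms composed in sequence and assigned probabilities functorially, can be sketched very crudely with column-stochastic matrices (our illustration only, not the paper's formalism):

```python
# A "process" from an n-state to an m-state system is modeled as a
# column-stochastic matrix; sequential composition of processes is
# matrix multiplication, and probabilistic evaluation pushes a
# distribution through a process.

def compose(g, f):
    """Matrix product g . f, with matrices as lists of rows."""
    return [[sum(g[i][k] * f[k][j] for k in range(len(f)))
             for j in range(len(f[0]))] for i in range(len(g))]

def apply(process, dist):
    """Push a probability distribution (column vector) through a process."""
    return [sum(row[j] * dist[j] for j in range(len(dist))) for row in process]

# Two noisy binary processes (columns sum to 1).
f = [[0.9, 0.2],
     [0.1, 0.8]]
g = [[0.7, 0.0],
     [0.3, 1.0]]

dist = [1.0, 0.0]  # start in state 0 with certainty
# Functoriality in miniature: evaluating the composite process agrees
# with evaluating stage by stage.
print(apply(compose(g, f), dist))
print(apply(g, apply(f, dist)))
```

The agreement of the two printed vectors is the functoriality property in miniature: evaluation respects composition, which is what lets a process theorist reason about a long causal chain by composing evaluations of its stages.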
-
T.M. Scanlon, following John Rawls, sought to change the landscape of moral theory by establishing an alternative to both intuitionism and consequentialism: contractualism. One of Scanlon's most prominent arguments for contractualism is that it alone captures the value of mutual recognition and the role of norms of recognition in enacting this ideal moral relationship. Moreover, Scanlon argues that this ideal moral relationship explains the distinctive authority and force of morality. We concur. Nevertheless, we wish to offer an alternative to Scanlon's account of mutual recognition and to the moral theory that emerges from it. Instead of construing mutual recognition in terms of justifiability to others, as Scanlon does, we propose to construe such relations as relations of caring solidarity with others as human. This alternative retains the overall benefits of the moral recognition approach while offering quite different structural features, including a different account of the scope of morality. This essay is programmatic. The primary goal is to disentangle the infrastructure of moral recognition from the specific idea of justifiability, thereby opening up a range of striking new questions for moral theory.
-
Very short summary: In this essay, I explore a potential tension in Chandran Kukathas’s account of the liberal archipelago, between the idea of morality conceived as a commons and the politics of indifference of the liberal state. …
-
Time was, philosophers were skeptics, looking down on the poor benighted masses, who think their opinions are knowledge when they really aren’t. Maybe Bloggs thinks there’s a tree in the courtyard, but ah, a brain in a vat that was fed experiences just like those he’s having would think the same. …
-
The paper offers a new analysis of the German particle wohl as akin to the Italian futuro. They are both, we argue, necessity modals, but without bias. They are therefore more flexible than MUST and usable in situations with less reliable evidence or heightened uncertainty, such as in reflective questions, where they create Socratic inquisitiveness: a self-directed state of inquisitiveness whose goal is to introspect rather than to seek information.
-
Three guys claim that any heavy chunk of matter emits Hawking radiation, even if it’s not a black hole:
• Michael F. Wondrak, Walter D. van Suijlekom and Heino Falcke, Gravitational pair production and black hole evaporation, Phys. …
-
Errorstatistics.com has been extremely fortunate to have contributions by leading medical statistician, Stephen Senn, over many years. Recently, he provided me with a new post that I’m about to put up, but as it builds on an earlier post, I’ll reblog that one first. …
-
We introduce a projection-based semantic interpretation of differentiation within the Universal Theory of Differentiation (UTD), reframing acts of distinction as structured projections of relational patterns. Building on UTD’s categorical and topos-theoretic foundations, we extend the formalism with a recursive theory of differentiational convergence. We define Stable Differentiational Identities (SDIs) as the terminal forms of recursive differentiation, prove their uniqueness and hierarchical organization, and derive a transparency theorem showing that systems capable of stable recursion can reflect upon their own structure. These results support an ontological model in which complexity, identity, and semantic expressibility emerge from structured difference. Applications span logic, semantics, quantum mechanics, and machine learning, with experiments validating the structural and computational power of the framework.