  1.
    For billions of people, the internet has become a second home. It is where we meet friends and strangers, where we organise and learn, debate, deceive, and do business. In some respects, it is like the town square it was once claimed to be, while in others, it provides a strange new mode of interaction whose influence on us we are yet to understand. This collection of papers aims to give a short indication of some of the exciting philosophical work being carried out at the moment that addresses the novel aspects of online communication. The topics range from the expressive functions of emoji to the oppressive powers of search engines.
    Found 6 hours, 50 minutes ago on Eliot Michaelson's site
  2.
    Del Santo and Gisin have recently argued that classical mechanics exhibits a form of indeterminacy and that by treating the observables of classical mechanics with real number precision we introduce hidden variables that restore determinacy. In this article we introduce the conceptual machinery required to critically evaluate these claims. We present a characterization of indeterminacy which can capture both quantum indeterminacy and the classical indeterminacy of Del Santo and Gisin. This allows us to show that there is an important difference in kind between the two: their classical indeterminacy can be resolved with hidden variables in a manner which is not possible for quantum indeterminacy.
    Found 10 hours, 11 minutes ago on PhilSci Archive
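    A minimal sketch of the point at issue (not from the paper, and using an invented toy example): in a chaotic classical system, digits of the initial condition lying far beyond any finite measurement precision eventually dominate the dynamics, which is why assigning observables full real-number precision functions like positing hidden variables.
      # Illustration only: two logistic-map trajectories whose initial conditions
      # differ at the 15th decimal place diverge after a few dozen iterations,
      # so the "hidden" far-out digits end up fixing the outcome.
      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      x_a, x_b = 0.123456789012345, 0.123456789012346  # differ at the 15th decimal
      for step in range(60):
          x_a, x_b = logistic(x_a), logistic(x_b)
          if step % 10 == 9:
              print(f"step {step + 1:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")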
  3.
    The self-declared focus of Gordon Belot’s new book, Accelerating Expansion: Philosophy and Physics with a Positive Cosmological Constant, is de Sitter spacetime. Belot discusses its mathematical structure, the central role which it plays in contemporary relativistic cosmology, and—perhaps most importantly for the readers of this journal—the philosophical and conceptual puzzles that arise from taking this central role seriously. The book aims to be a graduate-student-friendly invitation to all things de Sitter, and the main text is accompanied by mathematical exercises and more philosophically-oriented open questions.
    Found 10 hours, 11 minutes ago on PhilSci Archive
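    For orientation, a standard textbook presentation (not a quotation from the book): de Sitter spacetime is the maximally symmetric solution of the vacuum Einstein equations with positive cosmological constant \Lambda, and in one common (spatially flat) coordinate patch its line element is, in units with c = 1,
      ds^2 = -dt^2 + e^{2Ht}\left(dx^2 + dy^2 + dz^2\right), \qquad H = \sqrt{\Lambda/3},
    which makes the accelerating, exponential expansion of the book's title manifest.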
  4.
    In a widely cited study in this Journal, Nick Bostrom has posed the Vulnerable World Hypothesis: technological development, if occurring under conditions similar to those in the present, will make the devastation of civilization likely. In light of such drastic consequences, the hypothesis is worth seriously discussing in both breadth and depth. Two related proposals are hereby made and justified: creating a metatechnological map (or “tech tree”) capable of telling in advance where exactly the dangerous technologies are, and reducing the size of the apocalyptic residual by antitotalitarian deradicalization and deprogramming. Both are modest proposals in the sense that they imply neither deep restructuring of human nature nor building instruments for potential totalitarian violations of civil rights and liberties.
    Found 10 hours, 12 minutes ago on PhilSci Archive
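    A purely hypothetical sketch of the proposed "tech tree" as a data structure (node names and risk labels invented for illustration): a directed acyclic graph of technologies with prerequisite edges, over which one can ask which developments open paths to nodes flagged as dangerous.
      # Hypothetical illustration only: a toy tech tree as a DAG with risk flags.
      tech_tree = {
          "A": {"requires": [],         "dangerous": False},
          "B": {"requires": ["A"],      "dangerous": False},
          "C": {"requires": ["A", "B"], "dangerous": True},
      }

      def prerequisites_of_dangerous(tree):
          """Collect every technology that lies upstream of a dangerous node."""
          upstream = set()
          for node in tree.values():
              if node["dangerous"]:
                  stack = list(node["requires"])
                  while stack:
                      name = stack.pop()
                      if name not in upstream:
                          upstream.add(name)
                          stack.extend(tree[name]["requires"])
          return upstream

      print(prerequisites_of_dangerous(tech_tree))  # {'A', 'B'} (set order may vary)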
  5.
    This first excerpt for November is really just the preface to 3.1. Remember, our abbreviated cruise this fall is based on my LSE Seminars in 2020, and since there are only 5, I had to cut. So those seminars skipped 3.1 on the eclipse tests of GTR. …
    Found 16 hours, 57 minutes ago on D. G. Mayo's blog
  6.
    This article proposes an empirical approach to understanding the life of an organism that overcomes reductionist and dualist approaches. The approach is based on Immanuel Kant’s analysis of the cognitive conditions required for the recognition of an organism: the concept of teleology and the assumption of a formative power of self-generation. It is analyzed how these two criteria are applied in the cognition of a developing organism. Using the example of a developmental series of a plant leaf, an active and relational process between observer and developing organism is shown, within which the teleology and self-generating power of the organism can be empirically observed through the mental faculties of understanding and will. Furthermore, it is emphasized that, according to Kant, even physical objects are not readily given, but are actively constituted through the unification of sense perceptions with concepts. This Kantian mode of objectification facilitates cognition of the physical properties of an organism. It can be supplemented with a participatory and co-constitutive mode of realization, in which the life of the organism (its teleologically organizing and self-generating power) can become an object of empirical research. Furthermore, it is argued that the participatory mode also facilitates an expanded conception of nature that allows for the existence of living beings within it. Finally, an analogy to Goethe’s approach to the living organism is highlighted. In summary, it is stated that to understand life, one must consciously participate in it.
    Found 1 day, 2 hours ago on PhilSci Archive
  7.
    I present a case-study of intra-scientific communication, focusing on the role of technical typists for the Physical Review (PR) c. 1957-1977. I argue PR became a trading zone amidst the page-charge crisis, and analyze the working networks of physicists, typists, and editors to resolve this threat to the equality of intellectual authority of qualified practitioners. Challenging the picture of typist as “automaton,” I identify the skills and technical knowledge necessary to perform manuscript translation, and offer an account of the material culture of intra-scientific communication to situate the typists’ epistemic role in the broader project of science. I claim this is a case of an epistemic contribution that has been instrumentalized, akin to human computers and human scanners. However, unlike these cases, the technical typists were not directly involved in the production or critique of scientific data. Rather their novel contributions occurred in the new field of mathematical typesetting that emerged from this trading zone. Thus I seek to differentiate the material culture of scientific experiments from the material culture of intra-scientific communication. I see this project as an extension of Galison’s trading zone framework for the material culture of experiment, recognizing that there are many more material objects besides those of the laboratory that are created in the scientific process.
    Found 1 day, 2 hours ago on PhilSci Archive
  8.
    In this paper, I explore the relation between actual scientific practice and conceptual interpretation of scientific theories by investigating the particle concept in non-relativistic quantum mechanics (NRQM). On the one hand, philosophers have raised various objections against the particle concept within the context of NRQM and proposed alternative ontologies such as wave function realism, Bohmian particles, mass density field, and flashes based on different realist solutions to the measurement problem. On the other hand, scientists continue to communicate, reason, and explain experimental phenomena using particle terms in the relevant regimes. It has been explicitly argued and, for most of the time, implicitly assumed in the philosophical literature that we do not need to take scientists’ particle talk seriously, and recovering position measurement of particles in our ontological accounts is sufficient to make contact with scientific practice. In this paper, I argue that although scientific discourse does not postulate a uniform and coherent ontology, it nevertheless postulates real properties. Our ontological accounts thus need to recover the various properties associated with the NRQM particle concept in scientific discourse. I show that recovering these particle properties is not trivially achievable by pointing out some particular challenges these revisionary ontologies face in the process.
    Found 1 day, 2 hours ago on PhilSci Archive
  9.
    A growing body of psychological research suggests that different kinds of explanations of mental illness can have striking and distinctive effects on their audiences’ attitudes and inferences. But it is surprisingly difficult to account for why this is. In this paper, I present a “normative model” of explanatory framing effects, which I claim does a better job of capturing the empirical data than do intuitive alternatives. On this model, different explanations will tend to differently affect their audience’s reasoning because each encodes a different picture of the kind of problem represented by the explanandum, and therefore the kinds of responses to it that are normatively apt to pursue. For example, a biological explanation of depression will convey to its audience that depression is a specifically biological problem, and therefore that appropriate responses to it should be directed at biological facts and norms. The communication of this normative information is, I argue, importantly different from communicating that depression has biological causes. For example, although it seems plausible that most causal explanations can be viewed additively, different characterizations of a problem cannot be so easily combined. This might explain why philosophers and mental health experts sometimes seem to regard different explanations of mental illness as competing or mutually incompatible, despite their appreciation for the causal complexity of these conditions.
    Found 1 day, 2 hours ago on PhilSci Archive
  10.
    On the hierarchical picture of models, theoretical models are constructed on the basis of theory and assessed by comparison to distinct models constructed from empirical data. Using the determination of the structure of the folded polypeptide chain as a case study, I instead argue that information from theory and data alike can be interpreted as constraints in the construction of models of information. On this view, more reliable information ought to be prioritized, sometimes forcing reinterpretations of less reliable information; information from theory and data is thus interdependent. I show how the reliability of information can be assessed, arguing that the evidence for a planar peptide bond was stronger and more secure than the evidence for a repeating subunit every 5.1 Å, and that disciplinary origin in physics or biology is immaterial to assessing reliability. I further show how models are assessed alongside interpretations of information in a coherentist manner: a better model accommodates more information, particularly reliable information; a model’s inability to accommodate some information necessitates reinterpreting that information.
    Found 1 day, 2 hours ago on PhilSci Archive
  11.
    In this paper, I argue that the constraints operating in some cases of so-called mathematical explanations of physical phenomena (MEPPs) are not strictly speaking mathematical. For this reason, the existence of explanations by constraint in science does not justify mathematical realism, not even in its Aristotelian version. I illustrate this with the now-classic case of the Bridges of Königsberg, as well as the case of the carbon molecules known as buckyballs (buckminsterfullerene).
    Found 1 day, 2 hours ago on PhilSci Archive
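    A minimal sketch of the constraint at work in the Königsberg case (standard graph theory, not text from the paper; the labels A, B, C, D are the conventional ones): an Eulerian walk exists only if at most two land masses are touched by an odd number of bridges, and in Königsberg all four are.
      # The seven bridges of Königsberg as a multigraph: each pair names the two
      # land masses a bridge connects (A is the island; B, C, D the other banks).
      bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
                 ("A", "D"), ("B", "D"), ("C", "D")]

      degree = {}
      for u, v in bridges:
          degree[u] = degree.get(u, 0) + 1
          degree[v] = degree.get(v, 0) + 1

      odd_nodes = [n for n, d in degree.items() if d % 2 == 1]
      print(degree)               # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
      print(len(odd_nodes) <= 2)  # False: no walk can cross every bridge exactly once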
  12.
    The idea of complementarity is one of the key concepts of quantum mechanics. Yet, the idea was originally developed in William James’ psychology of consciousness. Recently, it has been re-applied to the humanities and forms one of the pillars of modern quantum cognition. I will explain two different concepts of complementarity: Niels Bohr’s ontic conception, and Werner Heisenberg’s epistemic conception. Furthermore, I will give an independent motivation of the epistemic conception based on the so-called operational interpretation of quantum theory, which has been powerfully applied in the domain of quantum cognition. Finally, I will give examples illustrating the potency of complementarity in the domains of bounded rationality and survey research. Concerning the broad topic of consciousness, I will focus on the psychological aspects of awareness. This closes the circle spanning complementarity, quantum cognition, the operational interpretation, and consciousness.
    Found 2 days, 3 hours ago on Reinhard Blutner's site
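    One standard formalization of the survey-research example (textbook quantum cognition, not a quotation from the paper): if two questions are modelled by non-commuting projectors P_A and P_B acting on a normalized belief state \psi, the probability of answering "yes" to both depends on the order in which they are posed,
      \Pr(A \text{ then } B) = \lVert P_B P_A \psi \rVert^2 \;\neq\; \lVert P_A P_B \psi \rVert^2 = \Pr(B \text{ then } A),
    and this order sensitivity is one operational face of complementarity.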
  13.
    Scientific theories often allow multiple formulations, e.g., classical mechanics allows Lagrangian and Hamiltonian formulations. While we count them as equally true, it has been suggested that one formulation can still be more metaphysically perspicuous than another. This paper provides a new account of metaphysical perspicuity, offering both descriptive and revisionary components: As a descriptive component, we examine how metaphysical perspicuity has been conceptualized in the literature. As a revisionary component, we challenge the conventional conception that associates metaphysical perspicuity with other neighboring notions. Thus, we argue that metaphysical perspicuity is a sui generis notion, worth adding to philosophers’ toolbox.
    Found 2 days, 10 hours ago on PhilSci Archive
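    For concreteness, the stock example behind the opening sentence (standard mechanics, not drawn from the paper) is that a single system, say a mass m on a spring of stiffness k, admits both formulations:
      L(x, \dot{x}) = \tfrac{1}{2} m \dot{x}^2 - \tfrac{1}{2} k x^2, \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 \;\Rightarrow\; m\ddot{x} = -kx,
      H(x, p) = \frac{p^2}{2m} + \tfrac{1}{2} k x^2, \qquad \dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}, \quad \dot{p} = -\frac{\partial H}{\partial x} = -kx,
    two presentations of the same dynamics, which is what makes the question of their relative metaphysical perspicuity non-trivial.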
  14.
    It is a widely held belief that (the values of) physical quantities are part of a theory’s ideology. For example, it seems that special relativity has an ontology of spacetime points and particles, and an ideology of mass and charge properties. But these intuitions cannot be reconciled with the logical structure of physical theories. From the mathematical details of a theory such as special relativity, it turns out that mass and charge properties exist in quite the same way that particles exist: the theory quantifies over them. However, there is a different distinction in physics that can carry the same load, namely that between internal and external quantities. Roughly, the internal quantities depend on the external ones; external quantities instantiate internal ones. In contemporary physics, the values of physical quantities are internal. In this sense, the latter distinction supersedes the former. But ideology has not become irrelevant: we can identify it with the structure of a theory’s (external and internal) spaces. Although we cannot read off a theory’s ideology from the formalism in the same way that we can read off its ontology, we can use symmetries to discover this structure.
    Found 2 days, 10 hours ago on PhilSci Archive
  15.
    In his Zur Einstein’schen Relativitätstheorie, Cassirer presents relativity theory as the latest manifestation of the tradition of the ‘physics of principles’ that, starting from the nineteenth century, has progressively prevailed over that of the ‘physics of models.’ In particular, according to Cassirer, the relativity principle plays a role similar to that of the energy principle in earlier physics. The paper argues that this comparison represents the core of Cassirer’s neo-Kantian interpretation of relativity. Unlike individual physical laws, these principles do not pretend to provide models of any specific physical system, but they do impose constraints on the law-like statements that describe such systems. The latter do not qualify as proper laws unless they satisfy such constraints. Cassirer pointed out that before and after Kant, the history of physics presents significant instances in which the search for formal conditions that the laws of nature must satisfy preceded and made possible the direct search for such laws. In his earlier years, Cassirer seems to have regarded principles like the energy principle, the relativity principle, the principle of least action, etc., as a constitutive but provisional form of the a priori, imposing specific limitations on the form of the allowable laws of nature. Only in his later years, by attributing an autonomous status to these statements of principle, did Cassirer attribute a definitive but merely regulative meaning to the a priori, which does not impose specific requirements on natural laws but only provides a motivation to search for them.
    Found 2 days, 10 hours ago on PhilSci Archive
  16.
    Economic approaches to science underline the social structure of science as the chief explanatory factor in its collective epistemic success, and typically endorse a common conclusion, namely that individual virtue is neither necessary nor sufficient for science to be successful. We analyze a central example, the invisible hand argument, in reference to a case of collective epistemic failure, namely the credibility crisis. While divergent motivations might also serve the collective goals of science, our analysis shows that the presence of a significant proportion of virtuous scientists in a scientific community is a necessary condition for its success.
    Found 2 days, 10 hours ago on PhilSci Archive
  17.
    Despite nearly a century of development, quantum theories that address the tension between special relativity and quantum mechanics still struggle with limited explanatory depth. Given the fundamental differences between the two core theories, this is not surprising. Quantum theories that rely on mathematical constructs to explain particle or quantum-state dynamics often struggle to reconcile those constructs with special relativity’s constraints in a physical 4D spacetime.
    Found 2 days, 10 hours ago on PhilSci Archive
  18.
    Researchers in archaeology explore the use of generative AI (GenAI) systems for reconstructing destroyed artifacts. This paper poses a novel question: can such GenAI systems generate evidence that provides new knowledge about the world or can they only produce hypotheses that we might seek evidence for? Exploring responses to this question, the paper argues that 1) GenAI outputs can at least be understood as higher-order evidence (Parker 2022) and 2) may also produce de novo synthetic evidence.
    Found 2 days, 10 hours ago on PhilSci Archive
  19.
    There is a complex interplay between the models in dark matter detection experiments, which has led to difficulty in interpreting the results of those experiments and in ascertaining whether we have detected the particle or not. The aim of this paper is to categorise and explore the different models used in such experiments, emphasizing the distinctions and dependencies among the types of model used in this field. Against a background theory, models are categorised into four distinct types: theoretical, phenomenological, experimental, and data models. This taxonomy highlights how each type of model serves a unique purpose and operates with a varying degree of independence from its respective framework. A key focus is on the experimental model, which is shown to rely on constraints from both data models and phenomenological ones. The article argues that while theoretical models provide a backdrop for understanding the nature of dark matter, the experimental models must stand independently, particularly in their methodological approaches. This is shown via a discussion of the inherent challenges in dark matter detection, such as inconsistent results and difficulties in cross-comparison, stemming from the diverse modelling approaches.
    Found 2 days, 10 hours ago on PhilSci Archive
  20.
    Trump won. Within hours, the pundits had come out. They proposed diagnoses of why he won: institutional failures, cultural backlash, big money, political unoriginality, or luck. They pointed to mistakes: Biden shouldn’t have run again, Harris should’ve gone on Joe Rogan, the Democrats should’ve proposed a clearer vision, and so on. …
    Found 3 days, 9 hours ago on Stranger Apologies
  21.
    A central challenge for neuroscience has been understanding how nervous systems flexibly and reliably generate complex behaviors. How does an animal distinguish a benign encounter from a threat? How is irrelevant information ignored so that the animal can satisfy its needs? Since the days of Pavlov’s salivating dogs and Skinner’s bar-pressing rats, behavioral neuroscientists have constructed highly constrained lab paradigms to study how experience modifies relatively simple behaviors. These behaviors give scientists the benefit of precision and control: by manipulating the temporal relations between stimulus and response, neural activity can be directly tied to the behavior. However, these behaviors are also seen as highly contrived, in the sense that there are no levers or bells in the habitats in which rats’ and dogs’ brains evolved, habitats that presumably shaped the neural circuits generating most behaviors.
    Found 5 days, 2 hours ago on PhilSci Archive
  22.
    The program of reconstructing quantum theory based on information-theoretic principles enjoys much popularity in the foundations of physics. Surprisingly, this endeavor has only received very little attention in philosophy. Here I argue that this should change. This is because, on the one hand, reconstructions can help us to better understand quantum mechanics, and, on the other hand, reconstructions are themselves in need of interpretation. My overall objective, thus, is to motivate the reconstruction program and to show why philosophers should care. My specific aims are threefold. (i) Clarify the relationship between reconstructing and interpreting quantum mechanics, (ii) show how the informational reconstruction of quantum theory puts pressure on standard realist interpretations, (iii) defend the quantum reconstruction program against possible objections.
    Found 5 days, 2 hours ago on PhilSci Archive
  23.
    In quantum foundations, there is growing interest in the program of reconstructing the quantum formalism from clear physical principles. These reconstructions are formulated in an operational framework, deriving the formalism from information-theoretic principles. It has been recognized that this project is in tension with standard ψ-ontic interpretations. This paper presupposes that the quantum reconstruction program (QRP) (i) is a worthwhile project and (ii) puts pressure on ψ-ontic interpretations. Where does this leave us? Prima facie, it seems that ψ-epistemic interpretations perfectly fit the spirit of information-based reconstructions. However, ψ-epistemic interpretations, understood as saying that the wave function represents one’s knowledge about a physical system, recently have been challenged on technical and conceptual grounds. More importantly, for some researchers working on reconstructions, the lesson of successful reconstructions is that the wave function does not represent objective facts about the world. Since knowledge is a factive concept, this speaks against epistemic interpretations. In this paper, I discuss whether ψ-doxastic interpretations constitute a reasonable alternative. My thesis is that if we want to engage QRP with ψ-doxastic interpretations, then we should aim at a reconstruction that is spelled out in non-factive experiential terms.
    Found 5 days, 2 hours ago on PhilSci Archive
  24.
    QBism is currently one of the most widely discussed “subjective” interpretations of quantum mechanics. Its key move is to say that quantum probabilities are personalist Bayesian probabilities and that the quantum state represents subjective degrees of belief. Even probability-one predictions are considered subjective assignments expressing the agent’s highest possible degree of certainty about what they will experience next. For most philosophers and physicists this means that QBism is simply too subjective. Even those who agree with QBism that the wave function should not be reified and that we should look for alternatives to standard ψ-ontic interpretations often argue that QBism must be abandoned because it detaches science from objectivity. The problem is that from the QBist perspective it is hard to see how objectivity could enter science. In this paper, I introduce and motivate an interpretation of quantum mechanics that takes QBism as a starting point, is consistent with all its virtues, but allows objectivity to enter from the get-go. This is the view that quantum probabilities should be understood as objective degrees of epistemic justification.
    Found 5 days, 2 hours ago on PhilSci Archive
  25.
    The success of AlphaFold, an AI system that predicts protein structures, poses a challenge for traditional understanding of scientific knowledge. It operates opaquely, generating predictions without revealing the underlying principles behind its predictive success. Moreover, the predictions are largely not empirically tested but are taken at face value for further modelling purposes (e.g. in drug discovery) where experimentation takes place much further down the line. The paper presents a trilemma regarding the epistemology of AlphaFold, whereby we are forced to reject one of three claims: (1) AlphaFold produces scientific knowledge; (2) Predictions alone are not scientific knowledge unless derivable from established scientific principles; and (3) Scientific knowledge cannot be strongly opaque. The paper argues that AlphaFold's predictions function as scientific knowledge due to their trustworthiness and functional integration into scientific practice. The paper addresses the key challenge of strong opacity by drawing on Alexander Bird's functionalist account of scientific knowledge as irreducibly social, and advances the position against individual knowledge being necessary for the production of scientific knowledge. It argues that the implicit principles used by AlphaFold satisfy the conditions for scientific knowledge, despite their opacity. Scientific knowledge can be strongly opaque to humans, as long as it is properly functionally integrated into the collective scientific enterprise.
    Found 5 days, 2 hours ago on PhilSci Archive
  26.
    In a recent critique of Kuorikoski, Lehtinen and Marchionni’s (2010) analysis of derivational robustness, Margherita Harris (2021) argued that the proposed independence condition is not credible. While this criticism is cogent, it does not challenge the incremental epistemic benefits from robustness, as they do not hinge on satisfying independence conditions. Distinguishing between incremental increases and a high absolute degree of confidence in a result is crucial: the latter requires demonstrating the independence of every false assumption.
    Found 6 days, 10 hours ago on PhilSci Archive
  27.
    Generative artificial intelligence (AI) applications based on large language models have not enjoyed much success in symbolic processing and reasoning tasks, making them of little use in mathematical research. Recently, however, DeepMind’s AlphaProof and AlphaGeometry 2 applications have been reported to perform well in mathematical problem solving. These applications are hybrid systems combining large language models with rule-based systems, an approach sometimes called neuro-symbolic AI. In this paper, I present a scenario in which such systems are used in research mathematics, more precisely in theorem proving. In the most extreme case, such a system could be an autonomous automated theorem prover (AATP), with the potential of proving new humanly interesting theorems and even presenting them in research papers. The use of such AI applications would be transformative for mathematical practice and demands clear ethical guidelines. In addition to that scenario, I identify other, less radical, uses of generative AI in mathematical research. I analyse how guidelines set for ethical AI use in scientific research can be applied in the case of mathematics, arguing that while there are many similarities, there is also a need for mathematics-specific guidelines.
    Found 1 week ago on PhilSci Archive
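    As a purely illustrative example of the kind of machine-checkable artifact an autonomous prover would output (written by hand in Lean 4; not produced by AlphaProof or AlphaGeometry):
      -- A trivial, human-written theorem in core Lean 4, shown only to indicate the
      -- kind of formal object whose authorship such ethical guidelines would govern.
      theorem add_comm_example (a b : Nat) : a + b = b + a :=
        Nat.add_comm a b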
  28.
    In recent times, the exponential growth of sequenced genomes and structural knowledge of proteins, as well as the development of computational tools and controlled vocabularies to deal with this growth, has fueled a demand for conceptual clarification regarding the concept of function in molecular biology. In this article, we will attempt to develop an account of function fit to deal with the conceptual/philosophical problems in that domain, but which can be extended to other areas of biology. To provide this account, we will argue for three theses: (1) some authors have confused metatheoretical issues (about the meaning and application criteria of terms) with metaphysical ones (about teleology); this led them to (2) look for explicit definitions of “function”, in terms of necessary and sufficient criteria of application, in order to make the concept of function eliminable; however, (3) if one leaves metaphysical worries aside and focuses on functional attribution practices, it is more adequate to say that the concept of function has an open texture. That is, that a multiplicity of application criteria is available, none of which is sufficient nor necessary to attribute a function to a trait, and which only in concert form a clear picture. We distinguish this thesis from some usual forms of pluralism. Finally, we will illustrate this account with a historical reconstruction of the ascription of a water transport function to aquaporins.
    Found 1 week ago on PhilSci Archive
  29.
    Did you know students at Oxford in 1335 were solving problems about objects moving with constant acceleration? This blew my mind. Medieval scientists were deeply confused about the connection between force and velocity: it took Newton to realize force is proportional to acceleration. …
    Found 1 week ago on Azimuth
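    The uniform-acceleration result most associated with those 1330s Oxford calculators is the mean speed theorem, stated here in modern (anachronistic) notation: a body accelerated uniformly from speed v_0 to speed v_1 over a time t covers the same distance as one moving steadily at the mean speed,
      s = \frac{v_0 + v_1}{2}\, t = v_0 t + \tfrac{1}{2} a t^2, \qquad v_1 = v_0 + a t,
    while the step the post flags as waiting for Newton is F = ma, in place of the medieval intuition that force tracks velocity.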
  30.
    In this post, I consider the questions posed for my (October 9) Neyman Seminar by Philip Stark, Distinguished Professor of Statistics at UC Berkeley. We didn’t directly deal with them during the discussion, and I find some of them a bit surprising. …
    Found 1 week ago on D. G. Mayo's blog