1. 45241.044534
    There has been increased attention on how scientific communities should respond to spurious dissent. One proposed method is to hide such dissent by preventing its publication. To investigate this, I computationally model the epistemic effects of hiding dissenting evidence on scientific communities. I find that it is typically epistemically harmful to hide dissent, even when there exists an agent purposefully producing biased dissent. However, hiding dissent also allows for quicker correct epistemic consensus among scientists. Quicker consensus may be important when policy decisions must be made quickly, such as during a pandemic, suggesting times when hiding dissent may be useful.
    Found 12 hours, 34 minutes ago on PhilSci Archive
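The kind of computational model this abstract describes can be sketched minimally, assuming a Zollman-style two-armed bandit community with one biased dissenter (all parameters, the complete-network topology, and the update rule here are hypothetical illustrations, not the paper's actual model):

```python
import random

def simulate(n_agents=8, n_rounds=200, hide_dissent=False, seed=0):
    """Toy bandit model of a scientific community (hypothetical setup).

    Arm A pays off with a known rate 0.5; arm B with an unknown 0.55.
    Agents hold Beta pseudo-counts on B, test B only while they favour it,
    and share results over a complete network. Agent 0 is a biased
    dissenter who publishes only B's failures; hide_dissent drops that
    agent's reports from everyone else's updates.
    """
    rng = random.Random(seed)
    p_a, p_b, trials, dissenter = 0.5, 0.55, 10, 0
    beliefs = [[1.0, 1.0] for _ in range(n_agents)]  # [successes, failures]
    for _ in range(n_rounds):
        private = {}  # agent index -> (successes, failures) gathered this round
        for i, (s, f) in enumerate(beliefs):
            if s / (s + f) < p_a:
                continue  # this agent currently favours A, gathers no B data
            succ = sum(rng.random() < p_b for _ in range(trials))
            private[i] = (succ, trials - succ)
        for i, b in enumerate(beliefs):
            for j, (succ, fail) in private.items():
                if j == i:
                    b[0] += succ  # always trust one's own raw data
                    b[1] += fail
                elif j == dissenter:
                    if not hide_dissent:
                        b[1] += fail  # dissenter publishes failures only
                else:
                    b[0] += succ
                    b[1] += fail
    # fraction of agents who end up (correctly) favouring the better arm B
    return sum(s / (s + f) > p_a for s, f in beliefs) / n_agents
```

Comparing `simulate(hide_dissent=False)` against `simulate(hide_dissent=True)` across many seeds is the sort of experiment that probes the accuracy-versus-speed-of-consensus trade-off the abstract reports.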
  2. 223398.044936
    This is the last of the selected posts I will reblog from 5 years ago on the 2019 statistical significance controversy. The original post, published on this blog on December 13, 2019, had 85 comments, so you might find them of interest. …
    Found 2 days, 14 hours ago on D. G. Mayo's blog
  3. 262710.044991
    Classical mechanics is often considered to be a quintessential example of a deterministic theory. I present a simple proof, using a construction mathematically analogous to that of the Pasadena game (Nover and Hájek [2004]), to show that classical mechanics is incomplete: there are uncountably many arrangements of objects in an infinite Newtonian space such that, although the system’s initial condition is fully known, it is impossible to calculate the system’s future trajectory because the total force exerted upon some objects is mathematically undefined. It is then shown how variations of this discrete system can be obtained which increasingly approximate a uniform mass distribution, similar to that underlying a related result, due to von Seeliger ([1895]). It is then argued that this incompleteness result, as well as that presented by the Pasadena game, has no real philosophical significance as it is a mathematical pseudoproblem shared by all models which attempt to aggregate infinitely many numerical values of a certain kind.
    Found 3 days ago on Jeffrey Barrett's site
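The abstract's point that aggregating infinitely many signed values can leave a total mathematically undefined is the classic Riemann-rearrangement phenomenon (which also underlies the Pasadena game). A toy numerical illustration, not the paper's construction: the same terms summed in two different orders approach two different limits.

```python
import math

def alternating_harmonic(n):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... (n terms); tends to ln 2."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n):
    """Same terms taken two positive, then one negative:
    1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...; tends to (3/2) ln 2 instead."""
    total, pos, neg = 0.0, 1, 2  # next odd and even denominators to use
    for i in range(n):
        if i % 3 < 2:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total
```

Since the "total" depends on the order of aggregation, no order-independent sum exists — the analogue, in the paper's setting, of a net force that is mathematically undefined.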
  4. 386124.045034
    Many philosophers hold that generics (i.e., unquantified generalizations) are pervasive in communication and that when they are about social groups, this may offend and polarize people because generics gloss over variations between individuals. Generics about social groups might be particularly common on Twitter (X). This remains unexplored, however. Using machine learning (ML) techniques, we therefore developed an automatic classifier for social generics, applied it to 1.1 million tweets about people, and analyzed the tweets. While it is often suggested that generics are ubiquitous in everyday communication, we found that most tweets (78%) about people contained no generics. However, tweets with generics received more “likes” and retweets. Furthermore, while recent psychological research may lead to the prediction that tweets with generics about political groups are more common than tweets with generics about ethnic groups, we found the opposite. However, consistent with recent claims that political animosity is less constrained by social norms than animosity against gender and ethnic groups, negative tweets with generics about political groups were significantly more prevalent and retweeted than negative tweets about ethnic groups. Our study provides the first ML-based insights into the use and impact of social generics on Twitter.
    Found 4 days, 11 hours ago on PhilSci Archive
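The paper's classifier is ML-based; as a purely illustrative stand-in, a crude rule-based heuristic for bare-plural generics might look like the following (the word lists and the heuristic itself are hypothetical and far weaker than the actual model):

```python
import re

# Determiners/quantifiers whose presence blocks a bare-plural generic reading
BLOCKERS = {"some", "most", "many", "few", "all", "several", "these",
            "those", "the", "a", "an", "my", "our", "their", "no"}

def looks_generic(sentence):
    """Crude heuristic: flag sentences whose first word is a bare plural
    (ends in 's', not preceded by a determiner) followed by a plural verb
    or a characterising adverb. Illustrative only."""
    words = re.findall(r"[a-z']+", sentence.lower())
    if len(words) < 2:
        return False
    subject, verb = words[0], words[1]
    return (subject not in BLOCKERS
            and subject.endswith("s")
            and verb in {"are", "have", "love", "hate", "always", "never"})
```

On this heuristic, "Politicians are corrupt" is flagged as a generic while "Some politicians are corrupt" is not — the quantifier-free generalisation over a group is exactly what the paper's classifier is trained to detect at scale.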
  5. 391246.045065
    This paper argues that the extended mind approach to cognition can be distinguished from its alternatives, such as embedded cognition and distributed cognition, not only in terms of metaphysics, but also in terms of epistemology. In other words, it cannot be understood in terms of a mere verbal redefinition of cognitive processing. This is because the extended mind approach differs in its theoretical virtues compared to competing approaches to cognition. The extended mind approach is thus evaluated in terms of its theoretical virtues, both essential to empirical adequacy and those that are ideal desiderata for scientific theories. While the extended mind approach may have similar internal consistency and empirical adequacy compared to other approaches, it may be more problematic in terms of its generality and simplicity as well as unificatory properties due to the cognitive bloat and the motley crew objections.
    Found 4 days, 12 hours ago on PhilSci Archive
  6. 391279.045117
    This article distinguishes between two different kinds of biological normativity. One is the ‘objective’ biological normativity of biological units discussed in anglophone philosophy of biology on the naturalization of such notions as function and pathology. The other is a ‘subjective’ biological normativity of the biological subject discussed in the continental tradition of Canguilhem and Goldstein. The existence of these two distinct kinds of biological normativity calls for a closer philosophical examination of their relationship. The aim of this paper is to address this omission in the literature and to initiate the construction of conceptual bridges that span the gaps between continental, analytic, and naturalist philosophy on biological normativity.
    Found 4 days, 12 hours ago on PhilSci Archive
  7. 418299.045155
    I continue my selective 5-year review of some of the posts revolving around the statistical significance test controversy from 2019. This post was first published on the blog on November 14, 2019. I feared then that many of the howlers of statistical significance tests would be further etched in granite after the ASA’s P-value project, and in many quarters this is, unfortunately, true. …
    Found 4 days, 20 hours ago on D. G. Mayo's blog
  8. 483049.045192
    We’ve been hard at work here in Edinburgh. Kris Brown has created Julia code to implement the ‘stochastic C-set rewriting systems’ I described last time. I want to start explaining this code and also examples of how we use it. …
    Found 5 days, 14 hours ago on Azimuth
  9. 506727.045226
    We propose an approach to the evolution of joint agency and cooperative behavior that contrasts with views that take joint agency to be a uniquely human trait. We argue that there is huge variation in cooperative behavior and that while much human cooperative behavior may be explained by invoking cognitively rich capacities, there is cooperative behavior that does not require such explanation. On both comparative and theoretical grounds, complex cognition is not necessary for forms of joint agency, or the evolution of cooperation. As a result, promising evolutionary approaches to cooperative behavior should explain how it arises across many contexts.
    Found 5 days, 20 hours ago on PhilSci Archive
  10. 506756.045259
    In quantum mechanics, we appeal to decoherence as a process that explains the emergence of a quasi-classical order. Decoherence has no classical counterpart. Moreover, it is an apparently irreversible process [1–7]. In this paper, we investigate the nature and origin of its irreversibility. Decoherence and quantum entanglement are two physical phenomena that tend to go together. The former relies on the latter, but the reverse is not true. One can imagine a simple bipartite system in which two microscopic subsystems are initially unentangled and become entangled by the end of the interaction. Decoherence does not occur, since neither system is macroscopic. Nevertheless, we will still need to quantify entanglement in order to describe the arrow of time associated with decoherence, because decoherence occurs when microscopic systems become increasingly entangled with the degrees of freedom in their macroscopic environments. To do this we need to define entanglement entropy in terms of the sum of the von Neumann entropies of the subsystems.
    Found 5 days, 20 hours ago on PhilSci Archive
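The entanglement entropy invoked here can be made concrete with a two-qubit toy computation (a sketch under the standard definitions, not the paper's model): for a pure bipartite state the two subsystem von Neumann entropies coincide, so a Bell state carries ln 2 per subsystem while a product state carries none.

```python
import math

def reduced_density(amps):
    """Partial trace over qubit B of a two-qubit pure state with real
    amplitudes amps = [a00, a01, a10, a11]; returns the 2x2 matrix rho_A
    with rho_A[i][j] = sum_k a_{ik} * a_{jk}."""
    return [[sum(amps[2 * i + k] * amps[2 * j + k] for k in range(2))
             for j in range(2)] for i in range(2)]

def von_neumann_entropy(rho):
    """-sum(l * ln l) over the eigenvalues of a real symmetric 2x2 density
    matrix, using the closed-form quadratic eigenvalues."""
    tr = rho[0][0] + rho[1][1]
    det = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigenvalues = [(tr + disc) / 2, (tr - disc) / 2]
    return -sum(l * math.log(l) for l in eigenvalues if l > 1e-12)

bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]  # maximally entangled
product = [1.0, 0.0, 0.0, 0.0]                         # |0>|0>, unentangled
```

Summing S(rho_A) + S(rho_B), as the abstract proposes, gives 2 ln 2 for the Bell state and 0 for the product state — the quantity that grows monotonically as a microscopic system entangles with its macroscopic environment.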
  11. 506830.045291
    A striking feature of our world is that we only seem to have records of the past. To explain this ‘record asymmetry’, Albert and Loewer claim that the Past Hypothesis induces a narrow probability density over the world’s possible past macrohistories, but not its future macrohistories. Because we’re indirectly acquainted with this low-entropy initial macrostate, our observations of records allow us to exploit the associated narrow density to infer the past. I will argue that Albert and Loewer cannot make sense of why this probabilistic structure exists without falling back on the very records they wish to explain. To avoid this circularity, I offer an alternative account: the ‘fork asymmetry’ explains the record asymmetry, and this in turn explains the narrow density, not vice versa.
    Found 5 days, 20 hours ago on PhilSci Archive
  12. 506866.045342
    Duality in the Exact Sciences: The Application to Quantum Mechanics.
    Found 5 days, 20 hours ago on PhilSci Archive
  13. 506896.045377
    It has recently been remarked that the argument for physicalism from the causal closure of the physical is incomplete. It is only effective against mental causation manifested in the action of putative mental forces that lead to acceleration of particles in the nervous system. Based on consideration of anomalous, physically unaccounted-for correlations of neural events, I argue that irreducible mental causation whose nature is at least prima facie probabilistic is conceivable. The manifestation of such causation should be accompanied by a local violation of the Second Law of thermodynamics. I claim that mental causation can be viewed as the disposition of mental states to alter the state probability distribution within the nervous system, with no violation of the conservation laws. If confirmed by neurophysical research, it would indicate a kind of causal homogeneity of the world. Causation would manifest probabilistically in both quantum mechanical and psychophysical systems, and the dynamics of both would be determined by the temporal evolution of the corresponding system state function. Finally, I contend that a probabilistic account of mental causation can consistently explain the character of the selectional states that ensure uniformity of causal patterns, as well as the fact that different physical realizers of a mental property cause the same physical effects in different contexts.
    Found 5 days, 20 hours ago on PhilSci Archive
  14. 517444.045416
    TLDR: The vibes are bad, even though—on most ways we can measure—things are (comparatively) good. The last post showed how disproportionately-negative sharing can emerge from trying to solve problems. …
    Found 5 days, 23 hours ago on Stranger Apologies
  15. 548629.045449
    Scientific reasoning involves complex argumentation patterns that eventually lead to scientific discoveries. Social epistemology of science provides a perspective on the scientific community as a whole and on its collective knowledge acquisition. Different techniques have been employed with the goal of maximizing scientific knowledge at the group level. These techniques include formal models and computer simulations of scientific reasoning and interaction. Still, these models have mainly tested abstract hypothetical scenarios. The present thesis instead presents data-driven approaches in social epistemology of science. A data-driven approach requires data collection and curation, which can then be used for creating empirically calibrated models and simulations of scientific inquiry, performing statistical analyses, or employing data-mining techniques and other procedures.
    Found 6 days, 8 hours ago on Vlasta Sikimić's site
  16. 569323.045478
    Commentary from Tina Röck on today’s post from Mazviita Chirimuuta on The Brain Abstracted (MIT Press). One way to read this book is to consider it a discussion of the limitations in our ability to understand hyper-complex, dynamic objects like the brain. …
    Found 6 days, 14 hours ago on The Brains Blog
  17. 583582.045508
    Post 5 of 5 from Mazviita Chirimuuta on The Brain Abstracted (Open Access: MIT Press). The last of this series of posts summarises the conclusions regarding philosophy of science more generally that emerge from this study of simplification in neuroscience. …
    Found 6 days, 18 hours ago on The Brains Blog
  18. 607278.045536
    In this chapter, I discuss time in nonrelativistic quantum theories. Within an instrumentalist theory like von Neumann’s axiomatic quantum mechanics, I focus on the meaning of time as an observable quantity, on the idea of time quantization, and whether the wavefunction collapse suggests that there is a preferred temporal direction. I explore this last issue within realist quantum theories as well, focusing on time reversal symmetry, and I analyze whether some theories are more hospitable for time travel than others.
    Found 1 week ago on Valia Allori's site
  19. 607312.045562
    This is a brief review of the history and development of quantum theories. Starting from the experimental findings and theoretical results which marked the crisis of the classical framework, I overview the rise of axiomatic quantum mechanics through matrix and wave mechanics. I discuss conceptual problems such as the measurement problem that led scientific realists to explore other, more satisfactory, quantum theories, as well as Bell’s theorem and quantum nonlocality, concluding with a short review of relativistic theories.
    Found 1 week ago on Valia Allori's site
  20. 622249.045587
    Pain, Ross (University of Bristol, Philosophy). Keywords: interventionism, transitions in human evolution, cultural complexity, causation, single-factor explanations. Transitions in human evolution (e.g., the appearance of a novel technological industry) are typically complex events involving change at both spatial and temporal scales. As such, we expect them to have multiple causes. Yet it is commonplace for theorists to prioritise a single causal factor (e.g., cognitive change) in explaining these events. One rationale for this is pragmatic: theorists are specialised in a particular area—say, lithics or cognitive psychology—and so focus on one particular cause, holding all others equal. But could single-factor explanations ever be justified on objective grounds? In this article, we explore this latter idea using a highly influential theory of causation from the philosophy of science literature; namely, interventionism. This theory defines causation in a minimal way, and then draws a range of distinctions among causes, producing a range of different causal concepts. We outline some of these distinctions and show how they can be used to articulate when privileging one cause among many is objectively justified—and, by extension, when it is not. We suggest the interventionist theory of causation is thus a useful tool for theorists developing causal explanations for human behavioural evolution.
    Found 1 week ago on PhilSci Archive
  21. 622291.045614
    We propose a pluralist account of content for predictive processing systems. Our pluralism combines Millikan’s teleosemantics with existing structural resemblance accounts. The paper has two goals. First, we outline how a teleosemantic treatment of signal passing in predictive processing systems would work, and how it integrates with structural resemblance accounts. We show that the core explanatory motivations and conceptual machinery of teleosemantics and predictive processing mesh together well. Second, we argue this pluralist approach expands the range of empirical cases to which the predictive processing framework might be successfully applied. This is because our pluralism is practice-oriented. A range of different notions of content are used in the cognitive sciences to explain behaviour, and some of these cases look to employ teleosemantic notions. As a result, our pluralism gives predictive processing the scope to cover these cases.
    Found 1 week ago on PhilSci Archive
  22. 622417.045643
    Advocates of philosophy in science and biomedicine argue that philosophers can embed their ideas into scientific research in order to help solve scientific problems (Pradeu et al. 2021). One successful example of this is the philosopher Thomas Pradeu’s essay, with Sébastien Jaeger and Eric Vivier, titled “The Speed of Change: Towards a Discontinuity Theory of Immunity?” published in Nature Reviews Immunology (2013). For my PhD in philosophy of science on Alzheimer’s disease embedded in a neurology environment, I was interested in the relationship between theory and practice, with a particular focus on the dominant “amyloid cascade hypothesis” of Alzheimer’s disease that has existed since the turn of the 1990s (Hardy and Higgins 1992; Hardy 2006; Herrup 2015; Kepp et al. 2023). According to this hypothesis, one of the brain proteins that defines Alzheimer’s disease—beta-amyloid—also causes it when it accumulates (Hardy and Higgins 1992). Thus, according to the hypothesis’s proponents, removing amyloid from the brain should be the priority for developing therapeutics. However, given the absence of effective treatments for Alzheimer’s disease based on this strategy, I was interested in whether this hypothesis represented a premature convergence of consensus around an untrue idea of what causes disease.
    Found 1 week ago on PhilSci Archive
  23. 622457.045671
    It is impossible to deduce the properties of a strongly emergent whole from a complete knowledge of the properties of its constituents, according to C. D. Broad, when those constituents are isolated from the whole or when they are constituents of other wholes. Elanor Taylor proposes the Collapse Problem. Macro-level property p supposedly emerges when its micro-level components combine in relation r. However, each component has the property that it can combine with the others in r to produce p. Broad’s nondeducibility criterion is not met. This article argues that the amount of information required for r is physically impossible to obtain. Strong Emergence does not collapse. But the Collapse Problem does. Belief in Strong Emergence is strongly warranted. Strong Emergence occurs whenever it is physically impossible to deduce how components, in a specific relation, would combine to produce a whole with p. Almost always, that is impossible. Strong Emergence is ubiquitous.
    Found 1 week ago on PhilSci Archive
  24. 622501.045698
    Information is a unique resource. Asymmetries that arise out of information access or processing capacities, therefore, enable a distinctive form of injustice. This paper builds a working conception of such injustice and explores it further. Let us call it informational injustice. Informational injustice is a consequence of informational asymmetries between at least two agents, asymmetries that are deeply exacerbated by modern information and communication technologies but do not necessarily originate with them. Informational injustice is the injustice of an informational surplus being used to disadvantage the agent with less information.
    Found 1 week ago on PhilSci Archive
  25. 622607.045727
    I present and defend a new ontology for quantum theories (or “interpretations” of quantum theory) called Generative Quantum Theory (GQT). GQT postulates different sets of features, and the combination of these different features can help generate different quantum theories. Furthermore, this ontology makes quantum indeterminacy and determinacy play an important explanatory role in accounting for when quantum systems whose property values are indeterminate become determinate. The process via which determinate values arise varies between the different quantum theories. Moreover, quantum states represent quantum properties and structures that give rise to determinacy, and each quantum theory specifies a structure with certain features. I will focus on the following quantum theories: GRW, the Many-Worlds Interpretation, single-world relationalist theories, Bohmian Mechanics, hybrid classical-quantum theories, and Environmental Determinacy-based (EnD) Quantum Theory. I will argue that GQT should be taken seriously because it provides a series of important benefits that the currently widely discussed ontologies, namely wavefunction realism and primitive ontology, lack, without incurring some of their costs. For instance, it helps generate quantum theories that are clearly compatible with relativistic causality, such as EnD Quantum Theory. Also, GQT has the benefit of providing new ways to compare and evaluate quantum theories.
    Found 1 week ago on PhilSci Archive
  26. 652221.045763
    This work investigates absolute adjectives in the ‘not very’ construction and how their pragmatic interpretation depends on the evaluative polarity and the scale structure of their antonymic pairs. Our experimental study reveals that evaluatively positive adjectives (clean) are more likely to be strengthened than evaluatively negative ones (dirty), and that maximum standard adjectives (clean or closed) are more likely to be strengthened than minimum standard ones (dirty or open). Our findings suggest that both evaluative polarity and scale structure drive the asymmetric interpretation of gradable adjectives under negation. Overall, our work adds to the growing literature on the interplay between pragmatic inference, valence and semantic meaning.
    Found 1 week ago on Diana Mazzarella's site
  27. 652234.0458
    Faced with an intractable problem, some philosophers employ a singular strategy: their idea is to dismiss or dissolve the problem in some way, as opposed to meeting it head on with a proposed solution. Multiversism in many of its varieties has recently emerged as a popular application of this approach to the continuum problem: CH is true in some worlds, false in others; the effort to settle it one way or the other is misguided, a pseudo-problem. My goal here is to examine a few actual and possible implementations of this strategy, but first, in the interest of transparency, I should acknowledge a tendency toward the opposing view of CH. At least for now, I believe that one of the most pressing questions in the contemporary foundations of set theory is how to extend ZFC (or ZFC+LCs) in mathematically defensible ways so as to settle CH (and other independent questions) and to produce a more fruitful theory. It seems best to begin by sketching in my own peculiar take on this opposing view. Then, with this as backdrop, I’ll turn to multiversism.
    Found 1 week ago on Penelope Maddy's site
  28. 668776.045872
    Commentary from Dimitri Coelho Mollo on today’s post from Mazviita Chirimuuta on The Brain Abstracted (MIT Press). I was lucky to have had the chance to discuss this brilliant book with Mazviita Chirimuuta and others while it was still in preparation, and I’m looking forward to exchanging ideas about it once more over here at the BrainsBlog! …
    Found 1 week ago on The Brains Blog
  29. 668776.045915
    Post 4 of 5 from Mazviita Chirimuuta on The Brain Abstracted (Open Access: MIT Press). A central claim of the book is that recognition of the challenge of brain complexity — how it places pressure on scientists to devise experimental methods, theories and models, which drastically cut down the apparent complexity of neural processes — is indispensable when evaluating the philosophical import of neuroscientific results, and more generally, in understanding the historical trajectory of research on the brain. …
    Found 1 week ago on The Brains Blog
  30. 742341.045948
    Commentary from Carrie Figdor on today’s post from Mazviita Chirimuuta on The Brain Abstracted (MIT Press). The animating idea of Chirimuuta’s book is that science, and neuroscience in particular, must engage in simplification in order to explain a complex world. …
    Found 1 week, 1 day ago on The Brains Blog