-
The (dis)continuism debate in the philosophy of memory revolves around the question of whether memory and imagination belong to the same natural kind. Continuism holds that they do; discontinuism holds that they do not. By adopting a minimal notion of natural kind, one can recognize that there are different legitimate ways of sorting kinds, which lead to different positions in the debate. In this paper, I interpret continuism as a mechanistic thesis, according to which memory and imagination belong to the same natural kind because they are underpinned by the same constitutive mechanism. I clarify the implications of this thesis and show that most discontinuist attacks on continuism pose no challenge to the mechanistic thesis. I also present a possible challenge to mechanistic continuism. This suggests that there may be multiple (dis)continuism debates.
Keywords: Continuism. Discontinuism. Natural kinds. Mechanism. Episodic Memory. Episodic Imagination.
-
Theories of consciousness are abundant, yet few directly address the structural conditions necessary for subjectivity itself. This paper defends and develops the QBist constraint: the proposal that any conscious system must implement a first-person, self-updating inferential architecture. Inspired by Quantum Bayesianism (QBism), this constraint specifies that subjectivity arises only in systems capable of self-referential probabilistic updating from an internal perspective. The QBist constraint is not offered as a process theory, but as a metatheoretical adequacy condition: a structural requirement which candidate theories of consciousness must satisfy if they are to explain not merely behaviour or information processing, but genuine subjectivity. I assess five influential frameworks — the Free Energy Principle (FEP), Predictive Processing (PP), Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Higher-Order Thought (HOT) theory — and consider how each fares when interpreted through the lens of this constraint. I argue that the QBist constraint functions as a litmus test for process theories, forcing a shift in focus: from explaining cognitive capacities to specifying how an architecture might realize first-personal belief updating as a structural feature.
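(A gloss, not part of the abstract: the minimal formal core of the "self-updating inferential architecture" at issue is Bayesian conditionalization, which QBism treats as a norm binding a single agent's personal probabilities. Schematically, on learning evidence e the agent moves from prior P to posterior:)

\[ P_{\mathrm{new}}(h) \;=\; P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)}. \]

(The constraint then asks how a physical architecture could realize this updating from the inside, for its own states, rather than merely being described by it from the outside.)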
-
Philosophers of mind and philosophers of science have markedly different views on the relationship between explanation and understanding. Reflecting on these differences highlights two ways in which explaining consciousness might be uniquely difficult. First, scientific theories may fail to provide a psychologically satisfying sense of understanding—consciousness might still seem mysterious even after we develop a scientific theory of it. Second, our limited epistemic access to consciousness may make it difficult to adjudicate between competing theories. Of course, both challenges may apply. While the first has received extensive philosophical attention, in this paper I aim to draw greater attention to the second. In consciousness science, the two standard methods for advancing understanding—theory testing and refining measurement procedures through epistemic iteration—face serious challenges.
-
This paper proposes a novel constraint on artificial consciousness. The central claim is that no artificial system can be genuinely conscious unless it instantiates a form of self-referential inference that is irreducibly perspectival and non-computable. Drawing on Quantum Bayesianism (QBism), I argue that consciousness should be understood as an anticipatory process grounded in subjective belief revision, not as an emergent product of computational complexity. Classical systems, however sophisticated, lack the architecture required to support this mode of updating. I conclude that artificial consciousness demands more than computation—it demands a subject.
-
The philosopher Joseph S. Ullian died late last year. He is probably best known for an introduction to epistemology co-authored with W. V. Quine that is very much of its time. But what caught my eye in the obits was his reputation as a baseball fanatic. …
-
These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. …
-
In this paper we provide an ontological analysis of so-called “artifactual functions” by deploying a realizable-centered approach to artifacts which we have recently developed within the framework of the upper ontology Basic Formal Ontology (BFO). We argue that, insofar as material artifacts are concerned, the term “artifactual function” can refer to at least two kinds of realizable entities: novel intentional dispositions and usefactual realized entities. They inhere, respectively, in what we previously called “canonical artifacts” and “usefacts”. We show how this approach can help to clarify functions in BFO, whose current elucidation includes reference to the term “artifact”. In our framework, having an artifactual function implies being an artifact, but not vice versa; in other words, there are artifacts that lack an artifactual function.
-
In operational quantum mechanics, two measurements are called operationally equivalent if they yield the same distribution of outcomes in every quantum state and hence are represented by the same operator. In this paper, I will show that the ontological models for quantum mechanics and, more generally, for any operational theory sensitively depend on which measurement we choose from the class of operationally equivalent measurements, or more precisely, on which of the chosen measurements can be performed simultaneously. To this end, I will first take three examples—a classical theory, the EPR-Bell scenario, and the Popescu–Rohrlich box; then realize each example by two operationally equivalent but different operational theories—one with a trivial and another with a non-trivial compatibility structure; and finally show that the ontological models for the different theories differ with respect to their causal structure, contextuality, and fine-tuning.
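(For readers who want the first sentence in symbols, in standard POVM notation, which is generic rather than the paper's own: measurements M and M' with effects {E_k} and {E'_k} are operationally equivalent when they agree on all outcome statistics,)

\[ p(k \mid M, \rho) \;=\; \mathrm{Tr}(\rho E_k) \;=\; \mathrm{Tr}(\rho E'_k) \;=\; p(k \mid M', \rho) \quad \text{for every state } \rho, \]

(which forces E_k = E'_k. The paper's point is that ontological models can nonetheless distinguish such measurements via their compatibility relations.)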
-
In Part 1 the properties of QBism are shown to be natural consequences of taking quantum mechanics at face value, as does Everett in his Relative State Formulation (1957). In Part 2 supporting evidence is presented. Parmenides' (Palmer, 2012) notion that the physical world is static and unchanging is vividly confirmed in the new physics. This means the time evolution of the physical world perceived by observers only occurs at the level of appearances as noted by Davies (2002). In order to generate this appearance of time evolution, a moving frame of reference is required: this is the only possible explanation of the enactment of the dynamics of physics in a static universe.
-
Let us say that a being is omnisubjective if it has a perfect first-person grasp of all subjective states (including belief states). The question of whether God is omnisubjective raises a nest of thorny issues in the philosophy of language, philosophy of mind, and metaphysics, at least if there are irreducibly subjective states. There are notorious difficulties analyzing the core traditional divine attributes—omniscience, omnipotence, and omnibenevolence—but those difficulties are notorious partly because we seem to have a decent pre-theoretic grasp of what it means for something to be all-knowing, all-powerful, and all-good, and so it is surprising, frustrating, and perplexing that it is so difficult to provide a satisfactory analysis of those notions.
-
… The main aim is to construct a system of Nmatrices by substituting standard sets with quasets. Since QST is a conservative extension of ZFA (the Zermelo-Fraenkel set theory with Atoms), it is possible to obtain generalized Nmatrices (Q-Nmatrices). Because the original formulation of QST is not completely adequate for the developments we advance here, some possible amendments to the theory are also considered. One of the most interesting traits of such an extension is the existence of complementary quasets, which admit elements with undetermined membership. Such elements can be interpreted as quantum systems in superposed states. We also present a relationship of QST with the theory of rough sets (RST), which grants the existence of models for QST formed by rough sets. Some consequences of the given formalism for the relation of logical consequence are also analysed.
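(Background, not part of the abstract: an Nmatrix in the sense of Avron and Lev is a structure)

\[ \mathcal{M} = \langle \mathcal{V}, \mathcal{D}, \mathcal{O} \rangle, \qquad \tilde{\diamond} : \mathcal{V}^n \to \mathcal{P}(\mathcal{V}) \setminus \{\emptyset\} \ \text{ for each } n\text{-ary connective } \diamond, \]

(where \mathcal{V} is the set of truth values, \mathcal{D} \subseteq \mathcal{V} the designated values, and a valuation may choose any value in \tilde{\diamond}(v_1, \ldots, v_n); the quaset substitution described above presumably targets the sets \mathcal{V} and \mathcal{D}.)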
-
A. I guess because I'm exploring the format in some of my own writing.
Q. …
A. It's not ready to show to anyone. In fact the project is more notional than actual—a few notes in a plain text file, which I peek at from time to time. …
-
It has been argued that non-epistemic values have legitimate roles to play in the classification of psychiatric disorders. Such a value-laden view on psychiatric classification raises questions about the extent to which expert disagreements over psychiatric classification are fueled by disagreements over value judgments and the extent to which these disagreements could be resolved. This paper addresses these questions by arguing for two theses. First, a major source of disagreements about psychiatric classification is factual and concerns what social consequences a classification decision will have. This type of disagreement can be addressed by empirical research, although obtaining and evaluating relevant empirical evidence often requires interdisciplinary collaboration.
-
The Hard Problem of consciousness—explaining why and how physical processes are accompanied by subjective experience—remains one of the most challenging puzzles in modern thought. Rather than attempting to resolve this issue outright, in this paper I explore whether empirical science can be broadened to incorporate consciousness as a fundamental degree of freedom. Drawing on Russellian monism and revisiting the historical “relegation problem” (the systematic sidelining of consciousness by the scientific revolution), I propose an extension of quantum mechanics by augmenting the Hilbert space with a “consciousness dimension.” This framework provides a basis for reinterpreting psi phenomena (e.g., telepathy, precognition) as natural outcomes of quantum nonlocality and suggests that advanced non-human intelligence (NHI) technology might interface with a quantum-conscious substrate.
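(One way to read "augmenting the Hilbert space", offered as my gloss rather than the author's construction: a tensor-product extension)

\[ \mathcal{H}_{\mathrm{ext}} = \mathcal{H}_{\mathrm{phys}} \otimes \mathcal{H}_{c}, \]

(with \mathcal{H}_c a hypothesized consciousness degree of freedom alongside the ordinary physical ones.)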
-
In order to understand cognition, we often recruit analogies as building blocks of theories to aid us in this quest. One such attempt, originating in folklore and alchemy, is the homunculus: a miniature human who resides in the skull and performs cognition. Perhaps surprisingly, this appears indistinguishable from the implicit proposal of many neurocognitive theories, including that of the ‘cognitive map,’ which proposes a representational substrate for episodic memories and navigational capacities. In such ‘small cakes’ cases, neurocognitive representations are assumed to be meaningful and about the world, though it is wholly unclear who is reading them, how they are interpreted, and how they come to mean what they do. We analyze the ‘small cakes’ problem in neurocognitive theories (including, but not limited to, the cognitive map) and find that such an approach a) causes infinite regress in the explanatory chain, requiring a human-in-the-loop to resolve, and b) results in a computationally inert account of representation, providing neither a function nor a mechanism. We caution against a ‘small cakes’ theoretical practice across computational cognitive modelling, neuroscience, and artificial intelligence, wherein the scientist inserts their (or other humans’) cognition into models because, without said ‘cake insertion,’ the models neither perform as advertised nor mean what they are purported to. We argue that the solution is to tease apart explanandum and explanans for a given scientific investigation, with an eye towards avoiding van Rooij’s (formal) or Ryle’s (informal) infinite regresses.
-
Motivational trade-off behaviours, where an organism behaves as if flexibly weighing up an opportunity for reward against a risk of injury, are often regarded as evidence that the organism has valenced experiences like pain. This type of evidence has been influential in shifting opinion regarding crabs and insects. Critics note that (i) the precise links between trade-offs and consciousness are not fully known; (ii) simple trade-offs are evinced by the nematode worm Caenorhabditis elegans, mediated by a mechanism plausibly too simple to support conscious experience; (iii) pain can sometimes interfere with rather than support making trade-offs rationally. However, rather than undermining trade-off evidence in general, such cases show that the nature of the trade-off, and its underlying neural substrate, matter. We investigate precisely how.
-
There is a genre of moral philosophy for which I have particular affection, in which a thinker subjects an aspect of ordinary life to rigorous scrutiny, revealing it to be more puzzling or more profound than is typically acknowledged. …
-
The received view of scientific experimentation holds that science is characterized by experiment and experiment is characterized by active intervention on the system of interest. Although versions of this view are widely held, they have seldom been explicitly defended. The present essay reconstructs and defuses two arguments in defense of the received view: first, that intervention is necessary for uncovering causal structures, and second, that intervention conduces to better evidence. By examining a range of non-interventionist studies from across the sciences, I conclude that interventionist experiments are not, ceteris paribus, epistemically superior to non-interventionist studies and that the latter may thus be classified as experiment proper. My analysis explains why intervention remains valuable while at the same time elevating the status of some non-interventionist studies to that of experiment proper.
-
McQueen, K. J. [2024]: ‘Steven French’s A Phenomenological Approach to Quantum Mechanics’, BJPS Review of Books, 2024. In A Phenomenological Approach to Quantum Mechanics, Steven French offers what he says will be his final words on two key issues that he has for decades been trying to get across to the philosophy of physics community, one historical and one theoretical.
-
The theoretical physicist Michio Kaku ([2014]) once stated that the brain is ‘the most complicated object in the known universe’. For decades, neuroscientists have been trying to disentangle the brain’s complexity in order to understand how it can support our behaviours and mental life. In his latest book, Luiz Pessoa wants us instead to embrace the entanglement of this intricate organ, not as a way to give up on our quest to understand its workings, but as a change in strategy to better comprehend its complexity.
-
The book is structured into three parts. In the first part (chapters 1–2), Chirimuuta gives a general philosophical framework with which to approach modelling perspectives in neuroscience. Part 2 (chapters 3–7) applies the framework to several detailed case studies from the history of neuroscience. Finally, part 3 (chapters 8–10) applies lessons from the first parts to ongoing debates in both philosophy and neuroscience. In this review, I will begin by outlining the contributions in each of the three parts, with specific focus on the strengths of the account. I will then give some criticisms of the meta-scientific approach in the book. The goal here is not to criticize the book writ large, but instead to highlight potential debates within the generally productive stance that it lays out.
-
Quantum entanglement is widely regarded as a nonlocal phenomenon, but Deutsch and Hayden (2000) have recently received growing support for their claim that in the Heisenberg picture, entanglement can be characterised locally using objects they call descriptors. I argue that the notion of locality underlying this claim is a flawed version of the principle of separability that I call spatial separability. An improved version, spatiotemporal separability, reveals that their claim is false. The proposed analysis of separability also reveals the crucial feature of quantum theory that makes it “spooky” in any picture: quantum entanglement entails that there are non-qualitative properties, which are profoundly different from the qualitative properties we have come to expect from classical physics.
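(For orientation, and with notation mine rather than the paper's: a Deutsch-Hayden descriptor is, roughly, the Heisenberg-picture observable carried by each subsystem,)

\[ q_a(t) = U_t^{\dagger}\, q_a(0)\, U_t, \qquad q_a(0) = \mathbb{1} \otimes \cdots \otimes \boldsymbol{\sigma}_a \otimes \cdots \otimes \mathbb{1}, \]

(so that each qubit a carries its own locally evolving operator-valued description; the dispute above concerns whether this locality is the right notion of separability.)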
-
Mechanistic theories of explanation are widely held in the philosophy of science, especially in philosophy of biology, neuroscience and cognitive science. While such theories remain dominant in the field, there have been an increasing number of challenges raised against them over the past decade. These challenges claim that mechanistic explanations can lead to incoherence, triviality, or deviate too far from how scientists in the life sciences genuinely employ the term “mechanism”. In this paper, I argue that these disputes are fueled, in part, by the running together of distinct questions and concerns regarding mechanisms, representations of mechanisms, and mechanistic explanation. More care and attention to how these are distinct from one another, but also the various ways they might relate, can help to push these disputes in more positive directions.
-
Time-travel fiction commonly depicts time travelers who encounter their past selves or, in the grandfather paradox, their ancestors. In traditional fictional representations of time travel, such as in H. G. Wells’s The Time Machine, travelers age in the same time sense as those visited in the past and future. Elsewhere, fantasy fiction supplies another possibility: the wizard Merlyn in T. H. White’s 1938 fantasy novel, The Sword in the Stone, meets a young Arthur. Merlyn ages in the opposite time sense to Arthur. Arthur’s first meeting with Merlyn is Merlyn’s last meeting with Arthur; and Arthur’s last meeting with him is Merlyn’s first. We can imagine time travelers who arrive in the past to meet their former selves, but now age in the opposite time sense. They are still time travelers since they are meeting their past selves. However, we have now added a twist from another part of the fantasy literature.
-
We argue that special and general theories of relativity implicitly assume spacetime events correspond to quantum measurement outcomes. This leads to a change in how one should view the equivalence of spacetime and gravity. We describe a Bell test using time-like measurements that indicates a nonclassical causal structure that does not violate no-signaling. From this perspective, the violation of the Bell inequalities is already evidence for the nonclassical structure of flat spacetime as seen by an agent embedded in it. We argue that spacetime geometry can be learned by an embedded agent with internal actuators and sensors making internal measurements.
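(For reference, the Bell inequality invoked here, in its standard CHSH form; this is textbook material, not specific to the paper:)

\[ S = \lvert E(a,b) + E(a,b') + E(a',b) - E(a',b') \rvert \le 2, \]

(where E(a,b) is the correlation of outcomes for settings a and b; quantum mechanics violates the bound up to Tsirelson's value of 2\sqrt{2}, without permitting signaling.)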
-
Neil Mehta has written a fantastic book. A Pluralist Theory of Perception develops a novel theory of perception that illuminates the metaphysical structure, epistemic significance, and semantic role of perceptual consciousness. By and large, I found the core tenets of Mehta’s theory to be highly plausible and successfully defended. I could quibble with some parts (e.g., his claim that our conscious awareness of sensory qualities is non-representational). But I suspect our disagreements are largely verbal, and where they are non-verbal, they are minor. Instead of focusing on disagreements, in this commentary I wish to explore the metaphysical ramifications of Mehta’s theory with respect to the mind-body problem. Mehta has a great deal to say about the metaphysics of perception. Much of it seems to me to be in tension with physicalism. But throughout the book he remains officially neutral on the truth of physicalism, “in reflection of [his] genuine uncertainty” (ibid: 100). I will try to show that Mehta’s commitments lead almost inexorably to dualism (or, at least, away from physicalism) by giving three arguments against physicalism that centrally rely on premises to which Mehta is committed.
-
In theory, replication experiments purport to independently validate claims from previous research or provide some diagnostic evidence about their truth value. In practice, this value of replication experiments is often taken for granted. Our research shows that in replication experiments, practice often does not live up to theory. Most replication experiments involve confounding factors, and their results are not uniquely determined by the treatment of interest and hence are uninterpretable. These results can be driven by the true data-generating mechanism, limitations of the original experimental design, discrepancies between the original and the replication experiment, distinct limitations of the replication experiment, or combinations of any of these factors. Here we introduce the notion of a minimum viable experiment to replicate, which defines experimental conditions that always yield interpretable replication results and is thus replication-ready. We believe that most reported experiments are not replication-ready, and that before striving to replicate a given result we need theoretical precision in, or systematic exploration of, the experimental space to discover empirical regularities.
-
In On Madness: Understanding the Psychotic Mind, published in 2022, Richard G.T. Gipps embarks on a philosophical exploration of psychosis. Generally speaking, Gipps’s book presents an approach he calls “apophatic psychopathology” (Gipps 2022, 2), borrowing from negative (that is, apophatic) theology and its method of understanding God’s nature by seeing how it defeats the predication of even those most supreme qualities we are drawn to predicate of Him. Gipps’s central insight regarding psychotic phenomena is that we best come to understand them not positively, by predicating of the psychotic subject this or that rationally intelligible, intentional state, but instead negatively, through seeing how such predications are here defeated. Sitting down with a person suffering from psychosis requires that we develop the capacity to stay with them in their brokenness, rather than projecting onto them an intentional structure that their illness has abrogated. Gipps comments critically on the relativistic tendencies we encounter these days, concluding that people suffering from severe psychosis are not happily thought of as just living in an “alternative reality” as good as the one populated by nonpsychotic people.