-
This paper demonstrates a synergy between the Inner Speech model of free will and the Modular-with-Feedback Theory. The first section examines determinism and causation to argue that free will requires the ability of an agent to make a non-deterministic choice, one which could have been decided otherwise. This holds despite the physical, hereditary, and environmental factors which inevitably influence choice. Section two introduces the Modular-with-Feedback Theory, which proposes that free will is compatible not with determinism but with chance. It provides a model of how free will emerges from oscillating neuronal activity in neural modules.
-
Colocationists about human beings think that in my chair are two colocated entities: a human person and a human animal. Both of them are made of the same stuff, both of them exhibit the same physical movements, etc. …
-
Between roughly 2001 and 2018, I’ve been happy to have done some nice things in quantum computing theory, from the quantum lower bound for the collision problem to the invention of shadow tomography. I hope that’s not the end of it. …
-
In 2015 the Laser Interferometer Gravitational Wave Observatory (‘LIGO’), comprising observatories in Hanford, WA and Livingston, LA, detected gravitational waves for the first time. In the “discovery” paper the LIGO-Virgo Collaboration describe this event, “GW150914”, as the first “direct detection” of gravitational waves and the first “direct observation” of a binary black hole merger (Abbott et al. 2016, 061102–1). Prima facie, these are somewhat puzzling claims. First, there is something counter-intuitive about describing such a sophisticated experiment as a “direct” detection, insofar as this suggests that the procedure was simple or straightforward. Even strong gravitational waves produce only a tiny change in the length of the 4km interferometer arms.
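To give a rough sense of scale (a back-of-the-envelope figure of my own, using the order-of-magnitude peak strain reported for GW150914 rather than anything from the paper under discussion): a strain h stretches an arm of length L by \( \Delta L = hL \), so
\[
\Delta L \;=\; h\,L \;\approx\; 10^{-21} \times 4\,\mathrm{km} \;=\; 4 \times 10^{-18}\,\mathrm{m},
\]
far smaller than the diameter of a single proton, which is part of why calling so delicate a measurement a "direct" detection can seem counter-intuitive.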
-
Most proposals on the problem of mental causation or the exclusion problem emerge from two metaphysical camps: physicalism and dualism. However, a recent theory called “Panpsychist Russellian Monism” (PRM) offers a distinct perspective on the relationship between consciousness and the physical world. PRM posits that phenomenal consciousness is fundamental and pervasive. It suggests that consciousness and physical properties are not entirely separate but rather intertwined: phenomenal consciousness serves as a foundational ground for the dispositional nature of physical properties. In doing so, PRM proposes a novel solution to the exclusion problem, combining elements from both physicalism and dualism while addressing their inherent difficulties. Nonetheless, the success of PRM faces challenges, as argued by Howell (2015). In this paper, I argue that if PRM is formulated as a version of dual-aspect monism, it can offer a distinctive approach to tackling the exclusion problem.
-
Much work in philosophy, psychology, and neuroscience has argued for continuism about remembering and imagining (see, e.g., Addis J R Soc N Z 48(2–3):64–88, 2018). This view claims that episodic remembering is just a form of imagining, such that memory does not have a privileged status over other forms of episodic simulation (esp. imagination). Large parts of contemporary philosophy of memory support continuism. This even holds for work in semantics and the philosophy of language, which has pointed out substantial similarities in the distribution of the verbs remember and imagine. Our paper argues against the continuist claim, by focusing on a previously neglected source of evidence for discontinuism: the semantics of episodic memory and imagination reports. We argue that, in contrast to imagination reports, episodic memory reports are essentially diachronic, in the sense that their truth requires a foregoing reference-fixing experience. In this respect, they differ from reports of experiential imagination, which is paradigmatically synchronic. To defend our claim about this difference in diachronicity, we study the truth-conditions of episodic memory and imagination reports. We develop a semantics for episodic uses of remember and imagine that captures this difference.
-
The functioning of chatbots like OpenAI’s ChatGPT is based on detecting probabilistic patterns in their training data. This makes them vulnerable to generating factual mistakes in their outputs. Recently, it has become commonplace in philosophical, scientific, and popular discourses to capture such mistakes by metaphors that draw on discourses about the human mind. The three most popular metaphors at present are hallucinating, confabulating, and bullshitting. In this paper, we review, discuss, and criticise these mental metaphors. By applying conceptual metaphor theory, we provide numerous reasons why none of the metaphors succeeds in providing us with a better understanding of factual chatbot mistakes. We conclude by calling for justifications of the epistemic feasibility and fruitfulness of the metaphors at issue. Furthermore, we raise the question of what would be lost if we stopped trying to capture factual chatbot mistakes by mental metaphors.
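To make the first two sentences concrete, here is a deliberately crude sketch (a toy bigram sampler of my own devising, not a description of how ChatGPT is actually implemented): the model continues a prompt by sampling whatever continuation is statistically common in its training text, and at no point does it check whether the continuation is true.

```python
import random
from collections import defaultdict

# Toy "training data": the model only ever sees word co-occurrence statistics.
corpus = (
    "the capital of france is paris . "
    "the capital of australia is sydney . "    # a factual mistake present in the training text
    "the capital of australia is canberra . "
).split()

# Learn bigram statistics: which words tend to follow which.
followers = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    followers[w1].append(w2)

def continue_text(prompt_word, length=3):
    """Continue a prompt by repeatedly sampling a statistically plausible next word."""
    out = [prompt_word]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sampling, never fact-checking
    return " ".join(out)

print(continue_text("australia"))
# May yield "australia is canberra .", "australia is sydney .", or even "australia is paris ."
```

Whether the toy model completes "australia is" with "canberra", "sydney", or "paris" depends only on the statistics it happened to absorb and on the luck of the draw; the metaphors discussed above are attempts to name this indifference to truth at scale.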
-
Matthew Leisinger (2020) argues that previous interpretations of John Locke’s account of akrasia (or weakness of will) are mistaken and offers a new interpretation in their place. In this essay, we aim to recapitulate part of this debate, defend a previously articulated interpretation by responding to Leisinger’s criticisms of it, and explain why Leisinger’s own interpretation faces textual and philosophical problems that are serious enough to disqualify it as an accurate reconstruction of Locke’s views. In so doing, we aim to shed further light on Locke’s views on the various ways in which humans are prone to err in their pursuit of happiness.
-
A lot of discussion of memory theories of personal identity invokes science-fictional thought experiments, such as when memories are swapped between two brains. One of the classic papers is Shoemaker’s “Persons and their Pasts”. …
-
In a previous paper, we have shown that an ontology of quantum mechanics in terms of states and events with internal phenomenal aspects, that is, a form of panprotopsychism, is well suited to explaining the phenomenal aspects of consciousness. We proved there that the palette and grain combination problems of panpsychism and panprotopsychism arise from implicit hypotheses about supervenience, based on classical physics, that are inappropriate at the quantum level, where an exponential number of emergent properties and states arise. In this article, we address what is probably the first and most important combination problem of panpsychism: the subject-summing problem originally posed by William James. We begin by identifying the physical counterparts of the subjects of experience within the quantum panprotopsychic approach presented in that previous paper. To achieve this, we turn to the notion of a subject of experience inspired by the idea of prehension proposed by Whitehead and show that this notion can be adapted to the quantum ontology of objects and events. Due to the indeterminacy of quantum mechanics and its causal openness, this ontology also seems suitable for analysing the remaining aspects of the structure combination problem, which concerns how the structuration of consciousness could have evolved from primitive animals to humans. The analysis imposes conditions on possible implementations of quantum cognition mechanisms in the brain and suggests new problems and strategies to address them, in particular with regard to the structuring of experiences in animals with different degrees of evolutionary development.
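The "exponential number of emergent properties and states" can be given a standard gloss (this is generic quantum formalism, offered as my illustration rather than as the authors' own argument): the joint pure states of n two-level systems live in a Hilbert space whose dimension grows exponentially with n, so a generic state is fixed by exponentially many independent amplitudes, whereas a classical description of the same n subsystems requires only n values:
\[
\dim\!\Big(\bigotimes_{i=1}^{n} \mathbb{C}^{2}\Big) = 2^{n},
\qquad
|\psi\rangle \;=\; \sum_{x \in \{0,1\}^{n}} c_{x}\,|x\rangle .
\]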
-
In Canada, medical assistance in dying (MAiD) excludes individuals who have a mental health disorder as their sole underlying medical condition (MD-SUMC). This suggests mental illness is conceptually distinct from somatic illness, a position that requires further analysis. The Canadian government has postponed legislation on MAiD for MD-SUMC because the issue is highly controversial compared to MAiD for physical illness, and the delay will allow it to collect more data (Government of Canada 2024a). Aside from the legislative reality in Canada, Jeffrey Kirby (2022) has described three positions that scholars have taken up regarding the ethical permissibility of MAiD for MD-SUMC: (a) accept that MAiD for MD-SUMC is ethically permissible; (b) presently oppose MAiD for MD-SUMC, but maintain that MAiD for MD-SUMC could become ethically permissible should the current eligibility criteria better align with the relevant empirical data; and (c) oppose MAiD for MD-SUMC on “philosophical grounds” and maintain that no alteration could make the practice ethically permissible.
-
Does the visual system adapt to number? For more than fifteen years, most have assumed that the answer is an unambiguous “yes”. Against this prevailing orthodoxy, we recently took a critical look at the phenomenon, questioning its existence on both empirical and theoretical grounds, and providing an alternative explanation for extant results (the old news hypothesis). We subsequently received two critical responses. Burr, Anobile, and Arrighi rejected our critiques wholesale, arguing that the evidence for number adaptation remains overwhelming. Durgin questioned our old news hypothesis — preferring instead a theory about density adaptation he has championed for decades — but also highlighted several ways in which our arguments do pose serious challenges for proponents of number adaptation. Here, we reply to both. We first clarify our position regarding number adaptation. Then, we respond to our critics’ concerns, highlighting seven reasons why we remain skeptical about number adaptation. We conclude with some thoughts about where the debate may head from here.
-
The standard definition of a gauge transformation in the constrained Hamiltonian formalism traces back to Dirac (1964): a gauge transformation is a transformation generated by an arbitrary combination of first-class constraints. On the basis of this definition, Dirac argued that one should extend the form of the Hamiltonian in order to include all of the gauge freedom. However, there have been some recent dissenters from Dirac’s view. Notably, Pitts (2014) argues that a first-class constraint can generate “a bad physical change” and therefore that extending the Hamiltonian in the way suggested by Dirac is unmotivated. In this paper, I use a geometric formulation of the constrained Hamiltonian formalism to argue that there is a flaw in the reasoning used by both sides of the debate, but that correct reasoning supports the standard definition and the extension of the Hamiltonian. In doing so, I clarify two conceptually different ways of understanding gauge transformations, and I pinpoint what it would take to deny that the standard definition is correct.
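For orientation, the generic textbook form of the quantities at stake (not the geometric formulation developed in the paper itself): with canonical Hamiltonian \(H_c\), primary first-class constraints \(\phi_a\), and the full set of first-class constraints \(\gamma_A\), the total and extended Hamiltonians are
\[
H_T \;=\; H_c + v^{a}\,\phi_a ,
\qquad
H_E \;=\; H_c + u^{A}\,\gamma_A ,
\]
and, on the standard Dirac definition, a gauge transformation acts on a phase-space function \(F\) as
\[
\delta_\varepsilon F \;=\; \varepsilon^{A}\,\{F,\gamma_A\},
\]
with arbitrary parameters \(\varepsilon^{A}\). Extending the Hamiltonian, in Dirac's sense, is the move from \(H_T\) to \(H_E\), so that the dynamics generates all of this gauge freedom.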
-
This paper critically analyzes an argument made by Pitts (2022, 2024) that extending the form of the Hamiltonian constitutes a trivial reformulation of a theory and therefore doesn’t provide insight into the gauge transformations. I argue that a trivial reformulation cannot be used to add new gauge transformations to a theory, and I show that the sense in which extending the form of the Hamiltonian is nontrivial is that it removes structure.
-
This chapter surveys some potential contributions of philosophy of science to the scientific study of consciousness. Given the unique challenges consciousness poses as both a subjective and objective phenomenon, philosophy of science can offer conceptual tools for clarifying definitions, establishing methodological frameworks, and guiding theory comparison and assessment. By integrating philosophical perspectives on general philosophy of science with specific debates within the science of consciousness, this chapter aims to demonstrate how philosophy of science can support consciousness researchers in navigating the complexities of their field and accelerating its progress. We suggest that a promising route to make progress in consciousness science is to combine three complementary and mutually reinforcing strategies: the empirical strategy, the confirmational strategy, and the metatheoretical strategy.
-
There is an intimate connection between personal identity, prudential concern, and anticipation. But just how close is the connection? In this paper, I develop and motivate phenomenal accounts of both anticipation and prudential concern which suggest that the link between anticipation and personal identity, and the one between anticipation and prudential concern, are less tight than often assumed. I start by arguing against two influential accounts of anticipation and present an alternative view based on the notion of phenomenal continuity, which detaches anticipation from identity. Next, I consider the relationship between anticipation and prudential concern and make the case that, here too, there is more space between the two than orthodoxy alleges. A qualified form of anticipation may be sufficient for prudential concern, but anticipation is not necessary for it. Finally, I argue that prudential concern is grounded in phenomenal continuity rather than psychological continuity. This secures a close alignment between anticipation and prudential concern and offers a plausible explanation as to why anticipation is a guide to prudential concern.
-
This chapter presents and contextualizes empirical work done by philosophers on imagination and creativity. It also suggests new directions for future empirical research. It is argued that empirical work on these (and other) topics is not just beneficial but necessary for the philosophy of imagination and creativity. Further, it is argued that this work must sometimes be done by philosophers, and that it is often best done by philosophers. Topics discussed include imaginative resistance, counterfactual imagination, scientific imagination, distinguishing imagination from other mental states (e.g., supposition, memory), vividness of imagination, AI imagination, creativity and praiseworthiness, creativity as a virtue, and AI and creativity.
-
So far, the scientific study of consciousness has mainly employed verbal and linguistic tools, as well as simple formalisations thereof, to describe conscious experiences. Typical examples are the distinction between ‘being conscious’ and ‘not being conscious’, between whether a subject is ‘perceiving a stimulus consciously’ or not, between whether a subject is ‘experiencing a particular quale’ rather than another, or more generally any account of whether some X is part of the phenomenal character of a subject’s experience at some point of time. Formalisations of these verbal descriptions mostly make use of set theory, examples being sets of states of consciousness of a subject and simple binary classifications, or of real numbers, for example to model ‘how conscious’ a system is. There are sophisticated mathematical techniques in the field, but to a large extent they only concern the statistical analysis of empirical data, and the formulation of a theory of consciousness itself—but not the description of conscious experiences which underlies the data collection or modelling effort.
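As a concrete (and deliberately toy) illustration of the kind of formalisation described above, the following sketch is my own and assumes nothing beyond the passage itself: a set of experiential contents, a binary predicate for whether a content figures in the subject's experience at a time, and a single real number standing in for 'how conscious' the system is.

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceSnapshot:
    """Toy formalisation of a subject's experience at one time point."""
    time: float
    contents: set[str] = field(default_factory=set)  # set-theoretic description
    level: float = 0.0                               # real-valued "how conscious"

    def is_conscious_of(self, content: str) -> bool:
        # Simple binary classification: the content either figures in the
        # phenomenal character at this time or it does not.
        return content in self.contents

snapshot = ExperienceSnapshot(time=0.0, contents={"red patch", "tone at 440 Hz"}, level=0.8)
print(snapshot.is_conscious_of("red patch"))        # True
print(snapshot.is_conscious_of("smell of coffee"))  # False
```

The contrast drawn above is precisely that such thin structures, rather than the field's more sophisticated mathematics, are what currently carry the description of the experiences themselves.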
-
The social epistemology of science has adopted agent-based computer simulations as one of its core methods for investigating the dynamics of scientific inquiry. The epistemic status of these highly idealized models is currently under active debate, in which they are often associated with either predictive or argumentative functions. These two functions roughly correspond to interpreting simulations as virtual experiments or as formalized thought experiments, respectively. This paper advances the argumentative account of modeling by proposing that models serve as a means to (re)conceptualize the macro-level dynamics of complex social epistemic interactions. I apply results from the epistemology of scientific modeling and the psychology of mental simulation to the ongoing debate in the social epistemology of science. Instead of considering simulation models as predictive devices, I view them as artifacts that exemplify abstract hypothetical properties of complex social epistemic processes in order to advance scientific understanding, hypothesis formation, and communication. Models need not be accurate representations to serve these purposes. They should be regarded as pragmatic cognitive tools that engender rather than replace intuitions in philosophical reasoning and argumentation. Furthermore, I aim to explain why the community tends to converge around a few model templates: since models have the potential to transform our intuitive comprehension of the subject of inquiry, successful models may literally capture the imagination of the modeling community.
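For readers unfamiliar with the genre, here is a deliberately minimal sketch of the kind of agent-based simulation at issue, loosely in the style of Zollman-type epistemic network models (the code, network, and parameter values are my own illustration, not any specific model discussed in the paper): agents repeatedly test one of two methods, share results with their network neighbours, and update their beliefs accordingly.

```python
import random

N_AGENTS, ROUNDS, TRIALS = 10, 200, 10
P_OLD, P_NEW = 0.5, 0.55           # true success rates of the established and the new method

# Cycle network: each agent shares results with its two neighbours.
neighbours = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

# Beta-style belief about the new method, stored as (success, failure) pseudo-counts.
beliefs = {i: [random.uniform(0, 4), random.uniform(0, 4)] for i in range(N_AGENTS)}

def expected_success(i):
    s, f = beliefs[i]
    return s / (s + f)

for _ in range(ROUNDS):
    results = {}
    for i in range(N_AGENTS):
        if expected_success(i) > P_OLD:            # only "optimists" test the new method
            results[i] = sum(random.random() < P_NEW for _ in range(TRIALS))
    for i in range(N_AGENTS):                      # update on own and neighbours' evidence
        for j in [i] + neighbours[i]:
            if j in results:
                beliefs[i][0] += results[j]
                beliefs[i][1] += TRIALS - results[j]

favouring = sum(expected_success(i) > P_OLD for i in range(N_AGENTS))
print(f"{favouring} of {N_AGENTS} agents end up favouring the better method")
```

On the argumentative account defended above, the value of such a model lies less in any prediction it makes than in how it lets one grasp and argue about the macro-level dynamics, for instance whether locally shared evidence pulls the community toward or away from the better method.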
-
Philosophy of Information is a discipline that has been systematized by Floridi and other theorists since the late 1990s, but even before that, qualitative and quantitative aspects of the concept of information had been considered in philosophy and related fields. Contemporary philosophers of information have presented several arguments concerning the “Veridicality Thesis” (VT), a qualitative issue that remains influential in the philosophy of information and is important for considering both the quantitative and the qualitative aspects of information. In this paper, I will focus on the “Semantic Argument” (SA) for VT proposed by Floridi. By pointing out that the core structural ideas of SA are “the distinction between domains of discussion” and “the interpretation of the informative content H”, I will reformulate SA in a different way than previous studies have and re-evaluate SA as suggesting quantitative issues. As a result, the idea of the “negativity of information” (which is not commonly assumed) could be derived from SA.
-
Our reasons for emotions such as sadness, anger, resentment, and guilt often remain long after we cease experiencing these emotions. This is puzzling. If the reasons for these emotions persist, why do the emotions not persist? Does this constitute a failure to properly respond to our reasons? In this paper we provide a solution to this puzzle. Our solution turns on the close connection between the rationality of emotion and the rationality of attention, together with the differing reasons to which attention and emotion are properly responsive.
-
Do animals have episodic memory — the kind of memory which gives us rich details about particular past events — or is this uniquely human? This might look like an empirical question, but it is attracting increasing philosophical attention. We review relevant behavioural evidence, as well as drawing attention to neuroscientific and computational evidence which has been less discussed in philosophy. Next, we distinguish and evaluate reasons for scepticism about episodic memory in animals. In the process, we articulate three pressing philosophical issues underlying these sceptical arguments, which should be the focus of future work. The Problem of Interspecific Variation asks which differences between human and animal memory mean that an animal has a variant of episodic memory, and which mean that it has a different kind of memory altogether. The Problem of Functional Variation asks how we should conceptualise the functions of episodic memory and other capacities across species and across evolutionary time. Finally, the Problem of Alternatives asks what, besides episodic memory, might explain the evidence — and how we should evaluate competing explanations.
-
I propose a novel interpretation of quantum theory, which I will call the Environmental Determinacy-based Quantum Theory (EnDQT). In contrast to the well-known interpretations of quantum theory, EnDQT has the benefit of not adding non-local, superdeterministic, or retrocausal hidden variables. It is also not in tension with relativistic causality, since it provides a local causal explanation of quantum correlations. Furthermore, measurement outcomes do not vary according to, for example, systems or worlds. It is a conservative quantum theory in the sense that, unlike theories such as spontaneous collapse theories, no modifications of the fundamental equations of quantum theory are required to establish when determinate values arise. Moreover, in principle, arbitrary systems can be in a coherent superposition for an arbitrary amount of time. According to EnDQT, at a certain stage of the evolution of the universe, some systems acquire the capacity to have determinate values and to give rise to other systems having determinate values through an indeterministic process. Furthermore, this capacity propagates via local interactions between systems. When systems are isolated from others that have this capacity, they can, in principle, evolve unitarily indefinitely. EnDQT may provide payoffs to other areas of physics and their foundations, such as cosmology, via the features of the systems that start the chains of interactions.
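The claim that isolated systems can evolve unitarily indefinitely can be put in the standard formalism (the equation below is generic quantum mechanics, not anything peculiar to EnDQT): under the Schrödinger evolution
\[
|\psi(t)\rangle \;=\; e^{-iHt/\hbar}\,|\psi(0)\rangle ,
\]
a superposition never becomes determinate on its own; on the view summarised above, determinate values arise only through local interactions with systems that already possess the relevant capacity.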
-
Our most successful and widely adopted models of reduction between scientific theories can be categorized into Nagelian and mathematical approaches. We argue that both accounts are critically incomplete due to what we term the justification gap problem. This issue stems from the lack of justification for the mathematical mappings and bridge laws these approaches use. We propose that integrating these models with a functionalist view of theoretical quantities can bridge this gap. Hence Nagelian and mathematical models should be turned into forms of functional reduction, a less common but increasingly relevant alternative approach to theory reduction. This conclusion underscores the superiority of functional reduction, revises how we conceptualise Nagelian and mathematical reduction, and counters recent arguments raised by Knox and Wallace (2023) on functionalism and reduction.
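As a reference point (these are the standard textbook schemas, not the authors' own formulations): Nagelian reduction derives the reduced theory \(T_R\) from the base theory \(T_B\) together with bridge laws \(B\), whereas functional reduction first defines a quantity by its functional role and then lets the base theory identify the realizer of that role.
\[
\text{(Nagel)}\quad T_B \,\cup\, B \;\vdash\; T_R ,
\qquad
B = \{\,\forall x\,(P_i(x) \leftrightarrow Q_i(x))\,\};
\]
\[
\text{(Functional)}\quad M \;=_{\mathrm{def}}\; \text{the occupant of role } R;
\quad T_B \text{ shows that } P \text{ occupies } R;
\quad \text{hence } M = P.
\]
The justification gap described above concerns the warrant for \(B\) (or for the analogous mathematical mappings); the proposal, in effect, is that supplying a functional-role story is what justifies them, which is why Nagelian and mathematical models should be recast as forms of functional reduction.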
-
This paper reevaluates the conventional topographic model of brain function, stressing the critical role of philosophical inquiry in neuroscience. Since the 1940s, pioneering studies by Penfield and subsequent advancements in visual neuroscience by Hubel and Wiesel have popularized the concept of cortical maps as representations of external and internal states. Yet, contemporary research in various sensory systems, including visual cortices in certain animals, questions the universal applicability of this model. We critique the restrictive influence of this paradigm and introduce an alternative conceptualization using the olfactory system as a model. This system's genetic diversity and dynamic neural encoding serve as a foundation for proposing a rule-based, adaptive framework for neural processing, akin to the dynamic routing in GPS technology, which moves beyond fixed spatial mappings.
-
In this article we propose an analysis of the controversy between Geoffrey Jefferson and Alan Turing in terms of a Kuhnian account of thought experiments. In this account, the main task is not to evaluate intuitions or (only) to rearrange concepts. Instead, we propose that the main task is to construct scenarios by proposing relevant experiences in which shared assumptions and conflicting lines of inquiry can be made explicit. From this perspective, we can understand the arguments and assumptions in the Jefferson-Turing thinking machine controversy.
-
The target article argues that embodied cognitive neuroscience converges on a mechanistic approach to explanation. We argue that it does not. Even some of the article’s paradigms for embodied cognitive neuroscience are explicitly non- or anti-mechanistic.
-
According to recent discussion, cross-explanatory integration in cognitive science might proceed by constraints on mechanistic and dynamic-mechanistic models provided by different research fields. However, not much attention has been given to constraints that could be provided by the study of first-person experience, which in the case of multifaceted mental phenomena are of key importance. In this paper, we fill this gap and consider the question of whether information about first-person experience can constrain dynamic-mechanistic models and what the character of this relation is. We discuss two cases of such explanatory models in neuroscience, namely those of migraine and epilepsy. We argue that, in these cases, first-person insights about the target phenomena significantly contributed to the explanatory models by shaping explanatory hypotheses and by indicating the dynamical properties that explanatory models of these phenomena should account for, thereby directly constraining the space of possible explanations.
-
Jerry Fodor deemed informational encapsulation ‘the essence’ of a system’s modularity and argued that human perceptual processing comprises modular systems, thus construed. Nowadays, his conclusion is widely challenged. Often, this is because experimental work is seen to somehow demonstrate the cognitive penetrability of perceptual processing, where this is assumed to conflict with the informational encapsulation of perceptual systems. Here, I deny the conflict, proposing that cognitive penetration need not have any straightforward bearing on (a) the conjecture that perceptual processing is composed of nothing but informationally encapsulated modules, (b) the conjecture that each and every perceptual computation is performed by an informationally encapsulated module, and (c) the consequences perceptual encapsulation was traditionally expected to have for a perception-cognition border, the epistemology of perception and cognitive science. With these points in view, I propose that particularly plausible cases of cognitive penetration would actually seem to evince the encapsulation of perceptual systems rather than refute/problematize this conjecture.
-
This paper refines a controversial proposal: (1) that core cognition is carried out by perception-like systems, and (2) that their discovery reveals perception to be in the business of attributing high-level properties to the entities it detects (cf. Block, 2014; Burge, 2011). This would be a significant result. But Carey's characterisation is controversial. Even when we bracket familiar concerns with modularity (e.g., Prinz, 2007), her argument for (2) raises worries of its own. A cursory discussion aside (2009, pp. 459–460), it involves generalising from a single example: the analogue magnitude system, involved in certain forms of numerical core cognition. Believing she has established the iconicity of this single system's outputs, Carey simply “speculates” (p. 458) that all core systems will be like it in producing wholly iconic representations.