What I call the active mind approach revolves around the claim that what is “on” a person’s mind is in an important sense brought on and held on to through the agent’s self-conscious rational activity. In the first part, I state the gist of this perspective in a deliberately strong way in order to create a touchstone for critical discussion. In the second part, I engage with two categories of our mental lives that seem to speak against construing the mind as active. First, I discuss affectivity, in particular emotion, and show that emotional episodes are active engagements. Second, I discuss habitual action, and in particular those manifestations of habit which are initially opaque to the agent. In my responses to both objections, the notion of a practical self-understanding will play a central role. The result will be a qualified defence and expansion of the active mind position.
Gerundive imagination reports with an embedded reflexive subject (e.g. Zeno imagines himself swimming) are ambiguous between an ‘inside’ and an ‘outside’ reading: the inside reading captures the imaginer’s directly making the described experience (here: swimming); the outside reading captures the imaginer’s having an experience of an event, involving his own counterpart, from an out-of-body point of view (watching one’s counterpart swim). Our paper explains the inside/outside ambiguity through the observation (i) that imagining can referentially target different phenomenal experiences – esp. proprioception (i.e. bodily feeling) and visual perception (seeing, watching) – and (ii) that imagining and its associated experience can both be de se. Inside/outside readings then arise from intuitive constraints in the lexical semantics of verbs like feel, see. Keywords: Inside/outside readings · Imagistic perspective · Experiential imagining · Self-imagining · Counterfactual parasitism.
Can future robots and AI systems have consciousness and genuinely human intelligence – or even better, superhuman intelligence? Is it possible for them to behave ethically? Here we look at these questions from the point of view of philosophy and AI, and argue that these questions are related: their answers hinge on the fulfillment of the same condition. Starting from an analysis of the concept of consciousness, we argue that the key capacity that computers and robots should possess in order to emulate human cognition and (ethical) consciousness is the capacity to learn and apply ‘coherent webs-of-theories’. We conjecture that where classic AI has been, in essence, ‘data-driven’, the greatest leap forward would be ‘theory-driven’ AI. We review prominent work in deep learning and cognitive neuroscience to back up this claim. This paper is an attempt at synthesis between recent work in philosophy, AI and cognitive science.
In a recent paper, Justin D’Ambrosio (2020) has offered an empirical argument in support of a negative solution to the puzzle of Macbeth’s dagger—namely, the question of whether, in the famous scene from Shakespeare’s play, Macbeth sees a dagger in front of him. D’Ambrosio’s strategy consists in showing that “seeing” is not an existence-neutral verb; that is, that the way it is used in ordinary language is not neutral with respect to whether its complement exists. In this paper, we offer an empirical argument in favor of an existence-neutral reading of “seeing”. In particular, we argue that existence-neutral readings are readily available to language users. We thus call into question D’Ambrosio’s argument for the claim that Macbeth does not see a dagger. According to our positive solution, Macbeth sees a dagger, even though there is not a dagger in front of him.
ABSTRACT: Many artists, art critics, and poets suggest that an aesthetic appreciation of artworks may modify our perception of the world, including quotidian things and scenes. I call this Art-to-World, AtW. Focusing on visual artworks, in this paper I articulate an empirically-informed account of AtW that is based on content-related views of aesthetic experience, and on Goodman’s and Elgin’s concept of exemplification. An aesthetic encounter with an artwork demands paying attention to the aesthetic, expressive, or design properties that realize its purpose. Attention to these properties makes percipients better able to spot them in other entities and scenes as well. The upshot is that an aesthetic commerce with artworks enlarges the scope of what we are able to see and has therefore momentous epistemic consequences.
Quantum entanglement poses a challenge to the traditional metaphysical view that an extrinsic property of an object is determined by its intrinsic properties. So structural realists might be tempted to cite quantum entanglement as evidence for structural realism. I argue, however, that quantum entanglement undermines structural realism. If we classify two entangled electrons as a single system, we can say that their spin properties are intrinsic properties of the system, and that we can have knowledge about these intrinsic properties. Specifically, we can know that the parts of the system are entangled and spatially separated from each other. In addition, the concept of supervenience neither illuminates quantum entanglement nor helps structural realism.
For the uninitiated, the dense nature of mathematical language can act as an obscuring force. With this essay we aim to bring two classical results of discrete mathematics into the light. To this end we analyze winning strategies in a certain class of solitaire games. The gains are non-standard proofs of the results of Kőnig and Vizing. For the standard treatment of these results, see . (For a dense and obscure version of the non-standard proofs presented here, see .) First, let’s introduce the games.
Early modern philosophy in Europe and Great Britain is awash with discussions of the emotions: they figure not only in philosophical psychology and related fields, but also in theories of epistemic method, metaphysics, ethics, political theory and practical reasoning in general. Moreover, interest in the emotions links philosophy with work in other, sometimes unexpected areas, such as medicine, art, literature, and practical guides on everything from child-rearing to the treatment of subordinates. Because of the breadth of the topic, this article can offer only an overview, but perhaps it will be enough to give some idea how philosophically rich and challenging the conception of the emotions was in this period.
To this end, I maintain that this property is individuated by its phenomenal roles, which can be internal – individuating the property per se – and external – determining further phenomenal or physical properties or states. I then argue that this individuation allows phenomenal roles to be organized in a necessarily asymmetrical net, thereby overcoming the circularity objection to dispositionalism. Finally, I provide reasons to argue that these roles satisfy modal fixity, as posited by Bird, and are not fundamental properties, contra Chalmers’ panpsychism. Thus, bodily pain can be considered a substantial dispositional property entrenched in non-fundamental laws of nature.
It seems plausible that visual experiences of darkness have perceptual, phenomenal content which clearly differentiates them from absences of visual experiences. I argue, relying on psychological results concerning auditory attention, that the analogous claim is true of auditory experiences of silence. More specifically, I propose that experiences of silence present empty spatial directions like ‘right’ or ‘left’, and so have egocentric spatial content. Furthermore, I claim that such content is genuinely auditory and phenomenal in the sense that one can, in principle, recognize that one is experiencing silence. This position is far from obvious, as the majority of theories concerning silence perception do not ascribe perceptual, phenomenal content to experiences of silence.
Amodal completion is the representation of those parts of the perceived object that we get no sensory stimulation from. While amodal completion is rife and plays an essential role in all sense modalities, philosophical discussions of this phenomenon have almost entirely been limited to vision. The aim of this paper is to examine in what sense we can talk about amodal completion in olfaction. We distinguish three different senses of amodal completion – spatial, temporal and feature-based completion – and argue that all three are present and play a significant role in olfaction.
Introduction: In accounts of the two-factor theory of delusional belief, the second factor in this theory has been referred to only in the most general terms, as a failure in the processes of hypothesis evaluation, with no attempt to characterise those processes in any detail. Coltheart and Davies (2021) attempted such a characterisation, proposing a detailed eight-step model of how unexpected observations lead to new beliefs based on the concept of abductive inference as introduced by Charles Sanders Peirce.
Formal criteria of theoretical equivalence are mathematical mappings between specific sorts of mathematical objects, notably including those objects used in mathematical physics. Proponents of formal criteria claim that results involving these criteria have implications that extend beyond pure mathematics. For instance, they claim that formal criteria bear on the project of using our best mathematical physics as a guide to what the world is like, and also have deflationary implications for various debates in the metaphysics of physics. In this paper, I investigate whether there is a defensible view according to which formal criteria have significant non-mathematical implications, of these sorts or any other, reaching a chiefly negative verdict. Along the way, I discuss various foundational issues concerning how we use mathematical objects to describe the world when doing physics, and how this practice should inform metaphysics. I diagnose the prominence of formal criteria as stemming from contentious views on these foundational issues, and endeavor to motivate some alternative views in their stead.

Formal criteria of theoretical equivalence are mathematical mappings between specific sorts of mathematical objects, such as sets of sentences (understood as syntactic strings), or sets of mathematical models, or categories of mathematical models (in the sense of category theory). Philosophers of science working on such criteria first associate different physical theories with some such mathematical objects. They then use theorems about which of these mathematical objects stand in one of these mathematical mappings to each other in order to draw conclusions about which physical theories are (or fail to be) “theoretically equivalent”.
The early Stoics diagnose vicious agents with various psychological diseases, e.g. love of money and love of wine. Such diseases are characterised as false evaluative opinions that lead the agent to form emotional impulses for certain objects, e.g. money and wine. Scholars have therefore analysed psychological diseases simply as dispositions for assent. This interpretation is incomplete, I argue, and should be augmented with the claim that psychological disease also affects what kind of action-guiding impressions are created prior to giving assent. This proposal respects the Stoic insistence that impression-formation, no less than assent, is an activity of reason. Insofar as the wine-lover’s reason is corrupted in a different way from the money-lover’s, the two vicious agents will form different action-guiding impressions when faced with similar stimuli. Here I draw a comparison with the Stoic account of expertise, on which experts, in virtue of possessing a system of grasps (katalēpseis), form more precise action-guiding impressions than amateurs do. So expertise enhances, whereas psychological disease degrades, the representational fidelity of the impressions that prefigure action. With these commitments, the Stoics can be seen to offer a nuanced and principled theory of cognitive penetration and to anticipate some recent proposals in epistemology and cognitive science.
In ‘The Concept of Valuing: Experimental Studies’ (CV), Joshua Knobe and Erica Roedder argue that moral considerations “play a role in the concept” of valuing. The short paper is a part of a broader project by one of the co-authors (Knobe) to show through experimental studies that folk psychology is not purely descriptive. Instead, Knobe argues, the criteria for application of a broad range of folk psychological concepts, including those of intentional action and causation, include normative elements. This thesis, though not entirely novel, certainly goes against the prevailing interpretations of folk psychology, and is supported by evidence gathered through innovative, cross-disciplinary empirical studies. The challenge it presents to the received view is therefore no doubt worth serious consideration. In earlier work I have critically examined Knobe’s empirical methodology. Here, I leave those concerns aside and focus on the explanation of the empirical data. I argue that while we are indeed more likely to interpret someone as valuing something if we ourselves take the object of valuing to be good than if we think it is not, this interpretive tendency can be explained by appeal to the principle of charity while holding on to a traditional, descriptive understanding of folk psychological concepts.
I outline two ways of reading what is at issue in the exclusion problem faced by non-reductive physicalism, the “vertical” versus “horizontal”, and argue that the vertical reading is to be preferred to the horizontal. I discuss the implications: that those who have pursued solutions to the horizontal reading of the problem have taken a wrong turn.
It is widely accepted both in theory and in practice that there is what has been called a non-hypocrisy norm on the appropriateness of moral blame. In the terminology of the recent literature on these topics, one has standing to blame only if, as a first approximation, one is not guilty of the very offence one seeks to criticize. Our acceptance of this norm – or one like it – is embodied in the common retorts to criticism, “Who are you to blame me?”, and “Look who’s talking!” If I regularly fail to reply to your emails on time, for instance, I’m in no moral position to criticize you for not replying to my emails on time – crucially, even if you are indeed blameworthy for this failure, and, crucially, even if someone else can appropriately make this criticism. Precisely how to formulate and motivate the non-hypocrisy norm on the standing to blame is a complicated affair. But the following is uncontroversial: if there is a standing-norm on blame at all, then there is some suitable non-hypocrisy norm on standing to blame.
Phenomenal consciousness has an important role in ethics: it is plausible that it is at least a necessary condition for a distinctive kind of moral status. There is a mismatch between this ethical role and an a posteriori (or “type-B”) materialist solution to the mind-body problem. I argue that, if type-B materialism is correct, then the reference of the concept of phenomenal consciousness is indeterminate between properties that are coextensive in the case of (fully conscious) humans but have radically different extensions in non-human animals. The result is that the moral status of many non-mammalian animals is indeterminate. Some ways of managing this disturbing indeterminacy are evaluated.
Debunking arguments aim to undermine common sense beliefs by showing that they are not explanatorily or causally linked to the entities they are purportedly about. Rarely are facts about the etiology of common sense beliefs invoked for the opposite aim, that is, to support the reality of entities that furnish our manifest image of the world. Here I undertake this sort of un-debunking project. My focus is on the metaphysics of ordinary physical objects. I use the view of perception as approximate Bayesian inference to show how representations of ordinary objects can be extracted from sensory input in a rational and truth-tracking manner. Drawing an analogy between perception construed as Bayesian hypothesis testing and scientific inquiry, I sketch out how some of the intuitions that traditionally inspired arguments for scientific realism also find application with regard to proverbial tables and chairs.
I examine the once popular claim according to which interpersonal comparisons of welfare are necessary for social choice. I side with current social choice theorists in emphasizing that, on a narrow construal, this necessity claim is refuted beyond appeal. However, I depart from the opinion presently prevailing in social choice theory in highlighting that, on a broader construal, this claim proves not only compatible with, but even supported by, the current state of the field. I submit that, all in all, the most accurate philosophical assessment consists not in flatly rejecting this necessity claim, but in accepting it in suitably revised form.
According to a bodily view of pain, pains are objects which are located in body parts. This bodily view is supported by the locative locutions for pain in English, such as “I have a pain in my back.” Recently, Liu and Klein (Analysis, 80(2), 262–272, 2020) carried out a cross-linguistic analysis and claimed that (1) Mandarin has no locative locutions for pain and (2) the absence of locative locutions for pain puts the bodily view at risk. This paper rejects both claims. Regarding the philosophical claim, I argue that a language without locative locutions for pain only poses a limited challenge to the bodily view. Regarding the empirical claim, I identify the possible factors which might have misled Liu and Klein about the locative locutions for pain in Mandarin, and argue, by conducting a corpus analysis, that Mandarin has a wide range of locative locutions for pain. I conclude that, compared to English, Mandarin lends no less, if not more, support to the bodily view of pain.
A number of thorny issues such as the nature of time, free will, the clash of the manifest and scientific images, the possibility of a naturalistic foundation of morality, and perhaps even the possibility of accounting for consciousness in naturalistic terms, seem to me to be plagued by the conceptual confusion nourished by a single fallacy: the old fisherman’s mistake.
Supervenience in metaethics is the notion that there can be no moral difference between two acts, persons or events without some non-moral difference underlying it. If St. Francis is a good man, there could not be a man exactly like St. Francis in non-evaluative respects who is not good. The phenomenon was first systematically discussed by R. M. Hare (1952), who argued that realists about evaluative properties struggle to account for it. As is well established, Hare, and following him Simon Blackburn, mistakenly took the relevant phenomenon to be weak rather than strong supervenience, and the explanations they offered for it are accordingly outdated. In this paper, I present a non-factualist account of strong supervenience of the evaluative and argue that it fares better than competing realist views in explaining the conceptual nature of the phenomenon, as well as in offering an account of the supervenience of the evaluative in general, rather than more narrowly the moral. While Hare and Blackburn were wrong about the specifics, they were right that non-factualists can offer a plausible account of the supervenience of the evaluative, one that in certain respects is superior to competing realist explanations.
In this essay, I provide a forward-looking naturalized theory of mental content designed to accommodate predictive processing approaches to the mind, which are growing in popularity in philosophy and cognitive science. The view is introduced by relating it to one of the most popular backward-looking teleosemantic theories of mental content, Fred Dretske’s informational teleosemantics. It is argued that such backward-looking views (which locate the grounds of mental content in the agent’s evolutionary or learning history) face a persistent tension between ascribing determinate contents and allowing for the possibility of misrepresentation. A way to address this tension is proposed by grounding content attributions in the agent’s own ability to detect when it has represented the world incorrectly through the assessment of prediction errors—which in turn allows the organism to more successfully represent those contents in the future. This opens up space for misrepresentation, but that space is constrained by the forward-directed epistemic capacities that the agent uses to evaluate and shape its own representational strategies. The payoff of the theory is illustrated by showing how it can be applied to interpretive disagreements over content ascriptions amongst scientists in comparative psychology and ethology. This theory thus both provides a framework in which to make content attributions to representations posited by an exciting new family of predictive models of cognition, and in so doing addresses persistent tensions with the previous generation of naturalized theories of content.
Cognitive approaches to consciousness appeal to rational or epistemic traits in order to demarcate the boundary between conscious and non-conscious species. If a specific cluster of traits is present in a species, then it should qualify as conscious, given the importance of these traits for the kind of reflective and inferential capacities associated with conscious awareness in humans. A comparative and biologically informed cognitive approach affords the possibility of studying consciousness from a non-anthropocentric perspective because it appeals to traits found in other species that are not exclusive to human beings. It may also offer the best way of conceptualizing conscious awareness, an issue that notoriously resists empirical investigation, on the basis of scientific evidence on the nature and development of these traits. Birch, Ginsburg and Jablonka present in their important paper one of the most comprehensive and well-documented cognitive theories of consciousness. This alone is an impressive achievement, especially because the literature on consciousness is unfortunately riddled with verbal disputes and views that explicitly ignore the scientific evidence because, allegedly, all discussions about consciousness must be conducted introspectively or through a priori judgment. But this paper accomplishes much more, because it also serves as an example of how to integrate extensive scientific findings into a coherent and systematic scientific explanation of a particularly intractable topic. The authors deserve praise for enriching the literature on the science of consciousness with this interdisciplinary contribution.
Double dissociations between perceivable colors and physical properties of colored objects have led many philosophers to endorse relationalist accounts of color. I argue that there are analogous double dissociations between attitudes of belief—the beliefs that people attribute to each other in everyday life—and intrinsic cognitive states of belief—the beliefs that some cognitive scientists posit as cogs in cognitive systems—pitched at every level of psychological explanation. These dissociations provide good reason to refrain from conflating attitudes of belief with intrinsic cognitive states of belief. I suggest that interpretivism provides an attractive account of the former (insofar as they are not conflated with the latter). Like colors, attitudes of belief evolved to be ecological signifiers, not cogs in cognitive systems.
Turing’s much debated test has just turned 70 and is still fairly controversial. His seminal 1950 paper is seen as a complex and multi-layered text, and key questions are yet to be answered. Why did Turing refer to “can machines think?” as a question that was “too meaningless to deserve discussion” and yet spend the largest section (over 40%) of his text discussing it? Why did he spend several years working with chess-playing as a task to illustrate and test machine intelligence, only to trade it for conversational question-answering in his 1950 test? Why did Turing refer to gender imitation in a test for machine intelligence? In this paper I shall address these questions directly by unveiling the social, historical and epistemological roots of Turing’s 1950 test. I will show that it came out of a controversy over the cognitive capabilities of digital computers, most notably with physicist and computer pioneer Douglas Hartree, chemist and philosopher Michael Polanyi, and neurosurgeon Geoffrey Jefferson. Turing’s 1950 paper is essentially a reply to a series of challenges posed to him by these thinkers against the view that machines can think. My goal is to improve the intelligibility of Turing’s test and to help ground it in its history.
In his “Motor imagery and action execution”, Bence Nanay proposes that motor imagery, the counterpart of mental imagery on the output side of our cognitive machinery, plays an important role in action initiation, can contribute to explaining akratic or relapse actions, and through its role in action initiation illustrates potential splits between action causation and action motivation. In this short commentary, I propose to explore a potential tension between Nanay’s characterization of motor imagery (§2) and the role he claims it can play in explaining action execution (§3-5). I also propose that one can alleviate this tension by explicitly considering motor imagery as one among several types of mental states that tap motor representation resources and by characterizing the cognitive control and regulation processes that shape this particular type of mental state and distinguish it from related mental states.
Scientific results are often presented as ‘surprising’ as if that is a good thing. Is it? And if so, why? What is the value of surprise in science? Discussions of surprise in science have been limited, but surprise has been used as a way of defending the epistemic privilege of experiments over simulations. The argument is that while experiments can ‘confound’, simulations can merely surprise (Morgan 2005). Our aim in this paper is to show that the discussion of surprise can be usefully extended to thought experiments and theoretical derivations. We argue that in focusing on these features of scientific practice, we can see that the surprise-confoundment distinction does not fully capture surprise in science. We set out how thought experiments and theoretical derivations can bring about surprises that can be disruptive in a productive way, and we end by exploring how this links with their future fertility.
Agency accounts of causation are often criticised as being unacceptably subjective or anthropocentric. According to such criticisms, if there were no human agents then there would be no causal relations, or, at the very least, if humans had been different then so too would causal relations. Here we describe a model of a causal agent that is not human with a view to exploring this latter claim. This model obeys the known laws of physics, and we claim that it endows the causal agent with a “causal viewpoint: a distinctive mix of knowledge, ignorance and practical ability that a creature must apparently exemplify, if it is to be capable of employing causal concepts” (Price, 2007, p.255). We argue that this model of a causal agent provides a clear illustration of the epistemic constraints that define such a ‘causal perspective’, and we employ the model to demonstrate how shared constraints lead to a shared perspective. Furthermore, we use this model to scrutinise the alignment of three familiar asymmetries with the causal asymmetry: the thermodynamic arrow, the arrow of time, and the arrow of deliberation and action.