Visual experiences seem to exhibit phenomenological particularity: when you look at some object, it – that particular object – looks some way to you. But experiences exhibit generality too: when you look at a distinct but qualitatively identical object, things seem the same to you as they did in seeing the first object. Naïve realist accounts of visual experience have often been thought to have a problem with each of these observations. It has been claimed that naïve realist views cannot account for the generality of visual experiences, and that the naïve realist explanation of particularity has unacceptable implications for self-knowledge: the knowledge we have of the character of our own experiences. We argue in this paper that neither claim is correct: naïve realism can explain the generality of experiences, and the naïve realist explanation of particularity raises no problems for our self-knowledge.
‘[A]gainst the palpably sophistical proofs of Leibniz that this is the best of all possible worlds, we may even oppose seriously and honestly the proof that it is the worst of all possible worlds’
(Schopenhauer, The World as Will and Representation Vol II 583)
In 1946, Frederick Copleston, a Jesuit priest famous for his work on the history of philosophy, wrote a book about Arthur Schopenhauer. …
Prediction may be a central concept for understanding perceptual and cognitive processing. Contemporary theoretical neuroscience formalizes the role of prediction in terms of probabilistic inference. Perception, action, attention and learning may then be unified as aspects of predictive processing in the brain. This chapter first explains the sense in which predictive processing is inferential and representational. Then follows an exploration of how the predictive processing framework relates to a series of considerations in favour of enactive, embedded, embodied and extended cognition (4e cognition). The initial impression may be that predictive processing is too representational and inferential to fit well with 4e cognition. But, in fact, predictive processing encompasses many phenomena prevalent in 4e approaches, while remaining both inferential and representational.
I see no good reason to prefer (any version I know of) the ‘holonomy interpretation’ to the ‘potential interpretation’ of the Aharonov-Bohm effect. Everyone agrees that the inverse image [A] = {A + dλ}, with dA = F, of the electromagnetic field F is a class, full of individuals; and that the circulation C of the electromagnetic potential A around a loop σ encircling the solenoid is common to the whole class [A], and to the homotopy class or hoop [σ]. If picking individuals out of classes is the problem, picking an individual potential out of [A] should be no worse than picking an individual loop out of [σ]. The individuals of [A] can moreover be transcended—punctually, without integration around loops—by an appropriate version of the electromagnetic connection.
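The claim that the circulation C is common to the whole gauge class can be spelled out in a line or two. The following is a sketch in standard differential-form notation, not drawn from the paper itself: exact forms integrate to zero around a closed loop on which λ is single-valued, so every member of [A] yields the same circulation.

```latex
% Gauge class of potentials for a fixed electromagnetic field F:
[A] = \{\, A + d\lambda \,\}, \qquad dA = F .
% Circulation around a loop \sigma encircling the solenoid:
C = \oint_{\sigma} (A + d\lambda)
  = \oint_{\sigma} A + \oint_{\sigma} d\lambda
  = \oint_{\sigma} A ,
% since \oint_{\sigma} d\lambda = 0 for single-valued \lambda.
```

This is why C, like the hoop [σ], is insensitive to which individual of [A] one picks.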
Recent empirical studies raise methodological concerns about the use of intuitions in philosophy. According to one prominent line of reply, these concerns are unwarranted since the empirical studies motivating them do not control for the putatively characteristic phenomenology of intuitions. This paper makes use of research on metacognitive states that have precisely this phenomenology to argue that the above reply fails. Furthermore, it shows that empirical findings about these metacognitive states can help philosophers make better informed assessments of their warrant for relying on intuitions in inquiry.
My dog Gomer is friendly. The truth of the claim that Gomer is friendly doesn’t “float free” from how the world is. The claim is somehow made true. There are many questions to ask about this phenomenon, what has come to be called truthmaking. For example, does every truth have a truthmaker? What is the connection between truthmakers and the nature of truth? How are truthmaking and ontological commitment related? Let’s focus, however, on another question that is arguably conceptually prior to others: just what is truthmaking?
Here's a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A. If that assumption is wrong -- if things of Type X needn't have Property A -- then you've given what I'll pejoratively call an inflate-and-explode argument. …
Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms. What distinguishes evolutionary psychologists from many cognitive psychologists is the proposal that the relevant internal mechanisms are adaptations—products of natural selection—that helped our ancestors get around the world, survive and reproduce. To understand the central claims of evolutionary psychology we require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind.
As Elisabeth of Bohemia famously pointed out, Descartes appears to be committed to the following inconsistent triad:
In every instance of causation, there is an a priori conceptual connection between cause and effect. …
The American speculative philosopher Grace de Laguna would have been right to judge that 1950s state-of-the-art analytic philosophy of mind was playing catch-up with the philosophical world of her youth. …
Introduction: Designing the Mind
Chapter 1: The Age of AI
Chapter 2: The Problem of AI Consciousness
Chapter 3: Consciousness Engineering
Chapter 4: How to Catch an AI Zombie: Testing for Consciousness in Machines
Chapter 5: Could You Merge with AI?
Take a glimpse at the scene in front of you. Assuming that your environment isn’t too cluttered, you’ll immediately get a rough sense of the number of objects in your vicinity, without having to count them. ‘We have a basic analogical and non-linguistic capacity to recognise number and quantity’ (Menary 2015, 11). This capacity for rapidly apprehending the number of entities in a collection is not unique to humans and is shared by a surprisingly large range of species (De Cruz 2006), so is thought to be evolutionarily ancient. However, it is also limited by the fact that it is inherently approximate, becoming increasingly inaccurate as the size of the target collection increases (Dehaene, Dehaene-Lambertz, & Cohen 1998). As such, the system that supports this capacity is often referred to as the approximate number system (ANS).
In the Posterior Analytics, Aristotle tells us that we learn things in three ways: we perceive particulars, we come to grasp universals by induction from these perceived particulars, and we eventually find ourselves in a position to demonstrate things about the universals we came to grasp inductively.1 Perception, then, is the source of all our learning—perception supplies the knowledge without which induction could not proceed, and without which our demonstrations would therefore find no object.
Debates about the possibility of artificial intelligence have focused on the question of whether programming a computer in the right way could produce genuine thought. But for there to be thought is for there to be thinking beings. What sort of being might be made intelligent by programming a computer? Would it be the computer itself--a physical object? Some part of the computer? The program running on the computer? Or something else? There has been almost no discussion of this question. Yet if artificial intelligence is possible, it must have an answer. A satisfying account is elusive.
According to one conception of strong emergence, strongly emergent properties are nomologically necessitated by their base properties and have novel causal powers relative to them. In this paper, I raise a difficulty for this conception of strong emergence, arguing that these two features (i.e., nomological necessitation and causal novelty) are incompatible. Instead of presenting this as an objection to the friends of strong emergence, I argue that this indicates that there are distinct varieties of strong emergence: causal emergence and epiphenomenal emergence. I then explore the prospects of emergentism with this distinction in the background.
So-called basic self-knowledge (ordinary knowledge of one’s present states of mind) can be seen as both ‘baseless’ and privileged. The spontaneous self-beliefs we have when we avow our states of mind do not appear to be formed on any particular epistemic basis (whether introspective or extrospective). Nonetheless, on some views, these self-beliefs constitute instances of (privileged) knowledge. We are here interested in views on which true mental self-beliefs have internalist epistemic warrant that false ones lack.
18th-century British aesthetics addressed itself to a variety of questions: What is taste? What is beauty? Is there a standard of taste and of beauty? What is the relation between the beauty of nature and that of artistic representation? What is the relation between one fine art and another? How ought the fine arts be ranked one against another? What is the nature of the sublime and ought it be ranked with the beautiful? What is the nature of genius and what is its relation to taste? Although none of these questions was peripheral to 18th-century British aesthetics, not all were equally …
Perdurantists think of continuants as mereological sums of stages (that is, sums of instantaneous spatiotemporal parts) from different times. This view of persistence would force us to drop the idea that there is genuine change in the world. By exploiting a presentist metaphysics, Brogaard (2000) proposed a theory, called presentist four-dimensionalism, that aims to reconcile perdurantism with the idea that things undergo real change. However, her proposal commits us to rejecting the idea that stages must exist in their entirety. Giving up the tenet that all the stages are equally real could be a price that perdurantists are unwilling to pay. I argue that Kit Fine's (2005) fragmentalism provides us with the tools to combine a presentist metaphysics with a perdurantist theory of persistence without giving up the idea that reality is constituted by more than purely present stages.
T.M. Scanlon’s ‘reasons fundamentalism’ is thought to face difficulties answering the normative question—that is, explaining why it’s irrational to not do what you judge yourself to have most reason to do (e.g., Dreier 2014a). I argue that this difficulty results from Scanlon’s failure to provide a theory of mind that can give substance to his account of normative judgment and its tie to motivation. A central aim of this paper is to address this deficiency. To do this, I draw on broadly cognitivist theories of emotion (e.g., Nussbaum 2001, Roberts 2013). These theories are interesting because they view emotions as cognitive states from which motivation emerges. Thus, they provide a model Scanlon can use to develop a richer account of both the judgment-motivation connection and the irrationality of not doing what you judge yourself to have most reason to do. However, the success is only partial—even this more developed proposal fails to give a satisfactory answer to the normative question.
The purpose of this chapter is to determine what it is to remember something, as opposed to imagining it, perceiving it, or introspecting it. What does it take for a mental state to qualify as remembering, or having a memory of, something? The main issue to be addressed is therefore a metaphysical one: that of determining which features the mental states that qualify as memories typically enjoy, and the states that do not so qualify typically lack. I will proceed as follows.
This paper advances two claims. The positive claim offers a correctness condition for perceptual experiences, one that does justice to the so-called “particularity of perception”: (T1) the perceptual content of a perceptual experience is correct iff there are perceived objects of which it is non-accidentally true.
Mathematical formalisms that are constructed for inquiry in one disciplinary context are sometimes applied to another, a phenomenon that I call ‘tool migration.’ Philosophers of science have addressed the advantages of using migrated tools. In this paper, I argue that tool migration can be epistemically risky. I then develop an analytic framework for better understanding the risks that are implicit in tool migration. My approach shows that viewing mathematical constructs as tools while also acknowledging their representational features allows for a balanced understanding of the knowledge production that is aided by research tools migrated across disciplinary boundaries.
Imagine seeming to see a box of matches on a table. Now imagine moving slightly, while trying to keep the matchbox in view. You would be startled if the box of matches were suddenly to stop looking to you like a box, instead apparently morphing into a toy car. We thus tend to betray our implicit visual expectations, by responding with sudden surprise to visual experiences that are suitably discontinuous with their immediate predecessors. The surprise illustrated there is different to the more considered surprise that we often feel in other contexts. I would be taken aback if an ordinarily reliable informant told me that an eight-year-old child recently ran a marathon in just over two hours. But the surprise that I would then feel is different to the startlement illustrated in the previous paragraph. While the surprise in the earlier case is doubtless shaped by one’s experiences of the world, it seems to arise independently of the relatively sophisticated processes of learning that lead us to our beliefs about, say, age-related marathon times.
Unless presently in a coma, you cannot avoid witnessing injustice. You will find yourself judging that a citizen or a police officer has acted wrongly by killing someone, that a politician is corrupt, that a social institution is discriminatory. In all these cases, you are making a moral judgment. But what is it that drives your judgment? Have you reasoned your way to the conclusion that something is morally wrong? Or have you reached a verdict because you feel indignation or outrage? Rationalists in moral philosophy hold that moral judgment can be based on reasoning alone. Kant argued that one can arrive at a moral belief by reasoning from principles articulating one’s duties. Sentimentalists hold instead that emotion is essential to distinctively moral judgment. Hume, Smith, and their British contemporaries argued that one cannot arrive at a moral belief without experiencing appropriate feelings at some point—e.g. by feeling compassion toward victims or anger toward perpetrators. While many theorists agree that both reason and emotion play a role in ordinary moral cognition, the dispute is ultimately about which process is most central.
Experimentation is traditionally considered a privileged means of confirmation. However, how experiments are a better confirmatory source than other strategies is unclear, and recent discussions have identified experiments with various modeling strategies on the one hand, and with ‘natural’ experiments on the other hand. We argue that experiments aiming to test theories are best understood as controlled investigations of specimens. ‘Control’ involves repeated, fine-grained causal manipulation of focal properties. This capacity generates rich knowledge of the object investigated. ‘Specimenhood’ involves possessing relevant properties given the investigative target and the hypothesis in question. Specimens are thus representative members of a class of systems, to which a hypothesis refers. It is in virtue of both control and specimenhood that experiments provide powerful confirmatory evidence. This explains the distinctive power of experiments: although modellers exert extensive control, they do not exert this control over specimens; although natural experiments utilize specimens, control is diminished.
Suppose the growing block theory of time is correct and that you have a choice between two options: you suffer 60 minutes of pain from 10:30 pm to 11:30 pm, or you suffer 65 minutes of pain from 10:50 pm to 11:55 pm. …
I propose a new model of implicit bias, according to which implicit biases are constituted by unconscious imaginings. I begin by endorsing a principle of parsimony when confronted with unfamiliar phenomena. I introduce implicit bias in terms congenial to what most philosophers and psychologists have said about their nature in the literature so far, before moving to a discussion of the doxastic model of implicit bias and objections to it. I then introduce unconscious imagination and argue that appeal to it does not represent a departure from a standard view of imagination, before outlining my model and showing how it accommodates characteristic features of implicit bias. I argue for its advantages over the doxastic model: it does not violate the parsimony principle, it does not face any of the objections so far raised to doxasticism, and it can accommodate the heterogeneity in the category of implicit bias. Finally, I address whether my view limits our ability to hold people accountable for their biases (it does not), and whether it is consistent with what we know about intervention strategies (it is). I conclude that implicit biases are constituted by unconscious imaginings.
Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.
Cultural psychologists often describe the relationship between mind and culture as ‘dynamic.’ In light of this, we provide two desiderata that a theory about encultured minds ought to meet: the theory ought to reflect how cultural psychologists describe their own findings and it ought to be thoroughly naturalistic. We show that a realist theory of causal powers — which holds that powers are causally-efficacious and empirically-discoverable — fits the bill. After an introduction to the major concepts in cultural psychology and describing causal power realism, we use a case study — the effects of pathogen prevalence on culture and cognition — to show the explanatory capacities of the powers framework.
Freitag (2015) and Schramm (2014) have proposed different, though converging, solutions to Goodman’s New Riddle of Induction. Responding to their proposals, Dorst (2016, 2018) has used the fictitious character of a ‘grue-speaker’ as his principal device for criticizing counterfactual-based treatments of the Riddle. In this paper, I argue that Dorst’s arguments fail: on the observation of no emeralds other than green ones, the ‘grue-speaker’ cannot use the symmetry between the ‘green’- and ‘grue’-languages to claim ‘grue’- instead of ‘green’-evidence, and the counterfactuals involved (explicitly by Schramm, implicitly by Freitag) remain unaffected by Dorst’s proposal for how to evaluate them.