The dispute between defenders and opponents of extended cognition (EC) has reached a dead end because no agreement could be found on what the mark of the cognitive is. Recently, many authors have therefore pursued a different strategy: they focus on the notion of constitution rather than the notion of cognition to determine whether constituents of cognitive phenomena can be external to the brain. One common strategy is to apply the new mechanists’ mutual manipulability account (MM). In this paper, I analyze whether this strategy can succeed, focusing on David Kaplan’s (2012) version of it. It will turn out that MM alone is insufficient for answering the question of whether EC is true. What I call the Challenge of Trivial Extendedness arises because mechanisms for cognitive behaviors are extended in ways that nobody would want to count as cases of EC. I will argue that this challenge can be met by adding a further necessary condition: cognitive constituents of mechanisms satisfy MM and they are what I call behavior unspecific.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Radical Embodied Cognitive Science (REC) tries to understand as much cognition as it can without positing contentful mental entities. Thus, in one prominent formulation, REC claims that content is involved neither in visual perception nor in any more elementary form of cognition. Arguments for REC tend to rely heavily on considerations of ontological parsimony, with authors frequently pointing to the difficulty of explaining content in naturalistically acceptable terms. However, many classic concerns about the difficulty of naturalizing content likewise threaten the credentials of intentionality, which even advocates of REC take to be a fundamental feature of cognition. In particular, concerns about the explanatory role of content and about indeterminacy can be run on accounts of intentionality as well. Issues about explanation can be avoided, intriguingly if uncomfortably, by dramatically reconceptualizing or even renouncing the idea that intentionality can explain. As for indeterminacy, Daniel Hutto and Erik Myin point the way toward a response, appropriating an idea from Ruth Millikan. I take it a step further, arguing that attention to the ways that beliefs’ effects on behavior are modulated by background beliefs can help illuminate the facts that underlie their intentionality and content.
Children acquire complex concepts like DOG earlier than simple concepts like BROWN, even though our best neuroscientific theories suggest that learning the former is harder than learning the latter and, thus, should take more time (Werning 2010). This is the Complex-First Paradox. We present a novel solution to the Complex-First Paradox. Our solution builds on a generalization of Xu and Tenenbaum’s (2007) Bayesian model of word learning. By focusing on a rational theory of concept learning, we show that it is easier to infer the meaning of complex concepts than that of simple concepts.
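The Bayesian intuition behind this result can be sketched in a few lines. The following toy model is an illustration only, not the authors’ actual model (the domain, priors, and hypothesis names are invented for the example); it uses Tenenbaum’s “size principle”, on which hypotheses with smaller extensions assign higher likelihood to consistent examples, so a specific concept like DOG can out-compete a broad one like BROWN after a few labeled examples.

```python
# Illustrative sketch (not the authors' actual model): Bayesian hypothesis
# comparison with the "size principle". A hypothesis with a small extension
# assigns each consistent example probability 1/|extension|, so specific
# ("complex") concepts gain posterior mass faster than broad ("simple") ones.

def posterior(hypotheses, examples):
    """Return normalized posteriors P(h | examples).

    hypotheses: dict name -> (prior, extension as a set of objects)
    examples:   observed objects, assumed drawn uniformly at random from
                the true concept's extension (the size principle).
    """
    scores = {}
    for name, (prior, extension) in hypotheses.items():
        if all(x in extension for x in examples):
            # Likelihood of n independent draws: (1 / |extension|) ** n
            scores[name] = prior * (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0  # hypothesis ruled out by an example
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Toy domain: the brown things include the dogs plus many other objects.
dogs = {"rex", "fido", "bella"}
brown_things = dogs | {"table", "boot", "bear", "box", "violin"}

hypotheses = {
    "DOG": (0.5, dogs),            # specific ("complex") concept
    "BROWN": (0.5, brown_things),  # broad ("simple") concept
}

# Three examples that are all brown dogs, so consistent with both hypotheses.
post = posterior(hypotheses, ["rex", "fido", "rex"])
print(post)  # DOG receives much more posterior mass than BROWN
```

With equal priors, DOG’s likelihood advantage of (8/3)³ per three examples already pushes its posterior above 0.9, which is the sense in which the more specific concept is easier to infer.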
Although interest in emergence has grown in recent years, there does not seem to be consensus on whether it is a non-trivial, interesting notion and whether the concept of reduction is relevant to its characterization. Another key issue is whether emergence should be understood as an epistemic notion or whether there is a plausible ontological concept of emergence. The aim of this work is to propose an epistemic notion of contextual emergence on the basis of which one may tackle these issues.
I argue that our best science supports the rationalist idea that, independent of reasoning, emotions aren’t integral to moral judgment. There’s ample evidence that ordinary moral cognition often involves conscious and unconscious reasoning about an action’s outcomes and the agent’s role in bringing them about. Emotions can aid in moral reasoning by, for example, drawing one’s attention to such information. However, there is no compelling evidence for the decidedly sentimentalist claim that mere feelings are causally necessary or sufficient for making a moral judgment or for treating norms as distinctively moral. I conclude that, even if moral cognition is largely driven by automatic intuitions, these shouldn’t be mistaken for emotions or their non-cognitive components. Non-cognitive elements in our psychology may be required for normal moral development and motivation but not necessarily for mature moral judgment.
Multiple realisation prompts the question: how is it that multiple systems all exhibit the same phenomena despite their different underlying properties? In this paper I develop a framework for addressing that question and argue that multiple realisation can be reductively explained. I defend this position by applying the framework to a simple example – the multiple realisation of electrical conductors. I go on to compare my position to those advocated in Polger & Shapiro (2016), Batterman (2018), and Sober (1999). Contra these respective authors I claim that multiple realisation is commonplace, that it can be explained, but that it requires a sui generis reductive explanatory strategy. As such, multiple realisation poses a non-trivial challenge to reduction which can, nonetheless, be met.
Our present experiences are strikingly different from past and future ones. Every philosophy of time must explain this difference. It has long been argued that A-theorists can do it better than B-theorists because their explanation is most natural and straightforward: present experiences appear to be special because they are special. I do not wish to dispute one aspect of this advantage. But I contend that the general perception of this debate is seriously incomplete as it tends to conflate two rather different aspects of the phenomenon behind it, the individual and the common dimensions of the present. When they are carefully distinguished and the emerging costs of the A-theories are balanced against their benefits, the advantage disappears.
The study of psychological and cognitive mechanisms is an interdisciplinary endeavor, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this paper, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means that, when studying mechanisms, they adopt deliberate metaphysical positions that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, to determine which variables to manipulate under experimental conditions, and to decide which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since these can easily lead them astray if invoked carelessly. On the other hand, if we are cautious in the application of our metaphysical commitments, and careful with the inferences we draw from them, they can provide new insights into how we might find connections between models and theories of mechanisms that appear incompatible.
Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond if you were to discover one of these robots attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?
I argue that we can visually perceive others as seeing agents. I start by characterizing perceptual processes as those that are causally controlled by proximal stimuli. I then distinguish between various forms of visual perspective-taking, before presenting evidence that most of them come in perceptual varieties. In doing so, I clarify and defend the view that some forms of visual perspective-taking are “automatic”—a view that has been marshalled in support of dual-process accounts of mindreading.
Suppose that pain is intrinsically morally undesirable, and that all agents have a non-instrumental moral reason to alleviate pain when possible. Now consider the following two cases: Alice: Alice thinks very little about morality as such. However, for as long as she can remember, she has been deeply moved by the pain of others. Although Alice would not be able to articulate any justification or explanation for her attitudes towards pain, she is saddened by the thought that others are or might be in pain, and is motivated to alleviate their pain whenever possible. She gives a significant sum of money to the Guinea Worm Eradication Fund because she knows that by doing so she will be able to significantly reduce the amount of pain caused by this parasite.
Since each of those acts plausibly fulfils the instruction, anyone trying to say something summary about what substantial features they share has a problem. The profusion and diversity of imagination’s putative kinds, roles, and capabilities might well lead you to think that nothing interesting or important unites them. Nonetheless, much recent work implicitly shares a quite general approach to imaginative phenomena: the imitation theory, according to which imaginative experiences are imitations of other experiences, and the attitudes they involve are likewise imitations of counterpart attitudes.
While scientific inquiry crucially relies on the extraction of patterns from data, we still have a very imperfect understanding of the metaphysics of patterns—and, in particular, of what it is that makes a pattern real. In this paper we derive a criterion of real-patternhood from the notion of conditional Kolmogorov complexity. The resulting account belongs in the philosophical tradition, initiated by Dennett (1991), that links real-patternhood to data compressibility, but is simpler and formally more perspicuous than other proposals defended heretofore in the literature. It also successfully enforces a non-redundancy principle, suggested by Ladyman and Ross (2007), that aims at excluding as real those patterns that can be ignored without loss of information about the target dataset, and which their own account fails to enforce.
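The compressibility idea behind this tradition can be made concrete. Kolmogorov complexity (conditional or plain) is uncomputable, so the sketch below — an illustration of the Dennettian link between real patterns and compressibility, not the criterion defended in the paper — uses a practical compressor (zlib) as a crude upper-bound proxy: a dataset harbors a pattern in the compressibility sense when its description can be made much shorter than the raw data, whereas random noise admits no such shortening.

```python
# Illustrative sketch only: zlib output length as a rough, computable
# stand-in for Kolmogorov complexity K(data). A patterned dataset
# compresses far below its raw size; incompressible noise does not.
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of a zlib description of `data`."""
    return len(zlib.compress(data, level=9))

random.seed(0)
patterned = bytes([i % 7 for i in range(10_000)])            # highly regular
noise = bytes(random.randrange(256) for _ in range(10_000))  # incompressible

print(compressed_size(patterned))  # far smaller than 10_000
print(compressed_size(noise))      # close to 10_000
```

The non-redundancy principle mentioned above would then rule out, as not real, any pattern whose omission leaves the achievable description length of the dataset unchanged.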
While most surveys, defenses, and critiques of embodied cognition proceed by treating it as a neatly delineated claim, such an approach soon becomes problematic due to the inherent plurality of this perspective on cognition. Embodied cognition is best treated as a research tradition, not as a single theory. This tradition has evolved in opposition to a certain kind of cognitive science, usually dubbed “cognitivism”. Cognitivism is typically characterized as a view that cognition may be fully explained in terms of transformations of mental representations, most commonly amodal symbols. The methodological and ontological commitments of embodied cognition follow research exemplars found in embodied cognitive linguistics, grounded cognition, ecological psychology, dynamical study of development, or neurophenomenology. Due to its inherent variety, this research tradition is not reducible to a single theory of cognitive phenomena (or to a single component subtradition). At the same time, all of these subtraditions share one feature: they reject cognitivism, in one way or another. They also feature fairly similar research heuristics for the discovery of how cognitive mechanisms work.
Russellian monism is a theory in the metaphysics of mind, on which a single set of properties underlies both consciousness and the most basic entities posited by physics. The theory is named for Bertrand Russell, whose views about consciousness and its place in nature were informed by a structuralist conception of theoretical physics. On such a structuralist conception, physics describes the world in terms of its spatiotemporal structure and dynamics (changes within that structure) and says nothing about what, if anything, underlies that structure and dynamics. For example, as it is sometimes put, physics describes what mass and charge do, e.g., how they dispose objects to move toward or away from each other, but not what mass and charge are.
Agentialist accounts of self-knowledge seek to do justice to the connection between our identities as rational agents and our capacity to know our own minds. There are two strategies that agentialists have employed in developing their position: substantive and non-substantive. My aim is to explicate and defend one particular example of the non-substantive strategy, namely, that proposed by Tyler Burge. In particular, my concern is to defend Burge’s claim that critical reasoning requires a relation of normative directness between reviewing and reviewed perspectives. My defence will involve supplementing Burge’s view with a substantive agentialist account of self-knowledge.
Neuroscience has become increasingly reliant on multi-subject research in addition to studies of unusual single patients. This research has brought with it a challenge: how are data from different human brains to be combined? The dominant strategy for aggregating data across brains is what I call ‘the cartographic approach’, which involves mapping data from individuals to a spatial template. Here I characterize the cartographic approach and argue that one of its key steps, registration, should be carried out in a way that is sensitive to the target of investigation. Because registration aims to align homologous brain locations, but not all homologous locations can be simultaneously aligned, a multiplicity of registration methods is required to meet the needs of researchers investigating different phenomena. I call this position ‘registration pluralism’. Registration pluralism has potential implications for neuroscientific practice, three of which I discuss here. This work shows the importance of reflecting more carefully on data aggregation methods, especially in light of the substantial individual differences that exist between brains.
Expertise is traditionally classified into perceptual, cognitive, and motor forms. I argue that the empirical research literature on expertise gives us compelling reasons to reject this traditional classification and accept an alternative. According to the alternative I support, there is expertise in forming impressions, which further divides into expertise in forming sensory and intellectual impressions, and there is expertise in performing actions, which further divides into expertise in performing mental and bodily actions. The traditional category of cognitive expertise splits into two: expertise in forming intellectual impressions and expertise in performing mental actions. I consider and address a challenge to my case for adopting this alternative classification of expertise that derives from dual-process theories of cognition.
What are intuitions? Stereotypical examples may suggest they are the results of common intellectual reflexes. But some intuitions defy the stereotype: there are hard-won intuitions which take deliberate effort to have, improved intuitions which contravene how matters naively seem to us, and expertly guided intuitions in which an expert in some domain guides a novice toward having an intuition he or she would not have had otherwise. I argue that reflection on these three phenomena motivates a conception of intuition that emphasizes its phenomenology over its etiology, as well as its grounding in malleable problem-solving abilities.
A vexing problem in contemporary epistemology – one with origins in Plato’s Meno – concerns the value of knowledge, and in particular, whether and how the value of knowledge exceeds the value of mere (unknown) true opinion. The recent literature is deeply divided on the matter of how best to address the problem. One point, however, remains unquestioned: that if a solution is to be found, it will be at the personal level, the level at which states of whole persons, as such, appear. We take exception to this orthodoxy, or at least to its unquestioned status. We argue that subpersonal states play a significant – arguably, primary – role in much epistemically relevant cognition and thus constitute a domain in which we might reasonably expect to locate the “missing source” of epistemic value, beyond the value attached to mere true belief.
Non-symmetric relations allow for differential application. A binary relation R can hold of a and b in two different ways: (i) aRb and (ii) bRa. Different states of affairs result from completing R by means of a and b, depending on the order in which a and b are combined with R. The extension of a binary non-symmetric relation is, accordingly, not to be understood as a set of unordered pairs. One has to operate with a structured conception of the extension of a relation, for instance in terms of ordered pairs, that considers not only which things R relates, but also the order in which it relates them.
The terminology is most clearly associated with Bertrand Russell, but the distinction between knowledge by acquaintance and knowledge by description is arguably a critical component of classical or traditional versions of foundationalism. Let us say that one has inferential or nonfoundational knowledge that p when one’s knowledge that p depends on one’s knowledge of some other proposition(s) from which one can legitimately infer p; and one has foundational or noninferential knowledge that p when one’s knowledge that p does not depend on any other knowledge one has in this way.
Does time seem to us to pass, even though it doesn’t, really? Many philosophers think the answer is ‘Yes’ – at least when ‘time’s (really) passing’ is understood in a particular way. They take time’s passing to be a process by which each time in turn acquires a special status, such as the status of being the only time that exists, or being the only time that is present (where that means more than just being simultaneous with oneself). This chapter suggests that, on the contrary, all we perceive is temporal succession, one thing after another, a notion to which modern physics is not inhospitable. The contents of perception are best described in terms of ‘before’ and ‘after’, rather than ‘past’, ‘present’, and ‘future’.
Based on our third-wave framing of predictive processing, we argued in our previous post (4) that generative models cannot be unplugged from the world, given that action couples the agent to the environment.
We revisit the question (most famously) initiated by Turing: can human intelligence be completely modeled by a Turing machine? We show that the answer is no, assuming a certain weak soundness hypothesis. More specifically, we show that at least some meaningful thought processes of the brain cannot be Turing computable. In particular, some physical processes are not Turing computable, which is not entirely expected. Our argument bears some similarities to the well-known Lucas-Penrose argument, but we work purely at the level of Turing machines and do not use Gödel’s incompleteness theorem or any direct analogue. Instead, we directly construct and use a weak analogue of a Gödel statement for a certain system involving our human subject; this allows us to side-step some (possible) meta-logical issues with their argument.
Materialists about human persons think that we are material through and through—wholly material beings. Those who endorse materialism more widely think that everything is material through and through. But what is it to be wholly material? In this article, I answer that question. I identify and defend a definition or analysis of ‘wholly material’.
Does the sense of smell involve the perception of odor objects? General discussion of perceptual objecthood centers on three criteria: stimulus representation, perceptual constancy, and figure-ground segregation. These criteria, derived from theories of vision, have been applied to olfaction in recent philosophical debates about psychology. An inherent problem with such framing of olfactory objecthood is that philosophers explicitly ignore the constitutive factors of the sensory systems that underpin the implementation of these criteria. The biological basis of odor coding is fundamentally different from the coding principles of the visual system. This article analyzes the three measures of perceptual objecthood against the biological background of the olfactory system. It contrasts the coding principles in olfaction with the visual system to show why these criteria of objecthood fail to be instantiated in odor perception. The argument demonstrates that olfaction affords perceptual categorization without the need to form odor objects.
In light of the very interesting interview with Dave Chalmers in the Opinionator I thought I would revisit some of my objections to the notion of artificial consciousness (AC). I am somewhat of a skeptic about artificial consciousness in a way that I am not about AGI (artificial general intelligence). …
Intuitively, moral responsibility requires conscious awareness of what one is doing, and why one is doing it, but what kind of awareness is at issue? Neil Levy argues that phenomenal consciousness — the qualitative feel of conscious sensations — is entirely unnecessary for moral responsibility. He claims that only access consciousness — the state in which information (e.g. from perception or memory) is available to an array of mental systems (e.g. such that an agent can deliberate and act upon that information) — is relevant to moral responsibility. I argue that numerous ethical, epistemic, and neuroscientific considerations entail that the capacity for phenomenal consciousness is necessary for moral responsibility. I focus in particular on considerations inspired by P.F. Strawson, who puts a range of qualitative moral emotions — the reactive attitudes — front and centre in the analysis of moral responsibility.