Here is a widespread but controversial idea: animals that represent correctly are likely to be selected over those that misrepresent. While versions of this claim have traditionally been endorsed by the vast majority of philosophers of mind, it has recently been argued that the claim is just plainly wrong. My aim in this paper is to argue for an intermediate position: the correctness of some, but not all, representations is indeed selectively advantageous. It is selectively advantageous to have correct representations that are directly involved in bringing about and guiding the organism’s action. I start with the standard objection to the claim that it is selectively advantageous to represent correctly, the ‘better safe than sorry’ argument, and then generalize it with the help of Peter Godfrey-Smith’s distinction between Cartesian and Jamesian reliability and the trade-off between them. This generalized argument rules out a positive answer to our question for at least the vast majority of our representational apparatus.
In this article, I defend an account of linguistic comprehension on which meaning is not cognized, or on which we do not tacitly know our language's semantics. On this view, sentence comprehension is explained instead by our capacity to translate sentences into the language of thought. I explain how this view can explain our capacity to correctly interpret novel utterances, and then I defend it against several standing objections.
It would be hard to overestimate the amount of progress on causation and causal explanation since Woodward’s first book, Making Things Happen (2005). It is unusual in philosophy to pronounce so confidently that a discussion has not merely changed, but genuinely progressed. It’s easy to say that this book has been long anticipated, will be widely read, and will shape the discussion for years to come. Given that, this is perhaps a surprising direction in which to turn current discussions of causation, which have been moving more toward formal methods, machine learning, and automated causal discovery. This book goes almost completely in the other direction.
Accuracy scoring rules measure the value of your probability assignment’s closeness to the truth. A scoring rule for a single proposition p can be thought of as a pair of functions, T and F, on the interval [0,1], where T(x) tells us the score for assigning x to p when p is true and F(x) tells us the score for assigning x to p when p is false. …
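The pair (T, F) can be illustrated with the quadratic (Brier) rule; the choice of that particular rule is my own illustration, not something the excerpt fixes:

```python
# Brier (quadratic) scoring rule for a single proposition p, read here as
# a penalty: lower scores are better. This is one illustrative choice of
# the pair (T, F); the excerpt does not commit to a particular rule.

def T(x):
    # score for assigning credence x to p when p is true
    return (1 - x) ** 2

def F(x):
    # score for assigning credence x to p when p is false
    return x ** 2

# Higher credence in a truth incurs a smaller penalty:
assert T(0.9) < T(0.5) < T(0.1)
# And symmetrically, lower credence in a falsehood scores better:
assert F(0.1) < F(0.5) < F(0.9)
```

On this penalty reading, a perfect assignment (credence 1 in a truth, 0 in a falsehood) scores 0.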
Gallow on causal counterfactuals without miracles and backtracking
Posted on Friday, 27 Jan 2023. Gallow (2023) spells out an interventionist theory of counterfactuals that promises to preserve two apparently incompatible intuitions. …
Proponents of an “extended evolutionary synthesis” (EES) criticize standard evolutionary theory on the grounds that it overlooks the causal roles of developmental and ecological phenomena. On this view, processes such as niche construction and phenotypic plasticity are as much causes of adaptive evolution as they are products. By generating variation, as well as biasing evolutionary processes themselves, these phenomena participate with natural selection in episodes of “reciprocal causation.” To ignore the feedback between ecology, development, and evolution in our theoretical synthesis, proponents argue, is to impede biological progress. The way we conceptualize evolution influences the way we investigate it—the questions we ask, the empirical tools we use, and the assumptions we take for granted. Therefore, according to the proponent of an EES, conceptual revision is warranted.
While Classical Logic (CL) used to be the gold standard for evaluating the rationality of human reasoning, certain non-theorems of CL—like Aristotle’s thesis (∼(p → ∼p)) and Boethius’ thesis ((p → q) → ∼(p → ∼q))—appear intuitively rational and plausible. Connexive logics have been developed to capture the underlying intuition that conditionals whose antecedents contradict their consequents should be false. We present results of two experiments (total N = 72), the first to investigate connexive principles and related formulae systematically. Our data suggest that connexive logics provide more plausible rationality frameworks for human reasoning than CL does. Moreover, we experimentally investigate two approaches for validating connexive principles within the framework of coherence-based probability logic. Overall, we observed good agreement between our predictions and the data, especially for Approach 2.
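That Aristotle’s and Boethius’ theses are non-theorems of CL can be checked mechanically once the conditional is read materially (a quick illustration of the abstract’s claim, not part of the paper itself):

```python
# Truth-table check that Aristotle's thesis ~(p -> ~p) and Boethius'
# thesis (p -> q) -> ~(p -> ~q) are not tautologies of Classical Logic
# when "->" is the material conditional: both fail when p is False.

def implies(a, b):
    # material conditional: false only when a is true and b is false
    return (not a) or b

def aristotle(p):
    # ~(p -> ~p)
    return not implies(p, not p)

def boethius(p, q):
    # (p -> q) -> ~(p -> ~q)
    return implies(implies(p, q), not implies(p, not q))

assert aristotle(True) and not aristotle(False)
assert boethius(True, True) and not boethius(False, True)
```

Both theses hold whenever p is true, so their intuitive plausibility and their classical non-theoremhood come apart exactly at the vacuous case p = False.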
The Hole Argument presents a formidable challenge against spacetime substantivalism. The doctrine of substantivalism, roughly, holds that spacetime exists independently from matter. In the theory of General Relativity (GR), fields are represented as functions f(x) over a base manifold M, so f(p) represents the value of f at point p. In vacuum GR, the sole field is the metric g(x).
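The construction behind the argument can be sketched in this notation (a standard schematic reconstruction, assumed here rather than quoted from the excerpt):

```latex
% Schematic of the hole construction: d is a "hole diffeomorphism",
% i.e. a diffeomorphism of M equal to the identity outside a region
% ("hole") H of M, and d^{*}g is the metric dragged along by d.
\[
  d\colon M \to M, \qquad d|_{M \setminus H} = \mathrm{id},
  \qquad g \;\longmapsto\; d^{*}g .
\]
% Both g and d^{*}g solve the vacuum Einstein equations and agree
% outside H, yet assign different metrical values to points inside H,
% so the substantivalist seems forced to count them as distinct
% physical possibilities.
```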
Starting from the premise that expected utility (EU) is the correct criterion of rational preference both in decision cases under certainty and decision cases under risk, I argue that EU theory is a false theory of instrumental rationality. In its place, I argue for a new theory of instrumental rationality, namely expected comparative utility (ECU) theory. I show that in some commonplace decisions under risk, ECU theory delivers different verdicts from those of EU theory.
In this paper, we establish gastrospaces as a subject of philosophical inquiry and an item for policy agendas. We first explain their political value, as key sites where members of liberal democratic societies can develop the capacity for a sense of justice and the capacity to form, revise, and pursue a conception of the good. Integrating political philosophy with analytic ontology, we then unfold a theoretical framework for gastrospaces: first, we show the limits of the concept of “third place;” second, we lay out the foundations for an ontological model of gastrospaces; third, we introduce five features of gastrospaces that connect their ontology with their political value and with the realization of justice goals. We conclude by briefly illustrating three potential levels of intervention concerning the design, use, and modification of gastrospaces: institutions, keepers, and users.
Suppose that we someday create artificially intelligent systems (AIs) who are capable of genuine consciousness, real joy and real suffering. Yes, I admit, I spend a lot of time thinking about this seemingly science-fictional possibility. …
The main idea expressed in this thesis is that phenomenal character can be somehow understood in terms of representational content. This, if true, represents substantial progress toward closing the mind-body explanatory gap: if we can give a naturalistic account of representational content, we only need to plug in Intentionalism and we get a naturalistic account of phenomenal experience.
Sometimes we know things we wish we didn’t. In some cases, without any brainwashing, forgetting, or other irrational processes, there is a fairly reliable way to make that wish come true. Suppose that a necessary condition for knowing is that my evidence yields a credence of at least 0.9900, and that I know p with evidence yielding a credence of 0.9910. …
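The “fairly reliable way” can be made vivid with a toy computation; everything beyond the 0.9900/0.9910 figures is my own illustration, not from the post:

```python
# Toy setup: knowledge requires credence >= 0.9900; my current credence
# in p is 0.9910. Consider looking at evidence E that, if observed,
# pushes my credence to 1 and, if not observed, drops it to 0.9899
# (just below the threshold). The 0.9899 / 1.0 values are illustrative.
prior, post_low, post_high = 0.9910, 0.9899, 1.0

# Coherence (the law of total probability: the prior is the expectation
# of the posterior) fixes how likely the "drop" outcome must be:
p_drop = (post_high - prior) / (post_high - post_low)

# ~89% of the time, checking E pushes me below the knowledge threshold.
assert 0.85 < p_drop < 0.90
```

Because the prior sits so close to 1, the “drop” outcome has to be far more probable than the “confirm” outcome, which is what makes the method fairly reliable.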
There is broad agreement among researchers on the senses that sensory systems have evolved to facilitate adaptive responses to the environment. There is also considerable agreement on the issue of how sensory systems promote biologically successful behavior: they do so by conveying information about states of the environment that make a difference to the success of the organism’s outputs. Should we conclude that the senses are similar in nature to our everyday carbon monoxide alarms and smoke detectors, systems limited to the role of conveying information relevant to the user’s practical interests? Or does nature sometimes favor sensory systems more akin to photometers and thermometers designed by physicists to provide a disinterested or detached perspective on the world? To answer this question, we need to get clear about what kinds of problems sensory systems have evolved to confront and how they go about confronting them. The thesis defended in this paper is that, while specialist sensory systems are similar in function to smoke alarms and carbon monoxide detectors, many generalist sensory systems have evolved to impart a more disinterested or objective point of view on the world.
In the sense that matters here, someone’s knowledge that p is or requires a particular kind of connection between their belief that p and the fact that p (cf. Armstrong 1973; Zagzebski 1996; Nagel 2014). Yet there are different views on the nature of this connection. Traditional internalism sees the relevant connection as a kind of reflective assurance of truth that is sufficient to put to rest any skeptical concerns about whether p. Knowledge is here the result of fully satisfying an uncompromising “philosophical curiosity” (Fumerton 2004, 75). Non-traditional internalism – more popular today – compromises on these anti-skeptical ambitions but remains committed to the idea that knowledge requires reflective assurance of some kind. Knowledge is here the result of getting things right by doing well enough with what is available from the first-person perspective (e.g., one’s mental states and/or seemings). Contemporary externalism, by contrast to both of these internalisms, sees the relevant connection as something broader and weaker than reflective assurance of any kind: it is something that can sometimes be instantiated by reflective assurance, but something that can also survive without it. Here knowledge and what is available from the first-person perspective – at any level of ambition – can come apart.
James Sterba (2019, chapter 2) has recently argued that the free will defense fails to explain the compossibility of a perfect God and the amount and degree of moral evil that we see. I think he is mistaken about this. I thus find myself in the awkward and unexpected position, as a non-theist myself, of defending the free will defense. In this paper, I will try to show that once we take care to focus on what the free will defense is trying to accomplish, and by what means it tries to do so, we will see that Sterba’s criticism of it misses the mark.
Determinism is a centrally important notion for physics: it links time to laws and connects events along spatial surfaces to events along the temporal dimension. In the context of space-time theories, failures of determinism have been viewed as pathologies and used to identify superfluous structure. In philosophy, determinism has played its most important role in discussions of free will, where a certain picture of what determinism entails has a strong grip on the imagination. According to that picture, a deterministic universe unfolds with physical necessity from an initial condition that was set long ago. This presents a strong challenge to your sense of agency because it takes two very basic commitments — the idea that the laws of physics place fundamental constraints on what can happen (you throw a ball in the air or set a pendulum in motion and you know exactly what is going to happen) and the idea that the past is fixed — and it uses the laws to leverage the fixity of the past into the fixity of the future. Neither of those commitments seems negotiable. There’s a famous argument that makes this explicit. It goes, in simple terms, like this: the past is fixed and out of our control; the laws are fixed and out of our control; so the future, which is determined by the past together with the laws, is also fixed and out of our control.
Ernst Bloch (1885–1977) was a German philosopher and cultural critic who is mostly credited with renewing interest in utopia and with mediating between the radical philosophy of emancipation, non-dogmatic religious thought, analysis of mass culture, and new aesthetic forms, notably those of Expressionism. His books, especially The Principle of Hope (1954–1959), contributed to a particular form of critical theory and, being written in a peculiar essayistic style, made him quite popular in both academic and non-academic circles. Bloch was an important voice among the intelligentsia of Weimar Germany and then, for a short period after the Second World War, the leading philosopher of the Eastern …
Several years ago, I was fortunate enough to come under the influence of several of the core ideas in Christina Van Dyke’s A Hidden Wisdom (2022) as they were being developed. Although I have never had much love for the work of the canonical scholastic philosophers (e.g., Anselm, Boethius, Aquinas, and others), I have had great interest for nearly a decade in the writings of medieval mystics. Initially, the interest was purely personal—I wasn’t looking for philosophical insight; I was looking for spiritual guidance. But I found the texts to which I first turned—the anonymously authored Cloud of Unknowing, and the writings of Pseudo-Dionysius and John of the Cross—generally more baffling and disturbing than spiritually helpful.
In The Mirror of Simple Souls by Marguerite Porete, a 14th-century mystic, there is a straightforward path from claims about what love for God in its purest form entails to the conclusion that a kind of self-annihilation is the ultimate goal for a Christian. There is, furthermore, an implicit argument in her work for the conclusion that achieving self-annihilation through love for God is superior to, and better for us as individuals than, achieving conformity with God’s will through the (mere) cultivation of virtue as it is traditionally conceived. Taking inspiration from Porete’s work, this paper defends both of these counterintuitive claims.
Suppose there is a distinctive and significant value to knowledge. What I mean by that is that if two epistemic states are very similar in terms of truth, the level and type of justification, the subject matter and its relevance to life, the degree of belief, etc., but one is knowledge and the other is not, then the one that is knowledge has a significantly higher value because it is knowledge. …
Denić (2021) observes that the availability of distributive inferences — for sentences with disjunction embedded in the scope of a universal quantifier — depends on the size of the domain quantified over as it relates to the number of disjuncts. Based on her observations, she argues that probabilistic considerations play a role in the computation of implicatures. In this paper we explore a different possibility. We argue for a modification of Denić’s generalization, and provide an explanation that is based on intricate logical computations but is blind to probabilities. The explanation is based on the observation that when the domain size is no larger than the number of disjuncts, universal and existential alternatives are equivalent if distributive inferences are obtained. We argue that under such conditions a general ban on ‘fatal competition’ (Magri 2009a,b; Spector 2014) is activated, thereby predicting distributive inferences to be unavailable.
According to second-personal approaches to moral obligation, the distinctive normative features of moral obligation can only be explained in terms of second-personal relations, i.e. the distinctive way persons relate to each other as persons. But there are important disagreements between different groups of second-personal approaches. Most notably, they disagree about the nature of second-personal relations, which has consequences for the nature of the obligations that they purport to explain. This article aims to distinguish these groups from each other, highlight their respective advantages and disadvantages, and thereby indicate avenues for future research.
The kataleptic impression—an impression that is, in some special way, “true and such as could not be false”—is at the core of Stoic epistemology. Since Gisela Striker’s groundbreaking work on the criterion of truth, the dominant view among scholars has been that the Stoics restricted kataleptic impressions to certain perceptual impressions. I argue that the Stoics in fact countenanced non-perceptual kataleptic impressions and explain how they thought non-perceptual impressions can meet the definition of the kataleptic impression.
Breeds are classifications of domestic animals that share, to a certain degree, a set of conventional phenotypic traits. We will defend the claim that, despite classifying biological entities, animal breeds are social kinds. We will adopt Godman’s view of social kinds as classifications with predictive power based on social learning processes. We will show that, although the folk concept of an animal breed refers to a biological kind, there is no way to define it as such. The expert definitions of breeds are instead based on socially learned conventions and skills (artificial selection), yielding groupings in which scientific predictions are possible. We will discuss in what sense breeds are social, but not human, kinds and in what sense the concept of a breed is necessary to make them real.
There are two things called contexts that play important but distinct roles in standard accounts of language and communication. The first—call these compositional contexts—feature in a semantic theory. Compositional contexts are sequences of parameters that play a role in characterizing compositional semantic values for a given language, and in characterizing how such compositional semantic values determine a proposition expressed by a given sentence. The second—call these context sets—feature in a pragmatic theory. Context sets are abstract representations of the conversational states that serve to determine the compositional contexts relevant for interpreting a speech-act and that such speech-acts act upon. In this paper, I’ll consider how, given mutual knowledge of the information codified in a compositional semantic theory, an assertion of a sentence serves to update the context set. There is a standard account of how such conversational updating occurs. However, while this account has much to recommend it, I’ll argue that it needs to be revised in light of certain natural discourses.
In a recent post, I noted that it is possible to cook up a Bayesian setup where you don’t meet some threshold, say for belief or knowledge, with respect to some proposition, but you do meet the same threshold with respect to the claim that after you examine a piece of evidence, you will meet the threshold. …
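Such a setup is easy to construct explicitly; the numbers below are my own illustration, not taken from the post:

```python
# Let the threshold be 0.9, for some hypothesis H, and let evidence E
# have two possible outcomes. All numbers are illustrative.
threshold = 0.9
p_confirm = 0.95        # chance the evidence comes out confirming
post_confirm = 0.91     # credence in H after the confirming outcome
post_disconfirm = 0.30  # credence in H after the disconfirming outcome

# The prior is the expectation of the posterior (law of total probability):
prior = p_confirm * post_confirm + (1 - p_confirm) * post_disconfirm

assert prior < threshold       # H itself misses the threshold (~0.8795)
assert p_confirm >= threshold  # ..."I will meet the threshold" clears it
```

The credence that you will meet the threshold is just the probability of the confirming outcome (0.95), which can exceed the threshold even while the prior for H itself falls short.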
In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one another. The second category consists of those deepfakes that direct an illocutionary speech act—such as a request, injunction, invitation, or promise—to an addressee who is located outside of the recording. For instance, fake footage of a company director instructing their employee to make a payment, or of a military official urging the populace to flee for safety. Whereas the former category may deceive an audience by giving rise to false beliefs, the latter can more directly manipulate an agent’s actions: the speech act’s addressee may be moved to accept an invitation or a summons, follow a command, or heed a warning, and in doing so further a deceiver’s unethical ends.
Cancer biology features the ascription of normal functions to parts of cancers. At least some ascriptions of function in cancer biology track local normality of parts within the global abnormality of the aberration to which those parts belong. That is, cancer biologists identify as functions activities that, in some sense, parts of cancers are supposed to perform, despite cancers themselves having no purpose. The present paper provides a theory to accommodate these normal function ascriptions—I call it the Modeling Account of Normal Function (MA). MA comprises two claims.