I want to argue that philosophical analysis, a.k.a. the method of cases, is a worthy pursuit: that it reliably gives us substantial knowledge. The linchpin of my strategy is an appeal to cognitive psychology to show that philosophical concepts—the concept of knowledge, the concept of justice, the concept of causality, and so on—are in certain important respects like natural kind concepts, rather than being built around mental definitions. Once upon a time it was believed that even natural kind concepts were built around definitions, or something cognitively equivalent. …
Claims about sharing values are common in contemporary political discourse: Democratic societies value freedom and equality; The Marines value loyalty, fidelity, and faithfulness; and the Zapatistas of Chiapas value indigenous practices of walking together. In each of these cases, people treat particular activities, entities, or practices as worthwhile or essential to what they do together. And in each of these cases, group members have the standing to demand compliance with any values they share, and to criticize one another for failures to act in accordance with these values. For example, Marines take themselves to have privileged standing to criticize other Marines for acts of disloyalty, and Zapatistas take themselves to have privileged standing to criticize other Zapatistas who fail to cultivate practices of dignity and community.
There is a broad academic consensus that racialized groups are socially constructed, though there is substantial disagreement over precisely what this means. A similar consensus has emerged in the everyday patterns of thought and talk among non-academics in the United States (US). In many contexts, the acknowledgement that race is socially constructed is thought to be the end of the conversation; and things become much more complicated when the conversation continues (Gordon 2004, 183). Few academics are willing to defend racial anti-realism; and few non-academics are willing to claim that races don’t exist—even though many adhere to something like a colorblind ideology. The reasons for this reticence are simple. People who are raised in the US simply perceive others as white, Black, Latinx, and Asian; where they’re unsure about someone’s race, appeals to ancestry will usually clear up their confusion; and they find that patterns of racial categorization can sustain a wide range of inferences about unobserved traits (including ungrounded assumptions about intelligence, the propensity for aggression, and tolerance for pain). From a psychological perspective, the world appears to be racially organized. Of course, things look different from a biological perspective. The phenotypic differences between racialized groups are skin-deep, and insufficient to ground robust inferences about unobserved traits. Some of these traits are heritable, but this doesn’t make race biologically real, even if it places limits on the kinds of variation that typically emerge in skin-deep differences. So, the scientific consensus is that races are not biologically real kinds (though see Spencer 2014).
Anger can destroy friendships, undercut the possibility of cooperation, and prevent the uptake of our best intentions. And people who are raised in Europe or North America often find expressions of anger difficult to watch. Since anger is often conceptualized as an irrational and uncontrollable emotion (Nussbaum 2016a), such expressions seem to provide evidence of a dangerous or unpredictable personality. These facts have troubling implications in the context of struggles against racial injustice. Where feelings of racialized fear enhance worries about the risk of violence (Lerner & Keltner 2001), calls for racial justice can seem like dangerous displays of aggression and hostility. Public criticisms of angry Black ‘thugs’ can then fuel these fears, by highlighting the irrationality of Black anger, and evoking further worries about the ‘dangerous’ and ‘unstable’ personalities that hide behind calls for racial justice. Consequently, while anger can “lead to powerful movements that can transform cultures and societies” (Jinpa 2016), existing power relations often distort expressions of anger in the service of sustaining White power. This should give us pause when a philosopher advises us to eliminate anger from our moral repertoire (Srinivasan 2016). Consider this fair warning: I contend that we would be better off without anger.
Internalism about non-derivative responsibility holds that whether one is non-derivatively responsible for a decision depends only on facts about the agent at the time of the decision. Only an incompatibilist can be an internalist. …
A philosopher goes into the armchair and brings back knowledge. What world have they been exploring? What is this knowledge of, and how did they find it? These are questions that philosophy, the most methodologically self-conscious of all the disciplines, can’t help but ask itself over and over again. …
When science writers, especially “statistical war correspondents”, contact you to weigh in on some article, they may talk to you until they get something spicy, and then they may or may not include the background context. …
Albrecht Dürer - The Four Horsemen of the Apocalypse
Here’s an interesting thought experiment:
The human brain is split into two cortical hemispheres. These hemispheres are joined together by the corpus callosum, a group of nerve fibres that allows the two hemispheres to communicate and coordinate with one another. …
Here’s our latest paper for the Complex Adaptive System Composition and Design Environment (CASCADE) project:
• John Baez, John Foley and Joe Moeller, Network models from Petri nets with catalysts. Check it out! …
State-dependent utility is a problem for the behavioral branch of decision theory under uncertainty. It calls into question the very possibility that beliefs be revealed by choice data. According to the current literature, all models of beliefs are equally exposed to the problem. Moreover, the problem is solvable only when the decision-maker can influence the resolution of uncertainty. This paper gives grounds to reject both views. The various models of beliefs can be shown to be unequally exposed to the problem of state-dependent utility, and the problem can be argued to be solved even when the decision-maker has no influence over the resolution of uncertainty. The implications of this reappraisal for a philosophical appreciation of the revealed preference methodology are discussed.
Philosophers of biology have worked extensively on how we ought best to interpret the probabilities which arise throughout evolutionary theory. In spite of this substantial work, however, much of the debate has remained persistently intractable. I offer the example of Bayesian models of divergence time estimation (the determination of when two evolutionary lineages split) as a case study in how we might bring further resources from the biological literature to bear on these debates. These models offer us an example in which a number of different sources of uncertainty are combined to produce an estimate for a complex, unobservable quantity. These models have been carefully analyzed in recent biological work, which has determined the relationship between these sources of uncertainty (their relative importance and their disappearance in the limit of increasing data), both quantitatively and qualitatively. I suggest here that this case shows us the limitations of univocal analyses of probability in evolution, as well as the simple dichotomy between “subjective” and “objective” probabilities, and I conclude by gesturing toward ways in which we might introduce more sophisticated interpretive taxonomies of probability (modeled on some recent work in the philosophy of physics) as a path toward advancing debates on probability in the life sciences.
This paper develops and motivates a unification theory of metaphysical explanation, or as I will call it, Metaphysical Unificationism. The theory’s main inspiration is the unification account of scientific explanation, according to which explanatoriness is a holistic feature of theories that derive a large number of explananda from a meager set of explanantia, using a small number of argument patterns. In developing Metaphysical Unificationism, I will point out that it has a number of interesting (and to my mind, attractive) consequences. The view offers a novel conception of metaphysical explanation that doesn’t rely on the notion of a “determinative” or “explanatory” relation; it allows us to draw a principled distinction between metaphysical and scientific explanations; it implies that naturalness and fundamentality are distinct but intimately related notions; and perhaps most importantly, it re-establishes the unduly neglected link between explanation and understanding in the metaphysical realm. A number of objections can be raised against the view, but I will argue that none of these is conclusive. The upshot is that Metaphysical Unificationism provides a powerful and hitherto overlooked alternative to extant theories of metaphysical explanation.
This paper offers a new argument in defence of bacterial species pluralism. I first present particular issues derived from the conflict between the non-theoretical understanding of species as units of classification and the theoretical comprehension of them as units of evolution. Second, I justify the necessity of a species concept for the bacterial world: I show how both medicine and endosymbiosis research make use of concepts of bacterial species linked to their distinctive purposes, concepts which do not conjoin with the other available ones. Finally, I argue that these examples provide a new defence of the philosophical thesis of pluralism.
Assume open futurism, so that, necessarily, undetermined future tensed “will” statements are either all false or all lack truth value. Then there are possible worlds containing me such that it is impossible for it to be true that I am ever in them. …
Richard Brown interviewed by Richard Marshall. Richard Brown is a funkybodacious philosopher of consciousness and leader of the Shombie universe. He’s asked why 1+1 has to equal 2, presented a short argument proving that there is no God, shown what’s wrong with eating meat, discussed both the delayed choice quantum eraser and pain asymbolia whilst he flies his freak flag to Alan Turing. …
If presentism and most, if not all, other versions of the A-theory are true, then propositions change in truth value. For instance, on presentism, in the time of the dinosaurs it was not true that horses exist, but now it is true. …
Stainton points out that speakers “can make assertions while speaking sub-sententially”. He argues for a “pragmatics-oriented approach” to these phenomena and against a “semantics-oriented approach”. In contrast, I argue for a largely semantics-oriented approach: typically, sub-sentential utterances assert a truth-conditional proposition in virtue of exploiting a semantic convention. Thus, there is an “implicit-demonstrative convention” in English of expressing a thought that a particular object in mind is F by saying simply ‘F’. I note also that some sub-sentential assertions include demonstrations and argue that these exploit another semantic convention for expressing a thought with a particular object in mind. I consider four objections that Stainton has to a semantics-oriented approach. The most interesting is the “syntactic ellipsis” objection, which rests on two planks: (A) the assumption that this approach must claim that what appears on the surface to be a sub-sentential is, at some deeper level of syntactic analysis, really a sentence; (B) the claim that there is no such syntactic ellipsis in these sub-sentential utterances. I argue that (A) is wrong and that (B) may well be. I also reject the other three objections: “too much ambiguity”; “no explanatory work”; and “fails a Kripkean test”. Nonetheless, occasionally, sub-sentential utterances semantically assert only a fragment of a truth-conditional proposition. This fragment needs to be pragmatically enriched to yield a propositional message. To this extent a pragmatics-oriented approach is correct.
Kepa Korta and John Perry (2008), “KP”, are among many authors, including Stephen Levinson (2000), that my paper, “Three Methodological Flaws of Linguistic Pragmatism”, charges with the flaw of confusing the metaphysics of meaning with the epistemology of interpretation (2013b: 287–94). The metaphysics is concerned with what constitutes a meaning property of an utterance, the epistemology with how a hearer discovers that property. The meaning is constituted entirely by the speaker in producing the utterance and not by any interpretative process in the hearer.
What is it to be a member of a particular taxon? In virtue of what is an organism say a Canis lupus? What makes it one? I take these to be various ways to ask about the ‘essence’, ‘nature’, or ‘identity’ of a particular taxon. The consensus answer in the philosophy of biology, particularly for taxa that are species, is that the essence is not in any way intrinsic to the members but rather is wholly relational, particularly, historical. Thus, in their excellent introduction to the philosophy of biology, Sex and Death, Kim Sterelny and Paul Griffiths have this to say: there is ‘close to a consensus in thinking that species are identified by their histories’ (1999, p. 8); ‘the essential properties that make a particular organism a platypus… are historical or relational’ (1999, p.
Conspiracy theorists believe that powerful agents are conspiring to achieve their nefarious aims and also to orchestrate a cover-up. People who suffer from impostor syndrome believe that they are not talented enough for the professional positions they find themselves in, and that they risk being revealed as inadequate. These are quite different outlooks on reality, and there is no reason to think that they are mutually reinforcing. Nevertheless, there are intriguing parallels between the patterns of trust and distrust which underpin both conspiracy theorising and impostor thinking. In both cases subjects distrust standard sources of information, instead regarding themselves as especially insightful into the underlying facts of the matter. In both cases, seemingly anomalous data takes on special significance. And in both cases, the content of belief dictates the epistemic behaviour of the believer. This paper explores these parallels, to suggest new avenues of research into both conspiracy theorising and impostor syndrome, including questions about whether impostor syndrome inevitably involves a personal failure of rationality, and issues about how, if at all, it is possible to convince others to abandon either conspiracy theories or impostor attitudes.
People are described as suffering from impostor syndrome when they feel that their external markers of success are unwarranted, and fear being revealed as a fraud. Impostor syndrome is commonly framed as a troubling individual pathology, to be overcome through self-help strategies or therapy. But in many situations an individual’s impostor attitudes can be epistemically justified, even if they are factually mistaken: hostile social environments can create epistemic obstacles to self-knowledge. The concept of impostor syndrome prevalent in popular culture needs greater critical scrutiny, as does its source, the concept of impostor phenomenon which features in psychological research.
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s, but this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then present an alternative approach to language-centric AI, in which we identify a role for philosophy.
Are there nonexistent objects, i.e., objects that do not exist? Some examples often cited are: Zeus, Pegasus, Sherlock Holmes, Vulcan (the hypothetical planet postulated by the 19th century astronomer Le Verrier), the perpetual motion machine, the golden mountain, the fountain of youth, the round square, etc. Some important philosophers have thought that the very concept of a nonexistent object is contradictory (Hume) or logically ill-formed (Kant, Frege), while others (Leibniz, Meinong, the Russell of Principles of Mathematics) have embraced it wholeheartedly. One of the reasons why there are doubts about the concept of a nonexistent object is this: to be able to say truly of an object that it doesn’t exist, it seems that one has to presuppose that it exists, for doesn’t a thing have to exist if we are to make a true claim about …
Zombies in philosophy are imaginary creatures designed to illuminate problems about consciousness and its relation to the physical world. Unlike the ones in films or witchcraft, they are exactly like us in all physical respects but without conscious experiences: by definition there is ‘nothing it is like’ to be a zombie. Yet zombies behave just like us, and some even spend a lot of time discussing consciousness. Few people, if any, think zombies actually exist. But many hold that they are at least conceivable, and some that they are possible. It seems that if zombies really are possible, then physicalism is false and some kind of dualism is true.
Although widely and commonly confused with republicanism, civic humanism forms a separate and distinct phenomenon in the history of Western political thought. Republicanism is a political philosophy that defends a concept of freedom as non-domination, and identifies the institutions that protect it (Pettit 1999). In particular, republicanism stands against two alternative theories of politics. The first is despotism, especially as manifested in any form of one-man rule; a republic is self-governing, and so are its denizens. The second is liberalism, which posits the primacy of the autonomous individual vis-à-vis public order and government; the republican values civic engagement in order to realize a form of liberty achievable only in and through the community.
We standardly evaluate counterfactuals and abilities in temporally asymmetric terms—by keeping the past fixed and holding the future open. Only future events depend counterfactually on what happens now. Past events do not. Conversely, past events are relevant to what abilities one has now in a way that future events are not. Lewis, Sider and others continue to evaluate counterfactuals and abilities in temporally asymmetric terms, even in cases of backwards time travel. I’ll argue that we need more temporally neutral methods. The past shouldn’t always be held fixed, because backwards time travel requires backwards counterfactual dependence. Future events should sometimes be held fixed, because they’re in the causal history of the past, and agents have evidence of them independently of their decisions now. We need temporally neutral methods to maintain connections between causation, counterfactuals and evidence, and if counterfactuals are used to explain the temporal asymmetry of causation.
Our main result so far is a characterization of Schnorr randomness and Martin-Löf randomness in terms of Lévy’s classical upwards convergence theorem in martingale theory. This is interesting philosophically because it suggests that randomness notions should be brought to bear on the interpretation of convergence to the truth results.
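A toy numerical illustration of Lévy’s upwards theorem in its simplest setting (my own sketch; the function name and setup are illustrative, not from the paper): encode an i.i.d. fair-coin sequence b_1, b_2, … as X = Σ_k b_k 2^{-k}, and let M_n = E[X | b_1, …, b_n]. The theorem says M_n converges to X almost surely as n grows; in this dyadic example the convergence is even deterministic, since |M_n − X| ≤ 2^{-n}.

```python
import random

def levy_upward_demo(n_bits=30, seed=0):
    """Track |M_n - X| as the conditional expectations M_n = E[X | first n bits]
    converge to X = sum of b_k * 2**-(k+1). Truncated at n_bits for computability."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n_bits)]
    x = sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))

    errors = []
    for n in range(n_bits + 1):
        # M_n: the observed prefix, plus the expected value (1/2 per bit)
        # of the not-yet-revealed tail.
        prefix = sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits[:n]))
        tail_mean = sum(0.5 * 2.0 ** -(k + 1) for k in range(n, n_bits))
        errors.append(abs((prefix + tail_mean) - x))
    return errors

errors = levy_upward_demo()
# Each revealed bit halves the worst-case distance to the truth: |M_n - X| <= 2**-n.
```

Roughly, the paper’s result then concerns the sequences along which such convergence-to-the-truth holds effectively, which is where Schnorr and Martin-Löf randomness enter.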
Suppose a dog lives forever. Assuming the dog stays roughly dog-sized, there is only a finite number of configurations of the dog’s matter (disregarding insignificant differences on the order of magnitude of a Planck length, say). …
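The finitude does real work in the thought experiment: by the pigeonhole principle, an everlasting trajectory through only finitely many configurations must eventually revisit one. A minimal sketch of that combinatorial point (illustrative only; the state count and random dynamics are my own stand-ins, not a model of the dog):

```python
import random

def first_repeat_step(num_states, steps, seed=0):
    """Walk through a finite state space; return the step at which some
    previously visited state first recurs, or None if no repeat is seen."""
    rng = random.Random(seed)
    seen = set()
    state = 0
    for step in range(steps):
        if state in seen:
            return step
        seen.add(state)
        state = rng.randrange(num_states)  # arbitrary dynamics; any rule works
    return None

# With only num_states configurations, a repeat is forced within
# num_states + 1 steps, whatever the dynamics -- pigeonhole at work.
assert first_repeat_step(num_states=1000, steps=1001) is not None
```

Nothing hangs on the walk being random: any deterministic update rule over the same finite state space is subject to the same bound.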
Extended cognition theorists argue that cognitive processes constitutively depend on resources that are neither organically composed, nor located inside the bodily boundaries of the agent, provided certain conditions on the integration of those processes into the agent’s cognitive architecture are met. Epistemologists, however, worry that in so far as such cognitively integrated processes are epistemically relevant, agents could thus come to enjoy an untoward explosion of knowledge. This paper develops and defends an approach to cognitive integration—cluster-model functionalism—which finds application in both domains of inquiry, and which meets the challenge posed by putative cases of cognitive or epistemic bloat.
I will contrast the two main approaches to the foundations of statistical mechanics: the individualist (Boltzmannian) approach and the ensemblist approach (associated with Gibbs). I will indicate the virtues of each, and argue that the conflict between them is perhaps not as great as often imagined.