What differentiates scientific research from non-scientific inquiry? Philosophers addressing this question have typically been inspired by the exalted social place and intellectual achievements of science. They have hence tended to point to some epistemic virtue or methodological feature of science that sets it apart. Our discussion, on the other hand, is motivated by the case of commercial research, which we argue is distinct from (and often epistemically inferior to) academic research. We consider a deflationary view in which ‘science’ refers to whatever is regarded as epistemically successful, but find that this does not leave room for the important notion of scientific error and fails to capture distinctive social elements of science. This leads us to the view that a demarcation criterion should be a widely upheld social norm without immediate epistemic connotations. Our tentative answer is the communist norm, which calls on scientists to share their work widely for public scrutiny and evaluation.
Should we use the same standard of proof to adjudicate guilt for murder and petty theft? Why not tailor the standard of proof to the crime? These relatively neglected questions cut to the heart of central issues in the philosophy of law. This paper scrutinises whether we ought to use the same standard for all criminal cases, in contrast with a flexible approach that uses different standards for different crimes. I reject consequentialist arguments for a radically flexible standard of proof, instead defending a modestly flexible approach on non-consequentialist grounds. The system I defend is one on which we should impose a higher standard of proof for crimes that attract more severe punishments. This proposal, although apparently revisionary, accords with a plausible theory concerning the epistemology of legal judgments and the role they play in society.
The Internet is the epistemological crisis of the 21st century: it has fundamentally altered the social epistemology of societies with relative freedom to access it. Most of what we think we know about the world is due to reliance on epistemic authorities, individuals, or institutions that tell us what we ought to believe about Newtonian mechanics, evolution by natural selection, climate change, resurrection from the dead, or the Holocaust. The most practically fruitful epistemic norm of modernity, empiricism, demands that knowledge be grounded in sensory experience, but almost no one who believes in evolution by natural selection or the reality of the Holocaust has any sensory evidence in support of those beliefs. Instead, we rely on epistemic authorities—biologists and historians, for example. Epistemic authority cannot be sustained by empiricist criteria, for obvious reasons: salient anecdotal evidence, the favorite tool of propagandists, appeals to ordinary faith in the senses, but is easily exploited given that most people understand neither the perils of induction nor the finer points of sampling and Bayesian inference. Sustaining epistemic authority depends, crucially, on social institutions that inculcate reliable second-order norms about whom to believe about what. The traditional media were crucial, in the age of mass democracy, in promulgating and sustaining such norms.
I am underway, at last, with the project of improving and updating my notes on category theory. So, here are the first four chapters of Category Theory I: Notes towards a gentle introduction. The ‘I’ in the new title signals that I am carving the old notes into Part I and Part II, and I am planning to work up Part I into a decent shape, while quite putting aside Part II for a good while. …
Humans can think about possible states of the world without believing in them, an important capacity for high-level cognition. Here we use fMRI and a novel “shell game” task to test two competing theories about the nature of belief and its neural basis. According to the Cartesian theory, information is first understood, then assessed for veracity, and ultimately encoded as either believed or not believed. According to the Spinozan theory, comprehension entails belief by default, such that understanding without believing requires an additional process of “unbelieving”. Participants (N=70) were experimentally induced to have beliefs, desires, or mere thoughts about hidden states of the shell game (e.g., believing that the dog is hidden in the upper right corner). That is, participants were induced to have specific “propositional attitudes” toward specific “propositions” in a controlled way. Consistent with the Spinozan theory, we found that thinking about a proposition without believing it is associated with increased activation of the right inferior frontal gyrus (IFG). This was true whether the hidden state was desired by the participant (due to reward) or merely thought about. These findings are consistent with a version of the Spinozan theory whereby unbelieving is an inhibitory control process. We consider potential implications of these results for the phenomena of delusional belief and wishful thinking.
One of the central insights of Western philosophy, beginning with
Socrates, has been that few if any things are as bad for an individual
as culpably doing wrong. It is better, we are told through much of the
Western philosophical tradition, to suffer wrong than to do it. …
Global philosophy is an ideal. It includes the affirmation of intercultural philosophy and internationalism but it goes well beyond cultural and geographic cosmopolitanism. To embrace global philosophy is to reject any approach to philosophy that cleaves to closed communities and private conversations.
The philosophy of science can usefully be divided into two broad areas. On the one hand is the epistemology of science, which deals with issues relating to the justification of claims to scientific knowledge. Philosophers working in this area investigate such questions as whether science ever uncovers permanent truths, whether objective decisions between competing theories are possible and whether the results of experiment are clouded by prior theoretical expectations. On the other hand are topics in the metaphysics of science, topics relating to philosophically puzzling features of the natural world described by science. Here philosophers ask such questions as whether all events are determined by prior causes, whether everything can be reduced to physics and whether there are purposes in nature. You can think of the difference between the epistemologists and the metaphysicians of science in this way. The epistemologists wonder whether we should believe what the scientists tell us. The metaphysicians worry about what the world is like, if the scientists are right. Readers will wish to consult chapters on epistemology (Chapter 1), metaphysics (Chapter 2), philosophy of mathematics (Chapter 11), philosophy of social science (Chapter 12) and pragmatism (Chapter 36).
This paper concerns the recent revival of entity realism. Initiated by the work of Ian Hacking, Nancy Cartwright, and Ronald Giere, the project of entity realism has recently been developed by Matthias Egg, Markus Eronen, and Bence Nanay. The paper opens a dialogue among these recent views on entity realism and integrates them into a more advanced view. The result is an epistemological criterion for reality: the property-tokens of a certain type may be taken as real insofar as only they can be materially inferred from the evidence obtained in a variety of independent ways of detection.
Suppose I am just the slightest bit short of the evidence needed for
belief that I have some condition C. I consider taking a test for
C that has a zero false
negative rate and a middling false positive rate—neither close to zero
nor close to one. …
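The probabilistic situation described here can be made concrete with Bayes' rule. A minimal sketch, with illustrative numbers that are our assumptions rather than the post's (a prior of 0.85 against a notional belief threshold of 0.9, and a false positive rate of 0.4): because the test has a zero false-negative rate, a negative result rules the condition out entirely, while a positive result raises the probability.

```python
# Illustrative numbers (assumptions, not from the post):
prior = 0.85   # credence in having condition C, just short of a 0.9 threshold
fpr = 0.4      # "middling" false positive rate: neither near 0 nor near 1

# Sensitivity is 1 (zero false-negative rate), so:
# P(positive) = P(pos | C) * P(C) + P(pos | not C) * P(not C)
p_positive = 1.0 * prior + fpr * (1 - prior)

# Posterior after a positive result, by Bayes' rule.
post_positive = (1.0 * prior) / p_positive

# Posterior after a negative result: with no false negatives,
# a negative test conclusively rules C out.
post_negative = 0.0

print(round(post_positive, 3))  # → 0.934
```

On these numbers a positive result pushes the agent past the threshold, while a negative result settles the matter outright, which is what makes the zero false-negative rate do the interesting work.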
A paradigmatic aesthetic experience is a perceptual experience focused
on the beauty of an object like a work of art or an aspect of nature. Some philosophers take it that this is the only kind of aesthetic
experience, though many more take it that there are other varieties as
well. You might, for instance, have an aesthetic experience by
witnessing not a beautiful but a sublime storm. You might
have an aesthetic experience not by having a perceptual but rather by
having an (imagined) emotional experience of the deep suffering of
Sethe expressed in Toni Morrison’s great novel Beloved.
Political revolutions are transformative moments marked by profound,
rapid change in the political order achieved through the use of force
rather than through consensus or legal process. Moral responses to
revolutions are often ambivalent or deeply polarized. On the one hand,
revolutions promise to be powerful engines of moral progress, allowing
a community to abolish an oppressive social order and providing the
opportunity to institute a better one. On the other hand, revolutions
risk unravelling the fabric of political community and devolving into
bloody, prolonged conflicts that only manage to reinstate a new form of oppression.
This article is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 832636.
Global challenges such as climate change, food security, or public health have become dominant concerns in research and innovation policy. This article examines how responses to these challenges are addressed by governance actors. We argue that appeals to global challenges can give rise to a ‘solution strategy’ that presents responses of dominant actors as solutions and a ‘negotiation strategy’ that highlights the availability of heterogeneous and often conflicting responses. On the basis of interviews and document analyses, the study identifies both strategies across local, national, and European levels. While our results demonstrate the co-existence of both strategies, we find that global challenges are most commonly highlighted together with the solutions offered by dominant actors. Global challenges are ‘wicked problems’ that often become misframed as ‘tame problems’ in governance practice and thereby legitimise dominant responses.
Expressions like “I feel your pain” or “I share your sadness” play an important role in our moral lives. They convey our empathy, which is of crucial moral significance. In fact, some philosophers consider empathy to be, not just morally important, but the key to understanding morality. Whether or not we go that far, empathy is clearly central to how we understand, treat, and hope to be treated by other people. But the kind of empathy that is communicated through expressions like “I feel your pain” is also peculiar. For it seems to require something perplexing and elusive: sharing another’s experience. It’s not clear how this is possible. We each experience the world from our own point of view, which no one else occupies. My experiences are mine; your experiences are yours. How could we share each other’s experiences? This issue is related to, but different from, a long-standing puzzle about knowing other minds. Wittgenstein (1958) writes: If what I feel is always my pain only, what can the supposition mean that someone else has pain? (p. 56).
On a textbook view, Cartesian dualism faces an insurmountable difficulty: it posits two substances with nothing in common – pure thought and pure extension – and claims that they somehow interact. How could that be? The sense of mystery is undeniable but, as so often happens with mysteries, the underlying problem is elusive. Perhaps the problem concerns causation. In her letter first raising the problem, Princess Elizabeth writes that to move the body the mind would have to make some sort of impact on it. If impact is a transfer of some quality or quantity from cause to effect, the problem is immediate: mind and body have incompatible natures, so nothing can be transferred from one to the other. But it is not clear that Descartes thinks causation works this way, and if he does then that is his mistake. The flagpole is a cause of its shadow, and yet nothing whatsoever is being transferred from one to the other.
We distinguish two types of cases that have potential to generate quasi-cyclical preferences: self-involving choices where an agent oscillates between first- and third-person perspectives that conflict regarding their life-changing implications, and self-serving choices where frame-based reasoning can be “first-personally …
This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility.
The standard vocabulary of modernity and post-modernity suggests that something is coming to an end. Sometimes the end is much desired. “When I fall,” says Clov in Samuel Beckett’s Endgame, “I’ll weep for happiness.” Sometimes, by contrast, the end is measured primarily by a sense of loss. “To write poetry after Auschwitz,” says Theodor Adorno, “is barbaric.” And sometimes, as in Martin Heidegger’s later work, the end of our epoch announces the possibility of a new beginning. A famous line from Hölderlin is the catchphrase here: “In the danger, the saving power grows.” But what exactly is the danger that threatens to end our age? It is something beyond the danger of climate change, nuclear annihilation, pandemics, and the other physical threats we confront, something underlying these that makes them alive to us as the totalizing terrors we feel them to be. It extends beyond the threat of our mere extinction, in other words, reaching all the way to the possibility of our ontological end.
Adaptationism is often taken to be the thesis that most traits are adaptations. In order to assess this thesis, it seems we must be able to establish either an exhaustive set of all traits or a representative sample of this set. Either task requires a more systematic and principled way of individuating traits than is currently available. Moreover, different trait individuation criteria can make adaptationism turn out true or false. For instance, individuation based on natural selection may render adaptationism true, but may do so by presupposing adaptationism. In this paper, we show how adaptationism depends on trait individuation and that the latter is an open and unsolved problem.
In this paper, we defend what we call the ‘Hybrid View’ of privacy. According to this view, an individual has privacy if, and only if, no one else forms an epistemically warranted belief about the individual’s personal matters, nor perceives them. We contrast the Hybrid View with what seems to be the most common view of what it means to access someone’s personal matters, namely the Belief-Based View. We offer a range of examples that demonstrate why the Hybrid View is more plausible than the Belief-Based View. Finally, we show how the Hybrid View generates a more plausible fit between the concept of privacy and the concept of a (morally objectionable) violation of privacy.
In this paper, I critically assess Mark Richard’s interesting and important development of the claim that linguistic meanings can be fruitfully analogized with biological species. I argue that linguistic meanings qua cluster of interpretative presuppositions need not and often do not display the population-level independence and reproductive isolation that is characteristic of the biological species concept. After developing these problems in some detail, I close with a discussion of their implications for the picture …
We are excited about the next Neural Mechanisms webinar this Friday (20th). As always, it is free. You can find information about how and when to join the webinar below or at the Neural Mechanisms website—where you can also sign up for the mailing list that notifies people about upcoming webinars, webconferences, and more! …
Kocurek on chance and would
Posted on Friday, 20 Jan 2023. A lot of rather technical papers on conditionals have come out in recent years. Let's have a look at one of them: Kocurek (2022). The paper investigates Al Hájek's argument (e.g. …
In a recent paper, Sprenger (2019) advances what he calls a “suppositional” answer to the question of why a Bayesian agent’s degrees of belief should align with the probabilities found in statistical models. We show that Sprenger’s account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.
We can divide medieval discussions of the insolubles—logical paradoxes such as the Liar—into two main periods, before and after Bradwardine, who wrote his treatise on Insolubles in Oxford in the early 1320s. Bradwardine’s aim was to develop a solution to the insolubles which, unlike the then dominant theories, restrictio and cassatio, placed no restriction on self-reference or the theory of truth. He claimed to be able to prove that insolubles signify not only that they are false but also that they are true, and so are false. Few subsequent writers on insolubles followed him completely.
Walter de Segrave was at Merton College, Oxford from 1321 until at least 1338. Segrave’s ‘Insolubles’ is his only known work. It appears to have been composed at Oxford in the late 1320s or early 1330s, consistent with the fact that it is clearly a response to Bradwardine’s own ‘Insolubles’, composed while Bradwardine was regent master at Balliol College from 1321 to 1323, before he moved to Merton. The dominant theory at the time Bradwardine was writing was restrictivism: the claim that a part cannot supposit for the whole of which it is part (nor for its contradictory or anything convertible with it), at least in the presence of a privative term, in particular privative alethic and epistemic terms such as ‘false’ and ‘unknown’.
Forty years ago, Niels Green-Pedersen listed five different accounts of valid consequence, variously promoted by logicians in the early fourteenth century and discussed by Niels Drukken of Denmark in his commentary on Aristotle’s Prior Analytics, written in Paris in the late 1330s. Two of these arguably fail to give defining conditions: truth preservation was shown by Buridan and others to be neither necessary nor sufficient; incompatibility of the opposite of the conclusion with the premises is merely circular if incompatibility is analysed in terms of consequence. Buridan was perhaps the first to define consequence in terms of preservation of what we might dub verification, that is, signifying as things are. John Mair pinpointed a sophism which threatens to undermine this proposal. Bradwardine turned it around: he suggested that a necessary condition on consequence was that the premises signify everything the conclusion signifies. Dumbleton gave counterexamples to Bradwardine’s postulates in which the conclusion arguably signifies more than, or even completely differently from, the premises. Yet a long-standing tradition held that some species of validity depend on the conclusion being in some way contained in the premises. We explore the connection between signification and consequence and its role in solving the insolubles.
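The contrast between these accounts of consequence can be put schematically. The following is a sketch in modern notation, which is ours rather than the medieval authors': $\mathrm{Tr}(\varphi)$ abbreviates '$\varphi$ is true', and $\mathrm{sig}(\varphi)$ stands for the set of things $\varphi$ signifies.

```latex
% Truth preservation (shown by Buridan to be neither necessary nor sufficient):
\[
  P \vDash C \quad\text{iff}\quad \Box\bigl(\mathrm{Tr}(P) \rightarrow \mathrm{Tr}(C)\bigr)
\]
% Buridan: preservation of "verification", i.e. signifying as things are:
\[
  P \vDash C \quad\text{iff}\quad
  \Box\bigl(\text{things are as $P$ signifies} \rightarrow
            \text{things are as $C$ signifies}\bigr)
\]
% Bradwardine's necessary condition: the premises signify everything
% the conclusion signifies (containment of signification):
\[
  P \vDash C \quad\Rightarrow\quad \mathrm{sig}(C) \subseteq \mathrm{sig}(P)
\]
```

On this rendering, Dumbleton's counterexamples are cases where $\mathrm{sig}(C) \not\subseteq \mathrm{sig}(P)$ yet the consequence seems valid, which is what puts pressure on the containment condition.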
Any explanation for an event E that does not go all the way back
to something self-explanatory is merely partial. A partial explanation is one that is a part of a complete
explanation. So, if any event E has
an explanation, it has an explanation going all the way back to
something self-explanatory. …
Research suggests that many social concepts, such as FRIEND and ARTIST, have two independent sets of criteria for their application: one descriptive, and one normative. These have become known as “dual character concepts.” Recently, it has been argued that HUMAN is a dual character concept, and that this engenders a distinctively normative variety of dehumanization (Phillips, 2022). In what follows, I develop this model by examining which form of essentialism drives normative dehumanization. In particular, I focus on three candidates: Platonic essentialism, teleological essentialism, and value-based essentialism. Across four experiments, I found evidence that normative dehumanization is driven by value-based essentialism, as opposed to Platonic or teleological essentialism. I also found evidence that normative dehumanization is a unique predictor of intergroup hostility, over and above like/dislike, as well as perceptions of ideal and typical humanness. Together, these findings clarify the ordinary concept of a “true human,” and thus what it means to normatively dehumanize someone. These findings also suggest that research concerning intergroup hostility will benefit from focusing on the distinction between descriptive and normative dehumanization.