In this paper, we establish gastrospaces as a subject of philosophical inquiry and an item for policy agendas. We first explain their political value, as key sites where members of liberal democratic societies can develop the capacity for a sense of justice and the capacity to form, revise, and pursue a conception of the good. Integrating political philosophy with analytic ontology, we then unfold a theoretical framework for gastrospaces: first, we show the limits of the concept of “third place;” second, we lay out the foundations for an ontological model of gastrospaces; third, we introduce five features of gastrospaces that connect their ontology with their political value and with the realization of justice goals. We conclude by briefly illustrating three potential levels of intervention concerning the design, use, and modification of gastrospaces: institutions, keepers, and users.
Suppose that we someday create artificially intelligent systems (AIs) who are capable of genuine consciousness, real joy and real suffering. Yes, I admit, I spend a lot of time thinking about this seemingly science-fictional possibility. …
James Sterba (2019, chapter 2) has recently argued that the free will defense fails to explain the compossibility of a perfect God and the amount and degree of moral evil that we see. I think he is mistaken about this. I thus find myself in the awkward and unexpected position, as a non-theist myself, of defending the free will defense. In this paper, I will try to show that once we take care to focus on what the free will defense is trying to accomplish, and by what means it tries to do so, we will see that Sterba’s criticism of it misses the mark.
Ernst Bloch (1885–1977) was a German philosopher and cultural
critic who is mostly credited for renewing the interest in utopia and
for mediating between the radical philosophy of emancipation,
non-dogmatic religious thought, analysis of mass culture, and new
aesthetic forms, notably those of Expressionism. His books, especially
The Principle of Hope (1954–1959), contributed to a
particular form of critical theory and, being written in a peculiar
essayistic style, made him quite popular both in academic and
non-academic circles. Bloch was an important voice among the
intelligentsia of Weimar Germany and then, for a short period
after the Second World War, the leading philosopher of the Eastern
According to second-personal approaches to moral obligation, the distinctive normative features of moral obligation can only be explained in terms of second-personal relations, i.e. the distinctive way persons relate to each other as persons. But there are important disagreements between different groups of second-personal approaches. Most notably, they disagree about the nature of second-personal relations, which has consequences for the nature of the obligations that they purport to explain. This article aims to distinguish these groups from each other, highlight their respective advantages and disadvantages, and thereby indicate avenues for future research.
Breeds are classifications of domestic animals that share, to a certain degree, a set of conventional phenotypic traits. We will defend the claim that, despite classifying biological entities, animal breeds are social kinds. We will adopt Godman’s view of social kinds, classifications with predictive power based on social learning processes. We will show that, although the folk concept of animal breed refers to a biological kind, there is no way to define it. The expert definitions of breeds are instead based on socially learnt conventions and skills (artificial selection), yielding groupings in which scientific predictions are possible. We will discuss in what sense breeds are social, but not human, kinds, and in what sense the concept of a breed is necessary to make them real.
In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one another. The second category consists of those deepfakes that direct an illocutionary speech act—such as a request, injunction, invitation, or promise—to an addressee who is located outside of the recording. For instance, fake footage of a company director instructing their employee to make a payment, or of a military official urging the populace to flee for safety. Whereas the former category may deceive an audience by giving rise to false beliefs, the latter can more directly manipulate an agent’s actions: the speech act’s addressee may be moved to accept an invitation or a summons, follow a command, or heed a warning, and in doing so further a deceiver’s unethical ends.
What differentiates scientific research from non-scientific inquiry? Philosophers addressing this question have typically been inspired by the exalted social place and intellectual achievements of science. They have hence tended to point to some epistemic virtue or methodological feature of science that sets it apart. Our discussion, on the other hand, is motivated by the case of commercial research, which we argue is distinct from (and often epistemically inferior to) academic research. We consider a deflationary view in which science refers to whatever is regarded as epistemically successful, but find that this does not leave room for the important notion of scientific error and fails to capture distinctive social elements of science. This leads us to the view that a demarcation criterion should be a widely upheld social norm without immediate epistemic connotations. Our tentative answer is the communist norm, which calls on scientists to share their work widely for public scrutiny and evaluation.
Should we use the same standard of proof to adjudicate guilt for murder and petty theft? Why not tailor the standard of proof to the crime? These relatively neglected questions cut to the heart of central issues in the philosophy of law. This paper scrutinises whether we ought to use the same standard for all criminal cases, in contrast with a flexible approach that uses different standards for different crimes. I reject consequentialist arguments for a radically flexible standard of proof, instead defending a modestly flexible approach on non-consequentialist grounds. The system I defend is one on which we should impose a higher standard of proof for crimes that attract more severe punishments. This proposal, although apparently revisionary, accords with a plausible theory concerning the epistemology of legal judgments and the role they play in society.
The Internet is the epistemological crisis of the 21st century: it has fundamentally altered the social epistemology of societies with relative freedom to access it. Most of what we think we know about the world is due to reliance on epistemic authorities: individuals or institutions that tell us what we ought to believe about Newtonian mechanics, evolution by natural selection, climate change, resurrection from the dead, or the Holocaust. The most practically fruitful epistemic norm of modernity, empiricism, demands that knowledge be grounded in sensory experience, but almost no one who believes in evolution by natural selection or the reality of the Holocaust has any sensory evidence in support of those beliefs. Instead, we rely on epistemic authorities—biologists and historians, for example. Epistemic authority cannot be sustained by empiricist criteria, for obvious reasons: salient anecdotal evidence, the favorite tool of propagandists, appeals to ordinary faith in the senses, but is easily exploited given that most people understand neither the perils of induction nor the finer points of sampling and Bayesian inference. Sustaining epistemic authority depends, crucially, on social institutions that inculcate reliable second-order norms about whom to believe about what. The traditional media were crucial, in the age of mass democracy, in promulgating and sustaining such norms.
One of the central insights of Western philosophy, beginning with
Socrates, has been that few if any things are as bad for an individual
as culpably doing wrong. It is better, we are told through much of the
Western philosophical tradition, to suffer than do
Global philosophy is an ideal. It includes the affirmation of intercultural philosophy and internationalism but it goes well beyond cultural and geographic cosmopolitanism. To embrace global philosophy is to reject any approach to philosophy that cleaves to closed communities and private conversations.
Political revolutions are transformative moments marked by profound,
rapid change in the political order achieved through the use of force
rather than through consensus or legal process. Moral responses to
revolutions are often ambivalent or deeply polarized. On the one hand,
revolutions promise to be powerful engines of moral progress, allowing
a community to abolish an oppressive social order and providing the
opportunity to institute a better one. On the other hand, revolutions
risk unravelling the fabric of political community and devolving into
bloody, prolonged conflicts that only manage to reinstate a new
Global challenges such as climate change, food security, or public health have become dominant concerns in research and innovation policy. This article examines how responses to these challenges are addressed by governance actors. We argue that appeals to global challenges can give rise to a ‘solution strategy’ that presents responses of dominant actors as solutions and a ‘negotiation strategy’ that highlights the availability of heterogeneous and often conflicting responses. On the basis of interviews and document analyses, the study identifies both strategies across local, national, and European levels. While our results demonstrate the co-existence of both strategies, we find that global challenges are most commonly highlighted together with the solutions offered by dominant actors. Global challenges are ‘wicked problems’ that often become misframed as ‘tame problems’ in governance practice and thereby legitimise dominant responses.
We distinguish two types of cases that have potential to generate quasi-cyclical preferences: self-involving choices where an agent oscillates between first- and third-person perspectives that conflict regarding their life-changing implications, and self-serving choices where frame-based reasoning can be “first-personally
This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility.
The standard vocabulary of modernity and post-modernity suggests that something is coming to an end. Sometimes the end is much desired. “When I fall,” says Clov in Samuel Beckett’s Endgame, “I’ll weep for happiness.” Sometimes, by contrast, the end is measured primarily by a sense of loss. “To write poetry after Auschwitz,” says Theodor Adorno, “is barbaric.” And sometimes, as in Martin Heidegger’s later work, the end of our epoch announces the possibility of a new beginning. A famous line from Hölderlin is the catchphrase here: “In the danger, the saving power grows.” But what exactly is the danger that threatens to end our age? It is something beyond the danger of climate change, nuclear annihilation, pandemics, and the other physical threats we confront, something underlying these that makes them alive to us as the totalizing terrors we feel them to be. It extends beyond the threat of our mere extinction, in other words, reaching all the way to the possibility of our ontological end.
In this paper, we defend what we call the ‘Hybrid View’ of privacy. According to this view, an individual has privacy if, and only if, no one else forms an epistemically warranted belief about the individual’s personal matters, nor perceives them. We contrast the Hybrid View with what seems to be the most common view of what it means to access someone’s personal matters, namely the Belief-Based View. We offer a range of examples that demonstrate why the Hybrid View is more plausible than the Belief-Based View. Finally, we show how the Hybrid View generates a more plausible fit between the concept of privacy and the concept of a (morally objectionable) violation of privacy.
Research suggests that many social concepts, such as FRIEND and ARTIST, have two independent sets of criteria for their application: one descriptive, and one normative. These have become known as “dual character concepts.” Recently, it has been argued that HUMAN is a dual character concept, and that this engenders a distinctively normative variety of dehumanization (Phillips, 2022). In what follows, I develop this model by examining which form of essentialism drives normative dehumanization. In particular, I focus on three candidates: Platonic essentialism, teleological essentialism, and value-based essentialism. Across four experiments, I found evidence that normative dehumanization is driven by value-based essentialism, as opposed to Platonic or teleological essentialism. I also found evidence that normative dehumanization is a unique predictor of intergroup hostility, over and above like/dislike, as well as perceptions of ideal and typical humanness. Together, these findings clarify the ordinary concept of a “true human,” and thus what it means to normatively dehumanize someone. These findings also suggest that research concerning intergroup hostility will benefit from focusing on the distinction between descriptive and normative dehumanization.
Is epistocracy epistemically superior to democracy? In this paper, I scrutinize some of the arguments for and against the epistemic superiority of epistocracy. Using empirical results from the literature on the epistemic benefits of diversity as well as the epistemic contributions of citizen science, I strengthen the case against epistocracy and for democracy. Disenfranchising anyone, or otherwise discouraging them from participating in political life, on the basis of their not possessing a certain body of (social scientific) knowledge, is untenable also from an epistemic point of view. Rather than focussing on individual competence, we should pay attention to the social constellation through which we produce knowledge to make sure we decrease epistemic loss (by ensuring diversity and inclusion) and increase epistemic productivity (by fostering a multiplicity of perspectives interacting fruitfully). Achieving those epistemic benefits requires a more democratic approach that differs significantly from epistocracy.
Autism is a psychopathological condition around which there is still much prejudice and stigma. The discrepancy between third-person and first-person accounts of autistic behavior creates a chasm between autistic and neurotypical (non-autistic) people. The epistemic injustice suffered by these individuals is great, and a fruitful strategy out of this predicament is much needed. I will propose that through the appropriation and implementation of methods and concepts from phenomenology and ecological-enactive cognitive science, we can acquire powerful tools to work towards greater epistemic justice for autistic individuals. I will use the resources found in the skilled intentionality framework, integrated with various phenomenological theories.
Many philosophers characterize a particularly important sense of free will and responsibility by referring to basically deserved blame. But what is basically deserved blame? The aim of this paper is to identify the appraisal entailed by basic desert claims. It presents three desiderata for an account of desert appraisals and it argues that important recent theories fail to meet them. Then, the paper presents and defends a promising alternative. The basic idea is that claims about basically deserved blame entail that the targets have forfeited their claims that others not blame them and that there is positive reason to blame them. The paper shows how this view frames the discussion about skepticism about free will and responsibility.
On at least most accounts of what global justice requires, those living in severe poverty around the world are unjustly disadvantaged. Remedying this unjust disadvantage requires (perhaps among other things) that resources currently possessed by well-off people are deployed in ways that will improve the lives of the poor. In this article, I argue that, contrary to the claims of some critics, effective altruist giving is at least among the appropriate responses to global injustice for well-off people. In addition, I suggest some reasons to think that effective altruist giving will often be among the best ways for such people to satisfy obligations that they have in virtue of being beneficiaries of global injustice. The argument that I offer for this conclusion has at least two important implications. First, critics of effective altruism who claim that it is incompatible with taking global injustice sufficiently seriously are mistaken. And second, effective altruists have reason to reject the non-normative accounts of the movement’s core commitments that have been advocated by some prominent proponents.
We introduce two concepts—social certainty and social doubt—that help to articulate a variety of experiences of the social world, such as shyness, self-consciousness, culture shock, and anxiety. Following Carel’s analysis of bodily doubt, which explores how a person’s tacit confidence in the workings of their body can be disrupted and undermined in illness, we consider how an individual’s faith in themselves as a social agent, too, can be compromised or lost, thus altering their experience of what is afforded by the social environment. We highlight how a loss of bodily or social certainty can be shaped and sustained by the environments in which one finds oneself. As such, we show how certain individuals might be more vulnerable to experiences of bodily and social doubt than others.
Alexander (Friedrich Wilhelm Heinrich Alexander) von Humboldt
(1769–1859) was a scientific explorer and natural philosopher,
who achieved fame following his return from South America in 1804. Already during his lifetime, biographies celebrating Humboldt began to
appear (Rupke 2008), and upon his death in 1859, Humboldt was
commemorated across the world—from Alexandria to New York City,
from Paris and Moscow to Adelaide and Melbourne (Wulf 2015). An ocean
current was named after him, as were numerous national parks, regions,
and a penguin species. He has been described as the first ecologist
(Bertaux 1985), the “father of American
environmentalism” (Sachs 2004), the inspiration behind the
National Parks Movement in the United States and Great Britain, and a
major influence on environmentalism in India (Grove 1990).
This paper identifies a type of linguistic phenomenon new to feminist philosophy of language: biased evaluative descriptions. Biased evaluative descriptions (BEDs) are descriptions whose well-intended positive surface meanings are inflected with implicitly biased content. Biased evaluative descriptions are characterized by three main features: (i) they have roots in implicit bias or benevolent sexism, (ii) their application is counterfactually unstable across dominant and subordinate social groups, and (iii) they encode stereotypes. After giving several different kinds of examples of biased evaluative descriptions, I distinguish them from similar linguistic concepts, including backhanded compliments, slurs, insults, epithets, pejoratives, and dog-whistles. I suggest that the framework of traditional Gricean implicature cannot account for BEDs. I discuss some challenges to the distinctiveness and evaluability of BEDs, including intersectional social identities. I conclude by discussing the social significance and moral status of BEDs. Identifying BEDs is important for a variety of social contexts, from the very general and broad (political speeches) to the very particular and small (bias in academic hiring).
The theory of morality we can call full rule-consequentialism selects
rules solely in terms of the goodness of their consequences and then
claims that these rules determine which kinds of acts are morally
wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted
we must be entirely guided by the public good of mankind, but not in
the ordinary moral actions of our lives. … The rule is framed
with respect to the good of mankind; but our practice must be always
shaped immediately by the rule” (Berkeley 1712: section 31).
Antonio Gramsci (1891–1937) has been enormously influential as a
Marxist theorist of cultural and political domination in
“developed” capitalism. However, his career was that of a
radical journalist and revolutionary organizer, not a professional
philosopher. Gramsci was a socialist activist, cultural commentator
and, later, communist party leader in Italy. Most of his writings are
concerned with assessing the immediate political situation and,
particularly, the prospects for revolution in interwar Italy. Nonetheless, Gramsci was conversant with philosophical currents of the
time—especially Italian neo-idealism, native intellectual and
political traditions dating back to Machiavelli, and the major
currents of Marxist thought.
In the face of climate change, the Covid-19 pandemic and rising anti-science populism, an unlikely alliance of scholars has emerged to “regain some of the authority of science”, as Bruno Latour puts it in an interview with Science (Vrieze 2017). Historians, philosophers, and sociologists of science, who have long operated in competing intellectual niches, find a common calling in highlighting the existential importance but also increasingly fragile position of science in society.
Transdisciplinary research challenges the divide between Indigenous and academic knowledge by bringing together epistemic resources of heterogeneous stakeholders. The aim of this article is to explore causal explanations in a traditional fishing community in Brazil that provide resources for transdisciplinary collaboration, without neglecting differences between Indigenous and academic experts. Semi-structured interviews were carried out in a fishing village on the north shore of Bahia, and our findings show that community members often rely on causal explanations for local ecological phenomena with different degrees of complexity. While these results demonstrate the ecological expertise of local community members, we also argue that recognition of local expertise needs to reflect on differences between epistemic communities by developing a culturally sensitive model of transdisciplinary knowledge negotiation.