There is now a great deal of evidence that norm violations impact people’s causal judgments. But it remains contentious how best to explain these findings. In particular, the primary explanations on offer differ with regard to how broad they take the phenomenon to be. In this chapter, I detail how the explanations diverge with respect to the expected scope of the contexts in which the effect arises, the types of judgments at issue, and the range of norms involved. In doing so, I briefly summarize the evidence favoring my preferred explanation—the responsibility account. I then add to the evidence, presenting the results of two preregistered studies that employ a novel method: participants were asked to rank order compound statements combining a causal attribution and a normative attribution.
Solms’ unusual project of translating Freud’s “Project for a Scientific Psychology” into contemporary cognitive science terms is hard to assess. The most important theme, to my way of thinking, is Solms’ support, in present-day terms, of Freud’s insistence that emotion lies at the heart of all cognition. Taking this one idea seriously will require significant alterations to the working assumptions of many in cognitive science.
That AI will have a major impact on society is no longer in question. Current debate turns instead on how far this impact will be positive or negative, for whom, in which ways, in which places, and on what timescale. Put another way, we can safely dispense with the question of whether AI will have an impact; the pertinent questions now are by whom, how, where, and when this positive or negative impact will be felt, and hence what governance needs to be put in place to provide the best possible answers. In order to frame these questions in a more substantive way, in this prolegomena we introduce what we consider the four core opportunities for society offered by the use of AI, four associated risks which could emerge from its overuse or misuse, and the opportunity costs associated with its underuse. We then offer a high-level view of the emerging advantages for organisations of taking an ethical approach to developing and deploying AI. Finally, we introduce a set of five principles which should guide the development and deployment of AI technologies – four of which build on existing bioethics principles and an additional one that we argue is of equal importance in the case of AI.
As a highly technological innovation, cultured meat is the subject of techno-optimistic as well as techno-sceptical evaluations. The chapter discusses this opposition and connects it with arguments about seeing the world in the right way. Both sides not only call upon us to see the world in a very particular light, but also point to mechanisms of selective attention in order to explain how others can be so biased. I will argue that attention mechanisms are indeed relevant for dealing with the Anthropocene, but that dualism has paralysing effects. In a dualistic framework, cultured meat is associated with ecomodernist optimism, bold technological control over nature and alienation from animals. But interested citizens and farmers in focus groups rather envisioned the future of cultured meat through small scale production on farms combined with intensive relations with animals. Such scenarios, involving elements from both sides of the dualistic gap, depend on constructive ways of dealing with dualisms and ambivalence.
Covid-19 vaccination uptake has been disappointingly low in many countries, or in parts of countries, such as America. Given that vaccination offers significant private and public benefits, this is surprising. …
It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
Here is a familiar story about electoral democracy. Modern policymaking is incredibly complicated. Voters are rationally ignorant. This ignorance has many potential bad consequences. If elected officials are closely responsive to the ignorant voters, they will make bad decisions, resulting in bad outcomes. More plausibly, this ignorance will simply serve to insulate elected officials from voter scrutiny, making them easy targets for capture and manipulation—which will also lead to bad outcomes.
Herbert A. Simon (1955) established behavioral economics on the foundation of bounded rationality. He argued that people are unable to fully optimize, both because the costs of information are too high for most people and because mathematical and logical limits prevent them from computing optimal behavior; instead, most people follow heuristic rules of thumb to achieve an aspiration level they hold. Most firms seek a level of profit acceptable to owners rather than a possible maximum level of profits. He labeled this approach “satisficing” (Simon, 1956), noting that the word appears in the Oxford English Dictionary from a Northumbrian dialect, basically meaning “satisfy.” But he redefined it to describe how people behave under bounded rationality.
Is it permissible to be a fan of an artist or a sports team that has behaved immorally? While this issue has recently been the subject of widespread public debate, it has received little attention in the philosophical literature. This paper will investigate this issue by examining the nature and ethics of fandom. I will argue that the crimes and misdemeanors of the object of fandom provide three kinds of moral reasons for fans to abandon their fandom. First, being a fan of the immoral may provide support for their immoral behavior. Second, fandom alters our perception in ways that will often lead us to fail to perceive our idol’s faults and even to adopt immoral points of view in order to be able to maintain the positive view we have of them. Third, fandom, like friendship, may lead us to engage in acts of loyalty to protect the interests of our idols. This gives fans of the immoral good reason to abandon their fandom. However, these reasons will not always be conclusive and, in some cases, it may be possible to instead adopt a critical form of fandom.
Angell’s logic of analytic containment AC has been shown to be characterized by a 9-valued matrix NC by Ferguson, and by a 16-valued matrix by Fine. We show that the former is the image of a surjective homomorphism from the latter, i.e., an epimorphic image. The epimorphism was found with the help of MUltlog, which also provides a tableau calculus for NC extended by quantifiers that generalize conjunction and disjunction.
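The key structural claim here, that one logical matrix is the epimorphic image of another, can be illustrated with a small sketch. The example below is purely hypothetical and does not use the actual 9-valued NC or 16-valued matrices from the paper (whose truth tables are not given here): it builds a four-valued matrix as the product of the two-valued Boolean matrix with itself, with componentwise operations, and checks that projection onto the first coordinate is a surjective homomorphism, i.e., that the two-valued matrix is an epimorphic image of the four-valued one.

```python
from itertools import product

BOOL = (0, 1)
PAIRS = list(product(BOOL, BOOL))  # four values: (0,0), (0,1), (1,0), (1,1)

def conj2(a, b): return min(a, b)  # two-valued conjunction
def neg2(a):     return 1 - a      # two-valued negation

# Componentwise operations on the product matrix.
def conj4(x, y): return (conj2(x[0], y[0]), conj2(x[1], y[1]))
def neg4(x):     return (neg2(x[0]), neg2(x[1]))

def h(x):
    """Candidate epimorphism: projection onto the first coordinate."""
    return x[0]

def is_epimorphism():
    # Surjectivity: every two-valued element is hit by h.
    surjective = {h(x) for x in PAIRS} == set(BOOL)
    # Homomorphism conditions: h commutes with each operation.
    preserves_conj = all(h(conj4(x, y)) == conj2(h(x), h(y))
                         for x in PAIRS for y in PAIRS)
    preserves_neg = all(h(neg4(x)) == neg2(h(x)) for x in PAIRS)
    return surjective and preserves_conj and preserves_neg

print(is_epimorphism())  # True for this toy example
```

The brute-force check mirrors what a tool like MUltlog automates for much larger matrices: verifying the homomorphism equations over all argument tuples.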
Smartphone use plays an increasingly important role in our daily lives. Philosophical research that has used first-wave or second-wave theories of extended cognition in order to understand our engagement with digital technologies has focused on the contribution of these technologies to the completion of specific cognitive tasks (e.g., remembering, reasoning, problem-solving, navigation). However, in a considerable number of cases, everyday smartphone use is either task-unrelated or task-free. In psychological research, these cases have been captured by notions such as absent-minded smartphone use (Marty-Dugas et al., 2018) or smartphone-related inattentiveness (Liebherr et al., 2020). Given the prevalence of these cases, we develop a conceptual framework that can accommodate the functional and phenomenological characteristics of task-unrelated or task-free smartphone use. To this end, we will integrate research on second-wave extended cognition with mind-wandering research and introduce the concept of ‘extended mind-wandering’. Elaborating the family resemblances approach to mind-wandering (Seli, Kane, Smallwood, et al., 2018), we will argue that task-unrelated or task-free smartphone use shares many characteristics with mind-wandering. We will suggest that an empirically informed conceptual analysis of cases of extended mind-wandering can enrich current work on digitally extended cognition by specifying the influence of the attention economy on our cognitive dynamics.
According to lexical views in population axiology, there are good lives x and y such that some number of lives equally good as x is not worse than any number of lives equally good as y. Such views can avoid the Repugnant Conclusion without violating Transitivity or Separability, but they imply a dilemma: either some good life is better than any number of slightly worse lives, or else the ‘at least as good as’ relation on populations is radically incomplete, in a sense to be explained. One might judge that the Repugnant Conclusion is preferable to each of these horns and hence embrace an Archimedean view. This is, roughly, the claim that quantity can always substitute for quality: each population is worse than a population of enough good lives. However, Archimedean views face an analogous dilemma: either some good life is better than any number of slightly worse lives, or else the ‘at least as good as’ relation on populations is radically and symmetrically incomplete, in a sense to be explained. Therefore, the lexical dilemma gives us little reason to prefer Archimedean views. Even if we give up on lexicality, problems of the same kind remain.
People’s attitudes towards social norms play a crucial role in understanding group behavior. Norm psychology accounts focus on processes of norm internalization that influence people’s norm following attitudes but pay considerably less attention to social identity and group identification processes. Social identity theory studies group identity but works with a relatively thin and instrumental notion of social norms. We argue that to best understand both sets of phenomena, it is important to integrate the insights of both approaches. We highlight tensions between the two approaches and conflicting observations, and sketch the contours of an integrated account. We conclude with some observations on how a twofold account may contribute to studying the evolution of human groups and understanding behavior and social norms in complex societies.
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can be potentially overcome by providing theoretical grounding of a concept that has been termed ‘Ethics as a Service’.
Substrate independence and mind-body functionalism claim that thinking does not depend on any particular kind of physical implementation. But real-world information processing depends on energy, and energy depends on material substrates. Biological evidence for these claims comes from ecology and neuroscience, while computational evidence comes from neuromorphic computing and deep learning. Attention to energy requirements undermines the use of substrate independence to support claims about the feasibility of artificial intelligence, the moral standing of robots, the possibility that we may be living in a computer simulation, the plausibility of transferring minds into computers, and the autonomy of psychology from neuroscience.
Attributed to William Walwyn, leader of the Levellers in the England of 1647

The individual differences of which so much is made (…) will always survive, and they are to be welcomed, not regretted. But their existence is no reason for not seeking to establish the largest possible measure of equality of environment, and circumstance, and opportunity. On the contrary, it is a reason for redoubling our efforts to establish it, in order to ensure that these diversities of gifts may come to fruition.
On Guilt
The other night Dana and I watched “The Internet’s Own Boy,” the 2014 documentary about the life and work of Aaron Swartz, which I’d somehow missed when it came out. …
I've previously argued that sadistic pleasure (in oppressing the innocent) lacks value. But consider a complication. Suppose this time that the sadistic majority are all conscientious utilitarians who would never willingly increase net suffering in the world. …
Though not all scholars agree on the meaning of the term,
“neoliberalism” is now generally thought to label the
philosophical view that a society’s political and economic
institutions should be robustly liberal and capitalist, but
supplemented by a constitutionally limited democracy and a modest
welfare state. Recent work on neoliberalism, thus understood, shows
this to be a coherent and distinctive political philosophy. This entry
explicates neoliberalism by examining the political concepts,
principles, and policies shared by F. A. Hayek, Milton Friedman, and
James Buchanan, all of whom play leading roles in the new historical
research on neoliberalism, and all of whom wrote in political
philosophy as well as political economy.
A central debate in philosophy of race is between eliminativists and conservationists about what we ought to do with ‘race’ talk. ‘Eliminativism’ is often defined such that it’s committed to holding that (a) ‘race’ is vacuous and races don’t exist, so (b) we should eliminate the term ‘race’ from our vocabulary. As a stipulative definition, that’s fine. But as an account of one of the main theoretical options in the debate, it’s a serious mistake. I offer three arguments for why eliminativism should not be tethered to vacuity or error theory, and three arguments for why the view shouldn’t be understood in terms of eliminating the term ‘race’ from our vocabulary. Instead, I propose we understand the debate as concerning whether certain uses of ordinary race terms are typically wrong. This proposal is quite simple, and naturally suggested by the common gloss that eliminativism about ‘race’ is akin to a commonsensical view about 'witch' talk. But nonetheless, I show that it offers a significant recharacterization of this core debate in philosophy of race.
I was watching Biogen’s stock (BIIB) climb over 100 points yesterday because its Alzheimer’s drug, aducanumab [brand name: Aduhelm], received surprising FDA approval. I hadn’t been following the drug at all (it’s enough to try and track some Covid treatments/vaccines). …
Exchange is fundamental to business. ‘Business’ can mean
an activity of exchange. One entity (e.g., a person, a firm)
“does business” with another when it exchanges a good or
service for valuable consideration, i.e., a benefit such as money. ‘Business’ can also mean an entity that offers goods and
services for exchange, i.e., that sells things. Target is a business. Business ethics can thus be understood as the study of the ethical
dimensions of the exchange of goods and services, and of the entities
that offer goods and services for exchange. This includes related
activities such as the production, distribution, marketing, sale, and
consumption of goods and services (cf.
Have we entered a “post-truth” era? The present paper attempts to answer this question by (a) offering an explication of the notion of “post-truth” from recent discussions; (b) deriving a testable implication from that explication, to the effect that we should expect to see decreasing information effects—i.e., differences between actual preferences and estimated, fully informed preferences—on central political issues over time; and then (c) putting the relevant narrative to the test by way of counterfactual modelling, using election year data for the period of 2004-2016 from the American National Election Studies’ (ANES) Time Series Study. The implication in question turns out to be consistent with the data: at least in a US context, we do see evidence of a decrease in information effects on key political issues—immigration, same-sex adoption, and gun laws, in particular—in the period 2004 to 2016. This offers some novel, empirical evidence for the “post-truth” narrative.
The vast literature on negative treatment of outgroups and favoritism toward ingroups provides many local insights but is largely fragmented, lacking an overarching framework that might provide a unified overview and guide conceptual integration. As a result, it remains unclear where different local perspectives conflict, how they may reinforce one another, and where they leave gaps in our knowledge of the phenomena. Our aim is to start constructing a framework to help remedy this situation. We first identify a few key ideas for creating a theoretical roadmap for this complex territory, namely the principles of etiological functionalism and the dual inheritance theory of human evolution. We show how a “molecular” approach to emotions fits into this picture, and use it to illuminate emotions that shape intergroup relations. Finally, we weave the pieces together into the beginnings of a systematic taxonomy of the emotions involved in social interactions, both hostile and friendly. While it is but a start, we have developed the argument in a way that illustrates how the foundational principles of our proposed framework can be extended to accommodate further cases.
Members of marginalized groups who desire to pursue ambitious ends that might lead them to overcome disadvantage often face evidential situations that do not support the belief that they will succeed. Such agents might decide, reasonably, that their efforts are better expended elsewhere. If an agent has a less risky, valuable alternative, then quitting can be a rational way of avoiding the potential costs of failure. However, in reaching this pessimistic conclusion, she adds to the evidence that formed the basis for her pessimism in the first place, not just for herself but for future agents who will be in a similar position as hers. This is a pessimism trap. Might believing optimistically against the evidence offer a way out? In this paper, I argue against practical and moral arguments to turn to optimism as a solution to pessimism traps. I suggest that these theories ignore the opportunity costs that agents pay when they settle on difficult long-term ends without being sensitive to evidence of potential failure. The view I defend licenses optimism in a narrow range of cases. Its limitations show us that the right response to many pessimism traps is not to be found through individual optimism.
Evidence about epidemiological risk and corporate market-share played a decisive role in litigation on asbestos-poisoning and pharmaceutical negligence. These cases bear on what is now a central question in applied legal philosophy, namely: should we ever allow bare statistics to settle legal disputes? A swathe of recent work agrees that it is never appropriate to settle a legal case solely using statistical evidence or, alternatively, that it is only appropriate to do so when the odds are overwhelming, such as with DNA evidence carrying a less than 1 in 10,000,000 chance of error.
It is tempting to suppose that the reason why the world remains profoundly unjust is that not enough of us hold the correct beliefs about the demands of justice and/or are motivated to bring it about. As Allen Buchanan shows, however, this is to miss a crucially important part of the picture: agents' mistaken beliefs about what it takes to achieve justice can seriously hamper prospects for such achievements. In this paper, I expand on Buchanan's taxonomy of mistaken beliefs about what it takes to achieve justice, and I bring his account (so expanded) to bear on the notion of epistemic justice.
Current discussions of hermeneutical injustice, I argue, poorly characterise the cognitive state of victims by failing to account for the communicative success that victims have when they describe their experience to other similarly situated persons. I argue that victims, especially when they suffer moral wrongs that are yet unnamed, are able (1) to grasp certain salient aspects of the wrong they experience and (2) to cultivate the ability to identify instances of the wrong in virtue of moral emotions. By moral emotions I mean emotions like indignation that reflect an agent’s ethical commitments and bear on her ethical assessments. Further, I argue that victims can impart their partial understanding of the wrong they suffer to others who are not similarly situated by eliciting moral emotions such as pity that are tied to broad notions of justice and fairness.
The Covid-19 pandemic has caused significant economic hardships for millions of people around the world. Meanwhile, many of the world’s richest people have seen their wealth increase substantially during the pandemic, despite the significant economic disruptions that it has caused on the whole. It is uncontroversial that these effects, which have exacerbated already unacceptable levels of poverty and inequality, call for robust policy responses from governments. In this paper, I argue that the disparate economic effects of the pandemic also generate direct obligations of justice for those who have benefitted from pandemic windfalls. Specifically, I argue that even if we accept that those who benefit from distributive injustice in the ordinary, predictable course of life within unjust institutions do not have direct obligations to redirect their unjust benefits to those who are unjustly disadvantaged, there are powerful reasons to hold that benefitting from pandemic windfalls does ground such an obligation.
Most philosophical accounts of human rights accept that all persons have human rights. Typically, ‘personhood’ is understood as unitary and binary. It is unitary because there is generally supposed to be a single threshold property required for personhood (e.g. agency, rationality, etc.). It is binary because it is all-or-nothing: you are either a person or you are not. A difficulty with binary views is that there will typically be subjects, like children and those with dementia, who do not meet the threshold, and so who are not persons with human rights, on these accounts. It is consequently unclear how we ought to treat these subjects. This is the problem of marginal cases.