Various theorists have endorsed the “communication argument”: communicative capacities are necessary for morally responsible agency because blame aims at a distinctive kind of moral communication. I contend that existing versions of the argument, including those defended by Gary Watson and Coleen Macnamara, face a “pluralist challenge”: they do not seem to sit well with the plausible view that blame has multiple aims. I then examine three possible rejoinders to the challenge, suggesting that a context-specific function-based approach constitutes the most promising modification of the communication argument. Keywords: Blame; moral responsibility; communicative theory of responsibility; function of blame.
Nietzsche characterizes the Third Essay of On the Genealogy of Morality as “offer[ing] the answer to the question whence the ascetic ideal […] derives its tremendous power although it is the harmful ideal par excellence” (EH GM). What draws people to ideals of self-denial and self-punishment? In short, I will argue, what draws people to the ascetic ideal is, according to Nietzsche, the same thing that draws many to physical self-harm: the wish to stop feeling like you’re going to burst out of your skin. “The ascetic ideal,” in Nietzsche’s sense, is an ideal of categorically denying certain desires (instincts, impulses, etc.). What distinguishes this type of ideal is a certain “valuation[al]” (GM III:11) stance — a stance of condemnation (demonization, mistrust) of certain of one’s desires, and correspondingly of oneself for having them or “giving in” to them (cf. III:8, 10). Perhaps one feels a pang of guilt at the first glimpse of ill-will or unforgiveness in oneself. Or one is moved in confession to include simply that one “felt lust,” jealousy, anger. Merely having the desire is treated as problematic, something to feel bad about, reason for punishment.
We’re always reading about how the pandemic has created a new emphasis on preprints, so it stands to reason that non-reviewed preposts would now have a place in blogs. Maybe then I’ll “publish” some of the half-baked posts languishing in draft on errorstatistics.com. …
When is it legitimate for a government to ‘nudge’ its citizens, in the sense described by Richard Thaler and Cass Sunstein (2008)? In their original work on the topic, Thaler and Sunstein developed the ‘as judged by themselves’ (or AJBT) test to answer this question (Thaler & Sunstein, 2008, 5). In a recent paper, L. A. Paul and Sunstein (ms) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a personally transformative experience, that is, one that results in our values changing (Paul, 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my aggregate utility account of how to choose in the face of what Edna Ullmann-Margalit (2006) calls big decisions, that is, decisions that lead to these personally transformative experiences (Pettigrew, 2019, Chapters 6 and 7).
Philippa Foot famously distinguishes between two senses in which a particular norm, request, or demand can be “categorical”. In the first sense, a categorical demand is one that applies to a person regardless of his or her aims or interests. In this sense, demands of morality are categorical. But so are the demands of etiquette, club rules, rules of feudal obedience, and so on. In the second sense, a categorical demand is one that doesn’t just apply to someone, but generates normative reasons for action. In this second sense one might say that demands of morality, unlike ancillary domains, are categorical: moral demands are normative.
Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious. Then we'll need to decide what to do with those robots -- what kind of rights, if any, to give them. …
The notion of growth is one of the most studied notions within economic theory and, traditionally, it is accounted for on the basis of a positivist thesis according to which assumptions are not relevant as long as economic models have acceptable predictive power. On this view, it does not matter whether assumptions are realistic or not. Arguments against this principle may involve a defense of realistic assumptions over highly idealized or false ones. This article aims in a different direction. Instead of demanding more realism, we can accept the spirit of the positivist thesis but criticize the circularity that may arise from combining different assumptions that are necessary for the explanation of economic growth in mainstream economics. Such circularity is a key aspect of the well-known problem of providing microfoundations for macroeconomic properties. It is here suggested that the notion of emergence could be appropriate for arriving at a better understanding of growth, clarifying the issues related to circularity, but without totally rejecting the usefulness of unrealistic assumptions.
Pereboom and Caruso propose the quarantine model as an alternative to existing models of criminal justice. They appeal to the established public health practice of quarantining people, which is believed to be effective and morally justified, to explain why, in criminal justice, it is also morally acceptable to detain wrongdoers without assuming the existence of retrospective moral responsibility. Wrongdoers in their model are treated as carriers of dangerous diseases and as such should be preventively detained (or rehabilitated) until they no longer pose a threat to society. Our main concern in this paper is that Pereboom and Caruso adopt an idiosyncratic meaning of quarantine regulations. We highlight a set of important disanalogies between their quarantine model and the quarantine regulations currently adopted in public health policies. More specifically, we argue that the similarities that Pereboom and Caruso propose to substantiate their analogy are not consistent—despite what they claim—with the regulations underlying quarantine as an epidemiological process. We also notice that certain quarantine procedures adopted in public health systems are inadequate to deal with criminal behaviors. On these grounds, we conclude that Pereboom and Caruso should not appeal to the quarantine analogy to substantiate their view, unless they address the issues and criticism we raise in this paper.
Normativity is a fundamental feature of selfhood. In the Modern, Enlightenment tradition, being a self is having the capacity to be autonomous, that is, to be responsible for one's own actions and beliefs. To be a self, an organism must be capable of freely following cognitive, behavioural and linguistic norms. It must be able to justify its positions by giving reasons to others -- and to itself. These capacities are contrasted not only with mechanical causation in the physical world, but with the heteronomous status of slaves, of individuals controlled by hypnosis or evil neurosurgeons, or of those manipulated by propaganda or advertising. Thoughts and actions are only mine in so far as I, as an independent individual, take responsibility for them. Only by such deliberative and conscious activity can I take ownership of them. Descartes' rejection of received opinion and Kant's insistence that a Subject must be self-regulating exemplify this Modern tradition.
In this paper, I expand on Sarkar’s (2019) view that the term ‘biodiversity’ should be understood primarily as a normative concept with a descriptive component molded to the evaluation; hence, ‘biodiversity’ is a thick term. The idea of inseparability is advocated by using Bernard Williams’s example of thick terms as context-oriented, whilst taking issue with McDowell’s “anti-disentangling” argument and other contemporary arguments for separability. Compared to other papers in the area of environmental pragmatism, this paper argues that conservation scientists will achieve greater success in conservation efforts by framing ‘biodiversity’ as a primarily normative concept relative to the value system of the local community.
Any theory of what we owe to each other (whether such a theory is just part of morality or the whole shebang) must answer a number of questions. In particular, any such theory must address (at least) the following issues, which I label: Scope: Who matters, morally speaking? Everyone? Some subset of people? Weight: How much do those who matter matter? Does everyone matter equally? Are people who bear relationships to the agent morally more important? Focus: What about people matters? Their well-being? Their ends? Their autonomous agency? Stance: How do we act best to take the people who matter into consideration? Do we promote their (well-being/ends/agency)? Do we honor or preserve (same)? Some other action or set of actions?
Many philosophers, following Williamson (The Philosophical Review 105(4): 489–523, 1996), Williamson (Knowledge and its Limits, Oxford, Oxford University Press, 2000), subscribe to the constitutive rule account of assertion (CRAA). They hold that the activity of asserting is constituted by a single constitutive rule of assertion. However, in recent work, Maitra (in: Brown & Cappelen (ed). Assertion: new philosophical essays, Oxford, Oxford University Press, 2011), Johnson (Acta Analytica 33(1): 51–67, 2018), and Kelp and Simion (Synthese 197(1): 125–137, 2020a), Kelp and Simion (in: Goldberg (ed) The Oxford Handbook of Assertion, Oxford, Oxford University Press, 2020b) aim to show that, for all the most popular versions of the constitutive rule of assertion proposed in the literature, asserting is not an activity constituted by a single constitutive rule and that therefore CRAA is very likely false. To reach this conclusion, they all present a version of what can be dubbed the engagement condition objection. That is, they each propose a necessary condition on engaging in rule-constituted activities. Then they argue that, for all the most popular versions of the constitutive rule of assertion proposed in the literature, one can make assertions without satisfying this condition. In response, I present a counterexample that shows that the proposed engagement conditions lead to counterintuitive results, and I propose an alternative that better captures our intuitions. Then I argue that this alternative engagement condition is compatible with all the most popular versions of the constitutive rule of assertion.
As with most topics in philosophy, there is no consensus about what experimental philosophy is. Most broadly, experimental philosophy involves using scientific methods to collect empirical data for the purpose of casting light on philosophical issues. Such a definition threatens to be too broad, however: Taking the nature of matter to be a philosophical issue, research at the Large Hadron Collider would count as experimental philosophy. Others have suggested more narrow definitions, characterizing experimental philosophy in terms of the use of scientific methods to investigate intuitions. This threatens to be too narrow, however, excluding such work as Eric Schwitzgebel’s comparison of the rates of theft of ethics books to similar volumes from other areas of philosophy for the purpose of finding out whether philosophical training in ethics promotes moral behavior. While restricting experimental philosophy to the study of intuitions is too narrow, this nonetheless covers most of the research in this area. Focusing on this research, we begin by discussing some of the methods that have been used by experimental philosophers. We then distinguish between three types of goals that have guided experimental philosophers, illustrating these goals with some examples.
Eugen Fischer and John Collins have brought together an impressive, and important, series of essays concerning the methodological debates between rationalists and naturalists, and how these debates have been impacted by work in experimental philosophy. The work at issue concerns the evidential value of intuitions, and as such is only a small part of the experimental philosophy corpus as I understand it. In fact, Fischer and Collins define experimental philosophy in this narrow sense in their introduction. On their view, experimental philosophy “builds on the assumption that, for better or worse, intuitions are crucially involved in philosophical work” (3). The parenthetical serves to emphasize that such work could either be pursued from a positive perspective aiming to vindicate the use of intuitions in philosophy or from a negative perspective aiming to undermine that use. Noting these two perspectives, it might then seem that experimental philosophy is neutral with regard to methodological debate: “experimental philosophy is not a party to the dispute between methodological rationalism and naturalism, but offers a new framework for settling it” (23).
In the long run, the development of artificial intelligence (AI) is likely to be one of the biggest technological revolutions in human history. Even in the short run it will present tremendous challenges as well as tremendous opportunities. The more we do now to think through these complex challenges and opportunities, the better the prospects for the kind of outcomes we all hope for, for ourselves, our children, and our planet.
In this paper, we use the case of the COVID-19 pandemic in Europe to address the question of what kind of knowledge we should incorporate into public health policy. We show that policy-making in Europe during the COVID-19 pandemic has been biomedicine-centric in that its evidential basis marginalised input from non-biomedical disciplines. We then argue that in particular the social sciences could contribute essential expertise and evidence to public health policy in times of biomedical emergencies and that we should thus strive for a tighter integration of the social sciences in future evidence-based policy-making. This demand faces challenges on different levels, which we identify and discuss as potential inhibitors for a more pluralistic evidential basis.
Writing comments on a post about adversarial collaboration feels like a place where I should be adversarial (if in a collaborative spirit). But I agree with basically everything Eric says here. Frankly, this is all spot on. You probably don’t want to read 500 words from me just saying “yep, this” and agreeing with his excellent, sensible advice, though. So, let me attempt to be provocative: Eric doesn’t go far enough! (Not that he was trying to, of course.) All philosophers should be asking themselves what empirical evidence would actually test their views. Collaboration should be the rule, not the exception. And we should expect collaborations to have an adversarial element, treating this as a feature, not a bug.
The field that has come to be known as the Critical Philosophy of Race is an amalgamation of philosophical work on race that largely emerged in the late 20th century, though it draws from earlier work. It departs from previous approaches to the question of race that dominated the modern period up until the era of civil rights. Rather than focusing on the legitimacy of the concept of race as a way to characterize human differences, Critical Philosophy of Race approaches the concept with a historical consciousness about its function in legitimating domination and colonialism, engendering a critical approach to race and hence the name of the sub-field.
Most authors who discuss willpower assume that everyone knows what it is, but our assumptions differ to such an extent that we talk past each other. We agree that willpower is the psychological function that resists temptations – variously known as impulses, addictions, or bad habits; that it operates simultaneously with temptations, without prior commitment; and that use of it is limited by its cost, commonly called effort, as well as by the person’s skill at executive functioning. However, accounts are usually not clear about how motivation functions during the application of willpower, or how motivation is related to effort. Some accounts depict willpower as the perception or formation of motivational contingencies that outweigh the temptation, and some depict it as a continuous use of mechanisms that interfere with reweighing the temptation. Some others now suggest that impulse control can bypass motivation altogether, although they refer to this route as habit rather than willpower.
I argue against the claim that the fundamental form of trust is a 2-place relation of A trusting B and in favour of the fundamental form being a 4-place relation of A, by ψ-ing, trusting B to φ. I characterize trusting behaviour as behaviour that knowingly makes one reliant on someone doing what they are supposed to do in the collaborative enterprise that the trusting behaviour belongs to. I explain how trust is involved in the following collaborative enterprises: knowledge transfer – i.e. telling someone something; maintaining a relationship; and passing responsibility for an action on to someone else. And I finish by showing how our talk of trust in non-collaborative contexts – e.g. trusting a branch to support one’s weight – may be explained by reference to the central sort of collaborative trust. Keywords: collaboration; reliance; communication; Faulkner; Simpson; Jones.
Scholars, journalists, and activists working on climate change often distinguish between “individual” and “structural” approaches to decarbonization. The former concern choices individuals can make to reduce their “personal carbon footprint” (e.g., eating less meat). The latter concern changes to institutions, laws, and other social structures. These two approaches are often framed as oppositional, representing a mutually exclusive forced choice between alternative routes to decarbonization. After presenting representative samples of this oppositional framing of individual and structural approaches in environmental communication, we identify four problems with oppositional thinking and propose five ways to conceive of individual and structural reform as symbiotic and interdependent.
We were slightly concerned, upon reading Eric Winsberg, Jason Brennan and Chris Surprenant’s reply to our paper “Were Lockdowns Justified? A Return to the Facts and Evidence”, that they may have fundamentally misunderstood the nature of our argument. We therefore issue the following clarification, along with a comment on our motivations for writing such a piece, for the interested reader.
As the presence of artificial intelligence (AI) becomes increasingly common in the workplace, Human Resources (HR) professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. In this paper, our aim is to examine this concern, which has received little attention in debates about the ethics of algorithms. We begin by discussing a recent survey of HR professionals, which reports current attitudes about the use of such AI-based tools in the workplace. In general, expectations are such that in the next few years, AI will play a prominent role in the HR toolkit, especially for hiring and onboarding purposes. Perhaps the most common objection to the use of hiring algorithms or algorithmic decision-making systems in general is that they have the potential to be biased or lead to objectionable patterns of discrimination. However, while HR professionals have registered concerns about bias and discrimination, interestingly the most cited worry in the recent survey that we consider is that hiring algorithms will “dehumanize” the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e., removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e., conceiving of other humans as subhuman), we argue that fears about dehumanizing the hiring process can be fruitfully investigated by considering its potentially negative impact on the employee-employer relationship. 
We argue that there are good independent reasons to accept a genuine, substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We go on to argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms. However, as the use of recruitment algorithms becomes more widespread, difficult trade-offs may need to be made, since the human element, as it features in the hiring process, can also be the source of its own problems.
In the US, you are sometimes told that something “violates federal law”, and it is said in a way that suggests that violating federal law is somehow particularly bad. This raises a moral question. I will assume, contrary to philosophical anarchists, that valid and reasonable laws are in some way morally binding. …
warming—originally organized by readily identifiable vested interests—has by now recruited a large popular constituency of declared “skeptics” increasingly disposed to “take a stand”: some of them opposed to government regulation in general, some resistant to any claims to intellectual authority (perhaps especially scientific), and some mobilized by a version of the right to individual freedom of opinion. As a result, confidence in the expertise of scientists has reached an all-time low: Internet sites, radio talk shows, and television channels preferentially transmit “contrarian” attacks on the credibility of climate scientists. Even our most responsible newspapers and journals, in their very commitment to the traditional ethic of “balance,” sometimes contribute to the widespread misimpression that climate scientists are deeply divided about both the extent of the dangers we face and the relevance of human activity to global warming. Not knowing who or what to believe, the natural response for most people is to do nothing, and the consequence, as Thomas Homer-Dixon wrote last year for the New York Times: “Climate policy is gridlocked, and there’s virtually no chance of a breakthrough” (2010). Meanwhile, as evidence both of the role of human contributions to global warming and the dangers of that warming continues to mount, consensus among climate scientists grows ever stronger, and those of us who attend to that evidence are increasingly alarmed.
Astronomer Charles Piazzi Smyth’s 1864–65 expedition to measure the Great Pyramid of Giza was planned around a system of linear measures designed to guarantee the validity of his measurements and settle ongoing uncertainties as to the Pyramid’s true size. When the intended system failed to come together, Piazzi Smyth was forced to improvise a replacement that presented a fundamental challenge to the metrological enterprise upon which his system had been based. The astronomer’s new system centered around a small lump of basalt, now held at Cambridge’s Whipple Museum of the History of Science, which nucleated a wide array of material and scientific considerations. Through a bipartite analysis of the physical and narrative dimensions of Piazzi Smyth’s basalt Standard, I develop the implications of its use and construction for understanding the material constitution of scientific instruments. In particular, I illustrate how instruments are locally constituted through co-accountable systems and how their material features become integrally implicated in both their uses and meanings.
In 1854 the biologist Thomas Henry Huxley pointed to a significant change in the way that reviewers were treating books that endorsed deeply flawed scientific theories. In the past, “when a book had been shown to be a mass of pretentious nonsense,” it “quietly sunk into its proper limbo. But these days appear, unhappily, to have gone by.” Due to the “utter ignorance of the public mind as to the methods of science and the criterion of truth,” scientists were now forced to review such books in order to expose their deficiencies (Huxley 1903, 1). Huxley’s observation indicates how the development of a mass reading audience in mid-nineteenth century Britain transformed the very nature of scientific controversy. Scientists were compelled to debate the validity of theories in new public sites, not just in exclusive scientific societies or in specialized scientific journals with limited circulation. It was during the nineteenth century that public controversy—not limited to science alone—became possible for the first time. In this short piece I will discuss how the “communications revolution” produced a public space for the debate over evolutionary theory in mid-nineteenth century Britain. I will focus on periodicals as one of those public spaces in which the debate took place.
I believe that tenured historians, philosophers, and sociologists of science— when presented with the opportunity—have a professional obligation to get involved in public controversies over what should count as science. I stress ‘tenured’ because the involved academics need to be materially protected from the consequences of their involvement, given the amount of misrepresentation and abuse that is likely to follow, whatever position they take. Indeed, the institution of academic tenure justifies itself most clearly in such heat‐seeking situations, where one may appear to offer a reasoned defense for views that many consider indefensible. To be sure, the opportunities for involvement will vary in kind and number, but I believe that we are obliged to embrace them. In the specific case of ‘demarcation’ questions of what counts as science, the people who possess the sort of general and comparative knowledge most relevant for adjudicating this matter are historians, philosophers, and sociologists of science—not professional scientists unschooled in these areas.
The last decade of the Qing dynasty (1644-1911) and Republican period (1912-1949) saw intensive efforts to revise the Qing Code, promulgate modern legal codes based on Japanese and German law, establish a modern system of courts, and develop a professional corps of lawyers and jurists (Huang 2001; Xu 2001; Yeung 2003; Young 2004; Neighbors 2004). These institutional reforms were implemented as part of the drive to have extraterritoriality rescinded and safeguard the sovereignty of the Qing dynasty and then Republic of China. The reforms were accompanied by new categories within civil and criminal law (including a new conceptual distinction between the two), new conceptions of legal knowledge and expertise, and rich discussions over sources of law which took place within the legal realm as well as the readership of Republican newspapers and journals (Young 2004; Lean 2007). If, as Roger Berkowitz (2005, 1) writes in his study of scientific codification in continental Europe, “in a legal system, there must be some way that the law comes to be known,” how did ways of knowing law change during this period of legal reform and broader intellectual change? Through a survey of jurisprudence textbooks and other legal publications, this paper argues that writers in early 20th-century China came to define jurisprudence (faxue 法學, falixue 法理學) in positivistic terms, ultimately using new conceptions of science (kexue 科學) and social science (shehui kexue 社會科學) to identify its place within a new ordering of modern knowledge.
At the beginning of the twentieth century the high mortality rates of both mothers and babies during childbirth became a predominant concern in Britain and its empire, provoking outcries from medical and nursing professionals as well as politicians and the wider public. Infant mortality became the new marker of the vitality of the nation and a widely used indicator of general standards of health. Efforts to improve maternal and infant welfare were part of a broader shift in Britain towards public health as a government responsibility. Measures taken to reduce mortality rates emphasized state‐run initiatives in maternal education and antenatal care, the medicalization of childbirth, and scientific infant feeding and childrearing practices (Fildes, Marks and Marland 1992; Lewis 1980; Marks 1996; Davin 1997). This shift in health care policies resulted in profound changes to the experience of childbirth and to the role of the state in the area of social welfare.