[Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada on the 24th May 2019. …
The uneducated person blames others for their failures; those who have just begun to be instructed blame themselves; those whose learning is complete blame neither others nor themselves. So says Epictetus, spelling out one tenet of Stoic thought: that blame, whether of oneself or another, has no place in a life wisely lived. To blame is unhealthy and dispensable. This tenet long endeared me to Stoicism. For I was, for many years, what Peter Graham calls a ‘blame sceptic’. That is not to say that I resiled from blaming. Rather, I blamed and then reproached myself for doing so. Since reproaching entails blaming, I thereby compounded my felony. And then, reproaching myself for compounding my felony, I compounded it some more.
In a decade of important work, Stephen Smith has marshaled a number of arguments against what he calls ‘the duty view’ of damages awards in private law. The duty view (which might more revealingly have been called ‘the existing duty view’) is the view according to which ‘damage[s] awards confirm existing legal duties to pay damages.’ Generously, I am credited with advancing ‘the most plausible’ version of the duty view, namely the ‘inchoate duty view’ according to which the court makes determinate, by its award, what was up to then an indeterminate legal duty. Smith and I agree, at least arguendo, that by its award the court fixes the amount that the defendant now has a duty to pay. I merely add: ‘and now has a duty to have paid’. This is the addition that Smith rejects.
Need considerations play an important role in empirically informed theories of distributive justice. We propose a concept of need-based justice that is related to social participation and provide an ethical measurement of need-based justice. The β-ε-index satisfies the need-principle, monotonicity, sensitivity, transfer and several ‘technical’ axioms. A numerical example is given.
Curiously, people assign less punishment to a person who attempts and fails to harm somebody if their intended victim happens to suffer the harm for coincidental reasons. This “blame blocking” effect provides important evidence in support of the two-process model of moral judgment (Cushman, 2008). Yet, recent proposals suggest that it might be due to an unintended interpretation of the dependent measure in cases of coincidental harm (Prochownik, 2017; also Malle, Guglielmo, & Monroe, 2014). If so, this would deprive the two-process model of an important source of empirical support. We report and discuss results that speak against this alternative account.
The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Art can be addressed, not just to individuals, but to groups. Art can even be part of how groups think to themselves – how they keep a grip on their values over time. I focus on monuments as a case study. Monuments, I claim, can function as a commitment to a group value, for the sake of long-term action guidance. Art can function here where charters and mission statements cannot, precisely because of art’s powers to capture subtlety and emotion. In particular, art can serve as the vessel for group emotions, by making emotional content sufficiently public so as to be the object of a group commitment. Art enables groups to guide themselves with values too subtle to be codified.
We argue that comparative psychologists have been too quick to jump to metacognitive interpretations of their data. We examine two such cases in some detail. One concerns so-called “uncertainty monitoring” behavior, which we show to be better explained in terms of first-order estimates of risk. The other concerns informational search, which we argue is better explained in terms of a first-order curiosity-like motivation that directs questions at the environment.
At first glance there does not seem to be anything philosophically problematic about human enhancement. Activities such as physical fitness routines, wearing eyeglasses, taking music lessons and prayer are routinely utilized for the goal of enhancing human capacities. This entry is not concerned with every activity and intervention that might improve people’s embodied lives. The focus of this entry is a cluster of debates in practical ethics that is conventionally labeled as “the ethics of human enhancement”. These debates include clinicians’ concerns about the limits of legitimate health care, parents’ worries about their reproductive and rearing obligations, and the efforts of competitive institutions like sports to combat cheating, as well as more general questions about distributive justice, science policy, and the public regulation of medical …
In politics, representation is as representation does. Or – it is the contingent product of what is done with it, or in its name. Against this background, efforts by theorists to extract representation’s essence from its contexts and functions do not necessarily advance our understanding (Derrida 1982, 301). Likewise, neat distinctions between (e.g.) two or more types, forms or qualities of representation are common in democratic theory, but the practices which produce representation often traverse and disrupt static and neat distinctions. Consider the example of “self-appointed representation” (SAR) (Montanaro 2012) and its implied opposite “other-appointed representation” (OAR). SAR, to be representation, depends in some form on recognition by others. OAR, to be representation, depends on a presentation of a self adequate to representation. This is one instance of representation’s diverse and common liminal qualities, which see it traversing and complicating neat categorisations.
In his On the Genealogy of Morality Nietzsche famously discusses a psychological condition he calls ressentiment, a form of toxic, vengeful anger. In this paper, I offer a free-standing theory in philosophical psychology of what is characteristic of this state. My view takes some inspiration from Nietzsche, but this paper will not be a work of exegesis. In the process of developing my account, I will try to chart the terrain around ressentiment and closely related and sometimes overlapping states (ordinary moral resentment, envy, vengefulness, anger, and the like) and also seek to explain what’s ethically objectionable as well as psychologically pernicious about ressentiment. Ressentiment, I shall contend in this paper, is not simply a ten-dollar word substitutable for ‘resentment,’ though it is indeed a species of that genus. On the account I develop, the perception of being slighted, insulted, or demeaned figures centrally in cases of ressentiment.
The Four Ages of Man - Nicolas Lancret
There’s an oft-repeated ‘fact’ thrown around in debates about retirement and old age. The details can vary but it’s something to the effect that when the pension entitlement age was set at 65 in the early part of the 20th century, very few people could expect to collect it, and those that did could only expect to collect for a few years (probably no more than 5). …
Is it possible to introduce a small number of agents into an environment, in such a way that an equilibrium results in which almost everyone (including the original agents) cooperates almost all the time? This is a compelling question for those interested in the design of beneficial game-theoretic AI, and it may also provide insights into how to get human societies to function better. We investigate this broad question in the specific context of finitely repeated games, and obtain a mostly positive answer. Our main novel technical tool is the use of limited altruism (LA) types, which behave altruistically towards other LA agents but not towards selfish agents. The uncertainty about which type of agent one is facing turns out to be essential in establishing cooperation. We provide characterizations in several families of games of which LA types are effective for our purposes.
John Stuart Mill famously wrote:
We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience. …
What are the epistemic benefits of democracy? According to the ‘epistemic democrats’, democratic procedures such as deliberation and voting are valuable in part because they produce epistemically valuable outcomes. Indeed, epistemic democrats claim that the legitimacy of democracy depends, at least in part, on the epistemic quality of the outcomes of political decision-making processes. In this paper, I want to consider two epistemic factors that might figure into the value of democracy, namely, veritistic and non-veritistic epistemic goals.
There’s been a lot of excitement about the new gene-editing tool CRISPR-Cas9. Discussion of the technology has largely focused on its precision, accuracy, customizability, and affordability. But the CRISPR-Cas system from which the technology was derived has a fascinating life of its own. The work of Eugene V. Koonin’s lab is mapping the rich histories of CRISPR-Cas systems in microbial populations. In “CRISPR: A New Principle of Genome Engineering Linked to Conceptual Shifts in Evolutionary Biology,” Koonin argues that fundamental research studying adaptive immune mechanisms has (among other things) illuminated “fundamental principles of genome manipulation.” I think Koonin’s discussion provides important philosophical insights for how we should understand the significance of CRISPR-Cas systems, and the technologies derived from them. Yet the analysis he provides is only part of a larger story that fully captures the biological significance that CRISPR-Cas systems represent. There is also a human element to the CRISPR-Cas story that concerns its development as a technology. Accounting for the human history of CRISPR-Cas reveals that the story Koonin provides requires greater nuance. I’ll show how CRISPR-Cas technologies are not “natural” genome editing systems but are partly artifacts of human ingenuity. Furthermore, I’ll argue that when it comes to the story of CRISPR-Cas, fundamental and applied research are importantly intertwined.
In recent years there has been an explosion of philosophical work on blame. Much of this work has focused on explicating the nature of blame or on examining the norms that govern it, and the primary motivation for theorizing about blame seems to derive from blame’s tight connection to responsibility. However, very little philosophical attention has been given to praise and its attendant practices. In this paper, I identify three possible explanations for this lack of attention. My goal is to show that each of these lines of thought is mistaken and to argue that praise is deserving of careful, independent analysis by philosophers interested in theorizing about responsibility.
Character judgments play an important role in our everyday lives. However, decades of empirical research on trait attribution suggest that the cognitive processes that generate these judgments are prone to a number of biases and cognitive distortions. This gives rise to a skeptical worry about the epistemic foundations of everyday characterological beliefs that has deeply disturbing and alienating consequences. In this paper, I argue that these skeptical worries are misplaced: under the appropriate informational conditions, our everyday character-trait judgments are in fact quite trustworthy. I then propose a mindreading-based model of the socio-cognitive processes underlying trait attribution that explains both why these judgments are initially unreliable, and how they eventually become more accurate.
Decision theory and philosophy of action both attempt to explain what it is for an ideally rational agent to answer the question “What to do?” From the agent’s point of view, the answer to that question is settled in practical deliberation and motivates her to act. The mental states that determine her answer are the sources of rationalizing explanations of the agent’s behavior. They explain why she performed a given action in terms of why it made sense, from her point of view, to so act. Rationalizing explanations should be contrastive, of the form “Agent S performed action A, rather than actions B, C, or D, because P, Q, and R” where B, C, and D are whatever S takes to be the possible alternatives to A, and P, Q, and R are whichever of S’s deliberative considerations and other factors yield a good explanation.
Cancer is a worldwide epidemic. It is the first or second leading cause of death before age 70 in ninety-one countries, as of 2015. According to the International Agency for Research on Cancer, “there will be an estimated 18.1 million new cancer cases and 9.6 million cancer deaths in 2018,” and cancer is expected to be the “leading cause of death in every country of the world in the 21st century” (Bray et al., 2018). While overall cancer mortality has declined in the U.S. annually since 2005, progress has been slow in some cases, and mortality is rising in others. In particular, “death rates rose from 2010 to 2014 by almost 3% per year for liver cancer and by about 2% per year for uterine cancer,” and, “pancreatic cancer death rates continued to increase slightly (by 0.3% per year) in men” (Siegel et al., …
In this article, Michael Marder interprets the “toxic flood” we are living or dying through as a global dump. On his reading, multiple levels of existence—from the psychic to the physiological, from the environmental-elemental to the planetary—are being converted into a dump, a massive and still growing hodgepodge of industrial and consumer byproducts and emissions; shards of metaphysical ideas and theological dreams; radioactive materials; light, sound, and other modes of sensory pollution; pesticides and herbicides; and so forth. Toxicity targets our bodily tissues, senses, and minds, not to mention our worlds, without individuating us in this targeting, as indifferent and random as the global dump that nourishes it. Disrupting metabolism at every scrambled register of existence, it waxes into what Marder calls “ontological toxicity,” the mangled parts of the dump that do not pass through and out of being and, in not passing, warrant the annihilation, the rapid passing away, of all else. In an ontologically toxic state, the meaning of being is being dumped.
This is a (likely incomplete) transcendental phenomenology of professional failure. You can read it, if you like. Or don’t. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
There is conflicting experimental evidence about whether the “stakes” or importance of being wrong affect judgments about whether a subject knows a proposition. To date, judgments about stakes effects on knowledge have been investigated using binary paradigms: responses to “low” stakes cases are compared with responses to “high” stakes cases. However, stakes or importance are not binary properties—they are scalar: whether a situation is “high” or “low” stakes is a matter of degree. So far, no experimental work has investigated the scalar nature of stakes effects on knowledge: do stakes effects increase as the stakes get higher? Do stakes effects only appear once a certain threshold of stakes has been crossed? Does the effect plateau at a certain point? To address these questions, we conducted experiments that probe for the scalarity of stakes effects using several experimental approaches. We found evidence of scalar stakes effects using an “evidence-seeking” experimental design, but no evidence of scalar effects using a traditional “evidence-fixed” experimental design. In addition, using the evidence-seeking design, we uncovered a large, but previously unnoticed framing effect on whether participants are skeptical about whether someone can know something, no matter how much evidence they have. The rate of skeptical responses and the rate at which participants were willing to attribute “lazy knowledge”—that someone can know something without having to check—were themselves subject to a stakes effect: participants were more skeptical when the stakes were higher, and more prone to attribute lazy knowledge when the stakes were lower. We argue that the novel skeptical stakes effect provides resources to respond to criticisms of the evidence-seeking approach that argue that it does not target knowledge.
We identify several ongoing debates related to implicit measures, surveying prominent views and considerations in each. First, we summarize the debate regarding whether performance on implicit measures is explained by conscious or unconscious representations. Second, we discuss the cognitive structure of the operative constructs: are they associatively or propositionally structured? Third, we review debates about whether performance on implicit measures reflects traits or states. Fourth, we discuss the question of whether a person's performance on an implicit measure reflects characteristics of the person who is taking the test or characteristics of the situation in which the person is taking the test. Finally, we survey the debate about the relationship between implicit measures and (other kinds of) behavior.
During the last few years, it has become common to turn to some seventeenth-century readings of the traditional idea of an original common possession of the earth for philosophical aid in explaining and supporting the rights of persons in situations of extreme need, including refugees. Hugo Grotius’s conception of this idea is one of the most cited. In this paper, I hold that a Grotian reading of the idea of an original common possession of the earth is not a fruitful principle for developing a solid defence of the rights of those in need. I reconstruct and analyse the role this idea plays in Grotius’s theory of private property and present objections to it from a Kantian perspective.
Words change meaning, usually in unpredictable ways. But some words’ meanings are revised intentionally. Revisionary projects are normally put forward in the service of some purpose – some serve specific goals of inquiry, and others serve ethical, political or social aims. Revisionist projects can ameliorate meanings, but they can also pervert. In this paper, I want to draw attention to the dangers of meaning perversions, and argue that the self-declared goodness of a revisionist project doesn’t suffice to avoid meaning perversions. The road to Hell, or to horrors on Earth, is paved with good intentions. Finally and more importantly, I want to demarcate what meaning perversions are. This, I hope, can help us assess the moral and political legitimacy of revisionary projects.
I don’t usually write too much about veganism/vegetarianism since it is a deeply personal issue (but see here and here). However, I have been trying to write more about deeply personal issues and I recently revisited one of my favorite albums from way back in the late ’80s. …
To be an enthusiast, for Locke, is to believe oneself, on insufficient evidence, to be the recipient of immediate divine inspiration. We describe the theological context that led Locke to insert a chapter on this subject into the fourth edition of the Essay, and then examine why Locke held enthusiasm to be particularly objectionable. Far from being an obscure historical footnote, the chapter raises foundational questions for Locke’s epistemology. We look more closely than have previous treatments of this topic at the religious practices that Locke targets, and find them to be less obviously irrational than his criticisms suggest. Reflection on those criticisms allows us a clearer understanding of where Locke locates the ultimate grounds of rational belief.
Yesterday, I was rereading Philip Pettit's 2018 article "Consciousness Incorporated". Due to some vocabulary mismatch, I find his exact commitments on group phenomenal consciousness not entirely clear [note 1]. …
Jacques Maritain (1882–1973), French philosopher and political thinker, was one of the principal exponents of Thomism in the twentieth century and an influential interpreter of the thought of St …