Many contemporary ethicists believe that:
Moral reasons are always other-regarding. Add this very plausible premise:
No action that is on balance supported by moral reasons is bad. Suppose Alice is out of food on a desert island. …
Larry Li, Bill Cannon and I ran a session on non-equilibrium thermodynamics in biology at SMB2021, the annual meeting of the Society for Mathematical Biology. You can see talk slides here! Here’s the basic idea:
Since Lotka, physical scientists have argued that living things belong to a class of complex and orderly systems that exist not despite the second law of thermodynamics, but because of it. …
Lewtas recently articulated an argument claiming that emergent conscious causal powers are impossible. In developing his argument, Lewtas makes several assumptions about emergence, phenomenal consciousness, categorical properties, and causation. We argue that there are plausible alternatives to these assumptions. Thus, the proponent of emergent conscious causal powers can escape Lewtas’s challenge.
Derek Parfit defended Non-Realist Cognitivism. It is an open secret that this meta-ethical theory is often thought at best puzzling and at worst objectionably unclear. Employing truthmaker theory, I provide an account of Non-Realist Cognitivism that dispels charges of objectionable unclarity, clarifies how to assess it, and explains why, if plausible, it would be an attractive theory. I develop concerns that the theory involves cheating into an objection that ultimately reveals Non-Realist Cognitivism faces a dilemma. Whether it can escape this dilemma demands further attention. In bridging meta-ethics and the truthmaking literature, I illustrate the importance of greater meta-metaphysical reflection in meta-ethics.
There is now a great deal of evidence that norm violations impact people’s causal judgments. But it remains contentious how best to explain these findings. In particular, the primary explanations on offer differ with regard to how broad they take the phenomenon to be. In this chapter, I detail how the explanations diverge with respect to the expected scope of the contexts in which the effect arises, the types of judgments at issue, and the range of norms involved. In doing so, I briefly summarize the evidence favoring my preferred explanation—the responsibility account. I then add to the evidence, presenting the results of two preregistered studies that employ a novel method: participants were asked to rank order compound statements combining a causal attribution and a normative attribution.
Solms’ unusual project of translating Freud’s “Project for a Scientific Psychology” into contemporary cognitive science terms is hard to assess. The most important theme, to my way of thinking, is Solms’ support, in present-day terms, of Freud’s insistence that emotion lies at the heart of all cognition. Taking this one idea seriously will require significant alterations to the working assumptions of many in cognitive science.
That AI will have a major impact on society is no longer in question. Current debate turns instead on how far this impact will be positive or negative, for whom, in which ways, in which places, and on what timescale. Put another way, we can safely dispense with the question of whether AI will have an impact; the pertinent questions now are by whom, how, where, and when this positive or negative impact will be felt, and hence what governance needs to be put in place to provide the best possible answers. In order to frame these questions in a more substantive way, in these prolegomena we introduce what we consider the four core opportunities for society offered by the use of AI, four associated risks which could emerge from its overuse or misuse, and the opportunity costs associated with its underuse. We then offer a high-level view of the emerging advantages for organisations of taking an ethical approach to developing and deploying AI. Finally, we introduce a set of five principles which should guide the development and deployment of AI technologies – four of which build on existing bioethics principles and an additional one that we argue is of equal importance in the case of AI.
This paper articulates in formal terms a crucial distinction concerning future contingents, the distinction between what is true about the future and what is reasonable to believe about the future. Its key idea is that the branching structures that have been used so far to model truth can be employed to define an epistemic property, credibility, which we take to be closely related to knowledge and assertibility, and which is ultimately reducible to probability. As a result, two kinds of claims about future contingents — one concerning truth, the other concerning credibility — can be smoothly handled within a single semantic framework.
As a highly technological innovation, cultured meat is the subject of techno-optimistic as well as techno-sceptical evaluations. The chapter discusses this opposition and connects it with arguments about seeing the world in the right way. Both sides not only call upon us to see the world in a very particular light, but also point to mechanisms of selective attention in order to explain how others can be so biased. I will argue that attention mechanisms are indeed relevant for dealing with the Anthropocene, but that dualism has paralysing effects. In a dualistic framework, cultured meat is associated with ecomodernist optimism, bold technological control over nature and alienation from animals. But interested citizens and farmers in focus groups rather envisioned the future of cultured meat through small scale production on farms combined with intensive relations with animals. Such scenarios, involving elements from both sides of the dualistic gap, depend on constructive ways of dealing with dualisms and ambivalence.
Scientists obtain a great deal of the evidence they use by collecting and producing empirical results. Much of the standard philosophical literature on this subject comes from 20th century logical empiricists, their followers, and critics who embraced their issues while objecting to some of their aims and assumptions. Discussions about empirical evidence have tended to focus on epistemological questions regarding its role in theory testing. This entry follows that precedent, even though empirical evidence also plays important and philosophically interesting roles in other areas including scientific discovery, the development of experimental tools and techniques, and the application of scientific theories to practical problems.
Covid-19 vaccination uptake has been disappointingly low in many countries, or parts of countries, such as America. Given that vaccination offers significant private and public benefits, this is surprising. …
I am now simultaneously aware of the motion of my fingers and of the text on the screen. Call this co-awareness. Co-awareness is not the same thing as awareness by the same subject. For if I type with my eyes closed and then stop typing and open my eyes, the tactile and visual experiences still have the same subject, but there is no co-awareness. …
If naturalism about our minds is true, then the correct account of intentionality is causal. On a causal account of intentionality, our possession of an irreducible concept is caused by something which falls under that concept. …
Philosophers who take rationality to consist in the satisfaction of rational requirements typically favour rational requirements that govern mental attitudes at a time rather than across times. One such account has been developed by Broome in Rationality through reasoning. He claims that diachronic functional properties of intentions such as settling on courses of action and resolving conflicts are emergent properties that can be explained with reference to synchronic rational pressures. This is why he defends only a minimal diachronic requirement which characterises forgetting as irrational. In this paper, I show that Broome’s diachronically minimalist account lacks the resources to explain how a rational agent may resolve incommensurable choices by an act of will. I argue that one can solve this problem either by specifying a mode of diachronic deliberation or by introducing a genuinely diachronic requirement that governs the rational stability of an intention via a diachronic counterfactual condition concerning rational reconsideration. My proposal is similar in spirit to Gauthier’s account in his seminal paper ‘Assure and threaten’. It improves on his work by being both more general and explanatorily richer in its application to diachronic phenomena such as transformative choices and acts of will.
There has been a lot of time and effort spent debating whether human beings have free will, and rightly so: it is an important and interesting question. However, there has been very little discussion (at least until now) concerning whether God would have free will. I should note that I am talking about the Western conception of God, and more specifically the view that comes from the Abrahamic religions of Judaism, Christianity, and Islam, mixed with Neoplatonism and Aristotelianism, which is sometimes referred to as Classical Theism.
It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
Standard proposals of scientific anti-realism assume that the methodology of a scientific research program can be endorsed without accepting its metaphysical commitments. I argue that the distinction between competence, the rules governing one’s language faculty, and performance, linguistic behavior, precludes this. Linguistic theories aim to describe competence, not performance, and so must be able to distinguish observations reflective of the former from those reflective of the latter. This classification of data makes sense only against the background of a psychologically realistic view of linguistic theory. So the very methodology of the science commits one to its realistic interpretation.
This paper explores the principle that knowledge is fragile, in that whenever S knows that S doesn’t know that S knows that p, S thereby fails to know p. Fragility is motivated by the infelicity of dubious assertions, utterances which assert p while acknowledging higher order ignorance of p. Fragility is interestingly weaker than KK, the principle that if S knows p, then S knows that S knows p. Existing theories of knowledge which deny KK by accepting a Margin for Error principle can be conservatively extended with Fragility.
On the basis of findings from developmental biology, some researchers have argued that evolutionary theory needs to be significantly updated. Advocates of such a “developmental update” have, among other things, suggested that we need to re-conceptualize units of selection, that we should expand our view of inheritance to include environmental as well as genetic and epigenetic factors, that we should think of organisms and their environment as involved in reciprocal causation, and that we should reevaluate the rates of evolutionary change. However, many of these same conclusions could be reached on the basis of other evidence, namely from microbiology. In this paper, I ask why microbiological evidence has not had a similarly large influence on calls to update biological theory, and argue that there is no principled reason to focus on developmental as opposed to microbiological evidence in support of these revisions to evolutionary theory. I suggest that the focus on developmental biology is more likely attributable to historical accident. I will also discuss some possible room for overlap between developmental biology and microbiology, despite the historical separation of these two subdisciplines.
It is commonly maintained that neuroplastic mechanisms in the brain provide empirical support for the hypothesis of multiple realizability. We show in various case studies that neuroplasticity stems from preexisting mechanisms and processes inherent in the neural (or biochemical) structure of the brain. We argue that not only does neuroplasticity fail to provide empirical evidence of multiple realization, its inability to do so strengthens the mind-body identity theory. Finally, we argue that a recently proposed identity theory called Flat Physicalism can be enlisted to explain the current state of the mind-body problem more adequately.
Here is a familiar story about electoral democracy. Modern policymaking is incredibly complicated. Voters are rationally ignorant. This ignorance has many potential bad consequences. If elected officials are closely responsive to the ignorant voters, they will make bad decisions, resulting in bad outcomes. More plausibly, this ignorance will simply serve to insulate elected officials from voter scrutiny, making them easy targets for capture and manipulation— which will also lead to bad outcomes.
Herbert A. Simon (1955) established behavioral economics on the basis of bounded rationality. People are unable to fully optimize, because the costs of information are too high for most people and because mathematical and logical limits prevent them from computing optimal behavior; Simon argued that most people instead follow heuristic rules of thumb to achieve an aspiration level they hold. Most firms seek a level of profit acceptable to owners rather than the maximum possible profit. He labeled this approach “satisficing” (Simon, 1956), noting that the word appears in the Oxford English Dictionary from a Northumbrian dialect, basically meaning “satisfy.” But he redefined it to describe how people behave using bounded rationality.
When interacting with other people, we assume that they have their reasons for what they do and believe, and experience recognizable feelings and emotions. When people act from weakness of will or are otherwise irrational, what they do can still be comprehensible to us, since we know what it is like to fall for temptation and act against one’s better judgment. Still, when someone’s experiences, feelings and way of thinking are vastly different from our own, understanding them becomes increasingly difficult. Delusions and psychosis are often seen as marking the end of intelligibility. In this paper, I argue first for the importance of seeing other people as intelligible as long as this is at all possible. Second, I argue, based on both previous literature and my own lived experience, that more psychotic phenomena than previously thought can be rendered at least somewhat intelligible. Besides bizarre experiences like illusions, hallucinations and intense feelings of significance, I also explain what it is like to lose one’s bedrock, and how this loss impacts which beliefs one has reason to reject. Finally, I give an inside account of some disturbances of reason, and show that there are important similarities between certain psychotic reasoning problems and common non-pathological phenomena.
We argue that there is a tension between two monistic claims that are the core of recent work in epistemic consequentialism. The first is a form of monism about epistemic value, commonly known as veritism: accuracy is the sole final objective to be promoted in the epistemic domain. The other is a form of monism about a class of epistemic scoring rules: that is, strictly proper scoring rules are the only legitimate measures of inaccuracy. These two monisms, we argue, are in tension with each other. If only accuracy has final epistemic value, then there are legitimate alternatives to strictly proper scoring rules. Our argument relies on the way scoring rules are used in contexts where accuracy is rewarded, such as education.
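To make the second monism concrete, here is a small illustrative sketch (not from the paper): the Brier score is a textbook strictly proper scoring rule, meaning an agent with credence q uniquely minimizes expected inaccuracy by reporting x = q. The function names below are my own, introduced only for illustration.

```python
# Toy illustration of a strictly proper scoring rule (the Brier score):
# honest reporting uniquely minimizes expected inaccuracy.

def brier_inaccuracy(x, outcome):
    """Inaccuracy of forecast x in [0, 1] given a binary outcome (0 or 1)."""
    return (x - outcome) ** 2

def expected_inaccuracy(x, q):
    """Expected Brier inaccuracy of reporting x when the event has probability q."""
    return q * brier_inaccuracy(x, 1) + (1 - q) * brier_inaccuracy(x, 0)

def best_report(q, grid_size=1001):
    """Brute-force search over a grid for the report minimizing expected inaccuracy."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    return min(grid, key=lambda x: expected_inaccuracy(x, q))

for q in (0.1, 0.5, 0.9):
    assert abs(best_report(q) - q) < 1e-9  # the honest report is optimal
```

Improper rules lack this property: under a rule like absolute distance, an agent with credence 0.7 does better by reporting 1, which is the kind of incentive structure at stake when scoring rules are used in contexts where accuracy is rewarded.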
Frege famously claimed that variations in the sense of a proper name can sometimes be ‘tolerated’. In this paper, we offer a novel explanation of this puzzling claim. Frege, we argue, follows Trendelenburg in holding that we think in language— sometimes individually and sometimes together. Variations in sense can be tolerated in just those cases where we are using language to coordinate our actions, but we are not engaged in thinking together about an issue.
Suppose that the right account of state authority requires the consent of the governed. A standard view is that this consent is presumed in virtue of the resident’s choice not to leave the territory of the state. …
Physicalism demands an explication of what it means for something to be physical. But the most popular way of providing one—viz., characterizing the physical in terms of the postulates of a scientifically derived physical theory—is met with serious trouble. Proponents of physicalism can either appeal to current physical theory or to some future physical theory (preferably an ideal and complete one). Neither option is promising: currentism almost assuredly renders physicalism false and futurism appears to render it indeterminate or trivial. The purpose of this essay is to argue that attempts to characterize the mental encounter a similar dilemma: currentism with respect to the mental is likely to be inadequate or contain falsehoods and futurism leaves too many significant questions about the nature of mentality unanswered. This new dilemma, we show, threatens both sides of the current debate surrounding the metaphysical status of the mind.
Is it permissible to be a fan of an artist or a sports team that has behaved immorally? While this issue has recently been the subject of widespread public debate, it has received little attention in the philosophical literature. This paper will investigate this issue by examining the nature and ethics of fandom. I will argue that the crimes and misdemeanors of the object of fandom provide three kinds of moral reasons for fans to abandon their fandom. First, being a fan of the immoral may provide support for their immoral behavior. Second, fandom alters our perception in ways that will often lead us to fail to perceive our idol’s faults and even to adopt immoral points of view in order to be able to maintain the positive view we have of them. Third, fandom, like friendship, may lead us to engage in acts of loyalty to protect the interests of our idols. This gives fans of the immoral good reason to abandon their fandom. However, these reasons will not always be conclusive and, in some cases, it may be possible to instead adopt a critical form of fandom.
Angell’s logic of analytic containment AC has been shown to be characterized by a 9-valued matrix NC by Ferguson, and by a 16-valued matrix by Fine. We show that the former is the image of a surjective homomorphism from the latter, i.e., an epimorphic image. The epimorphism was found with the help of MUltlog, which also provides a tableau calculus for NC extended by quantifiers that generalize conjunction and disjunction.
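The “epimorphic image” relation the abstract invokes can be illustrated with a deliberately tiny toy case (my own hypothetical example, not the actual 9- and 16-valued matrices of Ferguson and Fine): a surjective map between two finite algebras that commutes with the operations.

```python
# Toy sketch of an epimorphism between finite algebras: a surjective
# homomorphism from a 4-element algebra (pairs of classical values with
# pointwise conjunction) onto the 2-element algebra of classical conjunction.
from itertools import product

BIG = list(product([0, 1], repeat=2))  # four values: (0,0), (0,1), (1,0), (1,1)

def big_and(a, b):
    """Pointwise conjunction on pairs."""
    return (a[0] & b[0], a[1] & b[1])

SMALL = [0, 1]

def small_and(x, y):
    """Classical conjunction."""
    return x & y

def h(a):
    """Candidate epimorphism: project onto the first coordinate."""
    return a[0]

# h is surjective onto SMALL...
assert {h(a) for a in BIG} == set(SMALL)
# ...and is a homomorphism: h(a AND b) = h(a) AND h(b) for all a, b.
assert all(h(big_and(a, b)) == small_and(h(a), h(b))
           for a in BIG for b in BIG)
```

A brute-force check of exactly this shape (surjectivity plus commutation with every connective of the matrices) is what tools like MUltlog can automate; the matrices above are placeholders chosen only to keep the example small.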
The heyday of discussions initiated by Searle's claim that computers have syntax but no semantics has now passed, yet philosophers and scientists still tend to frame their views on artificial intelligence in terms of syntax and semantics. In this paper I do not intend to take part in these discussions; my aim is more fundamental, viz. to ask what claims about syntax and semantics in this context can mean in the first place. And I argue that their sense is so unclear that their ability to act as markers within any disputes on artificial intelligence is severely compromised; and hence that their employment brings us little more than a harmful illusion of explanation.