There is now a great deal of evidence that norm violations impact people’s causal judgments, but it remains contentious how best to explain these findings. Notably, the primary explanations on offer differ in how broad they take the phenomenon to be. In this chapter, I detail how the explanations diverge with respect to the expected scope of the contexts in which the effect arises, the types of judgments at issue, and the range of norms involved. In doing so, I briefly summarize the evidence favoring my preferred explanation—the responsibility account. I then add to that evidence, presenting the results of two preregistered studies that employ a novel method: participants were asked to rank-order compound statements combining a causal attribution and a normative attribution.
This paper articulates in formal terms a crucial distinction concerning future contingents, the distinction between what is true about the future and what is reasonable to believe about the future. Its key idea is that the branching structures that have been used so far to model truth can be employed to define an epistemic property, credibility, which we take to be closely related to knowledge and assertibility, and which is ultimately reducible to probability. As a result, two kinds of claims about future contingents — one concerning truth, the other concerning credibility — can be smoothly handled within a single semantic framework.
Scientists obtain a great deal of the evidence they use by collecting
and producing empirical results. Much of the standard philosophical
literature on this subject comes from 20th century logical
empiricists, their followers, and critics who embraced their issues
while objecting to some of their aims and assumptions. Discussions
about empirical evidence have tended to focus on epistemological
questions regarding its role in theory testing. This entry follows
that precedent, even though empirical evidence also plays important
and philosophically interesting roles in other areas including
scientific discovery, the development of experimental tools and
techniques, and the application of scientific theories to practical problems.
Covid-19 vaccination uptake has been disappointingly low in many countries, or parts of countries, such as the United States. This is surprising, given that vaccination offers significant private and public benefits. …
Philosophers who take rationality to consist in the satisfaction of rational requirements typically favour rational requirements that govern mental attitudes at a time rather than across times. One such account has been developed by Broome in Rationality Through Reasoning. He claims that diachronic functional properties of intentions, such as settling on courses of action and resolving conflicts, are emergent properties that can be explained with reference to synchronic rational pressures. This is why he defends only a minimal diachronic requirement, which characterises forgetting as irrational. In this paper, I show that Broome’s diachronically minimalist account lacks the resources to explain how a rational agent may resolve incommensurable choices by an act of will. I argue that one can solve this problem either by specifying a mode of diachronic deliberation or by introducing a genuinely diachronic requirement that governs the rational stability of an intention via a diachronic counterfactual condition concerning rational reconsideration. My proposal is similar in spirit to Gauthier’s account in his seminal paper ‘Assure and threaten’. It improves on his work by being both more general and explanatorily richer in its application to diachronic phenomena such as transformative choices and acts of will.
It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
This paper explores the principle that knowledge is fragile, in that whenever S knows that S doesn’t know that S knows that p, S thereby fails to know p. Fragility is motivated by the infelicity of dubious assertions, utterances which assert p while acknowledging higher order ignorance of p. Fragility is interestingly weaker than KK, the principle that if S knows p, then S knows that S knows p. Existing theories of knowledge which deny KK by accepting a Margin for Error principle can be conservatively extended with Fragility.
Here is a familiar story about electoral democracy. Modern policymaking is incredibly complicated. Voters are rationally ignorant. This ignorance has many potential bad consequences. If elected officials are closely responsive to the ignorant voters, they will make bad decisions, resulting in bad outcomes. More plausibly, this ignorance will simply serve to insulate elected officials from voter scrutiny, making them easy targets for capture and manipulation—which will also lead to bad outcomes.
We argue that there is a tension between two monistic claims that are the core of recent work in epistemic consequentialism. The first is a form of monism about epistemic value, commonly known as veritism: accuracy is the sole final objective to be promoted in the epistemic domain. The other is a form of monism about a class of epistemic scoring rules: that is, strictly proper scoring rules are the only legitimate measures of inaccuracy. These two monisms, we argue, are in tension with each other. If only accuracy has final epistemic value, then there are legitimate alternatives to strictly proper scoring rules. Our argument relies on the way scoring rules are used in contexts where accuracy is rewarded, such as education.
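The abstract above turns on the notion of a strictly proper scoring rule. One standard example is the Brier score; the following minimal sketch (pure Python, illustrative only and not drawn from the paper) shows the sense in which it is strictly proper: a forecaster who minimizes expected Brier penalty is driven to report the true chance.

```python
def expected_brier(p, q):
    # Expected Brier penalty when the event has true chance p and the
    # forecaster reports probability q: with chance p the outcome is 1
    # (penalty (1 - q)^2), with chance 1 - p it is 0 (penalty q^2).
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

# Scan a grid of possible reports; strict propriety means the unique
# minimizer of expected penalty is the true chance itself.
grid = [i / 100 for i in range(101)]
best_report = min(grid, key=lambda q: expected_brier(0.7, q))
print(best_report)  # prints 0.7, the honest report
```

The paper's point can then be put this way: if accuracy alone has final value, it is a substantive further claim that only rules with this incentive property are legitimate inaccuracy measures.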
Critics have recently argued that reliabilists face trade-off problems, forcing them to condone intuitively unjustified beliefs when those beliefs generate lots of true belief further downstream. What these critics overlook is that reliabilism entails that there are side-constraints on belief-formation, on account of which there are some things you should not believe, even if believing them would have very good epistemic consequences. However, we argue that by embracing side-constraints the reliabilist faces a dilemma: she can either hold on to reliabilism, and with it the aforementioned side-constraints, but then needs to explain why we should allow the pursuit of justification to get in the way of the acquisition of true belief; or she can deny that there are side-constraints – and in effect give up on reliabilism. We’ll suggest that anyone moved by the considerations that likely attract people to reliabilism in the first place – the idea that true belief is good, and as such should be promoted – should go for the second horn, and instead pursue a form of epistemic utilitarianism.
Until recently, discussion of virtues in the philosophy of mathematics has been fleeting and fragmentary at best. But in the last few years this has begun to change. As virtue theory has grown ever more influential, not just in ethics where virtues may seem most at home, but particularly in epistemology and the philosophy of science, some philosophers have sought to push virtues out into unexpected areas, including mathematics and its philosophy. But there are some mathematicians already there, ready to meet them, who have explicitly invoked virtues in discussing what is necessary for a mathematician to succeed.
Nicod Criterion (NC): A claim of the form “All Fs are Gs” is confirmed by any sentence of the form “i is F and i is G”, where “i” is a name of some particular object.
Equivalence Condition (EC): Whatever confirms (disconfirms) one of two equivalent sentences also confirms (disconfirms) the other.
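Together, NC and EC generate Hempel's raven paradox. A sketch of the derivation in standard notation (the raven example is the textbook illustration, not part of the passage above):

```latex
% "All ravens are black" is logically equivalent to its contrapositive:
\forall x\,(Rx \rightarrow Bx) \;\equiv\; \forall x\,(\neg Bx \rightarrow \neg Rx)
% By NC, an instance "a is non-black and a is a non-raven"
% (e.g., a white shoe) confirms the right-hand generalization;
% by EC, it therefore also confirms "All ravens are black".
```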
1. Introduction. A genealogy would be an historical account of how someone, or some number of people, came to believe or to value the things that they do. What is genealogy for? The question may seem unfair: couldn’t genealogy be pursued for its own sake and without ulterior motive? Even if its pursuit serves wider ends, perhaps it serves them by instancing those ends rather than by providing an independently specifiable means to them. If, for example, there is value in knowing, then insofar as genealogical inquiry promises to furnish knowledge, its fruits might instance the wider value of our coming to know. Relatedly, insofar as the activities associated with inquiry are themselves valuable, independently of their resulting in knowing, then the activities associated with genealogical inquiry might instance those values, and that might be so independently of whether those activities also terminated in our acquiring genealogical knowledge. However, even if it were accepted that genealogy can have a value of its own, or can instance what is intrinsically valuable, that need not exclude that it also serves further ends. Let us therefore allow the question.
A puzzle arises when combining two individually plausible, yet jointly incompatible, norms of inquiry. On the one hand, it seems that one shouldn’t inquire into a question while believing an answer to that question. But, on the other hand, it seems rational to inquire into a question while believing its answer, if one is seeking confirmation. Millson (2021), who has recently identified this puzzle, suggests a possible solution, though he notes that it comes with significant costs. I offer an alternative solution, which doesn’t involve these costs. The best way to resolve the puzzle is to reject the prohibition on inquiring into a question while believing an answer to it. Resolving the puzzle in this way makes salient two fruitful areas in the epistemology of inquiry which merit further investigation. The first concerns the nature of the inquiring attitudes and the second concerns the aim(s) of inquiry.
Much of the literature on the relationship between belief and credence has focused on the reduction question: that is, whether either belief or credence reduces to the other. This debate, while important, only scratches the surface of the belief-credence connection. Even on the anti-reductive dualist view, belief and credence could still be very tightly connected. Here, I explore questions about the belief-credence connection that go beyond reduction. This paper is dedicated to what I call the independence question: just how independent are belief and credence? I look at this question from two angles: a descriptive one (as a psychological matter, how much can belief and credence come apart?) and a normative one (for a rational agent, how closely connected are belief and credence?). Ultimately, I suggest that the two attitudes are more independent than one might think.
Though not all scholars agree on the meaning of the term,
“neoliberalism” is now generally thought to label the
philosophical view that a society’s political and economic
institutions should be robustly liberal and capitalist, but
supplemented by a constitutionally limited democracy and a modest
welfare state. Recent work on neoliberalism, thus understood, shows
this to be a coherent and distinctive political philosophy. This entry
explicates neoliberalism by examining the political concepts,
principles, and policies shared by F. A. Hayek, Milton Friedman, and
James Buchanan, all of whom play leading roles in the new historical
research on neoliberalism, and all of whom wrote in political
philosophy as well as political economy.
We call attention to certain cases of epistemic akrasia, arguing that they support belief-credence dualism. Belief-credence dualism is the view that belief and credence are irreducible, equally fundamental attitudes. Consider the case of an agent who believes p, has low credence in p, and thus believes that they shouldn’t believe p. We argue that dualists, as opposed to belief-firsters (who say credence reduces to belief) and credence-firsters (who say belief reduces to credence) can best explain features of akratic cases, including the observation that akratic beliefs seem to be held despite possessing a defeater for those beliefs, and that, in akratic cases, one can simultaneously believe and have low confidence in the very same proposition.
I was watching Biogen’s stock (BIIB) climb over 100 points yesterday because its Alzheimer’s drug, aducanumab [brand name: Aduhelm], received surprising FDA approval. I hadn’t been following the drug at all (it’s enough to try to track some Covid treatments/vaccines). …
Have we entered a “post-truth” era? The present paper attempts to answer this question by (a) offering an explication of the notion of “post-truth” from recent discussions; (b) deriving a testable implication from that explication, to the effect that we should expect to see decreasing information effects—i.e., differences between actual preferences and estimated, fully informed preferences—on central political issues over time; and then (c) putting the relevant narrative to the test by way of counterfactual modelling, using election year data for the period 2004–2016 from the American National Election Studies’ (ANES) Time Series Study. The implication in question turns out to be consistent with the data: at least in a US context, we do see evidence of a decrease in information effects on key political issues—immigration, same-sex adoption, and gun laws, in particular—over the period 2004 to 2016. This offers some novel, empirical evidence for the “post-truth” narrative.
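The "information effect" machinery can be sketched in miniature. The toy Python simulation below uses synthetic data and a crude top-quartile imputation; the variable names and the response model are my assumptions, not the paper's ANES-based counterfactual models. It illustrates only the core idea: compare actual aggregate preferences with an estimate of what preferences would be under full information.

```python
import random

random.seed(0)

# Synthetic electorate: each respondent has a political-knowledge score
# in [0, 1], and support for a policy rises with knowledge.
def respondent():
    knowledge = random.random()
    support = 1 if random.random() < 0.3 + 0.4 * knowledge else 0
    return knowledge, support

sample = [respondent() for _ in range(5000)]

# Actual aggregate preference.
actual_rate = sum(s for _, s in sample) / len(sample)

# Crude stand-in for the fully informed counterfactual: the support rate
# among the most informed quartile (real analyses instead model
# preferences on knowledge plus demographics and simulate full knowledge).
informed = [s for k, s in sample if k > 0.75]
informed_rate = sum(informed) / len(informed)

information_effect = informed_rate - actual_rate
```

In this framing, the paper's testable implication is that `information_effect`, computed per election year, should shrink between 2004 and 2016.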
Legal probabilism is a research program that relies on probability
theory to analyze, model and improve the evaluation of evidence and
the process of decision-making in trial proceedings. While the
expression “legal probabilism” seems to have been coined
by Haack (2014b), the underlying idea can be traced back to the early
days of probability theory (see, for example, Bernoulli 1713). Another
term that is sometimes encountered in the literature is “trial
by mathematics” coined by Tribe (1971). Legal probabilism
remains a minority view among legal scholars, but attained greater
popularity in the second half of the twentieth century in conjunction
with the law and economics movement (Becker 1968; Calabresi 1961; …).
Some find it plausible that a sufficiently long duration of torture is worse than any duration of mild headaches. Similarly, it has been claimed that a million humans living great lives is better than any number of worm-like creatures feeling a few seconds of pleasure each. Some have related bad things to good things along the same lines. For example, one may hold that a future in which a sufficient number of beings experience a lifetime of torture is bad, regardless of what else that future contains, while minor bad things, such as slight unpleasantness, can always be counterbalanced by enough good things. Among the most common objections to such ideas are sequence arguments. But sequence arguments are usually formulated in classical logic. One might therefore wonder if they work if we instead adopt many-valued logic. I show that, in a common many-valued logical framework, the answer depends on which versions of transitivity are used as premises. We get valid sequence arguments if we grant any of several strong forms of transitivity of ‘is at least as bad as’ and a notion of completeness. Other, weaker forms of transitivity lead to invalid sequence arguments. The plausibility of the premises is largely set aside here, but I tentatively note that almost all of the forms of transitivity that result in valid sequence arguments seem intuitively problematic. Still, a few moderately strong forms of transitivity that might be acceptable result in valid sequence arguments, although weaker statements of the initial value claims avoid these arguments at least to some extent.
Members of marginalized groups who desire to pursue ambitious ends that might lead them to overcome disadvantage often face evidential situations that do not support the belief that they will succeed. Such agents might decide, reasonably, that their efforts are better expended elsewhere. If an agent has a less risky, valuable alternative, then quitting can be a rational way of avoiding the potential costs of failure. However, in reaching this pessimistic conclusion, she adds to the evidence that formed the basis for her pessimism in the first place, not just for herself but for future agents who will be in a similar position as hers. This is a pessimism trap. Might believing optimistically against the evidence offer a way out? In this paper, I argue against practical and moral arguments to turn to optimism as a solution to pessimism traps. I suggest that these theories ignore the opportunity costs that agents pay when they settle on difficult long-term ends without being sensitive to evidence of potential failure. The view I defend licenses optimism in a narrow range of cases. Its limitations show us that the right response to many pessimism traps is not to be found through individual optimism.
Evidence about epidemiological risk and corporate market share played a decisive role in litigation over asbestos poisoning and pharmaceutical negligence. These cases bear on what is now a central question in applied legal philosophy, namely: should we ever allow bare statistics to settle legal disputes? A swathe of recent work agrees either that it is never appropriate to settle a legal case solely on statistical evidence or, alternatively, that it is only appropriate to do so when the odds are overwhelming, as with DNA evidence carrying a less than 1 in 10,000,000 chance of error.
It is tempting to suppose that the reason why the world remains profoundly unjust is that not enough of us hold the correct beliefs about the demands of justice and/or are motivated to bring justice about. As Allen Buchanan shows, however, this misses a crucially important part of the picture: agents' mistaken beliefs about what it takes to achieve justice can seriously hamper prospects for such achievements. In this paper, I expand on Buchanan's taxonomy of mistaken beliefs about what it takes to achieve justice, and I bring his account (so expanded) to bear on the notion of epistemic justice.
Many classic moral paradoxes involve conditional obligations, such as the obligation to be gentle if one is to murder. Many others involve supererogatory acts, or “good deeds beyond the call of duty.” Less attention, however, has been paid to the intersection of these topics. We develop the first general account of conditional supererogation. It has the power to solve both some familiar puzzles as well as several that we introduce. Moreover, our account builds on two familiar insights: the idea that conditionals restrict quantification and the idea that supererogation emerges from a clash between justifying and requiring reasons.
The Covid-19 pandemic has caused significant economic hardships for millions of people around the world. Meanwhile, many of the world’s richest people have seen their wealth increase substantially during the pandemic, despite the significant economic disruptions that it has caused on the whole. It is uncontroversial that these effects, which have exacerbated already unacceptable levels of poverty and inequality, call for robust policy responses from governments. In this paper, I argue that the disparate economic effects of the pandemic also generate direct obligations of justice for those who have benefitted from pandemic windfalls. Specifically, I argue that even if we accept that those who benefit from distributive injustice in the ordinary, predictable course of life within unjust institutions do not have direct obligations to redirect their unjust benefits to those who are unjustly disadvantaged, there are powerful reasons to hold that benefitting from pandemic windfalls does ground such an obligation.
We argue that inductive analysis (based on formal learning theory and the use of suitable machine learning reconstructions) and operational (citation metrics-based) assessment of the scientific process can be justifiably and fruitfully brought together, whereby the citation metrics used in the operational analysis can effectively track the inductive dynamics and measure research efficiency. We specify the conditions for the use of such inductive streamlining, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method’s applicability.
Extrapolating causal claims from study populations to other populations of interest is a problematic issue. The standard approach in experimental research, which prioritises randomized controlled trials and statistical evidence, is not devoid of difficulties. In response, some have argued that evidence of mechanisms is indispensable for causal extrapolation. We argue, on the contrary, that this sort of evidence is not indispensable, though it may occasionally be helpful. In order to clarify its relevance, we introduce a distinction between a positive and a negative role for evidence of mechanisms. Our conclusion is that the former is highly questionable, but the latter may be a trustworthy resource for causal extrapolation. Keywords: extrapolation; evidence of mechanisms; statistical evidence; mechanism; causality; evidence; external validity; randomized controlled trial.
Social media platforms have been rapidly increasing the number of informational labels they append to user-generated content in order to indicate the disputed nature of messages or to provide context. The rise of this practice constitutes an important new chapter in social media governance, as companies are often choosing this new “middle way” between a laissez-faire approach and more drastic remedies such as removing or downranking content. Yet information labeling as a practice has, thus far, been mostly tactical, reactive, and without strategic underpinnings. In this paper, we argue against defining success as merely the curbing of misinformation spread. The key to thinking about labeling strategically is to consider it from an epistemic perspective and to take as a starting point the “social” dimension of online social networks. The strategy we articulate emphasizes how the moderation system needs to improve the epistemic position and relationships of platform users — i.e., their ability to make good judgements about the sources and quality of the information with which they interact on the platform — while also appropriately respecting sources, seekers, and subjects of information. A systematic and normatively grounded approach can improve content moderation efforts by providing clearer accounts of what the goals are, how success should be defined and measured, and where ethical considerations should be taken into account. We consider implications for the policies of social media companies, propose new potential metrics for success, and review research and innovation agendas in this regard.
This paper describes a method for learning from a teacher’s potentially unreliable corrective feedback in an interactive task learning setting. The graphical model uses discourse coherence to jointly learn symbol grounding, domain concepts and valid plans. Our experiments show that the agent learns its domain-level task in spite of the teacher’s mistakes.