I argue against sceptical invariantism on the grounds that, in common with a number of contemporary proposals in this regard, it misdiagnoses the source of radical scepticism. The nub of the matter is that the problem of radical scepticism does not essentially trade on an appeal to an austere epistemic standard for knowledge as sceptical invariantism supposes; indeed, the putative radical sceptical paradox is no less troubling if we stipulate that the operative epistemic standard for knowledge is very undemanding. As I explain, the idea that the source of radical scepticism concerns epistemic standards in this way pervades the recent treatment of this problem, and hence understanding where sceptical invariantism goes awry casts light on the wider contemporary debate about radical scepticism.
Uncertainty in climate science has drawn increasing attention in recent years (e.g., Parker 2006, 2010, 2011, 2013; Stainforth et al. 2007; Knutti 2008; Frigg et al. 2013, 2014; Parker and Risbey 2015). The topic is important epistemically and politically: epistemically, because scientists have only limited abilities to validate and confirm the output of climate models; and politically, because policymakers have to take into account the current knowledge concerning the climate and its uncertainty.
This paper focuses on the interaction of reasons and argues that reasons for an action may transmit to the necessary means of that action. Analyzing exactly how this phenomenon may be captured by principles governing normative transmission has proved an intricate task in recent years. In this paper, I assess three formulations focusing on normative transmission and necessary means: Ought Necessity, Strong Necessity, and Weak Necessity. My focus is on responding to two of the main objections raised against normative transmission for necessary means: that the principles seem to give us reasons for buying tickets to plays we have no intention of seeing, and that they give us the wrong result when the means are necessary but not sufficient. Even though these objections have been discussed previously, the counterarguments have so far relied on rejecting premises that the proponents of these objections are unlikely to concede. In this paper, I show how we may answer the objections in a way more likely to convince their proponents. The result is an argument for a key aspect of understanding how reasons and ends-means normativity function. Normative transmission from ends to necessary means is not only interesting at the structural level; it arguably also has implications for areas as diverse as the philosophy of rationality, political philosophy, and applied ethics.
Effective political decision making, like other decision making, requires decision-makers to have accurate beliefs about the domain in which they are acting. In democratic societies, this often means that accurate beliefs must be held by a community, or at least a significant portion of a community, of voters. Voters are tasked with scrutinizing candidates and possible policy proposals and, considering their own experiences, interests, goals, knowledge, and values, with deciding which of various ballot measures is most likely to bring about their desired outcomes.
There are two debates regarding whether practical considerations play a role in determining what one ought to believe. The first concerns whether the fact that having some doxastic attitude (e.g. believing, disbelieving, withholding) would be beneficial or harmful is a genuine normative reason for or against that attitude. For example, consider the following:

Beneficial Belief: Believing in an afterlife would alleviate your crippling anxiety about death.

Harmful Belief: Believing that your missing child is dead would cause your spouse suffering.
On a widely shared generic conception of inferential justification—henceforth ‘the standard conception’—an agent is inferentially justified in believing that p only if she has antecedently justified beliefs in all the non-redundant premises of a good argument for p. This paper explores three questions that haven’t been given the attention they deserve, that complicate the application of the standard conception to cases, and that reveal it to be underspecified at the core—in ways not resolved, but inherited, by more specific (extant) versions of it. The goal isn’t to answer the questions, but to articulate them, explain what turns on them, and to invite a critical re-examination of the standard conception.
Big Data promises to revolutionise the production of knowledge within
and beyond science, by enabling novel, highly efficient ways to plan,
conduct, disseminate and assess research. The last few decades have
witnessed the creation of novel ways to produce, store, and analyse
data, culminating in the emergence of the field of data
science, which brings together computational, algorithmic,
statistical and mathematical techniques towards extracting
knowledge from big data. At the same time, the Open Data
movement—emerging from policy trends such as the push for Open
Government and Open Science—has encouraged the sharing and
interlinking of heterogeneous research data via large digital
infrastructures.
The Rawlsian veil of ignorance should induce agents to behave fairly in a distributive context. This work sought to re-create, through a dictator game with giving and taking options, a sort of original position in which reasoning behind the veil was to serve as a moral cue for subjects involved in the distribution of a common output produced with unequal means of production. However, our experimental context appears to have unwittingly recalled the Hobbesian state of nature more than the Rawlsian original position, suggesting that heuristic recourse to the Rawlsian idea of a choice behind the veil is inefficacious in distributive contexts.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility (EU) formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity (i.e., being cut off from the methods and results of decision theory). To give the strategy more structure, philosophers have developed "principles of individuation"; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion (i.e., it is duly developed using the available tools of decision theory). We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. 
A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
A “stopping rule” in a sequential experiment is a rule or procedure for determining when the experiment should end. For example, consider a pair of experiments designed to obtain evidence about the proportion of fruit flies in a given population with red eyes [Savage, 1962, pp. 17–8]. In both experiments, flies are caught, observed, and released sequentially and fairly, reporting in the end the number of red-eyed flies. In the first, the experiment is designed to stop after observing 100 flies, while the second is designed to stop after observing 6 red-eyed flies. In general the data from these experiments could be very different, but it is also possible that they be the same: in this case, 100 total flies would be observed in both experiments, of which 6 (including the last) would have red eyes. Is the evidence that each of the two would then provide for or against an hypothesis about the proportion of red-eyed flies the same? The stopping rule principle (SRP) states that this is so: Stopping Rule Principle: The evidential relationship between the data from a completed sequential experiment and a statistical hypothesis does not ever depend on the experiment’s stopping rule.
Amalgamating evidence from heterogeneous sources and across levels of inquiry is becoming increasingly important in many pure and applied sciences. This special issue provides a forum for researchers from diverse scientific and philosophical perspectives to discuss evidence amalgamation, its methodologies, its history, its pitfalls and its potential. We situate the contributions therein within six themes from the broad literature on this subject: the variety-of-evidence thesis, the philosophy of meta-analysis, the role of robustness/sensitivity analysis for evidence amalgamation, its bearing on questions of extrapolation and external validity of experiments, its connection with theory development, and its interface with causal inference, especially regarding causal theories of cancer.
It is widely held that the content of perceptual experience is propositional in nature. However, in a well-known article, “Is Perception a Propositional Attitude?” (2009), Crane has argued against this thesis. He therein assumes that experience has intentional content and indirectly argues that experience has non-propositional content by showing that from what he considers to be the main reasons in favour of “the propositional-attitude thesis”, it does not really follow that experience has propositional content. In this paper I shall discuss Crane’s arguments against the propositional-attitude thesis and will try to show, in contrast, that they are unconvincing. My conclusion will be that, despite all that Crane claims, perceptual content could after all be propositional in nature.

KEYWORDS: Crane, propositional-attitude thesis, perceptual experience, propositional content, non-propositional content, accuracy conditions.
Archimedes’ statics is considered an example of ancient Greek applied mathematics; it is even seen as the beginning of mechanics. Wilbur Knorr argued that this work, like other works by Archimedes and by other ancient Greek mathematicians, lacks references to the physical phenomena it is supposed to address. According to Knorr, this is understandable if we consider the propositions of the treatise as purely mathematical elaborations suggested by quantitative aspects of the phenomena. In this paper, we challenge Knorr’s view and address propositions of Archimedes’ statics in their relation to physical phenomena.
When is a belief justified? I consider three sorts of arguments for different accounts of justification on the spectrum from extreme internalism to extreme externalism: arguments from intuitive responses to examples; arguments from the theoretical role of the term in epistemology; and arguments from the practical, moral, and political purposes to which we wish to put the term. I focus particularly on the third sort, considering arguments from Clayton Littlejohn (2012) and Amia Srinivasan (2018) in favour of different versions of externalism. I offer counterarguments in the same vein for internalism. I conclude that we should adopt an Alstonian pluralism about the concept of justification.
Some, but not all, of the mistakes a person makes when acting in apparently necessary self-defense are reasonable: we take them not to violate the rights of the apparent aggressor. I argue that this is explained by duties grounded in agents’ entitlements to a fair distribution of the risk of suffering unjust harm. I suggest that the content of these duties is filled in by a social signaling norm, and offer some moral constraints on the form such a norm can take.
Our understanding of what exactly needs to be protected against in order to safeguard a plausible construal of our ‘freedom of thought’ is changing. And this is because the recent influx of cognitive offloading and outsourcing—and the fast-evolving technologies that enable this—generates radical new possibilities for freedom-of-thought violating thought manipulation. This paper does three main things. First, I briefly overview how recent thinking in the philosophy of mind and cognitive science recognises—contrary to traditional Cartesian ‘internalist’ assumptions—ways in which our cognitive faculties, and even our beliefs, can be materially realised, as well as stored, non-biologically and extracranially. Second, and taking brain-computer interface technologies (BCIs) and the associated possibility of ‘extended’ beliefs as a reference point, I propose and defend a sufficient condition on freedom-of-thought violating (extended) thought manipulation. On the view proposed, the right not to have one’s thoughts or opinions manipulated is violated if one is (i) caused to acquire non-autonomous propositional attitudes (acquisition manipulation) or (ii) caused to have otherwise autonomous propositional attitudes non-autonomously eradicated (eradication manipulation). The implications of this view are then illustrated through four thought experiments, which map on to four distinct ways—what I call Type 1–Type 4 manipulation—in which, and with reference to the view defended, one’s freedom of thought is plausibly violated.
Leonard Savage famously contravened his own theory when first confronting the Allais Paradox, but then convinced himself that he had made an error. We examine the formal structure of Savage’s ‘error-correcting’ reasoning in the light of (i) behavioural economists’ claims to identify the latent preferences of individuals who violate conventional rationality requirements and (ii) John Broome’s critique of arguments which presuppose that rationality requirements can be achieved through reasoning. We argue that Savage’s reasoning is not vulnerable to Broome’s critique, but does not provide support for the view that behavioural scientists can identify and counteract errors in people’s choices.
The problem of skepticism, it hardly needs to be said, is widely regarded as one of the deepest and most important problems in all of philosophy and its history. Skepticism is often personified in the shadowy figure of “the skeptic,” who denies, of some large swathe of what we take to be our ordinary knowledge, that we know it after all. The philosophical challenge – as it’s sometimes framed, though this way of setting the problem up has its critics – is to say whether there’s anything we can say to the skeptic that rationally ought to change her mind. Outside of the philosophy classroom, global skeptics – those who are skeptics about all purported knowledge, or at least all purported empirical knowledge about the external world – are rare. But there are people who describe themselves, more or less aptly, as “skeptics” about various more specific domains. Among these are self-professed “climate change skeptics” – skeptics about the reality of anthropogenic climate change.
If a philosopher had fallen into a deep sleep – Rip Van Winkle-like – twenty years ago, and had just woken up, there’s no doubt much about today’s philosophical landscape that would confuse her. One such source of likely confusion is the currently thriving debate about the “normativity of rationality”. To someone (blissfully) ignorant of recent developments, the question of whether rationality is normative may seem to receive an obvious answer: well, of course. Claims about rationality and irrationality are normative claims. What else could they be? And yet the debate about the normativity of rationality rages on. Some prominent philosophers are skeptics about the normativity of rationality, while others devote whole books to defending it. Moreover, even many of those who defend the normativity of rationality are still skeptics about the normativity of structural rationality, the kind of rationality that is distinctively concerned only with coherence between our attitudes.
Epistemic Counterparts 2: Acquaintance, files, and suitable roles
Posted on Tuesday, 19 May 2020
This is part 2 of a series on epistemic counterpart semantics. Part 1 is here. I want to defend what I called the "Quine-Kaplan model" of de re belief ascriptions. …
In a recent article in this journal, David Faraci argues that the value of fairness can plausibly be appealed to in order to vindicate the view that consensual, mutually beneficial employment relationships can be wrongfully exploitative, even if employers have no obligation to hire or otherwise benefit those who are badly off enough to be vulnerable to wage exploitation. In this article, I argue that several values provide potentially strong grounds for thinking that it is at least sometimes better, morally speaking, for employers to hire worse off people at intuitively exploitative wages than to hire better off people at intuitively fair wages. Rather than suggesting that hiring badly off people at intuitively exploitative wages is permissible, however, I suggest that this gives us reason to think that employers can be obligated to hire worse off people rather than better off people and to pay them non-exploitative wages.
Schoenfield has constructed examples of proper inaccuracy measures that value verisimilitude (in a certain sense) in spaces of worlds equipped with a particular variety of verisimilitude metric. However, Schoenfield left it as an open question whether ‘for every space of worlds, there is a proper inaccuracy measure that values verisimilitude.’ Here we answer this question in the affirmative.
According to many historical philosophical figures, knowledge must be based on infallible foundations. These foundations have been characterized in different ways; e.g., as “cognitive impressions” by the ancient Stoics, as “clear and distinct perceptions” by Descartes, and as “the given” element in experience by C. I. Lewis and other twentieth-century philosophers (Reed 2012: 585). In each case, it has been assumed that these foundations are infallible in that they preclude error on the part of the knower. To have knowledge, in other words, we must have justification that guarantees that our belief is true. This is infallibilism. It is the view that knowledge demands the highest degree of justification.
This paper trials new experimental methods for the analysis of natural language reasoning and the (re)development of critical ordinary language philosophy in the wake of J.L. Austin. Philosophical arguments and thought experiments are strongly shaped by default pragmatic inferences, including stereotypical inferences. Austin suggested that contextually inappropriate stereotypical inferences are at the root of some philosophical paradoxes and problems, and that these can be resolved by exposing those verbal fallacies. This paper builds on recent efforts to empirically document inappropriate stereotypical inferences that may drive philosophical arguments. We demonstrate that previously employed questionnaire-based output measures do not suffice to exclude relevant confounds. We then report an experiment that combines reading time measurements with plausibility ratings. The study seeks to provide evidence of inappropriate stereotypical inferences from appearance verbs that have been suggested to lie at the root of the influential ‘argument from illusion’. Our findings support a diagnostic reconstruction of this argument. They provide the missing component for proof of concept for an experimental implementation of critical ordinary language philosophy that is in line with the ambitions of current ‘evidential’ experimental philosophy.
Ethical Intuitionism was one of the dominant forces in British moral
philosophy from the early 18th century till the 1930s. It
fell into disrepute in the 1940s, but towards the end of the twentieth
century Ethical Intuitionism began to re-emerge as a respectable moral
theory. It has not regained the dominance it once enjoyed, but many
philosophers, including Robert Audi, Jonathan Dancy, David Enoch,
Michael Huemer, David McNaughton, and Russ Shafer-Landau, are now
happy to be labelled intuitionists. The most distinctive features of ethical intuitionism are its
epistemology and ontology.
Josh Greene (2007) famously argued that his cognitive-scientific results undermine deontological moral theorizing. Greene is wrong about this: at best, his research has revealed that at least some characteristically deontological moral judgments are sensitive to factors that we deem morally irrelevant. This alone is not enough to undermine those judgments. However, cognitive science could someday tell us more: it could tell us that in forming those judgments, we treat certain factors as reasons to believe as we do. If we independently deem such factors to be morally irrelevant, such a result would undermine those judgments and any moral theorizing built upon them. This paper brings charity, clarity, and epistemological sophistication to debates surrounding empirical debunking arguments in ethics.
Our aim here is to explore the prospects of a relativist response to moral debunking arguments. We begin by clarifying the relativist thesis under consideration, and we explain why relativists seem well-positioned to resist the arguments in a way that avoids the drawbacks of existing responses. We then show that appearances are deceiving. At bottom, the relativist response is no less question-begging than standard realist responses, and—when we turn our attention to the strongest formulation of the debunking argument—the virtues of relativism turn out to be vices.
This paper puts forward an account of blame combining two ideas that are usually set up against each other: that blame performs an important function, and that blame is justified by the moral reasons making people blameworthy rather than by its functionality. The paper argues that blame could not have developed in a purely instrumental form, and that its functionality itself demands that its functionality be effaced in favour of non-instrumental reasons for blame—its functionality is self-effacing. This notion is sharpened and it is shown how it offers an alternative to instrumentalist or consequentialist accounts of blame which preserves their animating insight while avoiding their weaknesses by recasting that insight in an explanatory role. This not only allows one to do better justice to the authority and autonomy of non-instrumental reasons for blame, but also reveals that autonomy to be a precondition of blame’s functionality. Unlike rival accounts, it also avoids the “alienation effect” that renders blame unstable under reflection by undercutting the authority of the moral reasons which enable it to perform its function in the first place. It instead yields a vindicatory explanation that strengthens our confidence in those moral reasons.
A common objection to both contextualism and relativism about knowledge ascriptions is that they threaten knowledge norms of assertion and action. Consequently, if there is good reason to accept knowledge norms of assertion or action, there is good reason to reject both contextualism and relativism. In this paper we argue that neither contextualism nor relativism threaten knowledge norms of assertion or action.
A distinctive approach to the theory of knowledge is described, known as anti-luck epistemology. The goal of the paper is to consider whether there are specific features of this proposal that entail that it is committed to pragmatic encroachment, such that whether one counts as having knowledge significantly depends on non-epistemic factors. In particular, the plausibility of the following idea is explored: that since pragmatic factors play an essential role when it comes to the notion of luck, then according to anti-luck epistemology they must likewise play an essential role in our understanding of knowledge as well. It is argued that once anti-luck epistemology is properly understood—where this means, in turn, having the right account of luck in play—then this putative entailment to pragmatic encroachment does not go through.