Crispin Wright maintains that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this fact doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without endangering his epistemology of perception.
Symposium on Del Pinal and Spaulding, “Conceptual Centrality and Implicit Bias”. Robert Briscoe, April 23, 2018. I’m very glad to announce our latest Mind & Language symposium on Guillermo Del Pinal and Shannon Spaulding’s “Conceptual Centrality and Implicit Bias” from the journal’s February 2018 issue. …
In the posthumously published ‘Truth and Probability’ (1926), Ramsey sets out an influential account of the nature, measurement, and norms of partial belief. The essay is a foundational work on subjectivist interpretations of probability, according to which probabilities can be interpreted as rational degrees of belief (see entry on Interpretations of Probability). Many of its key ideas and arguments have since featured in other foundational works within the subjectivist tradition (e.g., Savage 1954, Jeffrey 1965). Ramsey’s central claim in ‘Truth and Probability’ is that the laws of probability supply us with a ‘logic of partial belief’. That is, the laws specify what would need to be true of any consistent set of partial beliefs, in a manner analogous to how the laws of classical logic might be taken to generate necessary conditions on any consistent set of full beliefs. His case for this is based on a novel account of what partial beliefs are and how they can be measured.
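Ramsey’s idea that the laws of probability function as consistency constraints on partial beliefs can be given a minimal sketch (purely illustrative, and not Ramsey’s own betting-based method of measurement): a set of degrees of belief in propositions A and B is checked against a few probability axioms.

```python
# Purely illustrative sketch (not Ramsey's own betting-based measurement):
# the laws of probability treated as consistency constraints on a set of
# partial beliefs in propositions A and B.

def coherent(bel_a, bel_not_a, bel_b, bel_a_and_b, bel_a_or_b):
    """Check a few probability-law constraints on degrees of belief."""
    tol = 1e-9
    return (
        all(0.0 <= b <= 1.0 for b in
            (bel_a, bel_not_a, bel_b, bel_a_and_b, bel_a_or_b))
        and abs(bel_a + bel_not_a - 1.0) < tol             # P(A) + P(~A) = 1
        and abs(bel_a_or_b - (bel_a + bel_b - bel_a_and_b)) < tol  # additivity
    )

# A consistent set of partial beliefs...
print(coherent(0.6, 0.4, 0.3, 0.1, 0.8))   # True
# ...and an inconsistent one: A believed to degree 0.6, ~A to degree 0.7.
print(coherent(0.6, 0.7, 0.3, 0.1, 0.8))   # False
```

On Ramsey’s view, an agent whose partial beliefs fail such checks is not merely mistaken about the world but inconsistent, in a manner analogous to an agent with contradictory full beliefs.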
A good surgeon knows how to perform a surgery; a good architect knows how to design a house. We value their know-how. We ordinarily look for it. What makes it so valuable? A natural response is that know-how is valuable because it explains success. A surgeon’s know-how explains her success at performing a surgery. And an architect’s know-how explains his success at designing houses that stand up. We value know-how because of its special explanatory link to success. But in virtue of what is know-how explanatorily linked to success? This essay defends the thesis that know-how’s special link to success is to be explained at least in part in terms of its being, or involving, a doxastic attitude that is epistemically akin to propositional knowledge. If its explanatory link to success is what makes know-how valuable, an upshot of my argument is that the value of know-how is due, to a considerable extent, to its being, or involving, propositional knowledge.
A popular account of luck, with a firm basis in common sense, holds that a necessary condition for an event to be lucky is that it was suitably improbable. It has recently been proposed that this improbability condition is best understood in epistemic terms. Two different versions of this proposal have been advanced.
In this paper I consider an argument for the possibility of intending at will, and its relationship to an argument about the possibility of believing at will. I argue that although we have good reason to think we sometimes intend at will, we lack good reason to think this in the case of believing. Instead of believing at will, agents like us often suppose at will.
This paper defends a challenge, inspired by arguments drawn from contemporary ordinary language philosophy and grounded in experimental data, to certain forms of standard philosophical practice. There has been a resurgence of philosophers who describe themselves as practicing “ordinary language philosophy”. The resurgence can be divided into constructive and critical approaches. The critical approach to neo-ordinary language philosophy has been forcefully developed by Baz (2012a,b, 2014, 2015, 2016, forthcoming), who attempts to show that a substantial chunk of contemporary philosophy is fundamentally misguided. I describe Baz’s project and argue that while there is reason to be skeptical of its radical conclusion, it conveys an important truth about discontinuities between ordinary uses of philosophically significant expressions (“know”, e.g.) and their use in philosophical thought experiments. I discuss some evidence from experimental psychology and behavioral economics indicating that there is a risk of overlooking important aspects of meaning or misinterpreting experimental results by focusing only on abstract experimental scenarios, rather than employing more diverse and more ecologically valid experimental designs. I conclude by presenting a revised version of the critical argument from ordinary language.
Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
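The indeterminacy the modest objection alleges can be made vivid with a toy decision matrix (the hypotheses, payoffs, and priors below are hypothetical illustrations, not the paper’s own figures): once a god who rewards non-belief is a live possibility, expected utility no longer singles out belief.

```python
# Toy decision matrix for the Wager. The hypotheses, payoffs, and priors
# are hypothetical illustrations, not the paper's own figures.
INF = float("inf")

# Utility of each act under each hypothesis about salvation requirements.
outcomes = {
    "believe":     {"no_god": -1, "rewards_belief": INF, "rewards_nonbelief": 0},
    "not_believe": {"no_god":  0, "rewards_belief": 0,   "rewards_nonbelief": INF},
}

def expected_utility(act, priors):
    return sum(priors[h] * outcomes[act][h] for h in priors)

# Any priors that leave both kinds of god open give both acts infinite
# expected utility, so the Wager's comparison is indeterminate.
priors = {"no_god": 0.8, "rewards_belief": 0.1, "rewards_nonbelief": 0.1}
print(expected_utility("believe", priors))      # inf
print(expected_utility("not_believe", priors))  # inf
```

Only if one salvation requirement could be singled out as more probable than its rivals would the comparison become determinate, and that is just what the context of uncertainty rules out.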
Actualists hold that contrary-to-duty scenarios give rise to deontic dilemmas and provide counterexamples to the transmission principle, according to which we ought to take the necessary means to actions we ought to perform. In an earlier article, I argued, contrary to actualism, that the notion of ‘ought’ that figures in conclusions of practical deliberation does not allow for deontic dilemmas and validates the transmission principle. Here I defend these claims, together with my possibilist account of contrary-to-duty scenarios, against Stephen White’s recent criticism.
This paper explores some of the ways in which agentive, deontic, and epistemic concepts combine to yield ought statements—or simply, oughts—of different characters. Consider an example. Suppose I place a coin on the table, either heads up or tails up, though the coin is covered and you do not know which. And suppose you are then asked to bet whether the coin is heads up or tails up, with $10 to win if you bet correctly. If the coin is heads up but you bet tails, there is a sense in which we would naturally say that you ought to have made the other choice—at least, things would have turned out better for you if you had. But an ought statement like this does not involve any suggestion that you should be criticized for your actual choice. Nobody could blame you, in this situation, for betting incorrectly. By contrast, imagine that the coin is placed in such a way that you can see that it is heads up, but you bet tails anyway. Again we would say that you ought to have done otherwise, but this time it seems that you could legitimately be criticized for your choice.
This paper presents a novel challenge to epistemic internalism, the view that epistemic justification supervenes on facts to which the believing agent has introspective access. The challenge rests on a new set of cases which feature subjects forming beliefs under conditions of ‘bad ideology’ – that is, conditions in which pervasively false beliefs sustain and are sustained by systems of social oppression. In such cases, I suggest, the externalistic view that justification is a matter of structural, worldly relations, rather than the internalistic view that justification is a matter of how things seem from the agent’s individual perspective, becomes the more intuitively attractive theory. But these ‘bad ideology’ cases do not merely yield intuitive verdicts that favour externalism over internalism. These cases are moreover analogous to precisely those canonical cases widely taken to be counterexamples to externalism: cases featuring brains-in-vats, clairvoyants, and dogmatists. That is, my ‘bad ideology’ cases are, in all relevant respects, just like cases that are thought to count against externalism – except that they intuitively favour externalism. This, I argue, is a serious worry for internalism, and bears interestingly on the debate over whether externalism is a genuinely ‘normative’ epistemology.
Scientific research is almost always conducted by communities of scientists of varying size and complexity. Such communities are effective, in part, because they divide their cognitive labor: not every scientist works on the same project. Scientists manage to do this without a central authority allocating them to different projects. Thanks largely to the pioneering studies of Philip Kitcher and Michael Strevens, understanding this self-organization has become an important area of research in the philosophy of science.
Taking literally the concept of emotional truth requires breaking the monopoly on truth of belief-like states. To this end, I look to perceptions for a model of non-propositional states that might be true or false, and to desires for a model of propositional attitudes the norm of which is other than the semantic satisfaction of their propositional object. Those models inspire a conception of generic truth, which can admit of degrees for analogue representations such as emotions; belief-like states, by contrast, are digital representations. I argue that the gravest problem—objectivity—is not insurmountable.
In this paper, I argue that the “positive argument” for Constructive Empiricism (CE), according to which CE “makes better sense of science, and of scientific activity, than realism does” (van Fraassen 1980, 73), is an Inference to the Best Explanation (IBE). But constructive empiricists are critical of IBE, and thus they have to be critical of their own “positive argument” for CE. If my argument is sound, then constructive empiricists are in the awkward position of having to reject their own “positive argument” for CE by their own lights.
There are four notions in this thesis that deserve close examination: epistemic status, opinion, dependence, and moral features. The first four sections of this paper examine each of these notions in turn. Along the way, I raise some objections to existing accounts of moral encroachment. For instance, many theories fail to give sufficient attention to moral encroachment on credences. Also, many theories focus on moral features that do not have the correct structure to support standard analogies between pragmatic and moral encroachment. The fifth and final section of the paper addresses several objections and frequently asked questions.
Empirical research into moral decision-making is often taken to have normative implications. For instance, in his recent book, Joshua Greene (2013) relies on empirical findings to establish utilitarianism as a superior normative ethical theory. Kantian ethics, and deontological ethics more generally, is a rival view that Greene attacks. At the heart of Greene’s argument against deontology is the claim that deontological moral judgments are the product of certain emotions and not of reason.
Change and local spatial variation are missing in Hamiltonian General Relativity according to the most common definition of observables as having 0 Poisson bracket with all first-class constraints. But other definitions of observables have been proposed. In pursuit of Hamiltonian-Lagrangian equivalence, Pons, Salisbury and Sundermeyer use the Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints. Kuchař waived the 0 Poisson bracket condition for the Hamiltonian constraint to achieve changing observables. A systematic combination of the two reforms might use the gauge generator but permit non-zero Lie derivative Poisson brackets for the external gauge symmetry of General Relativity.
It’s no secret that there are many competing views on the semantics of conditionals. One of the tools of the trade is that of any experimental scientist: put the object of study in various environments and see what happens.
I predicted that the degree of agreement behind the ASA’s “6 principles” on p-values, partial as it was, was unlikely to be replicated when it came to most of the “other approaches” with which some would supplement or replace significance tests – notably Bayesian updating, Bayes factors, or likelihood ratios (confidence intervals are dual to hypothesis tests). …
The United Nations Population Division’s latest report predicts a global population of over 11 billion by 2100. That is the ‘medium’ projection, based on standard demographic transition theory. There is also a ‘low’ projection, in which the total fertility rate is lower by half a child per woman; here, population peaks at 8.7 billion mid-century, returning to 7.3 billion by 2100.
This chapter focusses on the question of optimal human population size: how many people it is best to have alive on Earth at a given time. The exercise is one of optimisation subject to constraints. Population axiology is one highly relevant input to the exercise, as it supplies the objective: it tells us which logically possible states of affairs – in the sense of assignments of well-being levels to persons – are better than which others. But not all logically possible states of affairs are achievable: we cannot in practice have (say) a population of a quadrillion humans, all living lives of untold bliss, on Earth simultaneously. The real world supplies constraints.
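The shape of the exercise can be sketched in a toy model (the resource budget and well-being function here are hypothetical, chosen only to illustrate optimisation under a constraint, not the chapter’s own assumptions):

```python
import math

# Toy optimisation-under-constraints sketch. The resource budget and
# well-being function are hypothetical: a fixed budget is shared equally,
# and each person's well-being is logarithmic in their per-capita share.

RESOURCES = 1000.0  # total resource budget (hypothetical units)

def total_wellbeing(n):
    """Total well-being of a population of n people."""
    return n * math.log(RESOURCES / n)

# Search the feasible population sizes for the optimum; analytically it
# lies at RESOURCES / e, since d/dn [n ln(R/n)] = ln(R/n) - 1.
best = max(range(1, 1001), key=total_wellbeing)
print(best)  # near 1000 / e, i.e. about 368
```

A different axiology (a different objective function) or a different constraint set would of course select a different optimum; the sketch only illustrates the structure of the exercise.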
Consider this case: Tonya plans to do Y, but Irving wants her to do X instead. Irving has tried unsuccessfully to provide Tonya with reasons for doing X rather than Y. If Irving is unwilling to resort to coercion or force, he might deploy any of the following tactics to try to influence Tonya’s choice. For example, Irving might … Charm Tonya into wanting to please Irving by doing X. Exaggerate the advantages of doing X and the disadvantages of doing Y, and/or understate the disadvantages of doing X and the advantages of doing Y.
Some people end up worse off than others partly because of their bad luck. For instance, some die young due to a genetic disease, whereas others live long lives. Are such differential luck-induced inequalities unjust? Many are inclined to answer this question affirmatively. To understand this inclination, we need a clear account of what luck involves. On some accounts, luck nullifies responsibility. On others, it nullifies desert. It is often said that justice requires luck to be ‘neutralized’. However, it is contested whether a distributive pattern that eliminates the influence of luck can be described.
I've written quite a lot on this blog recently about how we should aggregate the credences or subjective probabilities of a group of individuals to give their collective credences (here, here, here). In some of those posts, this one in particular, I asked how we should combine credences if we wish to use them to make a group decision. …
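One standard aggregation rule discussed in this literature is linear pooling, on which the group’s credence in each proposition is a weighted average of the individuals’ credences. A minimal sketch (with hypothetical numbers):

```python
# Minimal sketch of linear pooling, one standard rule for aggregating
# individual credences into a group credence.

def linear_pool(credences, weights):
    """credences: one dict per individual, proposition -> degree of belief;
    weights: non-negative, one per individual, summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return {prop: sum(w * c[prop] for w, c in zip(weights, credences))
            for prop in credences[0]}

# Two individuals with equal weight (hypothetical numbers).
group = linear_pool(
    [{"rain": 0.8, "frost": 0.2}, {"rain": 0.4, "frost": 0.6}],
    [0.5, 0.5],
)
print(group)  # rain pooled to 0.6, frost to 0.4 (up to float rounding)
```

Other rules discussed in this literature, such as geometric pooling (renormalised products of powers of the individual credences), can deliver different group credences from the same inputs, which is part of what makes the choice of rule substantive.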
This article considers the popular thesis that a more proportional relationship between a cause and its effect yields a more abstract causal explanation of that effect, which in turn produces a deeper explanation. This thesis is taken to have important implications for choosing the optimal granularity of explanation for a given explanandum. In this article, I argue that this thesis is not generally true of probabilistic causal relationships. In light of this finding, I propose a pragmatic, interest-relative measure of explanatory depth. This measure uses a decision-theoretic model of information pricing to determine the optimal granularity of explanation for a given explanandum, agent, and decision problem.
Since the beginning of the millennium, Richard Joyce has made several influential contributions to contemporary metaethics. He has revived moral error theory, championed evolutionary debunking arguments, and developed and defended a position known as ‘moral fictionalism’. The twelve papers in this volume are organized around these themes—error theory, evolution and debunking, and projectivism and fictionalism—with four papers in each of the three categories. All papers but one are previously published. I had read nearly all of them before and I have used many of them in my own work. Needless to say, then, I think highly of Joyce’s work and I benefited from engaging with the material anew. The volume also contains a newly written introductory chapter, which I found helpful.
In what has become one of the most famous passages in philosophy of mind, Gareth Evans wrote: …in making a self-ascription of a belief, one’s eyes are, so to speak, or occasionally literally, directed outward—upon the world. If someone asks me ‘Do you think there is going to be a third world war?’ I must attend, in answering him, to precisely the same outward phenomena as I would attend to if I were answering the question ‘Will there be a third world war?’ (1982, 225) Evans went on to say that this observation tells against theories of self-knowledge or introspection that postulate a sort of inner sense (‘Cartesian’ theories, he called them) and instead points to something different. But how should we understand Evans’s comments, and what sort of anti-Cartesian theory do they point to?
An inductive logic is a logic of evidential support. In a deductive logic, the premises of a valid deductive argument logically entail the conclusion, where logical entailment means that every logically possible state of affairs that makes the premises true must make the conclusion true as well. Thus, the premises of a valid deductive argument provide total support for the conclusion. An inductive logic extends this idea to weaker arguments. In a good inductive argument, the truth of the premises provides some degree of support for the truth of the conclusion, where this degree-of-support might be measured via some numerical scale.
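One way such a numerical scale might work (an illustration of one common choice, not the only option) is to measure support as the amount by which the premise raises the probability of the conclusion, computed from probabilities over the logically possible states of affairs; the numbers below are hypothetical.

```python
# Illustrative sketch of one numerical scale for degree of support:
# how much the premise P raises the probability of the conclusion C.
# The probabilities over logically possible states are hypothetical.

# Each state: (premise true?, conclusion true?, probability of state)
states = [
    (True,  True,  0.45),
    (True,  False, 0.05),
    (False, True,  0.20),
    (False, False, 0.30),
]

def pr(event):
    """Probability of the set of states satisfying `event`."""
    return sum(p for prem, conc, p in states if event(prem, conc))

pr_c = pr(lambda prem, conc: conc)                          # Pr(C) ≈ 0.65
pr_p = pr(lambda prem, conc: prem)                          # Pr(P) ≈ 0.50
pr_c_given_p = pr(lambda prem, conc: prem and conc) / pr_p  # Pr(C|P) ≈ 0.9
degree_of_support = pr_c_given_p - pr_c                     # ≈ 0.25
```

Deductive entailment is the limiting case on this scale: if no state makes the premises true and the conclusion false, Pr(C | P) = 1 and the premises provide total support.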
[Editor's Note: The following new entry by Leah Henderson replaces the entry on this topic by the previous author.] We generally think that the observations we make are able to justify some expectations or predictions about observations we have not yet made, as well as general claims that go beyond the observed. For example, the observation that bread of a certain appearance has thus far been nourishing seems to justify the expectation that the next similar piece of bread I eat will also be nourishing, as well as the claim that bread of this sort is generally nourishing. Such inferences from the observed to the unobserved, or to general laws, are known as inductive inferences.
According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that (i) utilizes a new class of Bayesian learning methods that are better suited to modelling dynamic and conditional inferences than standard Bayesian conditionalization, (ii) is able to characterise the special value of logically valid argument schemes in uncertain reasoning contexts, (iii) greatly extends the range of inferences and argumentative phenomena that can be adequately described in a Bayesian framework, and (iv) undermines some influential theoretical motivations for dual function models of human cognition. We conclude that the probabilistic norms given by the Bayesian approach to rationality are not necessarily at odds with the norms given by classical logic. Rather, the Bayesian theory of argumentation can be seen as justifying and enriching the argumentative norms of classical logic.
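For contrast with the generalised learning methods the authors introduce, the baseline they extend, standard Bayesian conditionalization, can be sketched as follows (the numbers are hypothetical illustrations):

```python
# Minimal sketch of standard Bayesian conditionalization, the baseline
# updating rule that the generalised learning methods extend.

def conditionalize(prior_h, lik_e_given_h, lik_e_given_not_h):
    """New credence in H after learning E: Pr(H|E) by Bayes' theorem."""
    pr_e = lik_e_given_h * prior_h + lik_e_given_not_h * (1.0 - prior_h)
    return lik_e_given_h * prior_h / pr_e

# Hypothetical numbers: a claim with prior credence 0.5, and evidence
# four times likelier if the claim is true than if it is false.
posterior = conditionalize(0.5, 0.8, 0.2)
print(posterior)  # 0.8
```

One standard motivation for generalising this rule is that it presupposes the evidence is learned with certainty; dynamic and conditional inferences of the kind the paper targets are cases where that presupposition is strained.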