We often find ourselves in disagreement with others. You may think
nuclear energy is so volatile that no nuclear energy plants should be
built anytime soon. But you are aware that there are many people who
disagree with you on that very question. You disagree with your sister
regarding the location of the piano in your childhood home, with you
thinking it was in the primary living area and her thinking it was in
the small den. You and many others believe Jesus Christ rose from the
dead; millions of others disagree. It seems that awareness of
disagreement can, at least in many cases, supply one with a powerful
reason to think that one’s belief is mistaken.
Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as they initially seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett’s work on backward causation and Lewis’s on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis’s own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
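To make the divergence concrete, here is a toy expected-value calculation for the standard case, before any revision of the causal facts. The 0.9 predictor accuracy is an illustrative assumption, not a figure from the abstract; only the $1,000 and $1m payoffs come from the text.

```python
ACC = 0.9          # assumed predictor accuracy, for illustration only
M, K = 1_000_000, 1_000

def edt_value(one_box: bool) -> float:
    """Expected payoff, conditioning on the chosen act (evidential)."""
    if one_box:
        # The Predictor likely foresaw one-boxing and filled the opaque box.
        return ACC * M + (1 - ACC) * 0
    # The Predictor likely foresaw two-boxing and left the opaque box empty.
    return ACC * K + (1 - ACC) * (M + K)

def cdt_value(one_box: bool, p_full: float) -> float:
    """Expected payoff with the box contents held causally fixed at p_full."""
    base = p_full * M
    return base if one_box else base + K

# EDT: one-boxing dominates in expectation.
assert edt_value(True) > edt_value(False)
# Standard CDT: two-boxing dominates for any fixed probability of a full box.
for p in (0.0, 0.5, 1.0):
    assert cdt_value(False, p) > cdt_value(True, p)
```

If declining the $1,000 instead *causes* the box to be full, `p_full` becomes a function of the act, and the CDT calculation collapses into the EDT one, which is the paper's starting observation.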
This continues my previous post: “Can’t take the fiducial out of Fisher…” in recognition of Fisher’s birthday, February 17. I supply a few more intriguing articles you may find enlightening to read and/or reread on a Saturday night.
Move up 20 years to the famous 1955/56 exchange between Fisher and Neyman. …
When scientists seek further confirmation of their results, they often attempt to duplicate the results using diverse means. To the extent that they are successful in doing so, their results are said to be ‘robust’. This article investigates the logic of such ‘robustness analysis’ (RA). The most important and challenging question an account of RA can answer is what sense of evidential diversity is involved in RAs. I argue that prevailing formal explications of such diversity are unsatisfactory. I propose a unified, explanatory account of diversity in RAs. The resulting account is, I argue, truer to actual cases of RA in science; moreover, this account affords us a helpful new foothold on the logic undergirding RAs.
R.A. Fisher: February 17, 1890 – July 29, 1962
Continuing with posts in recognition of R.A. Fisher’s birthday, I post one from a couple of years ago on a topic that had previously not been discussed on this blog: Fisher’s fiducial probability. …
(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)
Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. …
(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)
We’ve all been there. You’re arguing with someone – about politics, or a policy at work, or about whose turn it is to do the dishes – and they keep finding all kinds of self-serving justifications for their view. …
We all rely on this basic assumption when we try to interpret each other’s actions. For instance, if Suzie frequently asks for chocolate ice cream, we infer that she likes chocolate ice cream. Why? Because we assume she is choosing the correct way of getting what she wants. It could be that she hates chocolate ice cream, but she’s irrational, so she acts to get the things she hates; but we assume this isn’t the case. Notice that without this sort of assumption, we would have no way of identifying each other’s desires and beliefs.
In the following, I explain the “Twin Paradox”, which is supposed to be a paradoxical consequence of the Special Theory of Relativity (STR). I give the correct resolution of the “paradox,” explaining why STR is not inconsistent as it appears at first glance. I also debunk two common, incorrect responses to the paradox. This should help the reader to understand Special Relativity and to see how the theory is coherent.
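The quantitative core of the resolution is the time-dilation factor γ = 1/√(1 − v²/c²): the traveling twin's proper time is the stay-at-home interval divided by γ. A minimal numeric sketch, idealizing the acceleration phases as instantaneous and using an assumed speed of 0.8c:

```python
import math

def traveler_proper_time(stay_home_time: float, v_over_c: float) -> float:
    """Proper time elapsed for the traveling twin, given the elapsed time
    in the stay-at-home frame and the traveler's speed as a fraction of c.
    Turnaround is idealized as instantaneous."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return stay_home_time / gamma

# Illustrative numbers: a 10-year round trip (Earth frame) at 0.8c.
earth_years = 10.0
traveler_years = traveler_proper_time(earth_years, 0.8)
assert abs(traveler_years - 6.0) < 1e-9   # gamma = 5/3, so 10 / (5/3) = 6
```

The asymmetry between the twins is real (only the traveler changes inertial frames), which is why the two proper times genuinely differ and no contradiction arises.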
As part of the week of recognizing R.A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012/2017. The comments from 2017 lead to a troubling issue that I will bring up in the comments today. …
There is a vast literature that seeks to uncover features underlying moral judgment by eliciting reactions to hypothetical scenarios such as trolley problems. These thought experiments assume that participants accept the outcomes stipulated in the scenarios. Across seven studies (N = 968), we demonstrate that intuition overrides stipulated outcomes even when participants are explicitly told that an action will result in a particular outcome. Participants instead substitute their own estimates of the probability of outcomes for stipulated outcomes, and these probability estimates in turn influence moral judgments. Our findings demonstrate that intuitive likelihoods are one critical factor in moral judgment, one that is not suspended even in moral dilemmas that explicitly stipulate outcomes. Features thought to underlie moral reasoning, such as intention, may operate, in part, by affecting the intuitive likelihood of outcomes, and, problematically, moral differences between scenarios may be confounded with non-moral intuitive probabilities.
It is often said that ‘what it is like’-knowledge cannot be acquired by consulting testimony or reading books [Lewis 1998; Paul 2014; 2015a]. However, people also routinely consult books like What It Is Like to Go to War [Marlantes 2014], and countless ‘what it is like’ articles and YouTube videos, in the apparent hope of gaining knowledge about what it is like to have experiences they have not had themselves. This article examines this puzzle and tries to solve it by appealing to recent work on knowing-wh ascriptions. In closing I indicate the wider significance of these ideas by showing how they can help us to evaluate prominent arguments by Paul [2014; 2015a] concerning transformative experiences.
In ‘Freedom and Resentment’ P. F. Strawson argues that reactive attitudes like resentment and indignation cannot be eliminated altogether, because doing so would involve exiting interpersonal relationships altogether. I describe an alternative to resentment: a form of moral sadness about wrongdoing that, I argue, preserves our participation in interpersonal relationships. Substituting this moral sadness for resentment and indignation would amount to a deep and far-reaching change in the way we relate to each other – while keeping in place the interpersonal relationships, which, Strawson rightfully believes, cannot be eliminated.
The debate about the nature of knowledge-how is standardly thought to be divided between Intellectualist views, which take knowledge-how to be a kind of propositional knowledge, and Anti-Intellectualist views which take knowledge-how to be a kind of ability. In this paper, I explore a compromise position—the Interrogative Capacity view—which claims that knowing how to do something is a certain kind of ability to generate answers to the question of how to do it. This view combines the Intellectualist thesis that knowledge-how is a relation to a set of propositions with the Anti-Intellectualist thesis that knowledge-how is a kind of ability. I argue that this view combines the positive features of both Intellectualism and Anti-Intellectualism.
Comparativism is the position that the fundamental doxastic state consists in comparative beliefs (e.g., believing p to be more likely than q), with partial beliefs (e.g., believing p to degree x) being grounded in and explained by patterns amongst comparative beliefs that exist under special conditions. In this paper, I develop a version of comparativism that originates with a suggestion made by Frank Ramsey in his ‘Probability and Partial Belief’ (1929). By means of a representation theorem, I show how this ‘Ramseyan comparativism’ can be used to weaken the (unrealistically strong) conditions required for probabilistic coherence that comparativists usually rely on, while still preserving enough structure to let us retain the usual comparativists’ account of quantitative doxastic comparisons.
A number of naturalistic philosophers of mind endorse a realist attitude towards the results of Bayesian cognitive science. This realist attitude is currently unwarranted, however. It is not obvious that Bayesian models possess special epistemic virtues over alternative models of mental phenomena involving uncertainty. In particular, the Bayesian approach in cognitive science is not more simple, unifying and rational than alternative approaches; nor is it obvious that the Bayesian approach is more empirically adequate than alternatives. It is at least premature, then, to assert that mental phenomena involving uncertainty are best explained within the Bayesian approach. Continuing to praise Bayes exclusively would be dangerous, as it risks monopolizing attention and leading to the neglect of different but promising formal approaches. Naturalistic philosophers of mind would be wise instead to endorse an agnostic, instrumentalist attitude towards Bayesian cognitive science.
People often talk about the synchronic Dutch Book argument for Probabilism and the diachronic Dutch Strategy argument for Conditionalization. But the synchronic Dutch Book argument for the Principal Principle is mentioned less. …
[The following is a guest post by Bob Lockie. — JS]
“He who says that all things happen of necessity can hardly find fault with one who denies that all happens by necessity; for on his own theory this very argument is voiced by necessity” (Epicurus 1964: XL).
Lockie, Robert. …
This essay is an opinionated exploration of the constraints that modal discourse imposes on the theory of assertion. Primary focus is on the question whether modal discourse challenges the traditional view that all assertions have propositional content. This question is tackled largely with reference to discourse involving epistemic modals, although connections with other flavors of modality are noted along the way.
There is an emerging skepticism about the existence of testimonial knowledge-how (Hawley 2010, Poston 2016, Carter and Pritchard 2015a). This is unsurprising, since a number of influential approaches to knowledge-how struggle to accommodate testimonial knowledge-how. Nonetheless, this skepticism is misguided. This paper establishes that there are cases of easy testimonial knowledge-how. It is structured as follows: First, a case is presented in which an agent acquires knowledge-how simply by accepting a speaker’s testimony. Second, it is argued that this knowledge-how is genuinely testimonial. Next, Poston’s (2016) arguments against easy testimonial knowledge-how are considered and rejected. The implications of the argument differ for intellectualists and anti-intellectualists about knowledge-how. The intellectualist must reject widespread assumptions about the communicative preconditions for the acquisition of testimonial knowledge. The anti-intellectualist must find a way of accommodating the dependence of knowledge-how on speaker reliability. It is not clear how this can be done.
Many of our mental states such as beliefs and desires are
intentional mental states, or mental states with content. Externalism with regard to mental content says that in order
to have certain types of intentional mental states (e.g. beliefs), it
is necessary to be related to the environment in the right way. Internalism (or individualism) denies this, and it
affirms that having those intentional mental states depends solely on
our intrinsic properties. This debate has important consequences with
regard to philosophical and empirical theories of the mind, and the
role of social institutions and the physical environment in
constituting the mind.
Three arguments against universally regular probabilities have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
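The symmetry premise at issue can be illustrated with a toy uniform spinner (my example, not one of the three arguments themselves): rotation invariance forces equal-width arcs to share a probability, and a single point, as the limit of shrinking arcs, is forced down to probability zero, which is exactly where regularity fails.

```python
# Toy model of the symmetry premise: under a uniform ("rotation-invariant")
# spinner on [0, 1), the probability of an arc depends only on its width.

def arc_prob(start: float, width: float) -> float:
    """Probability of landing in [start, start + width) under the uniform
    distribution; by rotation invariance it cannot depend on `start`."""
    return width

# Perfectly similar events (equal-width arcs) share one probability...
assert arc_prob(0.0, 0.25) == arc_prob(0.4, 0.25)

# ...and a single point, as the limit of shrinking arcs, gets probability 0,
# so no rotation-invariant distribution can be regular.
shrinking = [arc_prob(0.5, 10.0 ** -k) for k in range(1, 8)]
assert all(p > q for p, q in zip(shrinking, shrinking[1:]))
assert shrinking[-1] < 1e-6
```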
A heated debate surrounds the significance of reproducibility as an indicator for research quality and reliability, with many commentators linking a “crisis of reproducibility” to the rise of fraudulent, careless and unreliable practices of knowledge production. Through the analysis of discourse and practices across research fields, I point out that reproducibility is not only interpreted in different ways, but also serves a variety of epistemic functions depending on the research at hand. Given such variation, I argue that the uncritical pursuit of reproducibility as an overarching epistemic value is misleading and potentially damaging to scientific advancement. Requirements for reproducibility, however they are interpreted, are one of many available means to secure reliable research outcomes. Furthermore, there are cases where the focus on enhancing reproducibility turns out not to foster high-quality research. Scientific communities and Open Science advocates should learn from inferential reasoning from irreproducible data, and promote incentives for all researchers to explicitly and publicly discuss (1) their methodological commitments, (2) the ways in which they learn from mistakes and problems in everyday practice, and (3) the strategies they use to choose which research component of any project needs to be preserved in the long term, and how.
What does it mean for something, like the fact that rain is forecast, to be a normative reason for an action like taking your umbrella, or attitude like believing it will rain? According to a widely and perennially popular view, concepts of “reasons” are all concepts of some kind of explanation. But explanations of what? On one way of developing this idea, the concept of a normative reason for an agent S to perform an action A is that of an explanation why it would be good (in some way, to some degree) for S to do A. This Reasons as Explanations of Goodness hypothesis (REG) has numerous virtues, and has had a number of champions. But like every other extant theory of normative reasons it faces some significant challenges, which prompt many more philosophers to be skeptical that it can correctly account for (all) our reasons. This paper demonstrates how five different puzzles about normative reasons can be solved by careful attention to the concept of goodness, and in particular observing the ways in which it—and consequently, talk about reasons—is sensitive to context. Rather than asking simply whether or not certain facts are reasons for S to do A, we need to explore the contexts in which it is and is not correct to describe a certain fact as “a reason” for S to do A.
Do we have a duty to explore space? In part one, I looked at Schwartz’s positive case for the existence of such a duty. That positive case rested on three main arguments. The first argument claimed that we have a duty to explore space in order to access scarce resources. …
This paper argues that the controversy over GM crops is not best understood in terms of the supposed bias, dishonesty, irrationality, or ignorance on the part of proponents or critics, but rather in terms of differences in values. To do this, the paper draws upon and extends recent work on the role of values and interests in science, focusing particularly on inductive risk and epistemic risk, and it shows how the GMO debate can help to further our understanding of the various epistemic risks that are present in science and how these risks might be managed.
This paper distinguishes two reasoning strategies for using a model as a “null”. Null modeling evaluates whether a process is causally responsible for a pattern by testing it against a null model. Baseline modeling measures the relative significance of various processes responsible for a pattern by detecting deviations from a baseline model. Scientists sometimes conflate these strategies because of their formal similarities, but they must distinguish them lest they privilege null models as accepted until disproved. I illustrate this problem with the neutral theory of ecology and use this as a case study to draw general lessons. First, scientists cannot draw certain kinds of causal conclusions using null modeling. Second, scientists can draw these kinds of causal conclusions using baseline modeling, but this requires more evidence than does null modeling.
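The null-modeling strategy can be sketched with a deliberately simple Monte Carlo test; the fair-coin null and the heads-count statistic are invented for illustration, not drawn from the ecology case study.

```python
import random

def null_statistic(rng: random.Random, n: int = 100) -> int:
    """Statistic (count of heads) simulated under a fair-coin null model."""
    return sum(rng.random() < 0.5 for _ in range(n))

def p_value(observed: int, trials: int = 10_000, seed: int = 0) -> float:
    """Fraction of null-model simulations at least as extreme as observed."""
    rng = random.Random(seed)
    null_draws = [null_statistic(rng) for _ in range(trials)]
    return sum(d >= observed for d in null_draws) / trials

# A strongly skewed observation is rejected under the null...
assert p_value(70) < 0.01
# ...but a modest one is not, and here lies the asymmetry the paper warns
# about: failing to reject the null licenses no positive causal conclusion
# about which process actually generated the data.
assert p_value(52) > 0.05
```

Baseline modeling, by contrast, would treat the model's predictions as a reference and interpret the *size and direction* of deviations, rather than a single accept/reject verdict.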
Bayesian epistemology proposes norms on degrees of belief that are supposed to constitute rational ideals. The most widely endorsed norm is probabilism, which requires ideally rational agents to have degrees of belief that can be represented by a probability function. Unfortunately, probabilistic coherence is unattainable for human thinkers, because fully complying with the norm is too difficult for us. In response, Bayesians suggest that for limited thinkers, probabilistic coherence is an ideal to be approximated. We are supposedly better off the more closely our credences approximate the ideal. However, it is rarely discussed exactly in what sense credences are better if they approximate coherence more closely. In this article, we first clarify the way in which approximating coherence needs to be beneficial in order for probabilism to constitute an ideal in the intended sense. In Section 3, we present existing results from the literature that support the idea that probabilism is an ideal that should be approximated: On some measures of incoherence, being less incoherent reduces vulnerability to Dutch books. Furthermore, given certain other incoherence measures, some ways of being less incoherent have guaranteed benefits for the accuracy of one’s credences. The problem is that these known results rely on different ways of measuring closeness to coherence.
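As a concrete toy example (an illustrative measure of my own, not one of the measures surveyed in the article), take an agent with credences in p and its negation; the coherence gap and the projection onto the nearest coherent credence pair can be computed directly:

```python
def incoherence(c_p: float, c_not_p: float) -> float:
    """Size of the coherence gap: credences in p and not-p should sum to 1.
    One simple incoherence measure among many possible ones."""
    return abs(c_p + c_not_p - 1.0)

def nearest_coherent(c_p: float, c_not_p: float) -> tuple[float, float]:
    """Euclidean projection onto the coherent line c_p + c_not_p = 1."""
    excess = (c_p + c_not_p - 1.0) / 2.0
    return c_p - excess, c_not_p - excess

# "Approximating the ideal": the milder incoherence sits strictly closer
# to some coherent credence function than the wilder one does.
assert incoherence(0.6, 0.5) < incoherence(0.9, 0.9)
assert nearest_coherent(0.9, 0.9) == (0.5, 0.5)
```

The article's point is precisely that results of this kind depend on which incoherence measure is chosen, and different measures need not agree on which of two agents is "closer" to the ideal.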
A policymaker must decide which intervention to perform in order to change a currently undesirable situation. The policymaker has at her disposal a team of experts, each with their own understanding of the causal dependencies between different factors contributing to the outcome. The policymaker has varying degrees of confidence in the experts’ opinions. She wants to combine their opinions in order to decide on the most effective intervention. We formally define the notion of an effective intervention, and then consider how experts’ causal judgments can be combined in order to determine the most effective intervention. We define a notion of two causal models being compatible, and show how compatible causal models can be combined. We then use this notion as the basis for combining experts’ causal judgments. We illustrate our approach on a number of real-life examples.
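As a deliberately simplified point of comparison, here is confidence-weighted linear pooling of effect estimates; this is *not* the compatibility-based combination of causal models the abstract describes, and all names and numbers are invented.

```python
# Each (hypothetical) expert reports an estimated effect for each candidate
# intervention; the policymaker weights experts by her confidence in them.
experts = {
    "expert_a": {"weight": 0.5, "effects": {"tax": 0.3, "subsidy": 0.6}},
    "expert_b": {"weight": 0.3, "effects": {"tax": 0.7, "subsidy": 0.2}},
    "expert_c": {"weight": 0.2, "effects": {"tax": 0.1, "subsidy": 0.9}},
}

def pooled_effect(intervention: str) -> float:
    """Confidence-weighted average of the experts' estimated effects."""
    return sum(e["weight"] * e["effects"][intervention]
               for e in experts.values())

best = max(("tax", "subsidy"), key=pooled_effect)
assert best == "subsidy"   # pooled effect 0.54 vs. 0.38 for "tax"
```

Pooling point estimates like this ignores the *structure* of each expert's causal model, which is exactly the gap that combining compatible causal models is meant to close.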
We provide formal definitions of degree of blameworthiness and intention relative to an epistemic state (a probability over causal models and a utility function on outcomes). These, together with a definition of actual causality, provide the key ingredients for moral responsibility judgments. We show that these definitions give insight into commonsense intuitions in a variety of puzzling cases from the literature.
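A hedged toy rendering of the idea (simplified from, and not identical to, the paper's definitions; all numbers invented): take the blameworthiness of act a relative to alternative b to be the expected increase a makes in a bad outcome's probability, weighted by the agent's probability over causal models.

```python
# The agent's epistemic state: a probability over causal models, with each
# model fixing P(bad outcome | a) and P(bad outcome | b). Invented numbers.
epistemic_state = [
    # (P(model), P(bad | a), P(bad | b))
    (0.7, 0.9, 0.1),
    (0.3, 0.2, 0.2),   # in this model the choice makes no difference
]

def blameworthiness() -> float:
    """Expected increase act a makes (over b) in the bad outcome's
    probability; only models where a does worse contribute."""
    return sum(p * max(pa - pb, 0.0) for p, pa, pb in epistemic_state)

assert abs(blameworthiness() - 0.56) < 1e-9   # 0.7 * 0.8 + 0.3 * 0.0
```

Even this stripped-down version captures one of the paper's commonsense intuitions: an agent who considers it likely that the act makes no difference is proportionally less blameworthy.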