The New England Journal of Medicine (NEJM) announced new guidelines for authors on statistical reporting yesterday*. The ASA describes the change as “in response to the ASA Statement on P-values and Statistical Significance and subsequent The American Statistician special issue on statistical inference” (ASA I and II, in my abbreviation). …
There is long-standing agreement among both philosophers and linguists that the term ‘counterfactual conditional’ is misleading if not a misnomer. Speakers of both non-past subjunctive (or ‘would’) conditionals and past subjunctive (or ‘would have’) conditionals need not convey counterfactuality. The relationship between the conditionals in question and the counterfactuality of their antecedents is thus not one of presupposition but one of conversational implicature. This paper provides a thorough examination of the arguments against the presupposition view as applied to past subjunctive conditionals and finds none of them conclusive. All the relevant linguistic data, it is shown, are compatible with the assumption that past subjunctive conditionals presuppose the falsity of their antecedents. This finding is not only interesting on its own. It is of vital importance both to whether we should consider antecedent counterfactuality to be part of the conventional meaning of the conditionals in question and to whether there is a deep difference between indicative and subjunctive conditionals.
Much has been said about Moore’s proof of the external world, but the notion of proof that Moore employs has been largely overlooked. I suspect that most have either found nothing wrong with it or thought it somehow irrelevant to whether the proof serves its anti-skeptical purpose. I show, however, that Moore’s notion of proof is highly problematic. For instance, it trivializes in the sense that any known proposition is provable. This undermines Moore’s proof as he conceives it, since it introduces a skeptical regress that he goes to great lengths to resist. I go on to consider various revisions of Moore’s notion of proof and finally settle on one that I think is adequate for Moore’s purposes and faithful to what he says concerning immediate knowledge.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Recent work on the epistemology of moral deference suggests that moral knowledge must derive from a knower’s own ability in a way that knowledge acquired easily through testimony need not. This paper transposes this idea to the collective level and, in doing so, shows how two leading accounts of collective knowledge, the joint acceptance account and the distributed account, would be best positioned to countenance group-level moral knowledge as knowledge creditable to group-level ability. The upshot is that we uncover some hitherto unnoticed puzzles to do with defeat in collective moral epistemology, puzzles which reveal collective moral knowledge to be surprisingly fragile vis-à-vis higher-order defeat compared to individual-level moral knowledge. A consequence of this disanalogy is that more work needs to be done if non-skeptical collective moral epistemology is to hold water.
We reexamine some of the classic problems connected with the use of cardinal utility functions in decision theory, and discuss Patrick Suppes’ contributions to this field in light of a reinterpretation we propose for these problems. We analytically decompose the doctrine of ordinalism, which only accepts ordinal utility functions, and distinguish between several doctrines of cardinalism, depending on which components of ordinalism they specifically reject. We identify Suppes’ doctrine with the major deviation from ordinalism that conceives of utility functions as representing preference differences, while being nonetheless empirically related to choices. We highlight the originality, promises, and limits of this choice-based cardinalism.
In this paper, I examine the decision-theoretic status of risk attitudes. I start by providing evidence showing that the risk attitude concepts do not play a major role in the axiomatic analysis of the classic models of decision-making under risk. This can be interpreted as reflecting the neutrality of these models between the possible risk attitudes. My central claim, however, is that such neutrality needs to be qualified and the axiomatic relevance of risk attitudes needs to be re-evaluated accordingly. Specifically, I highlight the importance of the conditional variation and the strengthening of risk attitudes, and I explain why they establish the axiomatic significance of the risk attitude concepts. I also present several questions for future research regarding the strengthening of risk attitudes.
From the point of view of cognitive development, the present paper by Bart Geurts is highly relevant, welcome, and timely. It speaks to a fundamental puzzle in developmental pragmatics that was once recognized as such, was later considered resolved by many researchers, but may now return with its full puzzling force.
I argue that our best science supports the rationalist idea that, independent of reasoning, emotions aren’t integral to moral judgment. There’s ample evidence that ordinary moral cognition often involves conscious and unconscious reasoning about an action’s outcomes and the agent’s role in bringing them about. Emotions can aid in moral reasoning by, for example, drawing one’s attention to such information. However, there is no compelling evidence for the decidedly sentimentalist claim that mere feelings are causally necessary or sufficient for making a moral judgment or for treating norms as distinctively moral. I conclude that, even if moral cognition is largely driven by automatic intuitions, these shouldn’t be mistaken for emotions or their non-cognitive components. Non-cognitive elements in our psychology may be required for normal moral development and motivation but not necessarily for mature moral judgment.
Socialism is a rich tradition of political thought and practice, the
history of which contains a vast number of views and theories, often
differing in many of their conceptual, empirical, and normative
commitments. In his 1924 Dictionary of Socialism, Angelo
Rappoport canvassed no fewer than forty definitions of socialism,
telling his readers in the book’s preface that “there are
many mansions in the House of Socialism” (Rappoport 1924: v,
34–41). To take even a relatively restricted subset of socialist
thought, Leszek Kołakowski could fill over 1,300 pages in his
magisterial survey of Main Currents of Marxism
(Kołakowski 1978).
Process reliabilism is a theory about ex post justification, the justification of a doxastic attitude one has, such as belief. It says roughly that a justified belief is a belief formed by a reliable process. It is not a theory about ex ante justification, one’s justification for having a particular attitude toward a proposition, an attitude one might lack. But many reliabilists supplement their theory such that it explains ex ante justification in terms of reliable processes. In this paper I argue that the main way reliabilists supplement their theory fails. In the absence of an alternative, reliabilism does not account for ex ante justification.
It is intuitively plausible to assume that if it is asserted that ‘a is overall better than b (all things considered)’, such a verdict is often based on multiple evaluations of the items a and b under consideration, which are sometimes also called ‘criteria’, ‘features’, or ‘attributes’. Usually, an item a is better than an item b in some aspects, but not in others, and there is a weighing or outranking of these aspects to determine which item is better.
On the standard view, when we forgive, we overcome or renounce future blaming responses to an agent in virtue of what the forgiver understands to be, and is in fact, an immoral action he has performed. Crucially, on the standard view the blaming response is understood as essentially involving a reactive attitude and its expression. In the central case in which the forgiver has been wronged by the party being forgiven, this reactive attitude is moral resentment, that is, anger with an agent due to a wrong he has done to oneself. When someone other than the forgiver has been wronged by the one being forgiven, the attitude is indignation, anger with an agent because of a wrong he has done to a third party. Such a position was developed by Joseph Butler (1749/1900), and in more recent times endorsed by P. F. Strawson (1962), Jeffrie Murphy (1982), and Jay Wallace (1994). Wallace (1994: 72), for example, claims that “in forgiving people we express our acknowledgment that they have done something that would warrant resentment and blame, but we renounce the responses that we thus acknowledge to be appropriate.”
We are familiar with the idea that belief sometimes amounts to knowledge – i.e. that there are instances of belief that are also instances of knowledge. Here I defend an unfamiliar idea: that desire sometimes amounts to knowledge – i.e. that there are instances of desire that are also instances of knowledge. My argument rests on two premises. First, I assume that goodness is the correctness condition for desire. Second, I assume a virtue-theoretic account of knowledge, on which knowledge is apt mental representation. With those assumptions made, I’ll argue that desires can amount to instances of apt representation, and thus to knowledge.
In her paper “Why Suspend Judging?” Jane Friedman has argued that being agnostic about some question entails that one has an inquiring attitude towards that question. Call this the agnostic-as-inquirer thesis. I argue that the agnostic-as-inquirer thesis is implausible. Specifically, I maintain that the agnostic-as-inquirer thesis requires that we deny the existence of a kind of agent that plausibly exists; namely, one who is both agnostic about Q because they regard their available evidence as insufficient for answering Q and who decides not to inquire into Q because they believe Q to be unanswerable. I claim that it is not only possible for such an agent to exist, but that such an agent is also epistemically permissible.
Work on chance has, for some time, focused on the normative nature of chance: the way in which objective chances constrain what partial beliefs, or credences, we ought to have. According to me, an agent is an expert if and only if their credences are maximally accurate; they are an analyst expert with respect to a body of evidence if and only if their credences are maximally accurate conditional on that body of evidence. I argue that the chances are maximally accurate conditional on local, intrinsic information. This matches nicely with a requirement that Schaffer (2003, 2007) places on chances, called at different times (and in different forms) the Stable Chance Principle and the Intrinsicness Requirement. I call my account the Accuracy-Stability account. I then show how the Accuracy-Stability account underlies some arguments for the New Principle, and show how it revives a version of Van Fraassen’s calibrationist approach. But two new problems arise. First, the Accuracy-Stability account risks collapsing into simple frequentism, which is a bad view; I argue that the same reasoning which motivates the Stability requirement motivates a continuity requirement, which avoids at least some of the problems of frequentism. Second, I consider an argument from Briggs (2009) that Humean chances aren’t fit to be analyst experts; I argue that the Accuracy-Stability account overcomes Briggs’ difficulties.
Logical pluralism is the view that there is more than one correct logic. Most logical pluralists think that logic is normative in the sense that you make a mistake if you accept the premisses of a valid argument but reject its conclusion. Some authors have argued that this combination is self-undermining: Suppose that L1 and L2 are correct logics that coincide except for the argument from Γ to φ, which is valid in L1 but invalid in L2. If you accept all sentences in Γ, then, by normativity, you make a mistake if you reject φ. In order to avoid mistakes, you should accept φ or suspend judgment about φ. Both options are problematic for pluralism. Can pluralists avoid this worry by rejecting the normativity of logic? I argue that they cannot. All else being equal, the argument goes through even if logic is not normative.
We propose a new account of calibration according to which calibrating a technique shows that the technique does what it is supposed to do. To motivate our account, we examine an early 20th century debate about chlorophyll chemistry and Mikhail Tswett’s use of chromatographic adsorption analysis to study it. We argue that Tswett’s experiments established that his technique was reliable in the special case of chlorophyll without relying on either a theory or a standard calibration experiment. We suggest that Tswett broke the Experimenters’ Regress by appealing to material facts in the common ground for chemists at the time.
Lyons (2016, 2017, 2018) formulates Laudan’s (1981) historical objection to scientific realism as a modus tollens. I present a better formulation of Laudan’s objection, and then argue that Lyons’s formulation is superfluous. Lyons rejects scientific realism (Putnam, 1975) on the grounds that some successful past theories were (completely) false. I reply that scientific realism is not the categorical hypothesis that all successful scientific theories are (approximately) true, but rather the statistical hypothesis that most successful scientific theories are (approximately) true. Lyons rejects selectivism (Kitcher, 1993; Psillos, 1999) on the grounds that some working assumptions were (completely) false in the history of science. I reply that selectivists would say not that all working assumptions are (approximately) true, but rather that most working assumptions are (approximately) true.
In 2012, CERN scientists announced the discovery of the Higgs boson, claiming their experimental results finally achieved the 5σ criterion for statistical significance. Although particle physicists apply especially stringent standards for statistical significance, their use of “classical” (rather than Bayesian) statistics is not unusual at all. Classical hypothesis testing—a hybrid of techniques developed by Fisher, Neyman and Pearson—remains the dominant form of statistical analysis, and p-values and statistical power are often used to quantify evidential strength.
Traditionally, epistemologists have distinguished between epistemic and pragmatic goals. In so doing, they presume that much of game theory is irrelevant to epistemic enterprises. I will show that this is a mistake. Even if we restrict attention to purely epistemic motivations, members of epistemic groups will face a multitude of strategic choices. I illustrate several contexts where individuals who are concerned solely with the discovery of truth will nonetheless face difficult game-theoretic problems. Examples of purely epistemic coordination problems and social dilemmas will be presented. These show that there is a far deeper connection between economics and epistemology than previously appreciated.
On the formulation discussed here, epistemological disjunctivism is the view that in paradigmatic cases of perceptual knowledge, a thinker’s perceptual beliefs constitute knowledge when they are based on reasons that provide them with factive support (i.e., the complete description of the thinker’s reason for believing, say, that it is Agnes curled up on the sofa entails that Agnes is curled up on the sofa). A thinker is in a position to know that p perceptually if the thinker sees that p. It is the seeing that p that constitutes the thinker’s reason for believing p and provides the requisite support for that belief. This perceptual relation between a thinker and a fact guarantees that the thinker is in a position to know things about things in her surroundings. Without this kind of support, perceptual knowledge isn’t possible.
If I were to say, “Agnes does not know that it is raining, but it is,” this seems like a perfectly coherent way of describing Agnes’s epistemic position. If I were to add, “And I don’t know if it is, either,” this seems quite strange. In this chapter, we shall look at some statements that seem, in some sense, contradictory, even though it seems that these statements can express propositions that are contingently true or false. Moore thought it was paradoxical that statements that can express true propositions or contingently false propositions should nevertheless seem absurd like this. If we can account for the absurdity, we shall solve Moore’s Paradox. In this chapter, we shall look at Moore’s proposals and more recent discussions of Moorean absurd thought and speech.
Extended cognition is when cognitive processes extend beyond the brain and nervous system of the subject, and in the process properly include such ‘external’ devices as technology. This paper explores what relevance extended cognitive processes might have for humility, and especially for the specifically cognitive aspect of humility—viz., intellectual humility. As regards humility in general, it is argued that there are no in principle barriers to extended cognitive processes helping to enable the development and manifestation of this character trait, but that there may be limitations to the extent to which one’s manifestation of humility can be dependent upon these processes, at least insofar as we follow orthodoxy and treat humility as a virtue. As regards the cognitive trait of intellectual humility in particular, the question becomes whether this can itself be an extended cognitive process. It is argued that this wouldn’t be a plausible conception of intellectual humility, at least insofar as we treat intellectual humility (like humility in general) as a virtue.
Sextus Empiricus was a Pyrrhonian Skeptic living probably in the
second or third century CE, many of whose works survive, including the
Outlines of Pyrrhonism, the best and fullest account we have
of Pyrrhonian skepticism (a kind of skepticism named for Pyrrho (see
entry on Ancient Skepticism)). Pyrrhonian skepticism involves having no beliefs
about philosophical, scientific, or theoretical matters—and
according to some interpreters, no beliefs at all, period. Whereas
modern skepticism questions the possibility of knowledge, Pyrrhonian
skepticism questions the rationality of belief: the Pyrrhonian skeptic
has the skill of finding for every argument an equal and opposing
argument, a skill whose employment will bring about suspension of
judgment on any issue which is considered by the skeptic, and
Brian Haig, Professor Emeritus
Department of Psychology
University of Canterbury
Christchurch, New Zealand
The American Statistical Association’s (ASA) recent effort to advise the statistical and scientific communities on how they should think about statistics in research is ambitious in scope. …
In this paper, we consider two competing explanations of the empirical finding that people’s causal attributions are responsive to normative details, such as whether an agent’s action violated an injunctive norm—the counterfactual view and the responsibility view. We then present experimental evidence that uses the trolley dilemma in a new way to investigate causal attribution. In the switch version of the trolley problem, people judge that the agent ought to flip the switch, but they also judge that she is more responsible for the resulting outcome when she does so than when she refrains. As predicted by the responsibility view, but not the counterfactual view, people are more likely to say that the agent caused the outcome when she flips the switch.
I argue that we can visually perceive others as seeing agents. I start by characterizing perceptual processes as those that are causally controlled by proximal stimuli. I then distinguish between various forms of visual perspective-taking, before presenting evidence that most of them come in perceptual varieties. In doing so, I clarify and defend the view that some forms of visual perspective-taking are “automatic”—a view that has been marshalled in support of dual-process accounts of mindreading.
I develop a challenge for a widely suggested knowledge-first account of belief that turns, primarily, on unknowable propositions. I consider and reject several responses to my challenge and sketch a new knowledge-first account of belief that avoids it.
I argue that you can be permitted to discount the interests of your adversaries even though doing so would be impartially suboptimal. This means that, in addition to the kinds of moral options that the literature traditionally recognises, there exist what I call other-sacrificing options. I explore the idea that you cannot discount the interests of your adversaries as much as you can favour the interests of your intimates; if this is correct, then there is an asymmetry between negative partiality toward your adversaries and positive partiality toward your intimates.