Whereas Bayesians have proposed norms such as probabilism, which requires immediate and permanent certainty in all logical truths, I propose a framework on which credences, including credences in logical truths, are rational because they are based on reasoning that follows plausible rules for the adoption of credences. I argue that my proposed framework has many virtues. In particular, it resolves the problem of logical omniscience.
Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from the fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: First, the 'purity' of a data model is not a measure of its epistemic reliability. Instead it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved 'vicariously', such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate (or inadequate) for particular purposes.
Excerpts from the Preface:
The Statistics Wars:
Today’s “statistics wars” are fascinating: They are at once ancient and up to the minute. They reflect disagreements on one of the deepest, oldest, philosophical questions: How do humans learn about the world despite threats of error due to incomplete and variable data? …
[It] was hard to know what lessons to draw. The democracies had shown their resilience in the long run. How they had done it, and what it meant for the future, was much less clear. The knowledge of their hidden strengths the democracies had been given did not translate into greater self-knowledge or self-control. …
My favourite fallacy is the fallacy fallacy. It’s the fallacy of thinking that something is a fallacy when it isn’t. This paper concerns a high-profile instance, namely the phenomenon of hindsight bias. Roughly, it is the phenomenon of being more confident that some body of evidence supports a hypothesis when one knows that the hypothesis is true, than when one doesn’t.
Decision-makers face severe uncertainty when they are not in a position to assign precise probabilities to all of the relevant possible outcomes of their actions. Such situations are common—novel medical treatments and policies addressing climate change are two examples. Many decision-makers respond to such uncertainty in a cautious manner and are willing to incur a cost to avoid it. There are good reasons for taking such a cautious, uncertainty-averse attitude to be permissible. So far, however, there has been very little work on developing a theory of distributive justice which incorporates it. We aim to remedy this lack. We put forward a novel, uncertainty-averse egalitarian view of distributive justice. We analyse when the twin aims of reducing inequality and limiting the burdens of severe uncertainty are congruent and when they conflict, and highlight several practical implications of the proposed view. We also demonstrate that if uncertainty aversion is permissible, then utilitarians must relinquish a favourite argument against egalitarianism.
Scientists are generally subject to social pressures, including pressures to conform with others in their communities, that affect achievement of their epistemic goals. Here we analyze a network epistemology model in which agents, all else being equal, prefer to take actions that conform with those of their neighbors. This preference for conformity interacts with the agents’ beliefs about which of two (or more) possible actions yields the better outcome. We find a range of possible outcomes, including stable polarization in belief and action. The model results are sensitive to network structure. In general, though, conformity has a negative effect on a community’s ability to reach accurate consensus about the world.
The sustained failure of efforts to design an infinite lottery machine using ordinary probabilistic randomizers is traced back to a problem familiar to set theorists: there are no constructive prescriptions for probabilistically non-measurable sets. Yet construction of such sets is required if we are to be able to read the result of an infinite lottery machine that is built from ordinary probabilistic randomizers. All such designs face a dilemma: they can provide an accessible (readable) result with probability zero; or an inaccessible result with probability greater than zero.
Have you ever had an argument with someone about an issue that you cared deeply about, and you just knew you were right? But the other person kept citing statistics and studies and factual claims that felt suspect to you, but you couldn't prove it on the spot. …
Climate science investigates the structure and dynamics of earth’s climate system. It seeks to understand how global, regional and local climates are maintained as well as the processes by which they change over time. In doing so, it employs observations and theory from a variety of domains, including meteorology, oceanography, physics, chemistry and more. These resources also inform the development of computer models of the climate system, which are a mainstay of climate research today. This entry provides an overview of some of the core concepts and practices of contemporary climate science as well as philosophical work that engages with them.
We advocate and develop a states-based semantics for both nominal and adjectival confidence reports, as in Ann is confident/has confidence that it’s raining, and their comparatives Ann is more confident/has more confidence that it’s raining than that it’s snowing. Other examples of adjectives that can report confidence include sure and certain. Our account adapts Wellwood’s account of adjectival comparatives in which the adjectives denote properties of states, and measure functions are introduced compositionally. We further explore the prospects of applying these tools to the semantics of probability operators. We emphasize three desirable and novel features of our semantics: (i) probability claims only exploit qualitative resources unless there is explicit compositional pressure for quantitative resources; (ii) the semantics applies to both probabilistic adjectives (e.g., likely) and probabilistic nouns (e.g., probability); (iii) the semantics can be combined with an account of belief reports that allows thinkers to have incoherent probabilistic beliefs (e.g. thinking that A & B is more likely than A) even while validating the relevant purely probabilistic claims (e.g. validating the claim that A & B is never more likely than A). Finally, we explore the interaction between confidence-reporting discourse (e.g., I am confident that...) and belief-reports about probabilistic discourse (e.g., I think it’s likely that...).
But this changes nothing. The decisive claim is that in assessing the counterfactuals implicit in (A) we do not have to take sceptical worlds into the reckoning, whereas we must do that in assessing (B) because (B) explicitly speaks of them. Accept, provisionally, what is here said about (B) and focus on the claim about (A). Nobody should make it unless they are already in a position to assert that the actual world is not a sceptical world. And with that we are back to the choice between impotence and redundancy.
they each know that this is so, and so on. It need not matter how rationality is understood in the present context, as long as it entails the following: that if a rational agent knows he can obtain m by performing one of two alternative actions, n by performing the other, and m is better by his standards, then he performs the first alternative: he
In a recent paper, Matthew Frise argues that reliabilist theories of justification have a temporality problem (2018). He describes this as the problem of “providing a principled, explanatory account of the temporal parameters which settle a process’s reliability at a time, and thus its justificatory power at a time” (926). For example, if perceptual reliability determines whether a given perceptual belief B formed at t1 is justified, one might wonder which perceptual belief-forming episodes fix the relevant truth ratio that determines B’s justification. Is it every case of perception from within 20 minutes of t1? Every case of perception in the past? Every case of perception throughout all time? It’s unclear, initially, what to say here.
It has been argued that an epistemically rational agent’s evidence is subjectively mediated through some rational epistemic standards, and that there are incompatible but equally rational epistemic standards available to agents. This supports Permissiveness, the view according to which one or multiple fully rational agents are permitted to take distinct incompatible doxastic attitudes towards P (relative to a body of evidence). In this paper, I argue that the above claims entail the existence of a unique and more reliable epistemic standard. My strategy relies on Condorcet’s Jury Theorem. This gives rise to an important problem for those who argue that epistemic standards are permissive, since the reliability criterion is incompatible with such a type of Permissiveness.
Posted on Tuesday, 08 May 2018
A might counterfactual is a statement of the form 'if so-and-so were the case then such-and-such might be the case'. I used to think that there are different kinds of might counterfactuals: that sometimes the 'might' takes scope over the entire conditional, and other times it does not. …
Much of the discussion of set-theoretic independence, and whether or not we could legitimately expand our foundational theory, concerns how we could possibly come to know the truth value of independent sentences. This paper pursues a slightly different tack, examining how we are ignorant of issues surrounding their truth. We argue that a study of how we are ignorant reveals a need for an understanding of set-theoretic explanation and motivates a pluralism concerning the adoption of foundational theory.
Reports of the Intergovernmental Panel on Climate Change (IPCC) employ an evolving framework of calibrated language for assessing and communicating degrees of certainty in findings. A persistent challenge for this framework has been ambiguity in the relationship between multiple degree-of-certainty metrics. We aim to clarify the relationship between the likelihood and confidence metrics used in the Fifth Assessment Report (2013), with benefits for mathematical consistency among multiple findings and for usability in downstream modeling and decision analysis. We discuss how our proposal meshes with current and proposed practice in IPCC uncertainty assessment.
In an ideal epistemic world, our beliefs would correspond to our evidence, and our evidence would be bountiful. In the world we live in, however, if we wish to live meaningful lives, other epistemic strategies are necessary. Here I attempt to work out, systematically, the ways in which evidentialism fails us as a guide to belief. This is so preeminently for lives of a religious character, but the point applies more broadly.
Rini (2015, Synthese 192(2): 431–452) claims to have identified a methodological flaw that invalidates the results of two experimental studies (Schwitzgebel & Cushman 2012, Mind and Language 27(2): 135–153; Tobia, Buckwalter & Stich 2013, Philosophical Psychology 26(5): 629–638) demonstrating order effects in professional philosophical intuition. This conclusion is reached on the basis of unsupported empirical premises for which no evidence is given. Subsequent findings in experimental cognitive science further reveal Rini’s challenge as idle speculation.
Welcome to our fifth Ergo symposium. This week we are showcasing Joshua Shepherd’s paper “Halfhearted Action and Control”, with commentaries by Andreas Elpidorou (Louisville), Nora Heinzelmann (Munich), Zachary Irving (Virginia). …
This project began in a conversation between the two of us about poverty and gender. Alison was very enthusiastic about Thomas’s work on global poverty but asked why he had not so far addressed the so-called feminisation of poverty. Thomas asked for evidence supporting the familiar claim that “poverty wears a woman’s face” and, when we looked into the matter more deeply, we found that the available evidence was quite unconvincing. Not only were the statistics sketchy and the term “feminisation of poverty” used equivocally; worse, the existing poverty metrics were arguably biased by culture and gender and also lacked explicit and plausible justifications. In order to investigate the gendered dimensions of global poverty, we needed a non-arbitrary metric supported by sound and open reasoning.
Detection of deception is of fundamental importance for everyday social life and might require “mindreading” (the ability to represent others’ mental states). People with diminished mindreading, such as those with autism spectrum disorder (ASD), might be at risk of manipulation because of lie detection difficulties. In Experiment 1, performance among 216 neurotypical adults on a realistic lie detection paradigm was significantly negatively associated with number of ASD traits, but not with mindreading ability. Bayesian analyses complemented null hypothesis significance testing and suggested the data supported the alternative hypothesis in this key respect. Cross validation of results was achieved by randomly splitting the full sample into two subsamples of 108 and rerunning analyses. The association between lie detection and ASD traits held in both subsamples, showing the reliability of findings. In Experiment 2, lie detection was significantly impaired in 27 adults with a diagnosis of ASD relative to 27 matched comparison participants. Results suggest that people with ASD (or ASD traits) may be particularly vulnerable to manipulation and may benefit from lie detection training. Autism Res 2018. © 2018 The Authors. Autism Research published by the International Society for Autism Research and Wiley Periodicals, Inc.
Is it possible for us to know the fundamental truths of logic a priori? This question presupposes another: is it possible for us to know them at all, a priori or a posteriori? In the case of the fundamental truths of logic, there has always seemed to be a difficulty about this, one that may be vaguely glossed as follows (more below): since logic will inevitably be involved in any account of how we might be justified in believing it, how is it possible for us to be justified in our fundamental logical beliefs? In this essay, I aim to explain how we might be justified in our fundamental logical beliefs. If the explanation works, it will explain not merely how we might know logic, but how we might know it a priori.
Contemporary epistemologists have devoted considerable attention to conceptual analyses of the nature of epistemic justification but there is great disagreement about whether the factors relevant to the justification of a person’s belief must be internally accessible to that person (Alston 1989; Fumerton 1996; Kornblith 2001; Pryor 2001; BonJour and Sosa 2003; McGrew and McGrew 2006; Goldberg 2007; and Poston 2008). This debate between internalists, who endorse the access requirement, and externalists, who reject it, has been little discussed by philosophers of science. Yet epistemic justification is a central concern in philosophy of science. In particular, the wide-ranging debates over evidence and confirmation seem to be concerned to a significant degree with the question of justifying conclusions from data. Theories of evidence can indeed be understood in part as attempts to explicate a concept of scientific justification. But how do such theories depict scientific justification? Do they employ an internalist or externalist notion of justification?
Peter Galison has recently claimed that twentieth-century microphysics has been pursued by two distinct experimental traditions—the image tradition and the logic tradition—that have only recently merged into a hybrid tradition. According to Galison, the two traditions employ fundamentally different forms of experimental argument, with the logic tradition using statistical arguments, while the image tradition strives for non-statistical demonstrations based on compelling (“golden”) single events. I show that discoveries in both traditions have employed the same statistical form of argument, even when basing discovery claims on single, golden events. Where Galison sees an epistemic divide between two communities that can only be bridged by a creole- or pidgin-like “interlanguage,” there is in fact a shared commitment to a statistical form of experimental argument.
Philosophers have claimed that education aims at fostering disparate epistemic goals––for instance: knowledge, true belief, understanding, epistemic character, critical thinking. In this paper we focus on an important segment of the debate. …
Experimental philosophy is the name for a recent movement whose participants use the methods of experimental psychology to probe the way people think about philosophical issues and then examine how the results of such studies bear on traditional philosophical debates. Given both the breadth of the research being carried out by experimental philosophers and the controversial nature of some of their central methodological assumptions, it is of no surprise that their work has recently come under attack. In this paper we respond to some criticisms of experimental philosophy that have recently been put forward by Antti Kauppinen. Unlike the critics of experimental philosophy, we do not think the fledgling movement either will or should fall before it has even had a chance to rise up to explain what it is, what it seeks to do (and not to do), and exactly how it plans to do it. Filling in some of the salient details is the main goal of the present paper.
Crispin Wright maintains that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this fact doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without endangering his epistemology of perception.
Symposium on Del Pinal and Spaulding, “Conceptual Centrality and Implicit Bias”. Robert Briscoe, April 23, 2018. I’m very glad to announce our latest Mind & Language symposium on Guillermo Del Pinal and Shannon Spaulding’s “Conceptual Centrality and Implicit Bias” from the journal’s February 2018 issue. …