Higher-order evidence is, roughly, evidence of evidence. The idea is that evidence comes in levels. At the lowest level is evidence of the familiar type: evidence concerning some proposition that is not itself about evidence. At a higher level, the evidence concerns some proposition about evidence at a lower level. Only in recent years has this less familiar type become a subject of epistemological focus, and work on it remains confined to a small circle of authors, attention far out of proportion to what the topic deserves. It deserves to occupy center stage for several reasons. First, higher-order evidence arises frequently and in a diverse range of contexts, including testimony, disagreement, empirical observation, introspection, and memory. Second, such evidence often plays a crucial epistemic role in these contexts. Third, the role it plays is complex, yields interesting epistemological puzzles, and therefore remains controversial and not yet fully understood. Although the ultimate goal of an investigation into higher-order evidence is an account of its epistemic significance, my present concern is more fundamental. I have two primary sets of goals here. The first is expositional: to serve as an introduction for readers new to the topic. The second is argumentative: to establish that existing characterizations of the concept of higher-order evidence and various related concepts are in dire need of refinement, to demonstrate that this lack of refinement is the source of major errors in the literature, and to provide the needed refinement so as to set the stage for further progress.
This paper investigates the conceptual spaces account of graded membership as applied to gradable adjectives. Douven and collaborators have shown that the degree of membership of an item intermediate between two color categories (green vs. blue) or two shape categories (vase vs. bowl) can be derived from the categories’ typical instances. An issue left open is whether the conceptual spaces approach can account for graded membership in more abstract categories. In this paper we consider dimensional adjectives such as tall and expensive, for which the notion of prototypicality is more problematic. We present the results of an empirical study showing that the account extends successfully to that class, taking advantage of the systematic antonymy relations among those adjectives. We also discuss the approach’s assumption that typical instances of a category are equally typical and its ability to account for inter-individual differences in degree of membership.
Imogen Dickie, Fixing Reference (Oxford, 2016)
So, why is the proposal of Fixing Reference not guilty of circularity or equivocation or both? Here are two observations to prepare the way for the answer to this question that I want to propose. …
Head of Competence Center for Methodology and Statistics (CCMS)
Luxembourg Institute of Health
Automatic for the people? Not quite
What caught my eye was the estimable (in its non-statistical meaning) Richard Lehman tweeting about the equally estimable John Ioannidis. …
This paper provides a critical guide to the literature concerning the answer to the question: when does a quantum experiment have a result? This question was posed and answered by Rovelli (Rovelli ) and his proposal was critiqued by Oppenheim, Reznick and Unruh (Oppenheim et al. ), who also suggest another approach that (as they point out) leads to the quantum Zeno effect. What these two approaches have in common is the idea that a question about the time at which an event occurs can be answered through the instantaneous measurement of a projector (in Rovelli’s case, a single measurement; in that of Oppenheim et al., a repeated measurement). However, the interpretation of a projection as an instantaneous operation that can be performed on a system at a time of the experimenter’s choosing is problematic, particularly when it is the time of the outcome of the experiment that is at issue.
The idea that the quantum probabilities are best construed as the personal/subjective degrees of belief of Bayesian agents is an old one. In recent years the idea has been vigorously pursued by a group of physicists who fly the banner of quantum Bayesianism (QBism). The present paper aims to identify the prospects and problems of implementing QBism, and it critically assesses the claim that QBism provides a resolution (or dissolution) of some of the long-standing foundational issues in quantum mechanics, including the measurement problem and puzzles of non-locality.
Goldman’s epistemology has been influential in two ways. First, it has influenced some philosophers to think that, contrary to erstwhile orthodoxy, relations of evidential support, or confirmation, are not discoverable a priori. Second, it has offered some philosophers a powerful argument in favor of methodological reliance on intuitions about thought experiments in doing philosophy. This paper argues that these two legacies of Goldman’s epistemology conflict with each other.
How do you determine how likely it is that some hypothesis H is true? To answer this question, you will need to do at least two things. First, you will need to figure out what evidence is in your possession. And second, you will need to figure out how likely H is to be true, given that totality of evidence. Most of the efforts of confirmation theory have been dedicated to understanding the second of these two tasks, i.e., explaining the confirmation relations between any arbitrary evidence set and any arbitrary hypothesis. But in the past two decades, many epistemologists have tried to understand the first of the two tasks, viz., determining what evidence is in your possession.
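The second task, assessing how likely H is given the total evidence, is standardly modeled with Bayes' theorem. Here is a minimal worked example with made-up numbers (my illustration, not part of the passage above):

```python
# Toy Bayesian update (hypothetical numbers): how likely is H given evidence E?
p_h = 0.3             # prior probability of H
p_e_given_h = 0.8     # likelihood of E if H is true
p_e_given_not_h = 0.2 # likelihood of E if H is false

# Total probability of the evidence, then Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(H | E) = {p_h_given_e:.3f}")
```

Note that the whole calculation presupposes the first task is already done: E has been fixed as the evidence in one's possession.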
In my first post I sketched an argument for a principle connecting aboutness and justification. Here is the sketch version again as a little graphic:
The resulting principle, which I call in the book ‘Reference and Justification’, brings out the significance for accounts of aboutness of the fact that justification is truth conducive. …
Neyman April 16, 1894 – August 5, 1981
For my final Jerzy Neyman item, here’s the post I wrote for his birthday last year:
A local acting group is putting on a short theater production based on a screenplay I wrote: “Les Miserables Citations” (“Those Miserable Quotes”). …
Much recent work in modal epistemology assumes a kind of modal realism according to which reality includes basic modal elements—basic capacities, essences, counterfactuals, etc., which are simply out there, waiting to be discovered. Alternative views of modality put modal epistemology in a very different light. On the reductionist Humean view championed by Lewis (e.g. [Lewis 1986b], [Lewis 1986a], [Lewis 1994]), modal statements express ultimately non-modal propositions concerning the spatiotemporal distribution of categorical properties, and thus modal knowledge is not knowledge of special modal facts. On conventionalist accounts (like [Ayer 1936] or [Sidelle 1989]), modal knowledge is presumably knowledge of linguistic conventions. On projectivist accounts (like [Skyrms 1980] or [Blackburn 1986]), modal knowledge is not knowledge of genuine objective facts at all.
April 16, 1894 – August 5, 1981
I’ll continue to post Neyman-related items this week in honor of his birthday. This isn’t the only paper in which Neyman makes it clear he denies a distinction between a test of statistical hypotheses and significance tests. …
Transmission views of testimony hold that the epistemic state of a speaker can, in some robust sense, be transmitted to their audience. That is, the speaker's knowledge or justification can become the audience's knowledge or justification via testimony. We argue that transmission views are incompatible with the hypothesis that one's epistemic state, together with one's practical circumstances (one's interests, stakes, ability to acquire new evidence etc.), determines what actions are rationally permissible for an agent. We argue that there are cases where, if the speaker's epistemic state were (in any robust sense) transmitted to the audience, then the audience would be warranted in acting in particular ways. Yet, the audience in these cases is not warranted in acting in the relevant ways, as their strength of justification does not come close to the speaker's. So transmission views of testimony are false.
Today is Jerzy Neyman’s birthday. I’ll post various Neyman items this week in honor of it, starting with a guest post by Aris Spanos. Happy Birthday Neyman! A. Spanos
A Statistical Model as a Chance Mechanism
Jerzy Neyman (April 16, 1894 – August 5, 1981) was a Polish-American statistician who spent most of his professional career at the University of California, Berkeley. …
I was just reading a paper by Martin and Liu (2014) in which they allude to the “questionable logic of proving H0 false by using a calculation that assumes it is true” (p. 1704). They say they seek to define a notion of “plausibility” that
“fits the way practitioners use and interpret p-values: a small p-value means H0 is implausible, given the observed data,” but they seek “a probability calculation that does not require one to assume that H0 is true, so one avoids the questionable logic of proving H0 false by using a calculation that assumes it is true” (Martin and Liu 2014, p. 1704). …
In philosophy of statistics, Deborah Mayo and Aris Spanos have championed the following epistemic principle, which applies to frequentist tests: Severity Principle (full). Data x (produced by process G) provides good evidence for hypothesis H (just) to the extent that test T severely passes H with x. (Mayo and Spanos 2011, p. 162). They have also devised a severity score that is meant to measure the strength of the evidence by quantifying the degree of severity with which H passes the test T (Mayo and Spanos 2006, 2011; Spanos 2013). That score is a real number defined on the interval [0,1], and it is particularly high for hypotheses that are substantially different from the null hypothesis when a significant result is obtained using an underpowered test. This means that such hypotheses are very well supported by the evidence according to that measure. However, it is now well documented that significant results from tests with low power display inflated effect sizes: they systematically show departures from the null hypothesis H0 that are much greater than the true departures. As Ioannidis puts it, “theoretical considerations prove that when true discovery is claimed based on crossing a threshold of statistical significance and the discovery study is underpowered, the observed effects are expected to be inflated” (Ioannidis 2008, p. 640). This is problematic in research contexts where the difference between H0 and H1 is particularly small and where the sample size is also small. See (Button et al. 2013; Ioannidis 2008; Gelman and Carlin 2014) for examples.
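The inflation Ioannidis describes can be seen in a small simulation (my own sketch with made-up parameters, not a calculation from Mayo and Spanos or from Ioannidis): when the true effect is small and the sample is small, the few estimates that cross the significance threshold are necessarily much larger in magnitude than the true effect.

```python
import random
import statistics

# Hypothetical setup: a small true effect probed by an underpowered
# two-sided z-test (population SD known to be 1).
random.seed(0)
TRUE_EFFECT = 0.2   # true mean, in standard-deviation units
N = 20              # small sample size -> low power
SIMS = 2000
CRIT = 1.96         # two-sided test at alpha = 0.05

def run_study():
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    est = statistics.fmean(sample)
    se = 1.0 / N ** 0.5
    return est, abs(est / se) > CRIT  # (estimate, significant?)

results = [run_study() for _ in range(SIMS)]
sig = [est for est, significant in results if significant]

print(f"power ~ {len(sig) / SIMS:.2f}")  # well below the conventional 0.8
print(f"mean |estimate| among significant studies: "
      f"{statistics.fmean(abs(e) for e in sig):.2f}")  # inflated relative to 0.2
```

Any estimate reaching significance here must exceed 1.96/√20 ≈ 0.44 in magnitude, more than double the true effect of 0.2, which is exactly the pattern that makes the severity score look problematic in this regime.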
This paper presents a systematic approach for analyzing and explaining the nature of social groups. I argue against prominent views that attempt to unify all social groups or to divide them into simple typologies. Instead I argue that social groups are enormously diverse, but show how we can investigate their natures nonetheless. I analyze social groups from a bottom-up perspective, constructing profiles of the metaphysical features of groups of specific kinds. We can characterize any given kind of social group with four complementary profiles: its “construction profile,” its “extra essentials” profile, its “anchor” profile, and its “accident” profile. Together these provide a framework for understanding the nature of groups, help classify and categorize groups, and shed light on group agency.
Many accounts of structural rationality give a special role to logic. This paper reviews the problem case of clear-eyed logical uncertainty. An account of rational norms on belief that does not give a special role to logic is developed: doxastic probabilism.
This paper provides a new argument for a natural view in distributive ethics: that the interests of the relatively worse off matter more than the interests of the relatively better off, in the sense that it is more important to give some benefit to those that are worse off than it is to give that same benefit to those that are better off, and that it is sometimes (but not always) more important to give a smaller benefit to the worse off than to give a larger benefit to those better off. I will refer to this position as relative prioritarianism. The formal realization of this position is known as weighted-rank utilitarianism or the Gini social welfare function, and it is typically classified as an egalitarian view, though for reasons I will mention, that classification may be misleading.
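The weighted-rank (Gini) form of the view can be made concrete with a toy calculation (my illustration, using hypothetical rank weights, not an example from the paper): welfare is a weighted sum of wellbeing levels, with larger weights attached to worse-off positions.

```python
# Rank-weighted social welfare sketch: weights are assigned by rank,
# worst-off first, so the worse off count for more.
def weighted_rank_welfare(wellbeings, weights):
    ranked = sorted(wellbeings)  # worst-off position gets the largest weight
    return sum(w * x for w, x in zip(weights, ranked))

weights = [3, 2, 1]  # hypothetical decreasing rank weights

equal   = weighted_rank_welfare([5, 5, 5], weights)  # 3*5 + 2*5 + 1*5 = 30
unequal = weighted_rank_welfare([2, 5, 8], weights)  # 3*2 + 2*5 + 1*8 = 24
print(equal, unequal)
```

Both distributions contain the same total wellbeing (15), but the equal one scores higher, reflecting the priority given to the worse off.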
Traditionally, theories of mindreading have focused on the representation of beliefs and desires. However, decades of social psychology and social neuroscience have shown that, in addition to reasoning about beliefs and desires, human beings also use representations of character traits to predict and interpret behavior. While a few recent accounts have attempted to accommodate these findings, they have not succeeded in explaining the relation between trait attribution and belief-desire reasoning. On the account I propose, character-trait attribution is part of a hierarchical system for action prediction, and serves to inform hypotheses about agents' beliefs and desires, which are in turn used to predict and interpret behavior.
A theorem from Archimedes on the area of a circle is proved in a setting where some inconsistency is permissible, by using paraconsistent reasoning. The new proof emphasizes that the famous method of exhaustion gives approximations of areas closer than any consistent quantity. This is equivalent to the classical theorem in a classical context, but not in a context where it is possible that there are inconsistent infinitesimals. The area of the circle is taken ‘up to inconsistency’. The fact that the core of Archimedes’s proof still works in a weaker logic is evidence that the integral calculus and analysis more generally are still practicable even in the event of inconsistency.
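The method of exhaustion mentioned above can be sketched numerically (my classical, consistent-arithmetic illustration, not the paraconsistent proof itself): the areas of inscribed regular n-gons approach the circle's area from below, with the gap shrinking past any fixed quantity.

```python
import math

# Area of a regular n-gon inscribed in the unit circle:
# n congruent triangles, each of area (1/2) * sin(2*pi/n).
def inscribed_polygon_area(n):
    return 0.5 * n * math.sin(2 * math.pi / n)

# Archimedes worked up to the 96-gon; the gap to pi shrinks toward 0.
for n in (6, 24, 96, 1536):
    print(n, math.pi - inscribed_polygon_area(n))
```

Each approximation stays strictly below π, which is the sense in which exhaustion approximates the area "closer than any consistent quantity" without ever overshooting it.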
Susanna Siegel, The Rationality of Perception (Oxford, 2017)
I find the challenges to the coherence of inferentialism much more powerful than the objections inherent in alternatives. That’s why I devote more time in the book to making the case that inferentialism is coherent, and to explaining what form it could take. …
Deontological theories face difficulties in accounting for situations involving risk; the most natural ways of extending deontological principles to such situations have unpalatable consequences. In extending ethical principles to decision under risk, theorists often assume that the risk must be incorporated into the theory by means of a function from the product of probability assignments to certain values. Deontologists should reject this assumption; essentially different actions are available to the agent when she cannot know that a certain act is in her power, so we cannot simply understand her choice situation as a “risk-weighted” version of choice under certainty.
Epistemic two-dimensional semantics (E2D), advocated by Chalmers (2006) and Jackson (1998), among others, aims to restore the link between necessity and apriority seemingly broken by Kripke (1972/1980), by showing how armchair access to semantic intensions provides a basis for knowledge of necessary a posteriori truths (among other modal claims). The most compelling objections to E2D are that, for one reason or another, the requisite intensions are not accessible from the armchair (see, e.g., Wilson 1982, Melnyk 2008). As we substantiate here, existing versions of E2D are indeed subject to such access-based objections. But, we moreover argue, the difficulty lies not with E2D but with the typically presupposed conceiving-based epistemology of intensions. Freed from that epistemology, and given the right alternative---one where inference to the best explanation (i.e., abduction) provides the operative guide to intensions---E2D can meet access-based objections and fulfill its promise of restoring the desirable link between necessity and apriority. This result serves as a central application of Biggs and Wilson 2016 (summarized here), according to which abduction is an a priori mode of inference.
How are biases encoded in our representations of social categories? Philosophical and empirical discussions of implicit bias overwhelmingly focus on salient or statistical associations between target features and representations of social categories. These are the sorts of associations probed by the Implicit Association Test and various priming tasks. In this paper, we argue that these discussions systematically overlook an alternative way in which biases are encoded, i.e., in the dependency networks that are part of our representations of social categories. Dependency networks encode information about how the features in a conceptual representation depend on each other, which determines their degree of centrality in a conceptual representation. Importantly, centrally encoded biases systematically dissociate from those encoded in salient-statistical associations. Furthermore, the degree of centrality of a feature determines its cross-contextual stability: in general, the more central a feature is for a concept, the more likely it is to survive into a wide array of cognitive tasks involving that concept. Accordingly, implicit biases that are encoded in the central features of concepts are predicted to be more resilient across different tasks and contexts. As a result, our distinction between centrally encoded and salient-statistical biases has important theoretical and practical implications.
Mechanistic evidence for probabilistic models
Posted on Thursday, 06 Apr 2017
You observe a process that generates two kinds of outcomes, 'heads'
and 'tails'. The outcomes appear in seemingly random order, with
roughly the same amount of heads as tails. …
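The described process can be mimicked with a toy simulation (my illustration, not part of the post): outcomes arrive in seemingly random order, with heads and tails in roughly equal proportion.

```python
import random

# Simulate a process generating two kinds of outcomes in random order.
random.seed(1)
flips = [random.choice(("heads", "tails")) for _ in range(10_000)]
prop_heads = flips.count("heads") / len(flips)
print(f"proportion of heads: {prop_heads:.3f}")  # roughly 0.5
```

Of course, such frequency data alone leave open which probabilistic model generated them, which is where mechanistic evidence enters.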
Recall the example discussed in my earlier post:
Jill and Jack: Jill fears (without good reason) that Jack is angry with her. As a result of her fear, Jack’s face looks angry to her when she sees it. If you saw Jack, you’d see his neutral expression for what it is. …
How should we choose between uncertain prospects in which different possible people might exist at different levels of wellbeing? Alex Voorhoeve and Marc Fleurbaey offer an egalitarian answer to this question. I explain their motivation for this answer in section 1. In sections 2 and 3, I give some objections to their version of egalitarianism. In section 4, I sketch an alternative account of their central intuition. This account, which I call person-affecting prioritarianism, avoids my objections to Voorhoeve and Fleurbaey’s egalitarianism and many of the objections to other versions of prioritarianism.
The reward system of science is the priority rule (Merton, 1957). The first scientist making a new discovery is rewarded with prestige, while those who come second get little or nothing. Strevens (2003, 2011), following Kitcher (1990), defends this reward system, arguing that it incentivizes an efficient division of cognitive labor. I argue that this assessment depends on strong implicit assumptions about the replicability of findings. I question these assumptions based on meta-scientific evidence and argue that the priority rule systematically discourages replication. My analysis leads us to qualify Kitcher and Strevens’ contention that a priority-based reward system is normatively desirable for science.
Disagreeing with others about how to interpret a social interaction is a common occurrence. We often find ourselves offering divergent interpretations of others’ motives, intentions, beliefs, and emotions. Remarkably, philosophical accounts of how we understand others do not explain, or even attempt to explain, such disagreements. I argue these disparities in social interpretation stem, in large part, from the effects of social categorization and of our goals in social interactions, phenomena long studied by social psychologists. I argue we ought to expand our accounts of how we understand others in order to accommodate these data and explain how such profound disagreements arise amongst informed, rational, well-meaning individuals.