The allegation that P-values overstate the evidence against the null hypothesis continues to be taken as gospel in discussions of significance tests. All such discussions, however, assume a notion of “evidence” that is at odds with significance tests: generally, Bayesian probabilities of the sort used in the Jeffreys–Lindley disagreement (default or “I’m selecting from an urn of nulls” variety). …
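The Jeffreys–Lindley disagreement can be made concrete with a small computation. The sketch below is illustrative only: it assumes a normal model with known variance and a default N(0, τ²) prior on the alternative (choices not taken from the text), fixes the sample mean exactly at the two-sided p = 0.05 cutoff, and shows that the Bayes factor in favor of the null nonetheless grows without bound as the sample size increases.

```python
from math import exp, pi, sqrt

def normal_pdf(x, var):
    """Density at x of a mean-zero normal with variance `var`."""
    return exp(-x * x / (2 * var)) / sqrt(2 * pi * var)

def bf01(n, z=1.96, sigma=1.0, tau=1.0):
    """Bayes factor for H0: theta = 0 against H1: theta ~ N(0, tau^2),
    with the sample mean pinned at the two-sided p = 0.05 boundary."""
    se2 = sigma ** 2 / n          # variance of the sample mean
    xbar = z * sqrt(se2)          # observation fixed at "just significant"
    # Marginal density under H1 integrates the normal prior analytically.
    return normal_pdf(xbar, se2) / normal_pdf(xbar, tau ** 2 + se2)

for n in (10, 1000, 100000):
    print(n, round(bf01(n), 2))
```

At every sample size the data are “significant at the 5% level,” yet the Bayes factor increasingly favors the null, which is the nub of the disagreement between the two notions of evidence.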
The talk gives a formal analysis of public lies, and explains how public lying is related to public announcement.
Constructive empiricism is the version of scientific anti-realism
promulgated by Bas van Fraassen in his famous book The Scientific
Image (1980). Van Fraassen defines the view as follows:
Science aims to give us theories which are empirically adequate; and
acceptance of a theory involves as belief only that it is empirically
adequate. (1980, 12)
With his doctrine of constructive empiricism, van Fraassen is widely
credited with rehabilitating scientific anti-realism. There has been a
contentious debate within the philosophy of science community over
whether constructive empiricism is true or false.
Recently there has been a lot of discussion of the value of the Implicit Association Test (IAT) as a measure of implicit bias — discussion generated largely by a new paper by Calvin Lai, Patrick Forscher and their colleagues that presents the results of a meta-analysis of studies conducted using the IAT, plus a provocative article in New York magazine by Jesse Singal that discusses that paper and the methodological controversy it’s a part of. …
Recently, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson published a paper in the British Journal for the Philosophy of Science in which they claim that the Principal Principle entails the Principle of Indifference -- indeed, the paper is called 'The Principal Principle implies the Principle of Indifference'. …
All Bayesian epistemologists agree on two claims. The first — which we might call Precise Credences — says that an agent’s doxastic state at a given time t in her epistemic life can be represented by a single credence function Pt, which assigns to each proposition A about which she has an opinion a precise numerical value Pt(A) that is at least 0 and at most 1. Pt(A) is the agent’s credence in A at t. It measures how strongly she believes A at t, or how confident she is at t that A is true. The second — which is typically called Probabilism — says that an agent’s credence function at a given time should be a probability function — that is, for all times t, Pt(⊤) = 1 for any tautology ⊤, Pt(⊥) = 0 for any contradiction ⊥, and Pt(A ∨ B) = Pt(A) + Pt(B) − Pt(AB) for any propositions A and B.
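The three Probabilism constraints can be checked mechanically for a credence function defined on a small space of propositions. The sketch below is a toy model under stated assumptions: propositions are modeled as sets of possible worlds, the world names and weights are hypothetical, and `credence` is just one example of a function satisfying the axioms.

```python
from itertools import chain, combinations

# Worlds of a toy model; a proposition is the set of worlds where it holds.
WORLDS = frozenset({"w1", "w2", "w3"})

# A credence function induced by weights on worlds (hypothetical numbers).
weights = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

def credence(prop):
    """P_t(A): summed weight of the worlds in which A holds."""
    return sum(weights[w] for w in prop)

def propositions(worlds):
    """All subsets of the world set, i.e. every proposition."""
    ws = list(worlds)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ws, r) for r in range(len(ws) + 1))]

def is_probability(cr, worlds, tol=1e-9):
    """Check Probabilism: the tautology gets 1, the contradiction gets 0,
    and cr(A or B) = cr(A) + cr(B) - cr(A and B) for all propositions."""
    props = propositions(worlds)
    if abs(cr(worlds) - 1) > tol or abs(cr(frozenset())) > tol:
        return False
    return all(abs(cr(a | b) - (cr(a) + cr(b) - cr(a & b))) <= tol
               for a in props for b in props)

print(is_probability(credence, WORLDS))
```

Any assignment of non-negative world weights summing to one passes the check, which is why representing credences this way automatically delivers Probabilism.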
Biologist Steve Jones claims that a piece of research cannot be science if the person who did the research does not communicate their findings. He then dismisses Fermat’s proof of his last theorem as something that Fermat might as well not have done. I give reasons to reject the argument Jones offers for his communication requirement, the requirement itself and what he says about Fermat’s last theorem.
Is it appropriate to convict and punish defendants using only statistical evidence? In this paper, I argue that it is not and try to explain why it is not. This is difficult to do because there is a powerful argument for thinking that we should convict and punish using statistical evidence. It looks as if the relevant cases are cases of decision under risk and it seems pretty clear that we should act to maximize expected value in such cases. Given some standard assumptions about the values at stake, the case for convicting and punishing using statistical evidence seems solid. In trying to show where this argument goes wrong, I shall argue (against Lockeans, reliabilists, and others) that beliefs supported only by statistical evidence are epistemically defective and (against Enoch, Fisher, and Spectre) that these epistemic considerations should matter to the law. The key to solving the puzzle about the role of statistical evidence in the law is to revise some commonly held views about epistemic value and to defend the relevance of epistemology to this practical question.
Explanation is a central concept in human psychology. Drawing upon philosophical theories of explanation, psychologists have recently begun to examine the relationship between explanation, probability and causality. Our study advances this growing literature in the intersection of psychology and philosophy of science by systematically investigating how judgments of explanatory power are affected by (i) the prior credibility of a potential explanation, (ii) the causal framing used to describe the explanation, (iii) the generalizability of the explanation, and (iv) its statistical relevance for the evidence. Collectively, the results of our five experiments support the hypothesis that the prior credibility of a causal explanation plays a central role in explanatory reasoning: first, because of the presence of strong main effects on judgments of explanatory power, and second, because of the gate-keeping role it has for other factors. Highly credible explanations were not susceptible to causal framing effects. Instead, highly credible hypotheses were sensitive to the effects of factors which are usually considered relevant from a normative point of view: the generalizability of an explanation, and its statistical relevance for the evidence. These results advance current literature in the philosophy and psychology of explanation in three ways. First, they yield a more nuanced understanding of the determinants of judgments of explanatory power, and the interaction between these factors. Second, they illuminate the close relationship between prior beliefs and explanatory power. Third, they clarify the relationship between abductive and probabilistic reasoning.
A small probability space representation of quantum mechanical probabilities is defined as a collection of Kolmogorovian probability spaces, each of which is associated with a context of a maximal set of compatible measurements, that portrays quantum probabilities as Kolmogorovian probabilities of classical events. Bell’s theorem is stated and analyzed in terms of the small probability space formalism.
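The idea of one classical probability space per maximal measurement context can be illustrated for the simplest quantum system. The sketch below is a minimal toy example under stated assumptions: a single qubit with real amplitudes, two contexts (Z- and X-basis measurements), and Born-rule probabilities; it shows only that each context, taken alone, yields an ordinary Kolmogorovian distribution.

```python
from math import isclose, sqrt

# A qubit state (amplitudes in the Z basis); the numbers are illustrative.
psi = (3 / 5, 4 / 5)

def amp(bra, ket):
    """Inner product <bra|ket> for real two-component vectors."""
    return bra[0] * ket[0] + bra[1] * ket[1]

# Each maximal measurement context supplies its own orthonormal basis,
# hence its own classical sample space of outcomes.
contexts = {
    "Z": [(1, 0), (0, 1)],
    "X": [(1 / sqrt(2), 1 / sqrt(2)), (1 / sqrt(2), -1 / sqrt(2))],
}

# Within each context the Born-rule probabilities form an ordinary
# (Kolmogorovian) distribution: non-negative and summing to one.
for name, basis in contexts.items():
    probs = [amp(e, psi) ** 2 for e in basis]
    assert all(p >= 0 for p in probs) and isclose(sum(probs), 1.0)
    print(name, [round(p, 4) for p in probs])
```

The non-classical features enter only when one asks how the separate context-bound spaces fit together, which is where Bell-type constraints bite.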
Both advocates and critics of experimental philosophy often describe it in narrow terms as being the empirical study of people’s intuitions about philosophical cases. This conception corresponds with a narrow origin story for the field—it grew out of a dissatisfaction with the uncritical use of philosophers’ own intuitions as evidence for philosophical claims. In contrast, a growing number of experimental philosophers have explicitly embraced a broad conception of the sub-discipline, which treats it as simply the use of empirical methods to inform philosophical problems. And this conception has a corresponding broad origin story—the field grew out of a recognition that philosophers often make empirical claims and that empirical claims call for empirical support. In this paper, I argue that the broad conception should be accepted, offering support for the broad origin story.
Suppose a Newtonian universe where an elastic and perfectly round ball is dropped. At some point in time, the surface of the ball will no longer be spherical. If an object is F at one time and not F at another, while existing all the while, at least normally the object changes in respect of being F. I am not claiming that that is what change in respect of F is (as I said recently in a comment, I think there is more to change than that), but only that normally this is a necessary and sufficient condition for it. …
Roger White has drawn my attention to an interesting problem, having to do with what to believe in a situation in which you have evidence that the world is infinite. I will build up to the situation in stages.
Participants evaluated whether emotions expressed in facial displays by the self and by a stranger were responses to particular emotion-eliciting photos or not. Performance for the self was superior to that for a stranger when the paired eliciting stimuli produced different emotions (e.g. sad vs. cute), but not when they produced the same emotion (e.g. both amusing), supporting a “common code” rather than a memory account.
To examine the role of episodic thought about the past and future in moral judgment, we administered a well-established moral judgment battery to individuals with hippocampal damage and deficits in episodic thought (Greene et al. 2001). Healthy controls select deontological answers in high-conflict moral scenarios more frequently when they vividly imagine themselves in the scenarios than when they imagine the scenarios abstractly, at some personal remove. If this bias is mediated by episodic thought, individuals with deficits in episodic thought should not exhibit this effect. We report that individuals with deficits in episodic memory and future thought make moral judgments and nonetheless exhibit the biasing effect of vivid, personal imaginings on moral judgment. These results strongly suggest that the biasing effect of vivid personal imagining on moral judgment is not due to episodic thought about the past and future. © 2016 Wiley Periodicals, Inc.
In a choice between saving five people or saving another person, is it better to save the five, other things being equal? According to utilitarianism, it would be better to save the five if the combined gain in well-being for them would be greater than the loss for the one. A standard objection is that adding up the gains or losses of different people in this manner is a problematic form of interpersonal aggregation. It is far from clear, however, what more precisely is supposed to be problematic about utilitarian aggregation. The aggregation sceptics—that is, among others, John Rawls, Robert Nozick, Thomas Nagel, John M. Taurek, and T. M. Scanlon—have not offered a clear criterion for what counts as a morally problematic form of aggregation and what does not. Hence it is hard to know what to make of this objection.
Bas van Fraassen claims that constructive empiricism strikes a balance between the empiricist’s commitments to epistemic modesty – that one’s opinion should extend no further beyond the deliverances of experience than is necessary – and to the rationality of science. In “Should the Empiricist be a Constructive Empiricist?” I argued that if the constructive empiricist follows through on her commitment to epistemic modesty she will find herself adopting a much more extreme position than van Fraassen suggests. Van Fraassen and Bradley Monton have recently responded. My purpose here is to contest their response. The goal is not merely the rebuttal of a rebuttal; there is a lesson to learn concerning the realist/anti-realist dialectic generated by van Fraassen’s view.
In this paper, I will examine the representative halfer and thirder solutions to the Sleeping Beauty problem. Then, by properly applying the concept of an event in probability theory and examining the similarity of the Sleeping Beauty problem to the Monty Hall problem, I conclude that the representative thirder solution is wrong and the halfers are right, but that the representative halfer solution also contains a flawed logical step.
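The Monty Hall side of the comparison is easy to check by simulation. The sketch below illustrates only the standard Monty Hall result (stay wins about 1/3 of the time, switch about 2/3); it makes no claim about the paper's argument for carrying that analysis over to Sleeping Beauty, and the deterministic host tie-breaking rule is a simplifying assumption.

```python
import random

def monty_hall_trial(switch, rng):
    """One play: the car is placed at random; you pick door 0; the host
    opens a goat door you didn't pick; you stay or switch to the
    remaining closed door. Returns True if you win the car."""
    car = rng.randrange(3)
    pick = 0
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

n = 100_000
rng = random.Random(0)
stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
rng = random.Random(0)
switch = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
print(round(stay, 2), round(switch, 2))
```

The crucial point the simulation makes vivid is that the host's opening of a door is not a neutral event: which door he can open depends on where the car is, and a correct event description must reflect that.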
Is it ever rational to calculate expected utilities? Posted on Wednesday, 04 Jan 2017
Decision theory says that faced with a number of options, one
should choose an option that maximizes expected utility. …
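The maximization rule the post starts from can be stated in a few lines of code. The sketch below is a minimal illustration; the option names and the probability/utility numbers are hypothetical, chosen only to show the rule picking the option with the higher expected utility.

```python
def expected_utility(option):
    """Sum of probability-weighted utilities over an option's outcomes."""
    return sum(p * u for p, u in option)

# Hypothetical options: each is a list of (probability, utility) pairs.
options = {
    "safe bet": [(1.0, 10)],
    "gamble":   [(0.5, 30), (0.5, -5)],
}

# Decision theory's recommendation: maximize expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best, expected_utility(options[best]))
```

Here the gamble's expected utility (0.5 × 30 + 0.5 × (−5) = 12.5) exceeds the safe bet's 10, so the rule recommends the gamble; the post's question is whether performing such a calculation is itself ever the rational act.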
A number of philosophers have recently found it congenial to talk in terms of grounding. Grounding discourse features grounding sentences that are answers to questions about what grounds what. The goal of this article is to explore and defend a counterpart-theoretic interpretation of grounding discourse. We are familiar with David Lewis’s applications of the method of counterpart theory to de re modal discourse. Counterpart-theoretic interpretations of de re modal idioms and grounding sentences share similar motivations, mechanisms, and applications. I shall explain my motivations and describe two applications of a counterpart theory for grounding discourse. But, in this article, my main focus is on counterpart-theoretic mechanisms.
Just as I have in each of the past 5 years I’ve been blogging, I revisit that spot in the road at 11 p.m., just outside the Elbar Room, get into a strange-looking taxi, and head to “Midnight With Birnbaum”. (The pic on the left is the only blurry image I have of the club I’m taken to.) …
Social scientists use many different methods, and there are often substantial disagreements about which method is appropriate for a given research question. In response to this uncertainty about the relative merits of different methods, W. E. B. Du Bois advocated for and applied “methodological triangulation”. This is to use multiple methods simultaneously in the belief that, where one is uncertain about the reliability of any given method, if multiple methods yield the same answer that answer is confirmed more strongly than it could have been by any single method. Against this, methodological purists believe that one should choose a single appropriate method and stick with it.
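The confirmational claim behind triangulation admits a simple Bayesian gloss. The sketch below is a toy model under strong assumptions not made in the text: exactly two possible answers, methods that are independent given the truth, and a single reliability number per method. It shows only that agreement among two such methods confirms the shared answer more strongly than either alone.

```python
def posterior_after_agreement(prior, reliabilities):
    """Posterior that an answer is correct after several independent
    methods, each with the given chance of reporting correctly, all
    return that same answer (a simple Bayes computation)."""
    p_true, p_false = prior, 1 - prior
    for r in reliabilities:
        p_true *= r        # the report, given the answer is right
        p_false *= 1 - r   # the same report, given the answer is wrong
    return p_true / (p_true + p_false)

one = posterior_after_agreement(0.5, [0.8])
two = posterior_after_agreement(0.5, [0.8, 0.8])
print(round(one, 3), round(two, 3))
```

The purist's worry survives the arithmetic: the boost from agreement depends on the independence assumption, which is exactly what is in dispute when the methods share sources of error.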
So turning to Nick Smith’s long and discursive book, what line does he take on the relationship between everyday conditionals and the material conditional? Smith, as usual, sets aside subjunctive conditionals; the issue then is the relation between indicative conditionals and the truth-functional conditional (in his preferred notation). …
When logical fallacies of statistics go uncorrected, they are repeated again and again…and again. And so it is with the limb-sawing fallacy I first posted in one of my “Overheard at the Comedy Hour” posts. …
The project of this part of the book is to show how standard inductive inference forms are materially grounded in background facts, as opposed to inductive inference schemas of universal applicability. This chapter and the next address the inductive inference form known as “inference to the best explanation” or “abduction.” The leading idea is that a theory or hypothesis must do more than merely accommodate or predict the evidence. If it is to accrue inductive support from the evidence, it must explain it. Since multiple explanations are possible, we are enjoined to infer to the best of them. That means that greater explanatory prowess confers greater inductive support. In 1964, Penzias and Wilson found puzzling residual noise in their radio antenna that turned out to be cosmic in origin. Subsequent investigation showed it to be thermal radiation at 2.7 K. The radiation was explained by big bang cosmology as the much diluted and cooled thermal radiation left over from the hot big bang over 10^10 years ago. The competing steady state cosmology and other now less well-known models could provide no comparably strong explanation. Cosmologists inferred to big bang cosmology as the best explanation.
What is the relationship between the ordinary language conditional and the material conditional which standard first-order logic uses as its counterpart, surrogate, or replacement? Let’s take it as agreed for present purposes that there is a distinction to be drawn between two kinds of conditional, traditionally “indicative” and “subjunctive” (we can argue the toss about the aptness of these labels for the two kinds, and argue further about where the boundary between the two kinds is to be drawn: but let’s set such worries aside). …
Measures of epistemic utility are used by formal epistemologists to make determinations of epistemic betterness among cognitive states. The Brier rule is the most popular choice (by far) among formal epistemologists for such a measure. In this paper, however, we show that the Brier rule is sometimes seriously wrong about whether one cognitive state is epistemically better than another. In particular, there are cases where an agent gets evidence that definitively eliminates a false hypothesis (and the probabilities assigned to the other hypotheses stay in the same ratios), but the Brier rule says that things have gotten epistemically worse. Along the way to this ‘elimination experiment’ counter-example to the Brier rule as a measure of epistemic utility, we identify several useful monotonicity principles for epistemic betterness. We also reply to several potential objections to this counter-example.
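The shape of the elimination counter-example can be reproduced with a few lines of arithmetic. The sketch below uses hypothetical numbers of my own choosing (only the structure — eliminate a false hypothesis, keep the surviving credences in the same ratios — follows the abstract): the true hypothesis starts with low credence, a false hypothesis dominates, and eliminating a third, false hypothesis makes the Brier penalty worse.

```python
def brier(credences, true_index):
    """Brier penalty: squared distance between the credence vector and
    the truth vector (lower is epistemically better)."""
    return sum((c - (1 if i == true_index else 0)) ** 2
               for i, c in enumerate(credences))

# Hypothetical elimination experiment over three hypotheses; H1 is true.
before = [0.1, 0.8, 0.1]
# Evidence eliminates H3 (false); the other credences keep their ratios.
total = before[0] + before[1]
after = [before[0] / total, before[1] / total, 0.0]

print(round(brier(before, 0), 3), round(brier(after, 0), 3))
```

Eliminating the false H3 pushes extra credence onto the dominant false H2, so the Brier penalty rises even though the agent has plainly learned something, which is the intuition the counter-example trades on.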
Suppose one person is drowning on your left and the other two on your far right. You can easily go on a rescue with your boat. But, unfortunately, there is not sufficient time to save all three. Faced with the choice between (a) saving person A and letting persons B and C die, and (b) saving B and C while letting A die, what should you do? John Taurek (1977) argued that you should flip a coin to decide whom to save so as to give each individual an equal chance of being rescued, thereby showing equal and positive respect to each. For, as he sees it, there is no impersonal perspective from which the death of two is twice as bad as that of one, and thus no particular reason to favor the larger group over the smaller one. Although based on a moral conviction that is widely recognized, this conclusion has left many uneasy, their intuition being that we ought to save the two. Many consequentialists can offer a straightforward rationale for this intuition by appealing to interpersonal aggregation. But many other philosophers have attempted to provide a justification for the duty to save the greater number without combining the utilities and claims of separate individuals and thus opening the door to the tyranny of the majority.
We consider it to be a bad thing to be inconsistent. Similarly, we
criticize others for failing to appreciate (at least the more obvious)
logical consequences of their beliefs. In both cases there is a
failure to conform one’s attitudes to logical strictures. We
generally take agents who fall short of the demands of logic to be
rationally defective. This suggests that logic has a normative role to
play in our rational economy; it instructs us how we ought or ought
not to think or reason. The notion that logic has such a normative role
to play is deeply anchored in the way we traditionally think about
logic as well as in the way we teach logic.
Here is how Ephraim Glick puts the first premise of my argument for the existence of propositions:

(M1) ∃xx ∃y (~(y < xx) & ☐(xx are true → y is true))

Glick’s (M1) is a better—a more precise—way of stating that premise than is the way I usually state it, which is: ‘there are modally valid arguments’.