A perfectly rational agent who is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other. …
The modal properties of the principle of the causal closure of the physical have traditionally been said to prevent anything outside the physical world from affecting the physical universe and vice versa. This idea has been shown to be relative to the definition of the principle (Gamper 2017). A traditional definition prevents one universe from affecting any other universe, but with a modified definition, such as Gamper's (2017), the causal closure of the physical is consistent with the possibility of one universe affecting another. Gamper (2017) proved this modal property by implementing interfaces between universes. Interfaces are thus possible, but are they realistic? To answer this question, I propose a two-step process whose second step is scientific research. The first step, however, is to fill the gap between the principles, or basic assumptions, and science with a consistent theoretical framework that accommodates the modal properties of an ontology matching the basic assumptions.
Elsewhere, I have categorically and unequivocally denied Heidi Howkins Lockwood's allegation that I groped her in October 2007. Of course, I do not expect people to take my denial at face value. Anyone can deny anything. …
Robert Batterman and others have argued that certain idealizing explanations have an asymptotic form: they account for a state of affairs or behavior by showing that it emerges “in the limit”. Asymptotic idealizations are interesting in many ways, but is there anything special about them as idealizations? To understand their role in science, must we augment our philosophical theories of idealization? This paper uses simple examples of asymptotic idealization in population genetics to argue for an affirmative answer and proposes a general schema for asymptotic idealization, drawing on insights from Batterman’s treatment and from John Norton’s subsequent critique.
According to a standard interpretation, Plato’s conception of our moral psychology evolved over the course of his written dialogues. In his earlier dialogues, notably the Protagoras, Meno, and Gorgias, Plato’s Socrates maintains that we always do what we believe is best. Many commentators infer from this that Socrates holds that the psyche is simple, in the sense that there is only one ultimate source of motivation: reason. By contrast, in the Republic, Phaedrus, and Timaeus, Socrates holds that the psyche is complex, or has three distinct and semi-autonomous sources of motivation, which he calls the reasoning, spirited, and appetitive parts. While the rational part determines what is best overall and motivates us to pursue it, the spirited and appetitive parts incline us toward different objectives, such as victory, honor, and esteem, or the satisfaction of our desires for food, drink, and sex.
One of the most distinctive capacities of the human mind is the capacity to think about number. Without it, modern economic life would be impossible, science would never have developed, and the complex technology that surrounds us would not exist. Though the full range of human numerical abilities is vast, the positive integers are arguably foundational to the rest of numerical cognition, and they will be our focus here. Many theorists have noted that although animals can represent quantity in some respects, they are unable to represent precise integer values. There has been much speculation about why this is so, but a common answer is that it is because animals lack another characteristic feature of human minds—natural language. In this chapter, we examine the question of whether there is an essential connection between language and number, while looking more broadly at some of the potential innate precursors to the acquisition of the positive integers. A full treatment of this topic would require an extensive review of the empirical literature, something we do not have space for. Instead, we intend to concentrate on the theoretical question of how language may figure in an account of the ontogeny of the positive integers.
Game-theoretic approaches to social norms have flourished in recent years, and on first inspection theorists seem to agree on the broad lines that such accounts should follow. Against this appearance, this paper aims to show that the two main interpretations of social norms are at odds over at least one aspect of social norms, and that both fail to account for another.
The primary objective of this paper is to introduce a new epistemic paradox that puts pressure on the claim that justification is closed under multi-premise deduction. The first part of the paper will consider two well-known paradoxes—the lottery and the preface paradox—and outline two popular strategies for solving them without denying closure. The second part will introduce a new, structurally related paradox that is immune to these closure-preserving solutions. I will call this paradox the Paradox of the Pill. Since the prominent closure-preserving solutions do not apply to the new paradox, I will argue that it presents a much stronger case against the claim that justification is closed under deduction than its two predecessors. Besides presenting a more robust counterexample to closure, the new paradox also reveals that the strategies previously thought to get closure out of trouble are not sufficiently general, as they fail to apply to similar closure-threatening paradoxes in the same vicinity.
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are regarded as undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence.
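For reference, the ratio definition the abstract refers to can be written out explicitly; the following is the standard textbook formulation, supplied here for illustration rather than drawn from the paper itself:

```latex
% Kolmogorov's ratio definition of conditional probability:
P(A \mid B) = \frac{P(A \cap B)}{P(B)} \qquad \text{provided } P(B) > 0.

% When P(B) = 0, the ratio is undefined. Independence is therefore
% standardly defined via the product form,
P(A \cap B) = P(A)\, P(B),
% which remains well defined even when P(B) = 0, unlike the
% conditional form P(A \mid B) = P(A).
```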
José Ortega y Gasset (1883–1955) was a prolific and
distinguished Spanish philosopher in the twentieth century. In the
course of his career as philosopher, social theorist, essayist,
cultural and aesthetic critic, educator, politician and editor of the
influential journal Revista de Occidente, he wrote on
a broad range of themes and issues. Among his many books are:
Meditations on Quixote (1914), Invertebrate Spain
(1921), The Theme of Our Time (1923), Ideas on the
Novel (1925), The Dehumanization of Art (1925), What
is Philosophy? (1929), The Revolt of the Masses (1930),
En Torno a Galileo [Man and Crisis] (1933),
History as a System (1935), Man and People
(1939–40), The Origin of Philosophy (1943), The
Idea of Principle in Leibnitz and the Evolution of Deductive Theory (1958).
According to Ian Hacking, some human kinds are subject to a peculiar type of classificatory instability: individuals change in reaction to being classified, which in turn leads to a revision of our understanding of the kind. Hacking’s claim that these ‘human interactive kinds’ cannot be natural kinds has been vehemently criticised on the grounds that similar patterns of instability occur in paradigmatic examples of natural kinds. I argue that the dialectic of the extant debate misses the core conceptual problem of human interactive kinds. The problem is not that these kinds are particularly unstable but that they are ‘capricious’: their members behave in wayward, unexpected ways that defeat existing theoretical understanding. The reason, I argue, is that human interactive kinds are often ‘hybrid kinds’ consisting of a base kind and an associated status, which makes the mechanisms that support patterns of change and stability systematically difficult to understand and predict.
When we construct a model of something, we must distinguish those features of the model which represent features of that which we model, from those features which are intrinsic to the model and play no representational role. The latter are artifacts of the model. For example, if we use string to make a model of a polygon, the shape of the model represents a feature of the polygon, and the size of the model may or may not represent a feature of the polygon, but the thickness and three-dimensionality of the string are certainly artifacts of the model.
The previous two chapters have sought to show that the probability calculus cannot serve as a universally applicable logic of inductive inference. We may well wonder whether there might be some other calculus of inductive inference that can be applied universally. It would, perhaps, arise through a weakening of the probability calculus. The principal source of difficulty addressed in those chapters was the additivity of the probability calculus. Such a weakening seems possible as far as additivity is concerned. Something like it is achieved with the Shafer-Dempster theory of belief functions. However, there is a second, lingering problem. Bayesian analyses require prior probabilities. As we shall see below, these prior probabilities are never benign. They always make a difference to the final result.
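For concreteness, the additivity property at issue, and the weakening achieved by belief functions, can be stated explicitly; these are the standard formulations, given here for illustration:

```latex
% Finite additivity, the axiom the probability calculus imposes:
% for mutually exclusive propositions A and B,
P(A \vee B) = P(A) + P(B) \qquad \text{when } A \wedge B \text{ is impossible.}

% Shafer--Dempster belief functions weaken this to superadditivity:
Bel(A \vee B) \ge Bel(A) + Bel(B) \qquad \text{for mutually exclusive } A, B.
```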
The many chapters of this book all aim to sustain a single conclusion. Inductive inferences are not warranted by formal schemas or rules. They are warranted by background facts. Over the last few years, I have had the opportunity of presenting this thesis and arguments for it in various philosophical forums. The reactions to it have been varied. Some find the idea illuminating and even obvious, once it is made explicit. They are supportive and I am grateful for it. Others are more neutral, reacting with various forms of indifference or incomprehension. Some set aside the question of whether they are or are not convinced by the main claim; or whether there is some way that they could help the speaker advance the project. Rather they hold to the lamentable idea that, no matter what, the job of an audience in a philosophy talk is to try to trip up the speaker with some artful sophistry. Still others are, perhaps, not quite sure of precisely what I am proposing and arguing. But they are nonetheless sure that it is a Very Bad Thing that must be opposed and stopped.
Erich Lehmann 20 November 1917 – 12 September 2009
Erich Lehmann was born 100 years ago today! Lehmann was Neyman’s first student at Berkeley (Ph.D. 1942), and his framing of Neyman-Pearson (NP) methods has had an enormous influence on the way we typically view them. …
According to Dominic Lopes, expressiveness in pictures should be analyzed solely in terms of “expression looks” of various sorts, namely the look of a figure, a scene and/or a design. But, according to this view, it seems puzzling that expressive pictures should have any emotional effect on their audiences. Yet Lopes explicitly ties his “contour theory” of expression in pictures to empathic responses in spectators. Thus, despite his deflationary account of pictorial expression, he claims that pictures can give us practice in various “empathic skills.” I argue that Lopes’s account of empathic responses to pictures, while interesting and enlightening, nevertheless ignores the most important way in which pictures exercise and enhance our empathic skills, namely, by giving us practice in taking the emotional perspective of another person.
Although direct, or causal, theories appear to dominate philosophy’s theories of reference, and it is widely held that they present an insuperable obstacle to a fictional character’s name referring, I attempt to show not only that fictional names can easily be made compatible with such theories, but that reference to the fictional fits rather smoothly into the distinctive articles of current theories of direct reference. However, the issues about reference to fictional characters go well beyond those points, so compatibility with direct referential theories is not a demonstration that names of fictional things in fact refer. This essay argues only that certain popular objections to fictional reference are unsound. Moreover, if those references were to occur, their occurrence would remove a serious self-inflicted conundrum over negative existentials, one from which those raising it seem unable to extract themselves credibly.
This paper argues that one promising recent account of moral worth, Julia Markovits’s Coincident Reasons Thesis, does not afford us the ability to draw a traditional distinction between actions that are performed from immediate inclination and actions that possess genuine moral worth (“Kant’s distinction”). I argue that a right-making reasons account of moral worth such as Markovits’s can in fact accommodate Kant’s distinction. To make this possible, we must go beyond the consideration of instrumental reasons and equip a right-making reasons account with a new, more general, sort of reason: a conditional reason. After introducing the notion of a conditional reason, I argue that one class of these reasons, hedonically conditional reasons, are crucial for understanding the reasons for which we pursue hobbies. Kant’s figure who pursues morally right action from immediate inclination might be understood as a moral hobbyist whose actions plausibly lack moral worth. This grounds an objection to Markovits’s own account as a sufficient condition on morally worthy action.
I argue that the function attributed to episodic memory by Mahr & Csibra (that is, grounding one’s claims to epistemic authority over past events) fails to support the essentially autonoetic character of such memories. I suggest, in contrast, that episodic event-memories are sometimes purely first-order, sometimes autonoetic, depending on relevance in the context.
You aren’t supposed to talk about it. Not really. And certainly not in front of the kids. But that isn’t why you don’t remember it. That isn’t why you don’t remember the way it feels. You don’t remember the way it feels because it doesn’t leave a memory trace to begin with. The facts are retained, but the feeling disappears. What I’m alluding to is the pain of childbirth—hush, don’t let my kids read this, but it did hurt! Yet although I can remember that labor pains hurt, I can’t remember what they felt like. Although I can remember that they were too traumatic to sleep through and that while standing under the shower trying to alleviate the agony, I tore down the soap dish bolted into the wall, I can’t conjure up the sensory experience itself. Although my memory of the events leading up to the birth is pellucid—I remember how the nurses were impressed that I wanted to suffer through it unmedicated and how, when it came down to the wire, my obstetrician started humming Blue Moon—my memory of the bodily sensations is nonexistent. Introspection, here, reveals an utter blank. Contrary to the adage about experience being the best teacher, experience’s pedagogy was an utter failure.
Modern work on context dependence in natural language has emerged from a tradition that combines formal and conceptual work. On the formal side, theorists like Montague (1970), Lewis (1970), and Kamp (1971) developed formal frameworks that successfully modeled the relevant aspects of natural language. On the conceptual side, theorists like Kaplan (1989a,b) and Lewis (1980) proposed ways to link the formal apparatus to notions that have an important role in philosophy of language, like reference and content. David Kaplan’s work in Demonstratives (1989a) has been particularly influential in shaping the way in which both philosophers and linguists think of context. Kaplan defends the idea that the semantics of natural language makes essential use of a context parameter, i.e. (roughly) a set of coordinates that constitutes an abstract representation of the situation of speech. According to Kaplan, this parameter plays two key roles, which set it apart from other coordinates (so-called index coordinates) used in the semantics. First, context contributes to determining the content expressed by an utterance. Second, having a notion of context in the semantics is crucial for defining an appropriate notion of logical consequence.
What is the role of affective experience in explaining how our desires provide us with reasons for action? When we desire that p, we are thereby disposed to feel attracted to the prospect that p, or to feel averse to the prospect that not-p. In this paper, we argue that affective experiences – including feelings of attraction and aversion – provide us with reasons for action in virtue of their phenomenal character. Moreover, we argue that desires provide us with reasons for action only insofar as they are dispositions to have affective experiences. On this account, affective experience has a central role to play in explaining how desires provide reasons for action.
This article develops an account of local epistemic practices on the basis of case studies from ethnobiology. I argue that current debates about objectivity often stand in the way of a more adequate understanding of local knowledge and ethnobiological practices in general. While local knowledge about the biological world often meets criteria for objectivity in philosophy of science, general debates about the objectivity of local knowledge can also obscure its unique epistemic features. Modifying Ian Hacking’s suggestion to discuss “ground level questions” instead of objectivity, I propose an account that focuses on both epistemic virtues and vices of local epistemic practices.
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance.
Early in Rosencrantz and Guildenstern Are Dead, the title characters are betting on coin throws. Rosencrantz has a standing bet on heads, and he keeps winning, pocketing coin after coin. We soon learn that this has been going on for some time, and that no fewer than 76 consecutive heads have been thrown, and counting — a situation which is making Guildenstern increasingly uneasy. The coins don’t appear to be double-headed or weighted or anything like that — just ordinary coins — leading Guildenstern to consider several unsettling explanations: that he is subconsciously willing the coins to land heads in order to cleanse himself of some repressed sin, that they are both trapped reliving the same moment in time over and over again, that the coins are being controlled by some menacing supernatural force. He then proposes a fourth hypothesis, which suggests a change of heart: that nothing surprising is happening at all and no special explanation is needed. He says, “… each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.” In the end, 92 heads are thrown without a single tail before the characters are interrupted.
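As an aside, the improbability Guildenstern is up against is easy to quantify: on the fair-coin hypothesis, tosses are independent, so a run of n heads has probability (1/2)^n. A minimal sketch (the function name is ours, for illustration only):

```python
def run_probability(n: int, p_heads: float = 0.5) -> float:
    """Probability of n consecutive heads when each independent toss
    lands heads with probability p_heads."""
    return p_heads ** n

# The 76-head run already observed, and the play's final tally of 92:
print(run_probability(76))  # roughly 1.3e-23
print(run_probability(92))  # roughly 2.0e-28
```

This is the sense in which Guildenstern's fourth hypothesis is compatible with the run: each individual toss is unsurprising, even though the conjunction is astronomically improbable.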
Karl Jaspers (1883–1969) began his academic career working as
a psychiatrist and, after a period of transition, he converted to
philosophy in the early 1920s. Throughout the middle decades of the
twentieth century he exercised considerable influence on a number of
areas of philosophical inquiry: especially on epistemology, the
philosophy of religion, and political theory. The influence of Kant on Jaspers is widely acknowledged in the
literature, to the extent that he has been depicted as “The
first and the last Kantian” (Heinrich Barth, quoted in Ehrlich
1975, 211). Usually this evaluation is based on his reliance on the
subjective-experiential transformation of Kantian philosophy, which
reconstructs Kantian transcendentalism as a doctrine of particular
experience and spontaneous freedom, and emphasizes the constitutive
importance of lived existence for authentic knowledge.
The notion of preference has a central role in many disciplines,
including moral philosophy and decision theory. Preferences and their
logical properties also have a central role in rational choice theory,
a subject that in its turn permeates modern economics, as well as
other branches of formalized social science. The notion of preference
and the way it is analysed vary between these disciplines. A treatment
that takes into account the needs of all these usages and combines them
in a unified approach is still lacking. This entry surveys the
most important philosophical uses of the preference concept and
investigates their compatibilities and conflicts.
The Problem of Old Evidence is a perennial issue for Bayesian confirmation theory. Garber (1983) famously argues that the problem can be solved by conditionalizing on the proposition that a hypothesis deductively implies the existence of the old evidence. In recent work, Hartmann and Fitelson (2015) and Sprenger (2015) aim for similar, but more general, solutions to the Problem of Old Evidence. These solutions are more general because they allow the explanatory relationship between a new hypothesis and old evidence to be inductive, rather than deductive. In this paper, I argue that these solutions are either unsound or under-motivated, depending on the case of inductive explanation that we have in mind. This lends support to the broader claim that Garber-style Bayesian confirmation cannot capture the sense in which new hypotheses that do not deductively imply old evidence nevertheless seem to be confirmed via old evidence.
Hawking’s area theorem is a fundamental result in black hole theory that is universally associated with the null energy condition. That this condition can be weakened is illustrated by the formulation of a strengthened version of the theorem based on an energy condition that allows for violations of the null energy condition. This result tightens the conventional wisdom that quantum field theoretic violations of the null energy condition account for why the conclusion of the area theorem can be bypassed in the semi-classical context. Shown here is that violations of the null energy condition, though necessary, are not sufficient to violate the conclusion of the area theorem. As an added benefit, the specific form of the energy condition used here suggests that the area non-decrease behavior described by the area theorem is a quasi-local effect that depends, in large measure, on the energetic character of the relevant fields in the vicinity of the event horizon.
This paper considers states on the Weyl algebra of the canonical commutation relations over the phase space R2n. We show that a state is regular iff its classical limit is a countably additive Borel probability measure on R2n. It follows that one can “reduce” the state space of the Weyl algebra by altering the collection of quantum mechanical observables so that all states are ones whose classical limit is physical.