The question of whether Frege’s theory of indirect reference enforces an infinite hierarchy of senses has been hotly debated in the secondary literature. Perhaps the most influential treatment of the issue is that of Burge (1979), who offers an argument for the hierarchy from rather minimal Fregean assumptions. I argue that this argument, endorsed by many, does not itself enforce an infinite hierarchy of senses. I conclude that whether the theory of indirect reference can avail itself of only finitely many senses remains open, pending further theoretical development.
Mary in her black and white room knows all that physical science can teach us about the physical facts involved in colour experience. But it does not follow that she knows everything there is to know about these facts. The Russellian monist exploits this gap to defend a form of physicalism – in a very broad sense of that word. Unfortunately, recent developments in the grounding literature cast doubt on that strategy, or so I will argue.
Chris Tweedt proposes that there is no independent concept of contrastive knowledge. He allows that we can meaningfully and in fact helpfully say that a person knows that p rather than q. But this is shorthand for something that can be said in a more traditional way: that the person knows that if p or q, then p. I have two worries about this line. First, I do not know how to understand the conditional here. And second, I suspect that the suggested interpretation takes away the motive for using a contrastive idiom in the first place.
We discuss economic environments in which individual choice sets are fixed and the level of a certain parameter that systematically biases the preferences of all agents is determined endogenously to achieve equilibrium. Our equilibrium concept, Biased Preferences Equilibrium, is reminiscent of competitive equilibrium: agents’ choice sets and their preferences are independent of the behavior of other agents, the combined choices have to satisfy overall feasibility constraints and the endogenous adjustment of the equilibrating preference parameter is analogous to the equilibrating price adjustment. The concept is applied to a number of economic examples.
Within ordinary —unitary— quantum mechanics there exist global protocols that allow one to verify that no definite event —an outcome to which a probability can be associated— occurs. Instead, states that start in a coherent superposition over possible outcomes always remain in a superposition. We show that, when taking into account fundamental errors in measuring length and time intervals, errors that have been put forward as a consequence of a conjunction of quantum mechanical and general relativistic arguments, there are instances in which such global protocols no longer allow one to distinguish whether the state is in a superposition or not. All predictions become identical to those obtained if one of the outcomes had occurred, with probability determined by the state. We use this as a criterion to define events, as put forward in the Montevideo Interpretation of Quantum Mechanics. We analyze in detail the occurrence of events in the paradigmatic case of a particle in a superposition of two different locations. We argue that our approach provides a consistent (C) single-world (S) picture of the universe, thus allowing an economical way out of the limitations imposed by a recent theorem of Frauchiger and Renner showing that having a self-consistent single-world description of the universe is incompatible with quantum theory. In fact, the main observation of this paper may be stated as follows: if quantum mechanics is extended to include gravitational effects in a QG theory, then QG, S, and C are satisfied.
The CPT theorem states that any causal, Lorentz-invariant, thermodynamically well-behaved quantum field theory must also be invariant under a reflection symmetry that reverses the direction of time (T), flips spatial parity (P), and conjugates charge (C). Although its physical basis remains obscure, CPT symmetry appears to be necessary in order to unify quantum mechanics with relativity. This paper attempts to decipher the physical reasoning behind proofs of the CPT theorem in algebraic quantum field theory. Ultimately, CPT symmetry is linked to a systematic reversal of the C*-algebraic Lie product that encodes the generating relationship between observables and symmetries. In any physically reasonable relativistic quantum field theory it is always possible to systematically reverse this generating relationship while preserving the dynamics, spectra, and localization properties of physical systems. Rather than the product of three separate reflections, CPT symmetry is revealed to be a single global reflection of the theory’s state space.
For two centuries, collaborative research has become increasingly widespread. Various explanations of this trend have been proposed. Here, we offer a novel functional explanation of it. It differs from accounts like that of Wray (2002) in the precise socio-epistemic mechanism that grounds the beneficialness of collaboration. Boyer-Kassem and Imbert (2015) show how minor differences in the step-efficiency of collaborative groups can make them much more successful in particular configurations. We investigate this model further, derive robust social patterns concerning the general successfulness of collaborative groups, and argue that these patterns can be used to defend a general functional account.
In this paper, I give a counterexample to a claim made in Norton (2008) that empirically equivalent theories can often be regarded as theoretically equivalent by treating one as having surplus structure, thereby overcoming the problem of underdetermination of theory choice. The case I present is that of Lorentz's ether theory and Einstein's theory of special relativity. I argue that Norton's suggestion that surplus structure is present in Lorentz's theory in the form of the ether state of rest is based on a misunderstanding of the role that the ether plays in Lorentz's theory, and that in general, consideration of the conceptual framework in which a theory is embedded is vital to understanding the relationship between different theories.
Many biologists appeal to the so-called Krogh principle when justifying their choice of experimental organisms. The principle states that “for a large number of problems there will be some animal of choice, or a few such animals, on which it can be most conveniently studied”. Despite its popularity, the principle is often critiqued for implying unwarranted generalizations from optimal models. We argue that the Krogh principle should be interpreted in relation to the historical and scientific contexts in which it has been developed and used. We interpret the Krogh Principle as a heuristic, i.e., as a recommendation to approach biological problems through organisms where a specific trait or physiological mechanism is expected to be most distinctively displayed or most experimentally accessible. We designate these organisms “Krogh organisms.” We clarify the differences between uses of model organisms and non-standard Krogh organisms. Among these is the use of Krogh organisms as “negative models” in biomedical research, where organisms are chosen for their dissimilarity to human physiology. Importantly, the representational scope of Krogh organisms and the generalizability of their characteristics are not fixed or assumed but explored through experimental studies. Research on Krogh organisms is steeped in the comparative method characteristic of zoology and comparative physiology, in which studies of biological variation produce insights into general physiological constraints. Accordingly, we conclude that the Krogh principle exemplifies the advantages of studying biological variation as a strategy to produce generalizable insights.
The plausibility of theories of truth has often been observed to
vary, sometimes extensively, across different domains or regions of
discourse. Because of this variance, the problems internal to each such
theory become salient as they overgeneralize. A natural suggestion is
therefore that not all (declarative) sentences in all domains are true
in exactly the same way. Sentences in mathematics, morals, comedy,
chemistry, politics, and gastronomy may be true in different ways, if
and when they are ever true. ‘Pluralism about truth’ names
the thesis that there is more than one way of being true.
I begin the paper by outlining one classic argument for the guise of the good: that we must think that desires represent their objects favourably in order to explain why they can make actions rational (Quinn 1995; Stampe 1987). But what exactly is the conclusion of this argument? Many have recently formulated the guise of the good as the view that desires are akin to perceptual appearances of the good (Stampe 1987; Oddie 2005; Tenenbaum 2007). But I argue that this view fails to capitalize on the above argument, and that the argument is better understood as favouring a view on which desires are belief-like states. I finish by addressing some countervailing claims made by Avery Archer (2016).
This entry discusses the relationship between disability and wellbeing. Disabilities are commonly thought to be unfortunate, but whether this is true is unclear, and if it is true, it is unclear why it is true. The entry first explains the disability paradox, which is the apparent discrepancy between the level of wellbeing that disabled people self-report, and the level of wellbeing that nondisabled people predict disabled people to have. It then turns to an argument that disabilities must be bad, because it is wrong to cause them in others. Sections 4 and 5 discuss whether disabilities might be intrinsically bad or even bad by definition. The final section turns to discuss the claim that to whatever extent disabilities are bad, this is not because disabilities themselves are harmful but only because society discriminates against people with disabilities.
Timon (c. 320–230 BCE) was the younger contemporary and leading
disciple of Pyrrho of Elis. Unlike Pyrrho, he
wrote numerous poems and prose works; fragments of and reports on some
of these have survived, by far the largest number (more than sixty)
being from the Silloi (Lampoons). Several of these
works were devoted to, or at least included, laudatory descriptions of
Pyrrho and his philosophy; the Silloi appears to have
contained some passages in this vein, but consisted largely of
satirical thumbnail sketches of a wide range of other philosophers, all
of whom, in Timon’s estimation, failed wholly or partly to achieve the
ideal outlook exemplified by Pyrrho.
I review the philosophical literature on the question of when two physical theories are equivalent. This includes a discussion of empirical equivalence, which is often taken to be necessary, and sometimes taken to be sufficient, for theoretical equivalence; and “interpretational” equivalence, which is the idea that two theories are equivalent just in case they have the same interpretation. It also includes a discussion of several formal notions of equivalence that have been considered in the recent philosophical literature, including (generalized) definitional equivalence and categorical equivalence. The article concludes with a brief discussion of the relationship between equivalence and duality.
Diodorus was a pioneering logician, and the most celebrated member of
the Dialectical School of the 4th–3rd
c. BCE. His contributions to logic—in particular, definitions
of modal terms and the criteria for a sound conditional—are
covered in the article on the Dialectical School (see also Section 2
of the entry on
fatalism). The present article
adds a conspectus of Diodorus’s other ideas. His use of paradox is at
least as prominent in our ancient sources about him as are those
constructive contributions to logical theory.
Elizabeth Barnes and Robert Williams have developed a theory of metaphysical indeterminacy, via which they defend the theoretical legitimacy of vague objects. In this paper, we argue that while the Barnes-Williams theory supplies a viable account of genuine metaphysical vagueness, it cannot underwrite an account of genuinely vague objects. First we clarify the distinction between these two key theses. Then we argue that the Barnes-Williams theory of metaphysical vagueness not only fails to deliver genuinely vague objects, it in fact provides grounds for rejecting them.
At the time of his death, Max Ferdinand Scheler was one of the most
prominent German intellectuals and most sought-after philosophers of
his time. A pioneer in the development of phenomenology in the early
part of the 20th century, Scheler broke new ground in many
areas of philosophy and established himself as perhaps the most
creative of the early phenomenologists. Relative to the attention his
work received and the attention his contemporaries now enjoy, interest
in Scheler’s work and thought has waned considerably. This
decrease in attention is in part due to the suppression of
Scheler’s work by the Nazis from 1933 to 1945, a suppression
stemming from his Jewish heritage and outspoken denunciation of
fascism and National Socialism.
Recent philosophical work has praised the reward structure of science, while recent empirical work has shown that many scientific results may not be reproducible. I argue that the reward structure of science incentivizes scientists to focus on speed and impact at the expense of the reproducibility of their work, thus contributing to the so-called reproducibility crisis. I use a rational choice model to identify a set of sufficient conditions for this problem to arise, and I argue that these conditions plausibly apply to a wide range of research situations. Currently proposed solutions will not fully address this problem. Philosophical commentators should temper their optimism about the reward structure of science.
Scientific models need to be investigated if they are to provide valuable information about the systems they represent. Surprisingly, the epistemological question of what enables this investigation has hardly been investigated. Even authors who consider the inferential role of models as central, like Hughes (1997) or Bueno and Colyvan (2011), content themselves with claiming that models contain mathematical resources that provide inferential power. We claim that these notions require further analysis and argue that mathematical formalisms contribute to this inferential role. We characterize formalisms, illustrate how they extend our mathematical resources, and highlight how distinct formalisms offer various inferential affordances.
Nevertheless it is necessary to remember that there is a wider Teleology, which is not touched by the doctrine of Evolution, but is actually based upon the fundamental proposition of Evolution. That proposition is, that the whole world, living and not living, is the result of the mutual interaction, according to definite laws, of the forces possessed by the molecules of which the primitive nebulosity of the universe was composed. …
The capacity for cognition allows human and nonhuman animals to navigate the physical world effectively and adaptively. For instance, animals can estimate distances, memorize events, track objects in space, detect regularities, discriminate between small sets of objects exactly and between large sets approximately, and make causal inferences. Thus, over the last decades, developmental and comparative research have gained more and more insights into the development of human and nonhuman thinking about the natural world including its entities, regularities, and causal structure (Baillargeon and Carey 2012; Call and Tomasello 2005; Rakoczy 2014; Tomasello 2014).
When evaluating norm transgressions, children begin to show some sensitivity to the agent’s intentionality around preschool age. However, the specific developmental trajectories of different forms of such intent-based judgments and their cognitive underpinnings are still largely unclear. The current studies, therefore, systematically investigated the development of intent-based normative judgments as a function of two crucial factors: (a) the type of the agent’s mental state underlying a normative transgression, and (b) the type of norm transgressed (moral versus conventional). In Study 1, 5- and 7-year-old children as well as adults were presented with vignettes in which an agent transgressed either a moral or a conventional norm. Crucially, she did so either intentionally, accidentally (not intentionally at all) or unknowingly (intentionally, yet based on a false belief regarding the outcome). The results revealed two asymmetries in children’s intent-based judgments. First, all age groups showed greater sensitivity to mental state information for moral compared to conventional transgressions. Second, children’s (but not adults’) normative judgments were more sensitive to the agent’s intention than to her belief. Two subsequent studies investigated this asymmetry in children more closely and found evidence that it is based on performance factors: children are able in principle to take into account an agent’s false belief in much the same way as her intentions, yet do not make belief-based judgments in many existing tasks (like that of Study 1) due to their inferential complexity. Taken together, these findings contribute to a more systematic understanding of the development of intent-based normative judgment.
This paper argues that biological species should be construed as abstract models, rather than biological or even tangible entities. Various (phenetic, cladistic, biological etc.) species concepts are defined as set-theoretic models of formal theories, and their logical connections are illustrated. In this view organisms relate to a species not as instantiations, members, or mereological parts, but rather as phenomena to be represented by the model/species. This sheds new light on the long-standing problems of species and suggests their connection to broader philosophical topics such as model selection, scientific representation, and scientific realism.
Martin Luther affirms his theological position by saying “Here I stand. I can do no other.” Supposing that Luther’s claim is true, he lacks alternative possibilities at the moment of choice. Even so, many libertarians have the intuition that he is morally responsible for his action. One way to make sense of this intuition is to assert that Luther’s action is indirectly free, because his action inherits its freedom and moral responsibility from earlier actions when he had alternative possibilities and those earlier directly free actions formed him into the kind of person who must refrain from recanting. Surprisingly, libertarians have not developed a full account of indirectly free actions. I provide a more developed account. First, I explain the metaphysical nature of indirectly free actions such as Luther’s. Second, I examine the kind of metaphysical and epistemic connections that must occur between past directly free actions and the indirectly free action. Third, I argue that an attractive way to understand the kind of derivative moral responsibility at issue involves affirming the existence of resultant moral luck.
This survey article discusses two basic issues that semantic theories of questions face. The first is how to conceptualise and formally represent the semantic content of questions. This issue arises in particular because the standard truth-conditional notion of meaning, which has been fruitful in the analysis of declarative statements, is not applicable to questions. The second issue is how questions, when embedded in a declarative statement (e.g., in Bill wonders who called) contribute to the truth-conditional content of that statement. Several ways in which these issues have been addressed in the literature are discussed and compared.
Titus Lucretius Carus (died mid to late 50s BCE) was an Epicurean poet
of the late Roman republican era. His six-book Latin hexameter
poem De rerum natura (DRN for short), variously
translated On the nature of things and On the nature of
the universe, survives virtually intact, although it is disputed
whether he lived to put the finishing touches to it. As well as being
a pioneering figure in the history of philosophical poetry, Lucretius
has come to be our primary source of information on Epicurean physics,
the official topic of his poem. Among numerous other Epicurean
doctrines, the atomic ‘swerve’ is known to us mainly from
Lucretius’ account of it.
Pythagoras, one of the most famous and controversial ancient Greek
philosophers, lived from ca. 570 to ca. 490 BCE. He spent his early
years on the island of Samos, off the coast of modern Turkey. At the
age of forty, however, he emigrated to the city of Croton in southern
Italy and most of his philosophical activity occurred
there. Pythagoras wrote nothing, nor were there any detailed accounts
of his thought written by contemporaries. By the first centuries BCE,
moreover, it became fashionable to present Pythagoras in a largely
unhistorical fashion as a semi-divine figure, who originated all that
was true in the Greek philosophical tradition, including many of
Plato’s and Aristotle’s mature ideas.
Perhaps it would be best if you imagined it as your own fancy bids, assuming it will rise to the occasion, for certainly I cannot suit you all. For instance, how about technology? I think that there would be no cars or helicopters in and above the streets; this follows from the fact that the people of Omelas are happy people. …
Suppose first a countably infinite line of blindfolded people standing on tiles numbered 0,1,2,…, with the ones on a tile whose number is divisible by 10 having a red hat, and the others having blue hats. …
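As a minimal sketch (hypothetical code, not part of the puzzle's statement), the hat assignment described above can be written out for the first few tiles:

```python
def hat_color(tile: int) -> str:
    """Return the hat color for a tile in the puzzle's setup:
    red when the tile number is divisible by 10, blue otherwise."""
    return "red" if tile % 10 == 0 else "blue"

# Hats on the first twelve tiles: only tiles 0 and 10 are red.
colors = [hat_color(n) for n in range(12)]
```

Any finite prefix of the line can be generated this way, though the puzzle itself concerns the full countably infinite arrangement.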
We analyze the flow into inflation for generic “single-clock” systems, by combining an effective field theory approach with a dynamical-systems analysis. In this approach, we construct an expansion for the potential-like term in the effective action as a function of time, rather than specifying a particular functional dependence on a scalar field. We may then identify fixed points in the effective phase space for such systems, order-by-order, as various constraints are placed on the Mth time derivative of the potential-like function. For relatively simple systems, we find significant probability for the background spacetime to flow into an inflationary state, and for inflation to persist for at least 60 e-folds. Moreover, for systems that are compatible with single-scalar-field realizations, we find a single, universal functional form for the effective potential, V(φ), which is similar to the well-studied potential for power-law inflation. We discuss the compatibility of such dynamical systems with observational constraints.
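For context, the well-studied power-law inflation potential that the abstract compares its result to has the standard exponential form (this is textbook background, not a result of the paper):

```latex
V(\phi) = V_0 \exp\!\left(-\sqrt{\tfrac{2}{p}}\,\frac{\phi}{M_{\mathrm{Pl}}}\right),
\qquad a(t) \propto t^{p}, \qquad p > 1 \ \text{for accelerated expansion}.
```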