Trendspotters will have noticed a resurgence of interest in the idea of moral progress. The topic has been resuscitated in no small part by contemporary theorists from a range of disciplines who have improved on earlier theories of moral progress by naturalizing them. A priori pronouncements about the inevitability of moral progress have been replaced by deep dives into the contemporary sciences of the mind and discussions of macroscale economic and public health trends (e.g., Jamieson 2002, 2017; Pinker 2012, 2018; Singer 2011; Moody-Adams 2017).
This paper is about two topics: metaepistemological absolutism and the epistemic principles governing perceptual warrant. Our aim is to highlight – by taking the debate between dogmatists and conservativists about perceptual warrant as a case study – a surprising and hitherto unnoticed problem with metaepistemological absolutism, at least as it has been influentially defended by Paul Boghossian (2006a) as the principal metaepistemological contrast point to relativism. What we find is that the metaepistemological commitments at play on both sides of this dogmatism/conservativism debate line up neither with epistemic relativism nor with absolutism, at least as Boghossian articulates this position.
Candidates for fundamental physical laws rarely, if ever, employ higher than second time derivatives. Easwaran (2014) sketches an enticing story that purports to explain away this puzzling fact and thereby provides indirect evidence for a particular set of metaphysical theses used in the explanation. I object to both the scope and coherence of Easwaran’s account, before going on to defend an alternative, more metaphysically deflationary explanation: in interacting Lagrangian field theories, it is either impossible or very hard to incorporate higher than second time derivatives without rendering the vacuum state unstable. The so-called Ostrogradski instability represents a powerful constraint on the construction of new field theories and supplies a novel, largely overlooked example of non-causal explanation in physics.
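The instability invoked here can be glossed with the textbook construction (a standard sketch in the spirit of review treatments, not this paper's own derivation): for a Lagrangian depending nondegenerately on the second time derivative, Ostrogradski's canonical analysis yields a Hamiltonian linear in one momentum, hence unbounded below.

```latex
% Ostrogradski's construction for L(q,\dot q,\ddot q) with
% nondegeneracy \partial^2 L / \partial \ddot q^2 \neq 0.
% Canonical variables:
Q_1 = q, \qquad Q_2 = \dot q, \qquad
P_1 = \frac{\partial L}{\partial \dot q}
      - \frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \qquad
P_2 = \frac{\partial L}{\partial \ddot q}.
% Solving P_2 = \partial L / \partial \ddot q for \ddot q = a(Q_1, Q_2, P_2),
% the Hamiltonian is
H = P_1 Q_2 + P_2\, a(Q_1, Q_2, P_2) - L\big(Q_1, Q_2, a(Q_1, Q_2, P_2)\big).
```

Since $H$ is linear in $P_1$, it is unbounded below; once interactions couple the positive- and negative-energy sectors, the vacuum can decay by exciting both together, which is the instability that constrains higher-derivative field theories.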
The Hole Argument can be extended to exclude everything. I will argue that there is nothing in the metaphysical commitment of a substantival manifold which makes it especially susceptible to the Hole Argument; other objects are just as susceptible to its terrors. These casualties of the hole demonstrate how critically the Hole Argument hinges on our notion of determinism and not on the diffeomorphic freedom of general relativity (GR). Just as Earman and Norton argue that we should not let our metaphysics run roughshod over the structure of our physical theories, so I will argue that, in particular, we should not uncritically allow our metaphysics to dictate what our physical theories must determine. The central conviction which drives the arguments of this paper is that deterministic theories are not required to determine for future moments what they cannot determine for any present or past moments.
Some who read Stein’s “Yes, but…” consider his remark that there is “no difference that makes a difference” between realism and instrumentalism a reflection of the paper’s most important lesson for those engaged in the debate. Stanford, for instance, focuses on Stein’s suggestion that “the dispute between realism and instrumentalism is not well joined,” in that there is a convergence of ambitions between a sophisticated realism and a sophisticated instrumentalism [Stanford, 2005, 404-5]. With this lesson in mind, the “no difference” comment seems a natural slogan for the paper’s central takeaway.
Arguing from his "hole" thought experiment, Einstein became convinced that, in cases in which the energy-momentum-tensor source vanishes in a spacetime hole, a solution to his general relativistic field equation cannot be uniquely determined by that source. After reviewing the definition of active diffeomorphisms, this paper uses them to outline a mathematical proof of Einstein’s result. The relativistic field equation is shown to have multiple solutions, just as Einstein thought. But these multiple solutions can be distinguished by the different physical meaning that each metric solution attaches to the local coordinates used to write it. Thus the hole argument, while formally correct, does not prohibit the subsequent rejection of spurious solutions and the selection of a physically unique metric. This conclusion is illustrated using the Schwarzschild metric. It is suggested that the Einstein hole argument therefore cannot be used to argue against substantivalism.
In a recent paper, Juan Garcia has argued that Leibniz is, in an important sense, “a friend of Molinism.”1 For those who are familiar with contemporary versions of Molinism (e.g., Flint), this suggestion is rather surprising, since Leibniz is clearly a theological determinist: he holds that God chooses every detail of the actual world. …
Epistemologists often assume that if rationality is worth pursuing, it must bear some sort of connection to the truth. What exactly this connection amounts to is mysterious, but the thought that there must be such a connection seems to limit our theory of rationality in various ways. For instance, a classic objection to coherentism is that the view seemingly has no safeguards against rational believers who get things very wrong – so, one might think, the demand for a truth-connection favors externalist views over internalist views. In formal epistemology, various understandings of the truth-connection have been used to argue for formal norms such as probabilism and conditionalization. This paper will examine the truth problem as it relates to permissivism. If rationality is a guide to the truth, can it also allow some leeway in how we should respond to our evidence?
For Isaac Newton, space and time were independent, continuous entities. Albert Einstein unified the two into space-time, again a continuous object. With the development of quantum field theory, however, physicists began to take seriously the idea that discreteness might be an essential component of our understanding of space and time. Amit Hagar’s Discrete or Continuous? The Quest for Fundamental Length in Modern Physics takes the reader on an enjoyable journey—by turns historical, philosophical, and physical—in a quest to unravel many of the subtleties that underlie the concept of a minimum length in physics.
On moving spotlight theories, eternalism is true: past, present and future things all exist. But the present is metaphysically special: you have not said all that is to be said about temporal reality if you just said what happens at which times, and how the times are related by relations like earlier-than and simultaneous-with, without having said which time is objectively present. …
We compare three theoretical frameworks for pursuing explanatory integration in psychiatry: a new dimensional framework grounded in the notion of computational phenotype, a mechanistic framework, and a network of symptoms framework. Considering the phenomenon of alcoholism, we argue that the dimensional framework is the best for effectively integrating computational and mechanistic explanations with phenomenological analyses.
mundane and supramundane, even life and death. O’Halloran is gone from this world. Aoki is elderly in Japan. Morton is still learning to enjoy life in this world. They will not meet in this world, and yet they have already met here in these pages, and we are invited to join their stories of oneness. What kind of “oneness” is that?
Peer review is often taken to be the main form of quality control on academic writings. Usually this is carried out by journals. Parts of math and physics appear to have now set up a parallel, crowd-sourced model of peer review, where papers are posted on the arXiv to be publicly discussed. In this paper we argue that crowd-sourced peer review is likely to do better than journal-solicited peer review at sorting papers by quality. Our argument rests on two key claims. First, crowd-sourced peer review will lead to there being on average more reviewers per paper than journal-solicited peer review. Second, due to the wisdom of the crowds, more reviewers will tend to make better judgments than fewer. We make the second claim precise by looking at the Condorcet Jury Theorem as well as two related, novel jury theorems developed specifically to apply to the case of peer review.
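The arithmetic behind the Condorcet Jury Theorem can be checked directly. The sketch below is illustrative only: the reviewer competence p = 0.6 and the independence assumption are mine, not the paper's. It computes the probability that a majority of n independent reviewers, each correct with probability p > 1/2, sorts a paper correctly.

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability that a strict majority of n independent reviewers,
    each correct with probability p, reaches the correct verdict.
    Odd n avoids ties."""
    # Sum binomial probabilities over all ways more than half are correct.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With fixed competence p = 0.6, collective accuracy grows with jury size:
for n in (1, 3, 11, 51):
    print(n, round(majority_accuracy(n, p=0.6), 3))
```

Accuracy climbs from 0.6 for a lone reviewer (0.648 already for three) toward 1 as the pool grows, which is the sense in which more reviewers tend to make better collective judgments, provided each is individually better than chance.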
If law is a system of enforceable rules governing social relations and legislated by a political system, it might seem obvious that law is connected to ideology. Ideology refers, in a general sense, to a system of political ideas, and law and politics seem inextricably intertwined. Just as ideologies are dotted across the political spectrum, so too are legal systems. Thus we speak of both legal systems and ideologies as liberal, fascist, communist, and so on, and most people probably assume that a law is the legal expression of a political ideology. One would expect the practice and activity of law to be shaped by people’s political beliefs, so law might seem to emanate from ideology in a straightforward and uncontroversial way.
Evidence can be misleading: it can rationalize raising one’s confidence in false propositions, and lowering one’s confidence in the truth. But can evidence be predictably misleading? Can a rational agent with some total body of evidence know that it points to (a particular) falsehood? It seems not: plausibly, rational agents believe what their evidence supports. Suppose for reductio that a rational agent can see ahead of time that her evidence is likely to point towards a false belief. Since she is rational, if she can anticipate that her evidence is misleading, then it seems she should avoid being misled. But then she won’t believe what her evidence supports after all. That is to say, if evidence were predictably misleading, it wouldn’t be misleading in the first place. So, it seems, evidence cannot be predictably misleading.
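The Bayesian underpinning of this argument can be put as a one-line computation (a standard gloss, not the paper's own formalism): by the law of total probability, a rational agent's current credence in a proposition $A$ is the expectation of her planned posterior credences over the possible evidence $E_i$ she might receive,

```latex
P(A) \;=\; \sum_i P(E_i)\, P(A \mid E_i).
```

Her credence therefore cannot be expected, by her own lights, to move in any particular direction: evidence she could foresee to be misleading would already have been discounted.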
According to one prominent view of exercising abilities (e.g., Millar 2010), a subject, S, counts as exercising an ability to φ if and only if S successfully φs. Such an ‘exercise-success’ thesis looks initially very plausible for abilities, perhaps even obviously or analytically true. In this paper, however, I will be defending the position that one can in fact exercise an ability to do one thing by doing some entirely distinct thing, and in doing so I’ll highlight various reasons (epistemological, metaphysical and linguistic) that favor the alternative approach I develop over views that hold that the exercise of an ability is a success notion in the sense Millar maintains.
Knowledge is hard to obtain regarding complicated reality and complicated systems. Complex systems, however, lead to even greater problems of knowledge than complicated ones, even though in important ways complex systems may appear simpler than complicated ones. A complicated system will have many parts that are interconnected in a variety of ways that may not be obvious and may be hard to discern or untangle. However, merely complicated systems will “add up” in a reasonably straightforward way. Once one figures out these interconnections and their nature, one can understand the whole relatively easily, as it will ultimately be the sum of those parts, which may nevertheless be hard to understand on their own. Complex systems, by contrast, by their nature usually manifest the phenomenon first identified by Aristotle: the whole may be greater than the sum of the parts. This greater degree of wholeness will often be due to nonlinear relations within the system, such as increasing returns to scale or tangled non-monotonic relations. Even though there may be fewer variables and relations, the complex nature of the relations makes knowledge and understanding of the system more difficult (Israel, 2005).
What are the connections between the successful performance of illocutionary acts and audience understanding or uptake of their performance? According to one class of proposals, audience understanding suffices for successful performance. I explain how those proposals emerge from earlier work and seek to clarify some of their interrelations.
The inner-model reflection principle asserts that whenever a statement ϕ(a) in the first-order language of set theory is true in the set-theoretic universe V, then it is also true in a proper inner model W ⊊ V. A stronger principle, the ground-model reflection principle, asserts that any such ϕ(a) true in V is also true in some non-trivial ground model of the universe with respect to set forcing. These principles each express a form of width reflection in contrast to the usual height reflection of the Levy–Montague reflection theorem. They are each equiconsistent with ZFC and indeed Π2-conservative over ZFC, being forceable by class forcing while preserving any desired rank-initial segment of the universe. Furthermore, the inner-model reflection principle is a consequence of the existence of sufficient large cardinals, and lightface formulations of the reflection principles follow from the maximality principle MP and from the inner-model hypothesis IMH. We also consider some questions concerning the expressibility of the principles.
In 1963 Prior proved a theorem that places surprising constraints on the logic of intentional attitudes, like ‘thinks that’, ‘hopes that’, ‘says that’ and ‘fears that’. Paraphrasing it in English, and applying it to ‘thinks’, it states: If, at t, I thought that I didn’t think a truth at t, then there is both a truth and a falsehood I thought at t. In this paper I explore a response to this paradox that exploits the opacity of attitude verbs, exemplified in this case by the operator ‘I thought at t that’, to block Prior’s derivation. According to this picture, both Leibniz’s law and existential generalization fail in opaque contexts. In particular, one cannot infer from the fact that I’m thinking at t that I’m not thinking a truth at t, that there is a particular proposition such that I am thinking it at t. Moreover, unlike some approaches to this paradox (see Bacon et al.) the failure of existential generalization is not motivated by the idea that certain paradoxical propositions do not exist, for this view maintains that there is a proposition that I’m not thinking a truth at t. Several advantages of this approach over the nonexistence approach are discussed, and models demonstrating the consistency of this theory are provided. Finally, the resulting considerations are applied to the liar paradox, and are used to provide a non-standard justification of a classical gap theory of truth. One of the main challenges for this sort of theory — to explain the point of assertion, if not to assert truths — can be met within this framework.
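Writing Q for the opaque operator ‘I thought at t that’ and quantifying into sentence position, the theorem paraphrased above admits a standard formalization (one common rendering, not necessarily the notation of the paper itself):

```latex
Q\,\forall p\,(Qp \rightarrow \neg p)
\;\rightarrow\;
\big(\exists p\,(Qp \wedge p) \,\wedge\, \exists p\,(Qp \wedge \neg p)\big)
```

The response discussed here blocks the derivation by letting existential generalization fail within the scope of Q: from $Q\varphi$ one may not infer that there is a particular proposition $p$ with $Qp$.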
In this paper two paradoxes of infinity are considered through the lens of counterfactual logic, drawing heavily on a result of Kit Fine. I will argue that a satisfactory resolution of these paradoxes will have wide-ranging implications for the logic of counterfactuals. I then situate these puzzles in the context of the wider role of counterfactuals, connecting them to indicative conditionals, probabilities, rationality and the direction of causation, and compare my own resolution of the paradoxes to alternatives inspired by the theories of Lewis and Fine.
This paper defends three claims about concrete or physical models: (i) these models remain important in science and engineering, (ii) they are often essentially idealized, in a sense to be made precise, and (iii) despite these essential idealizations, some of these models may be reliably used for the purpose of causal explanation. This discussion of concrete models is pursued using a detailed case study of some recent models of landslide generated impulse waves. Practitioners show a clear awareness of the idealized character of these models, and yet address these concerns through a number of methods. This paper focuses on experimental arguments that show how certain failures to accurately represent feature X are consistent with accurately representing some causes of feature Y , even when X is causally relevant to Y . To analyze these arguments, the claims generated by a model must be carefully examined and grouped into types. Only some of these types can be endorsed by practitioners, but I argue that these endorsed claims are sufficient for limited forms of causal explanation.
Consider three horses: Alexander's Bucephalus, Gandalf's Shadowfax and Dawn. Here, I am using "Dawn" as the name of a horse that has come into existence just now, so that it is now the first moment of existence for Dawn. …
Massimo Renzo has recently argued in this journal that Allen Buchanan’s account of the ethics of intervention is too permissive.1 Renzo claims that a proper understanding of political self-determination shows that it is often impermissible to intervene in order to establish a regime that leads to more self-determination for a group of people if that group was or would be opposed to the intervention. Renzo’s argument rests on an analogy between individual self-determination and group self-determination. However, the analogy also points to crucial differences between the two kinds of self-determination. To make his argument work, Renzo must come up with a theory of self-determination that accounts for these differences without vitiating his argument, and it is not clear that this can be accomplished. In response to the differences we may in fact be pushed to adopt an account of self-determination that is more permissive with respect to intervention than even Buchanan’s theory.
Parents are typically partial to their own children. They typically treat their own children more favorably, on certain dimensions, than other similarly placed individuals. For example, most parents invest more emotional and financial resources in sheltering, nourishing, educating, and entertaining their children than they invest in others who are equally in need of these goods, and would benefit equally from them. Moreover, most parents take themselves to be justified in engaging in such partiality, and most philosophical theorists of partiality agree.
According to reasons fundamentalism, all normative properties are analyzable in terms of reasons.1 Famously, some of the analyses offered by reasons fundamentalists face the wrong kind of reasons problem. This problem first appeared in the literature on the buck-passing account of value, which says in its simplest form that what it is for something to be valuable is for there to be sufficient reasons to have a pro-attitude toward it.2 This simple view fails, many worry, because there can be reasons for having pro-attitudes toward things that have nothing to do with their value. Contrasting cases like Beauty and Extra Credit provide an illustration:

Beauty: Jane is a first-year graduate student in art history. She has loved art all her life, but is just now getting the opportunity to see Europe’s masterpieces through her graduate program. She sees the Mona Lisa in person for the first time. She is enthralled by its symmetry, depth, and enigmatic tone.
Prioritarianism is the distributive view that welfare gains matter more, morally, the worse off you are.1 A common and intuitively compelling objection to prioritarianism is that it wrongly treats cases involving one person (intrapersonal cases) like cases involving more than one person (interpersonal cases), when they should be treated differently. In a nutshell, the objection goes as follows. A person is allowed, when faced with an intrapersonal choice between someone else’s possible futures, to reason prudentially when choosing on their behalf.2 She is not required to give special moral weight to the future in which the person for whom she is choosing would be worse off. However, when choosing how to distribute goods between multiple people, prudential reasoning on behalf of the group is not justified, as the claim of the person who is worse off should matter more (call this the Moral Shift). Thus, prioritarianism, which as an aggregative, impersonal view is committed to treating intrapersonal and interpersonal trade-offs similarly, cannot explain the Moral Shift.
It is plausible that there are epistemic reasons bearing on a distinctively epistemic standard of correctness for belief. It is also plausible that there are a range of practical reasons bearing on what to believe. These theses are often thought to be in tension with each other. Most significantly for our purposes, it is obscure how epistemic reasons and practical reasons might interact in the explanation of what one ought to believe. We draw an analogy with a similar distinction between types of reasons for actions in the context of activities. The analogy motivates a two-level account of the structure of normativity that explains the interaction of correctness-based and other reasons. This account relies upon a distinction between normative reasons and authoritatively normative reasons. Only the latter play the reasons role in explaining what state one ought to be in. All and only practical reasons are authoritative reasons. Hence, in one important sense, all reasons for belief are practical reasons. But this account also preserves the autonomy and importance of epistemic reasons. Given the importance of having true beliefs about the world, our epistemic standard typically plays a key role in many cases in explaining what we ought to believe. In addition to reconciling (versions of) evidentialism and pragmatism, this two-level account has implications for a range of important debates in normative theory, including the interaction of right and wrong reasons for actions and other attitudes, the significance of reasons in understanding normativity and authoritative normativity, the distinction between ‘formal’ and ‘substantive’ normativity, and whether there is a unified source of authoritative normativity.
If presentism is true, then vagueness about the exact moment of cessation of existence implies vagueness about existence: for if it is vague whether an object has ceased to exist at t, then at time t it was, is or will be vague whether the object exists. …
It is known that perdurantists, who hold that objects persisting in time are made of infinitely thin temporal slices, have to deny that fundamental particles are simple (i.e., do not have (integral) parts). …