Gratitude is a positive emotion, typically classified with joy, pride, happiness and hope. But unlike those emotions, gratitude often comes with normative strings attached—the so-called “debt of gratitude.” Beneficence gives rise to gratitude, but also, potentially, to accusations of ingratitude. The other positive emotions lack a precise counterpart to “ingratitude”: to say that someone is without joy or hopeless or bereft of pride is not (necessarily) to thereby accuse her of a failing. It is, of course, possible to respond inadequately to what should make one joyful, proud or hopeful, but in the case of gratitude this danger seems to be foregrounded in the emotion itself, in the form of the ‘debt of gratitude’.
In this passage of Elena Ferrante’s My Brilliant Friend, the protagonist Elena describes her frustrations with the fact that she feels consistently bested by her friend Lila. This moves her to consider severing ties with Lila—but instead Elena decides, once again, to scramble to catch up with Lila. Despite the fact that Elena is the one who receives a higher education, she feels that “…I knew little or nothing. She seemed ahead of me in everything, as if she were going to a secret school. I noticed also a tension in her, the desire to prove that she was equal to whatever I was studying.” (p.160) Lila, too, is scrambling to catch up to Elena.
We propose that the phenomenon of definite reduplication in Greek involves use of the definite determiner D as domain restrictor in the sense of Etxeberria & Giannakidou (2009). The use of D as a domain restricting function with quantifiers has been well documented for European languages such as Greek, Basque, Bulgarian and Hungarian and typically results in a partitive-like interpretation of the QP. We propose a unifying analysis that treats domain restriction and D-reduplication as the same phenomenon; and in our analysis, D-reduplication emerges semantically as similar to a partitive structure, a result resonating with earlier claims to this effect by Kolliakou (2004). None of the existing accounts of definites can capture the correlations in the use of D with quantifiers and in reduplication that we establish here.
Whatever else liberalism involves, it involves the idea that it is objectionable, and often wrong, for the state, or anyone else, to intervene, in certain ways, in certain choices. This paper aims to evaluate different possible sources of support for this core liberal idea. The result is a pluralistic view. It defends, but also stresses the limits of, some familiar elements: that some illiberal interventions impair valuable activities and that some violate rights against certain kinds of invasion. More speculatively, it points to two further sources of support for liberalism, each of which represents a certain kind of social standing: a self-sovereignty compromised simply by being subject to certain kinds of commands and a relational equality compromised by the condemnation of choices with which one’s group is identified.
Some recent work has challenged two principles thought to govern the logic of the indicative conditional: modus ponens (Kolodny & MacFarlane 2010) and modus tollens (Yalcin 2012). There is a fairly broad consensus in the literature that Kolodny and MacFarlane’s challenge can be avoided, if the notion of logical consequence is understood aright (Willer 2012; Yalcin 2012; Bledin 2014). The viability of Yalcin’s counterexample to modus tollens has meanwhile been challenged on the grounds that it fails to take proper account of context-sensitivity (Stojnić forthcoming). This paper describes a new counterexample to modus ponens, and shows that strategies developed for handling extant challenges to modus ponens and modus tollens fail for it. It diagnoses the apparent source of the counterexample: there are bona fide instances of modus ponens that fail to represent deductively reasonable modes of reasoning.
Since the beginning of the millennium, Richard Joyce has made several influential contributions to contemporary metaethics. He has revived moral error theory, championed evolutionary debunking arguments, and developed and defended a position known as ‘moral fictionalism’. The twelve papers in this volume are organized around these themes—error theory, evolution and debunking, and projectivism and fictionalism—with four papers in each of the three categories. All papers but one are previously published. I had read nearly all of them before and I have used many of them in my own work. Needless to say, then, I think highly of Joyce’s work and I benefited from engaging with the material anew. The volume also contains a newly written introductory chapter, which I found helpful.
On the surface, one of the main differences between John McDowell and Wilfrid Sellars when it comes to their conceptions of intentionality has to do with their respective accounts of meaning. McDowell advocates a relational account of meaning, whereas Sellars holds, on the contrary, that a correct view of intentionality is only possible through a non-relational account of meaning. According to McDowell, Sellars does not consider the possibility of his own relational view because he suffers from a ‘blind spot’. It is implied that if Sellars saw this possibility, he would see the light and embrace something like McDowell’s own account. I would like to argue in this paper that the whole issue goes much deeper than that. Sellars does not suffer from a ‘blind spot’, and showing why this is the case should give us an idea of how far apart these two thinkers really are, appearances notwithstanding. It will also reveal that the heart of their disagreement does not consist in a dispute over whether the correct shape of an account of meaning should be relational or not. Sellars’s and McDowell’s respective outlooks on intentionality differ fundamentally with regard to the concept of objectivity.
It’s often claimed in the philosophical and scientific literature on temporal representation that there is no such thing as a genuine sensory system for time. In this paper, I argue for the opposite – many animals, including all mammals, possess a genuine sensory system for time based in the circadian system. In arguing for this conclusion, I develop a semantics and metasemantics for explaining how the endogenous rhythms of the circadian system provide organisms with a direct information link to the temporal structure of their environment. In doing so, I highlight the role of sensory systems in an information processing architecture.
Reductionist doctrines about normative and evaluative phenomena enjoy serious advantages, such as in explaining how we can come to know about normative reality, in explaining why the normative depends on the non-normative, and in avoiding the specter of Ockham’s razor. Unfortunately, some evaluative phenomena resist reduction. This is true, in my view, of moral and axiological facts. When we say that people ought to be more kind, or that things would be better if they were, it does not appear that we could report these same facts using non-normative, non-evaluative language. But things are different, I believe, when it comes to epistemic facts. When we say that someone is justified in believing something, we can report that same fact using non-normative language. Reductionism in metaepistemology is more plausible than reductionism in metaethics.
The philosophical analysis of mathematical explanations concerns itself with two different, although connected, areas of investigation. The first area addresses the problem of whether mathematics can play an explanatory role in the natural and social sciences. The second deals with the problem of whether mathematical explanations occur within mathematics itself. Accordingly, this entry surveys the contributions to both areas, shows their relevance to the history of philosophy, mathematics, and science, articulates their connection, and points to the philosophical pay-offs to be expected from deepening our understanding of the topic.
(725–788) was one of the most important and pivotal thinkers in the history of Indian and Tibetan Buddhist philosophy. His contributions to Buddhist thought were particularly noteworthy due to his historical position as one of the later Indian interpreters of the Madhyamaka thought of Nāgārjuna (ca. 1st–2nd c.). This was a historical position which allowed him to consider many important developments (both inside and outside the Madhyamaka tradition) that preceded him. The central claim of the Madhyamaka School is that all phenomena are empty (śūnya) of any intrinsic nature, unchanging essence, or absolute mode of being.
Pragmatic arguments have often been employed in support of theistic belief. Theistic pragmatic arguments are not arguments for the proposition that God exists; they are arguments that believing that God exists is rational. The most famous theistic pragmatic argument is Pascal’s Wager. Though we touch on this argument briefly below, this entry focuses primarily on the theistic pragmatic arguments found in William James, J.S. Mill, and James Beattie. It also explores the logic of pragmatic arguments in general, and the pragmatic use of moral arguments in particular. Finally, this entry looks at an important objection to the employment of pragmatic arguments in belief formation—the objection that evidence alone should regulate belief.
Humean accounts of laws of nature fail to distinguish between dynamic laws and static initial conditions. But this distinction plays a central role in scientific theorizing and explanation. I motivate the claim that this distinction should matter for the Humean, and show that current views lack the resources to explain it. I then develop a regularity theory which captures this distinction. My view takes empirical accessibility to be one of the primary features of laws, and I identify features laws must have to be empirically accessible. I then argue that laws with these features tend to be dynamic.
To understand human nature is to understand the plastic process of human development and the diversity it produces. Drawing on the framework of developmental systems theory and the idea of developmental niche construction, we argue that human nature is not embodied in only one input to development, such as the genome, and that it should not be confined to universal or typical human characteristics. Both similarities and certain classes of differences are explained by a human developmental system that reaches well out into the ‘environment’. We point to a significant overlap between our account and the ‘life history trait cluster’ account of Grant Ramsey. We defend the developmental systems account against the accusation that trying to encompass developmental plasticity and human diversity leads to an unmanageably complex account of human nature.
The laws of physics have an interesting internal explanatory structure. Some principles explain others; some constraints fall out of the dynamic equations, and others help determine them. This leads to interesting, and non-trivial, questions for metaphysicians of laws. What sort of explanation is this? Which principles are explananda, and which explanantia?
It is often claimed that the social sciences cannot be reduced to a lower-level individualistic science. The standard argument for this position (usually labelled explanatory holism) is the Fodorian multiple realizability argument. Its defenders endorse token-token(s) identities between “higher-level” social objects and pluralities/sums of “lower-level” individuals (a position traditionally called ontological individualism), but they maintain that the properties expressed by social science predicates are often multiply realizable, entailing that type-type identities between social and individualistic properties are ruled out. In this paper I argue that the multiple realizability argument for explanatory holism is unsound. The social sciences are indeed irreducible, but the principled reason for this is that the required token-token(s) identifications cannot in general be carried through. In consequence, paradigmatic social science predicates cannot be taken to apply to the objects quantified over in the lower-level sciences. The result is that typical social science predicates cannot even be held to be co-extensive with individualistic predicates, which means type-type identifications are ruled out too. Multiple realizability has nothing to do with this failure of co-extensiveness, because the relevant social science predicates are not multiply realized in the sense intended by the explanatory holists, a sense which presupposes reductive token-token(s) identifications.
Interpretive analogies between quantum mechanics and statistical mechanics are drawn out by attending to their common probabilistic structure and related to debates about primitive ontology and the measurement problem in quantum mechanics.
I think I can give an example of something that has no reasonable (numerical) epistemic probability. Consider Goedel’s Axiom of Constructibility. Goedel proved that if the Zermelo-Fraenkel (ZF) axioms are consistent, they are also consistent with Constructibility (C). …
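The relative consistency fact this example invokes can be written out explicitly. The following is a sketch in standard notation (the abbreviations Con, V=L, AC, and GCH are standard logicians' shorthand, not drawn from the passage):

```latex
% Gödel (1938): if ZF is consistent, then adding the Axiom of
% Constructibility (V = L) preserves consistency
\mathrm{Con}(\mathrm{ZF}) \;\rightarrow\; \mathrm{Con}(\mathrm{ZF} + V{=}L)

% Moreover, ZF + V = L proves both the Axiom of Choice and the
% Generalized Continuum Hypothesis
\mathrm{ZF} + V{=}L \;\vdash\; \mathrm{AC} \wedge \mathrm{GCH}
```

The first line is the result the passage cites: the consistency of ZF carries over to ZF plus Constructibility, so no refutation of C can be derived from ZF alone.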
A family of views of necessity (e.g., Peacocke, Sider, Swinburne, and maybe Chalmers) identifies a family F of special true statements that get counted as necessary—say, statements giving the facts about the constitution of natural kinds, the axioms of mathematics, etc.—and then says that a statement is necessary if and only if it can be proved from F. Call these “logical closure accounts of necessity”. …
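The "logical closure" schema just described can be stated compactly. The following is a sketch; the box operator is standard modal shorthand for "it is necessary that", introduced here for illustration rather than taken from the cited authors:

```latex
% Logical closure account: a statement p is necessary if and only if
% it is provable from the designated family F of special true statements
\Box\, p \;\leftrightarrow\; F \vdash p
```

On this schema, disputes between the views in the family reduce to disputes over which statements belong in F and which notion of provability the turnstile expresses.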
[The following is Part I in a two-part guest post by Will Bridewell and Alistair M. C. Isaac. — JS] We live in an age of post-truth rhetoric, fake news, and misinformation; consequently, questions of how to accurately identify deceptive communication and to appropriately respond to it have become increasingly important. …
Seth Margolis, Daniel Ozer, Sonja Lyubomirsky, and I have designed a new measure of overall life satisfaction. We believe that this measure improves on the most widely used multi-item measure of life satisfaction, Diener et al. …
Testimony is a source of knowledge. On many occasions, the explanation of one’s knowing that p is that a speaker, S, told one that p. Our testimonial sources—the referents of ‘S’—can be other individuals, and they can be collectives; that is, in addition to learning from individuals, we learn things from committees, commissions, councils, clubs, teams, research groups, departments, administrations, churches, states and other social groups. North Korea might make a declaration about its missile programme, the church about the ordination of women priests, the council about its deficit, the research group about its findings, and so on. We will look at a few examples in detail shortly, but the starting point is that social groups can be a source of testimony, and we can learn things from such collective testimony. The question this paper pursues is, what explains our learning that p from collective testimony to p?
Constitutivists believe that we can derive universally and unconditionally authoritative norms from the conditions of agency. Thus if c is a condition of agency, then you ought to live in conformity with c no matter what your particular ends, projects, or station. Much has been said about the validity of the inference, but that’s not my topic here. I want to assume it is valid and talk about what I take to be the highest ambition of constitutivism: the prospect of grounding moral requirements in the conditions of agency. If this can be done, then we can show that everyone is bound by the demands of morality, and we can do so without the customary entanglements—queer normative entities, an implausibly powerful moral sense, or divine lawgivers. Kant had this ambition (on one reading of his moral metaphysics, anyway). For him the moral law’s universality meant that it had to be a law of freedom, a law that characterized the activity of autonomous wills. It was also the aspiration, in more complicated ways, of post-Kantians like Fichte, Hegel, and Bradley. And it is a project pursued by some contemporary philosophers. But there is something surprising about this final group’s efforts. They begin with a conception of agency that appears highly individualistic, a conception whose conditions don’t explicitly mention other people. This is surprising because presumably the goal of deriving the universal authority of moral requirements from a constitutivist argument will involve demonstrating that other people play some distinctive role in my agency—a role that requires me to honor, respect, or care for them. So if other people are not a party to my agency, it is hard to see how we are supposed to establish this sort of conclusion.
In ancient times, before some point in the second half of the nineteenth century, if you were uncertain how to investigate a topic, epistemologists—philosophers concerned with knowledge and rational belief—would be among the people you would first think of reading and consulting. They had played a large role in the early years of the scientific revolution, mediating the delicate tension between scientific discovery and traditional belief. The last such figure with this kind of influence was John Stuart Mill. But all that has changed. For at least the past hundred years, your first port of call would be a statistician.
Some of our reasons for action are grounded in the fact that the action in question is a means to something else we have reason to do. This raises the question as to which principles govern the transmission of reasons from ends to means. In this paper, we discuss the merits and demerits of a liberal transmission principle, which plays a prominent role in the current literature. The principle states that an agent has an instrumental reason to y whenever y-ing is a means for him to do what he has intrinsic reason to do. We start by discussing the objection that this principle implies counterintuitive reason statements. We argue that attempts to solve this “too many reasons problem” by appealing to pragmatic strategies for debunking intuitions about so-called negative reason existentials are questionable. Subsequently, we discuss three important arguments in favor of Liberal Transmission, and argue that they fail to make a convincing case for this principle. In the course of the discussion, we also provide alternative, less liberal transmission principles. We argue that these alternative principles allow us to accommodate those phenomena that seem to support Liberal Transmission while avoiding its problems.
The most common kind of moral argument for theism is that theism better fits with there being moral truths (either moral truths in general, or some specific kind of moral truths, like that there are obligations) than alternative theories do. …
It is natural to think of causes as difference-makers. What exact difference causes make, however, is an open question. In this paper, I argue that the right way of understanding difference-making is in terms of causal processes: causes make a difference to a causal process that leads to the effect. I will show that this way of understanding difference-making nicely captures the distinction between causing an outcome and helping determine how the outcome happens and, thus, explains why causation is not transitive. Moreover, the theory handles tricky cases that are problematic for competing accounts of difference-making.
A certain type of counterfactual is thought to be intimately related to causation, control, and explanation. The time asymmetry of these phenomena therefore plausibly arises from a time asymmetry of counterfactual dependence. But why is counterfactual dependence time asymmetric? The most influential account of the time asymmetry of counterfactual dependence is David Albert’s account, which posits a new, time-asymmetric fundamental physical law, the so-called “past hypothesis.” Albert argues that the time asymmetry of counterfactual dependence arises from holding fixed the past hypothesis when evaluating counterfactuals. In this paper, I argue that Albert’s account misconstrues the time asymmetry of counterfactual dependence.
Our ordinary causal concept seems to fit poorly with how our best physics describes the world. We think of causation as a time-asymmetric dependence relation between relatively local events. Yet fundamental physics describes the world in terms of dynamical laws that are, possible small exceptions aside, time symmetric and that relate global time slices. My goal in this paper is to show why we are successful at using local, time-asymmetric models in causal explanations despite this apparent mismatch with fundamental physics. In particular, I will argue that there is an important connection between time asymmetry and locality, namely: understanding the locality of our causal models is the key to understanding why the physical time asymmetries in our universe give rise to time asymmetry in causal explanation. My theory thus provides a unified account of why causation is local and time asymmetric and thereby enables a reply to Russell’s famous attack on causation.
Plausible assumptions from Cosmology and Statistical Mechanics entail that it is overwhelmingly likely that there will be exact duplicates of us in the distant future long after our deaths. Call such persons “Boltzmann duplicates,” after the great pioneer of Statistical Mechanics. In this paper, I argue that if survival of death is possible at all, then we almost surely will survive our deaths because there almost surely will be Boltzmann duplicates of us in the distant future that stand in appropriate relations to us to guarantee our survival.