Schellenberg claims that God cannot coexist with a non-resistant non-believer, since God being love would ensure that everyone who is non-resistant would be given the conditions necessary for a personal relationship with God. …
An interesting question is whether a prohibition on discrimination with respect to a determinable P by itself prohibits discrimination against groups with respect to patterns or distributions of P in groups. …
This paper develops and articulates a metaphysics of intersectionality, the idea that multiple axes of social oppression cross-cut each other. Though intersectionality is often described through metaphor, rigorous theories of intersectionality can be formulated using the tools of contemporary analytic metaphysics. A central tenet of intersectionality theory, that intersectional identities are inseparable, can be framed in terms of explanatory unity. Further, intersectionality is best understood as metaphysical and explanatory priority of the intersectional category over its constituents, comparable to metaphysical priority of the whole over its parts.
It is generally thought that we cannot forgive people for things they do to others. I cannot forgive you for lying to your mother, for instance. I lack standing to do so. But many people believe that God can forgive us for things we do to others. How is this possible? This is the question I wish to explore. Call it the problem of divine standing. I begin by cataloging the various ways one can have standing to forgive a wrongdoer. I then provide two solutions to the problem of divine standing.
Many people believe that God has forgiven them for the wrong things they have done. What is the nature of God's forgiveness? In this essay, the second in a two‐part series, I explore two further approaches to this question. I conclude by noting a few issues that, in my estimation, should be addressed in future philosophical discussions of the nature of divine forgiveness.
The lesson to be learned from the paradoxical St. Petersburg game and Pascal’s Mugging is that there are situations where expected utility maximizers will needlessly end up (with high probability) poor and on death’s door, and hence we should not be expected utility maximizers. Instead, when it comes to decision-making, for possibilities that have very small probabilities of occurring, we should discount those probabilities down to zero, regardless of the utilities associated with those possibilities.
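The divergence driving the St. Petersburg paradox, and the effect of the proposed discounting policy, can be sketched numerically. This is a minimal illustration, not from the paper: the game pays 2^n dollars if the first heads appears on toss n (probability 2^-n), so each term of the expected-value sum contributes exactly 1 and the sum grows without bound, while zeroing out probabilities below a threshold caps it.

```python
def st_petersburg_ev(max_tosses, threshold=0.0):
    """Truncated expected value of the St. Petersburg game,
    optionally discounting probabilities below `threshold` to zero."""
    ev = 0.0
    for n in range(1, max_tosses + 1):
        p = 2.0 ** -n          # probability the first heads is on toss n
        if p < threshold:      # the discounting policy: treat tiny p as 0
            break
        ev += p * 2.0 ** n     # each term contributes exactly 1
    return ev

print(st_petersburg_ev(50))              # 50.0: grows linearly with the cutoff
print(st_petersburg_ev(50, 2.0 ** -10))  # 10.0: discounting caps the value
```

The point of the sketch: the unbounded sum is what forces the expected utility maximizer toward the paradoxical verdict, and the `threshold` parameter (a stand-in for the author's proposal) is what blocks it.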
In recent work, Walton has abandoned his very influential account of the fictionality of p in a fictional work in terms of prescriptions to imagine emanating from it. He offers examples allegedly showing that a prescription to imagine p in a given work of fiction is not sufficient for the fictionality of p in that work. In this paper, both in support and further elaboration of a constitutive-norms speech-act variation on Walton’s account that I have defended previously, I critically discuss his objections. In addition to answering his concerns and developing the account further, I provide additional abductive support for its explanatory virtues vis-à-vis institutional accounts like Walton’s and Gricean speech-act proposals. Keywords: fictionality; truth in fiction; fictional worlds; speech acts; norms.

I contrast in this paper the account I favor for how fictions can convey knowledge with Green’s views on the topic. On my account, this obtains because fictional works make assertions and other acts such as conjectures, suppositions, or acts of putting forward contents for our consideration; and the mechanism through which they do it is that of speech act indirection, of which conversational implicatures are a particular case. There are two main points of disagreement with Green in this proposal. First, it requires that assertions can be made indirectly, which Green (2007, 2015) questions on the grounds of the intuitive distinction between lying and misleading. Second, it requires that verbal fiction-making doesn’t consist merely in “acts of speech” (Green 2015), but in straightforward speech acts. Keywords: assertion; implicature; fiction; indirect speech acts.
Here is a suggestive Thomistic line of thought in favor of the essentiality of origins—i.e., the principle that the causes of things are essential to them. Consider two possible cases where a seed is produced in the same apple tree T:
A seed is produced at t because of the tree’s exercise of seed-producing powers together with God’s cooperative exercise of primary causation. …
Let’s start with a puzzle:
Puzzle. You measure the energy and frequency of some laser light trapped in a mirrored box and use quantum mechanics to compute the expected number of photons in the box. Then someone tells you that you used the wrong value of Planck’s constant in your calculation. …
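The calculation in the puzzle is simple enough to make explicit. A minimal sketch, with illustrative (assumed) values not taken from the post: since each photon of frequency ν carries energy hν, the expected photon number for light of total energy E is n = E / (hν), so plugging in a wrong value of Planck’s constant yields a different count.

```python
H_TRUE = 6.62607015e-34   # J*s, the defined SI value of Planck's constant
H_WRONG = 6.0e-34         # a hypothetical mistaken value

def photon_count(energy_joules, frequency_hz, h=H_TRUE):
    """Expected number of photons: each photon carries energy h * nu."""
    return energy_joules / (h * frequency_hz)

E, nu = 1e-15, 5e14  # 1 femtojoule of visible-range light (illustrative)
print(photon_count(E, nu))           # roughly 3018 photons
print(photon_count(E, nu, H_WRONG))  # roughly 3333 photons with the wrong constant
```

The puzzle trades on exactly this dependence: the same measured energy and frequency correspond to different photon numbers under different values of h.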
According to virtue epistemology, one should primarily understand knowledge in terms of the relationship between cognitive success and cognitive agency. There are various ways of understanding this thesis. Along one axis, there is the debate about whether we should focus on the agent’s reliable cognitive skills in general, or whether we should instead treat knowledge as primarily concerned with the manifestation of more elevated epistemic standings, like intellectual virtues. Along another axis, there is the debate about whether we should understand knowledge as being exclusively defined in terms of the subject’s cognitive skills (where this category includes the intellectual virtues), or whether there need to be supplementary conditions in one’s theory of knowledge to deal with the problem posed by knowledge-undermining epistemic luck. This paper will explore these topics, and in the process offer an overview of the contemporary debate regarding issues at the nexus of knowledge, skill and virtue epistemology.
I came across an interesting letter in response to the ASA’s Statement on p-values that I hadn’t seen before. It’s by Ionides, Giessing, Ritov and Page, and it’s very much worth reading. I make some comments below. …
Organisms leave a variety of traces in the fossil record. Among these traces, vertebrate and invertebrate paleontologists conventionally recognize a distinction between the remains of an organism’s phenotype (body fossils) and the remains of an organism’s life activities (trace fossils). The same convention recognizes body fossils as biological structures and trace fossils as geological objects. This convention explains some curious practices in classification, such as the distinction between taxa for trace fossils and taxa for tracemakers. I consider this maintenance of “parallel taxonomies,” or parataxonomies, which privileges some kinds of fossil taxa as “natural” and others as “artificial.” The motivations for and consequences of this practice are inconsistent. By comparison, I examine an alternative system of classification used by paleobotanists that regards all fossil taxa as “artificially” split. While this system has the potential to inflate the number of taxa with which paleontologists work, it offers greater consistency than conventional practices. Weighing the strengths and weaknesses of each system, I recommend that paleontologists adopt the paleobotanical system more broadly.
Vehicle externalism maintains that the vehicles of our mental representations can be located outside of the head, that is, they need not be instantiated by neurons located inside the brain of the cogniser. But some disagree, insisting that ‘non-derived’, or ‘original’, content is the mark of the cognitive and that only biologically instantiated representational vehicles can have non-derived content, while the contents of all extra-neural representational vehicles are derived and thus lie outside the scope of the cognitive. In this paper we develop one aspect of Menary’s vehicle externalist theory of cognitive integration—the process of enculturation—to respond to this longstanding objection. We offer examples of how expert mathematicians introduce new symbols to represent new mathematical possibilities that are not yet understood, and we argue that these new symbols have genuine non-derived content, that is, content that is not dependent on an act of interpretation by a cognitive agent and that does not derive from conventional associations, as many linguistic representations do.
How can we work out who should be listed as an author of a paper? This problem is pressing: both the prevalence of co-authorship and the number of co-authors per paper are increasing drastically. In May 2015, a paper giving an improved measurement of the mass of the Higgs boson, bringing together the ATLAS and CMS collaborations at CERN, was published in Physical Review Letters (Aad et al. 2015). This paper listed some 5,154 authors, a significant number of whom were deceased at the time of publication. The list was derived from the members of ATLAS and CMS, many of whom will not have contributed to the research or writing, or even have read the paper. There are a huge number of approaches to co-authorship: some papers list authors alphabetically, others by order of contribution, others by seniority, while others give special significance to particular positions (typically first, second, and last). Some disciplines (especially in the humanities) typically list only the person who has done most of the work as an author, whereas others list everyone in the organisation or lab, irrespective of whether they have done any work on the paper. Although some disciplines have clear norms, in many it is unclear – or simply indeterminate – what the norms for ascribing authorship are.
Much theorizing on intentionality has proceeded on the assumption that intentionality can be fully explained in terms of functional role and causal/informational/correlation-type relations, i.e. that it can be “naturalized”. This assumption remains widespread, but it has come under increased scrutiny over the past couple of decades. Brian Loar (1995) was among the most influential critics of the naturalization project, arguing that intentionality cannot be fully understood without the first-person perspective. He also articulated a picture of the mind on which all intentionality either is or originates from phenomenal intentionality, a kind of intentionality that arises from consciousness alone (2002, 2003). Since Loar’s papers on this topic began circulating in the 90s, many philosophers have joined the phenomenal intentionality camp. His rich and inspiring work has been highly influential, helping to usher into the mainstream what is increasingly being perceived as a main contender for a theory of intentionality.
Physicists Brian Greene and Max Tegmark both make variants of the claim that if the universe is infinite and matter is roughly uniformly distributed, then there are infinitely many “people with the same appearance, name and memories as you, who play out every possible permutation of your life choices.” In this paper I argue that, while our current best theories in astrophysics may allow one to conclude that we have infinitely many duplicates whose lives are identical to our own from start to finish, without either further advances in physics or advances in fields like biology, psychology, neuroscience, and philosophy, Greene’s and Tegmark’s claims about the ways in which our duplicates’ lives will differ from our own are not a consequence of our best current scientific theories.
Followers of this blog will recall my post from October 30, where I solicited ideas about a "Kindness Assignment" for my lower-division philosophy class "Evil". The assignment was to perform ninety minutes of kindness for one or more people, with no formal accountability or reward. …
You know the type. Always quick to blame you for your moral complacency. Always righteously indignant at your moral failings. Always keen to highlight their virtue and your vice. I am talking about moralists, of course. …
Thomists have two stories about how God can act providentially in the world. First, God can work simply miraculously, directly producing an effect that transcends the relevant created causal powers. Second, God can work cooperatively: whenever any finite causal agency is exercised, God intentionally cooperates with it through his primary causation, in such a way that it is up to God which of the causal agent’s natural effects is produced. …
Modal discourse concerns alternative ways things can be, e.g., what
might be true, what isn’t true but could have been, what should
be done. This entry focuses on counterfactual
modality which concerns what is not, but could or would have
been. What if Martin Luther King had died when he was stabbed in 1958
(Byrne 2005: 1)? What if the Americas
had never been colonized? What if I were to put that box over here and
this one over there? These modes of thought and speech have been the
subject of extensive study in philosophy, linguistics, psychology,
artificial intelligence, history, and many other allied fields.
Here’s a curious puzzle. Every theist—including the Molinist and the Open Theist—will presumably agree that this conditional is true:
If God were to announce that Trump will freely refrain from tweeting tomorrow, then Trump would freely refrain from tweeting tomorrow. …
Consider this standard bit of dialectic. One gives a Free Will Defense relying on the logical possibility of Trans-World Depravity:
TWD1: In every feasible world, some significantly free creature sins at least once. …
Most philosophical discussions of mindreading stay squarely within the
realm of philosophy of psychology. Theorizing about mindreading plays a role in
debates about the modularity of the mind, the representational theory of mind,
language development, the semantics of ordinary language use, etc. …
A prominent objection against the logicality of second-order logic is the so-called Overgeneration Argument. However, it is far from clear how this argument is to be understood. In the first part of the article, we examine the argument and locate its main source, namely, the alleged entanglement of second-order logic and mathematics. We then identify various reasons why the entanglement may be thought to be problematic. In the second part of the article, we take a metatheoretic perspective on the matter. We prove a number of results establishing that the entanglement is sensitive to the kind of semantics used for second-order logic. These results provide evidence that by moving from the standard set-theoretic semantics for second-order logic to a semantics which makes use of higher-order resources, the entanglement either disappears or may no longer be in conflict with the logicality of second-order logic.
Gender classifications often are controversial. These controversies typically focus on whether gender classifications align with facts about gender kind membership: Could someone really be nonbinary? Is Chris Mosier (a trans man) really a man? I think this is a bad approach. Consider the possibility of ontological oppression, which arises when social kinds operating in a context unjustly constrain the behaviors, concepts, or affect of certain groups. Gender kinds operating in dominant contexts, I argue, oppress trans and nonbinary persons in this way: they marginalize trans men and women, and exclude nonbinary persons. As a result, facts about membership in dominant gender kinds should not settle gender classification practices.
What makes an epistemic norm distinctively epistemic? According to the received view in the literature, if a norm N regulates the epistemic properties required for permissibly phi-ing, then N is an epistemic norm. This paper is an exercise in conceptual engineering. It has two aims: first, it argues that the received view should be abandoned, since it fails to identify epistemic and only epistemic requirements, and it fails to fit the general normative landscape. At the same time, I argue, the failure of the received view is no reason for skepticism about ‘the epistemic’ as a sui generis normative domain. The paper’s second and central aim is ameliorative: it proposes a novel approach to individuating epistemic norms. In a nutshell, according to the ameliorative proposal I develop here, epistemic norms are to be individuated by their association with distinctively epistemic values.
This paper defends a novel view of hermeneutical epistemic injustice (HEI). To this effect, it starts by arguing that Miranda Fricker’s account is too restrictive: hermeneutical epistemic injustice is more ubiquitous than her account allows. That is because, contra Fricker, conceptual ignorance is not necessary for HEI: hermeneutical epistemic injustice essentially involves a failure in concept application rather than in concept possession. Further on, I unpack hermeneutical epistemic injustice as unjustly brought about basing failure. Last, I show that, if this view is right, HEI is a form of distributive injustice, and affords the corresponding traditional normative theorizing.
In this post, I argue that Model Theory is a superior account
of the broader conception of mindreading laid out in the previous post. Thus
far, I have refrained from discussing Theory Theory (TT) and Simulation Theory (ST)
even though these theories have been the two main general theories of
mindreading for decades. …
There is a messy little gap in how the Free Will Defense is sometimes thought about—or at least has been thought about by me. First it is argued that the following Trans-World Depravity thesis is logically possible:
(TWD) In every feasible world, every significantly free creature sins at least once. …