[Editor's Note: The following new entry by Timothy O’Connor replaces the
entry on this topic by the previous authors.]
The world appears to contain diverse kinds of objects and
systems—planets, tornadoes, trees, ant colonies, and human
persons, to name but a few—characterized by distinctive features
and behaviors. This casual impression is deepened by the success of
the special sciences, with their distinctive taxonomies and laws
characterizing astronomical, meteorological, chemical, botanical,
biological, and psychological processes, among others. But
there’s a twist, for part of the success of the special sciences
reflects an effective consensus that the features of the composed
entities they treat do not “float free” of features and
configurations of their components, but are rather in some way(s)
dependent on them.
We present an objection to Beall and Henderson’s recent paper defending a solution to the fundamental problem of conciliar Christology using qua or secundum clauses. We argue that certain claims, the acceptance or rejection of which distinguishes the Conciliar Christian from others, fail to so distinguish on Beall and Henderson’s 0-Qua view. This is because, on their 0-Qua account, these claims are either acceptable both to Conciliar Christians and to those who are not Conciliar Christians, or acceptable to neither.
In linguistics, the dominant approach to the semantics of plurals appeals to mereology. However, this approach has received strong criticisms from philosophical logicians who subscribe to an alternative framework based on plural logic. In the first part of the article, we offer a precise characterization of the mereological approach and the semantic background in which the debate can be meaningfully reconstructed. In the second part, we deal with the criticisms and assess their logical, linguistic, and philosophical significance. We identify four main objections and show how each can be addressed. Finally, we compare the strengths and shortcomings of the mereological approach and plural logic. Our conclusion is that the former remains a viable and well-motivated framework for the analysis of plurals.
Proclus of Athens (*412–485 C.E.) was the most authoritative
philosopher of late antiquity and played a crucial role in the
transmission of Platonic philosophy from antiquity to the Middle Ages. For almost fifty years, he was head or ‘successor’
(diadochos, sc. of Plato) of the Platonic
‘Academy’ in Athens. Being an exceptionally productive
writer, he composed commentaries on Aristotle, Euclid and Plato,
systematic treatises in all disciplines of philosophy as it was at
that time (metaphysics and theology, physics, astronomy, mathematics,
ethics) and exegetical works on traditions of religious wisdom
(Orphism and Chaldaean Oracles).
6. We desire love as a function of the relational nature of our being. Ontologically, we are not complete or sufficient unto ourselves. We do not and cannot provide the 'space' (both physical and emotional) we must occupy in order to be what and as we are.
I argue that the science of the soul only covers sublunary living things. Aristotle cannot properly ascribe ψυχή to unmoved movers since they do not have any capacities that are distinct from their activities or any matter to be structured. Heavenly bodies do not have souls in the way that mortal living things do, because their matter is not subject to alteration or generation. These beings do not fit into the hierarchy of soul powers that Aristotle relies on to provide unity to ψυχή. Their living consists in their activities, not in having a capacity for activity.
While controversy about the nature of grounding abounds, our focus is on a question for which a particular answer has attracted something like a consensus. The question concerns the relation between partial grounding and full grounding. The apparent consensus is that the former is to be defined in terms of the latter. We argue that the standard way of doing this faces a significant problem and that we ought to pursue the reverse project of defining full grounding in terms of partial grounding. The guiding idea behind the definition we propose is that full grounding is what happens when partial grounding works in a way that ensures that the grounded is nothing over and above the grounds. We ultimately understand this idea in terms of iterated nothing-over-and-above claims.
Constitutive panpsychism is the doctrine that macro-level consciousness—that is, consciousness of the sort possessed by certain composite things such as humans—is built out of irreducibly mental (or proto-mental) features had by some or all of the basic physical constituents of reality. On constitutive panpsychism, changes in macro-level consciousness amount to changes in either the way that micro-conscious entities ‘bond’ or the way that micro-conscious qualities ‘blend’ (or both). I pose the ‘Selection Problem’ for constitutive panpsychism: the problem of explaining how high-level functional states of the brain ‘select’ micro-conscious qualities for bonding or blending. I argue that there are no empirically plausible solutions to this problem.
Just as different pairs of shoes are useful for different occasions, different masks are useful for different occasions. Here's my collection.
Category 1: Likely to be significantly protective
1.1. 3M 6300 half-face mask with 2091 P100 filters
Summary: Extremely protective for inhalation. …
In June 2016, David Chalmers delivered the Petrus Hispanus Lectures at the LanCog research group, University of Lisbon, on the subject of objects, properties, and perception in virtual reality environments. The paper resulting from these lectures was subsequently published in Disputatio as “The Virtual and the Real” (vol. IX, 2017, No. 46, pp. 309–52). In it, Chalmers defends virtual realism, according to which virtual objects are bona fide digital objects with virtual counterparts of perceptible properties such as colour and shape, and perception in virtual reality environments is typically veridical rather than illusory. This special issue collects responses to Chalmers due to Claus Beisbart, Jesper Juul, Peter Ludlow, Neil McDonnell and Nathan Wildman, Alyssa Ney, Eric Schwitzgebel, and Marc Silcox; together with a detailed response by Chalmers to each paper.
In this paper I want to explain, from a physicalist point of view, why so many people are persuaded that consciousness is non-physical. I take there to be good arguments, stemming from the need to integrate conscious events into the causal workings of the world, for identifying conscious states with physical states, and in what follows I shall take these arguments as read. At the same time there is no doubt that many people have strong intuitions that consciousness cannot possibly be physical. My aim will be to explain how these intuitions arise, and why they do not discredit physicalism.
Here I present the several models currently popular for understanding speciation in prokaryotes, in particular bacteria. I will argue that “speciation”, as a process or collection of independent but interacting processes sometimes serving to form genotypic and/or phenotypic clusters, can be studied effectively without any definition of “species” or any requirement that all prokaryotic lineages match such a definition. This has always been so, but formal acknowledgement would have a freeing effect.
The study of de re modality is concerned with facts about the modal profiles of individuals—facts about what could have been true of them and what could not have failed to be true of them—and with the roles of individuals in the theory of possible worlds. What follows is a selective overview of issues that arise in this part of the philosophy of modality.
During the last quarter of a century, a number of philosophers have become attracted to the idea that necessity can be analyzed in terms of a hyperintensional notion of essence. One challenge for proponents of this view is to give a plausible explanation of our modal knowledge. The goal of this paper is to develop a strategy for meeting this challenge.
Counterfactual thought is an important element of our cognitive lives. In making practical decisions, we are often led to ask what would happen if we were to carry out a certain action, and we frequently support causal claims by showing that the putative effect depends counterfactually on the supposed cause. It therefore does not come as a surprise that scholars in many disciplines—from philosophy to cognitive and social psychology to computer science to linguistics—have shown a keen interest in understanding counterfactuals. One point of contention is whether causal notions should figure in a semantic account of counterfactuals. A number of philosophers and linguists, motivated by examples like those described in section 1 below, have favored such causal theories of counterfactuals. However, this approach stands opposed to a prominent philosophical tradition, going back at least to David Hume and most prominently defended by David Lewis, that aims to give a reductive analysis of causation in counterfactual terms. The two views advocate for opposite directions of analysis and are consequently mutually exclusive—combining them would lead to circularity.
There was a time when mind-dependence was central to many of the leading approaches to metaphysics, but that time has come and gone. De-fanged successors, such as response-dependence and judgment-dependence, still enjoy currency. But the old idea that the world is somehow custom-built for us and minds like our own has fallen into disrepute. This paper is part of a larger project of mine aimed at resuscitating mind-dependence.
The received concepts of axiomatic theory and axiomatic method, which stem from David Hilbert, need a systematic revision in view of more recent mathematical and scientific axiomatic practices, which do not fully follow in Hilbert’s steps and re-establish some older historical patterns of axiomatic thinking in unexpected new forms. In this work I motivate, formulate and justify such a revised concept of axiomatic theory, which for a variety of reasons I call constructive, and then argue that it can better serve as a formal representational tool in mathematics and science than the received concept.
Last summer my son David and I scraped the bibliographies out of the massive Stanford Encyclopedia of Philosophy to generate a list of the 295 most cited contemporary authors in the Stanford Encyclopedia. …
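The tallying step can be sketched roughly as follows. This is a minimal illustration, not the script we actually used: the sample entries and the first-author heuristic are invented for the example.

```python
from collections import Counter

# Pretend each string is one bibliography entry scraped from an SEP page
# (invented sample data for illustration).
sample_bibliographies = [
    "Lewis, David, 1986, On the Plurality of Worlds.",
    "Lewis, David, 1973, Counterfactuals.",
    "Kripke, Saul, 1980, Naming and Necessity.",
]

def first_author(entry):
    """Crude heuristic: take the first two comma-separated fields
    (surname, given name) as the author."""
    head = entry.split(",")[:2]
    return ", ".join(part.strip() for part in head)

# Count how often each author appears across all bibliographies,
# then rank by citation count.
counts = Counter(first_author(e) for e in sample_bibliographies)
print(counts.most_common(2))  # [('Lewis, David', 2), ('Kripke, Saul', 1)]
```

A real run would, of course, need much messier parsing and disambiguation of author names than this heuristic provides.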
Pautz (Perceiving the World, 2010) has argued that the most prominent naive realist account of hallucination—negative epistemic disjunctivism—cannot explain how hallucinations enable us to form beliefs about perceptually presented properties. He takes this as grounds to reject both negative epistemic disjunctivism and naive realism. Our aims are two: First, to show that this objection is dialectically ineffective against naive realism, and second, to draw morals from the failure of this objection for the dispute over the nature of perceptual experience at large.
Do the senses represent causation? Many commentators read Malebranche as anticipating Hume’s negative answer to this question. I disagree with this assessment. When a yellow billiard ball strikes a red billiard ball, Malebranche holds that we see the yellow ball as causing the red ball to move. Given Malebranche’s occasionalism, he insists that the visual experience of causal interaction is illusory. Nevertheless, Malebranche holds that the senses (mis)represent finite things as causally efficacious. This experience of creaturely causality explains why Aristotelian philosophers and ordinary folk struggle to recognize occasionalism’s truth.
I argue there are two ways predication relations can hold according to the Categories: they can hold directly or they can hold mediately. The distinction between direct and mediated predication is a distinction between whether or not a given predication fact holds in virtue of another predication fact’s holding. We can tell Aristotle endorses this distinction from multiple places in the text where he licenses an inference from one predication fact’s holding to another predication fact’s holding. The best explanation for each such inference is that he takes some predication facts to be mediated by others. Once the distinction between direct and mediated predication has been explained and argued for, I show how it can help solve a persistent problem for the traditional view of non-substantial particulars in the Categories—that is, the view that non-substantial particulars are particular in the sense of being non-recurrent. Along with vindicating the traditional view, the direct/mediated predication distinction gives us a distinctive way of understanding what it is for something to be recurrent (or non-recurrent) as well as a better understanding of Aristotle’s broader commitments in the Categories as a whole.
In this essay, my aim is to explain Vātsyāyana’s solution to a problem that arises for his theory of liberation. For him and most Nyāya philosophers after him, liberation consists in the absolute cessation of pain (ātyantika-duḥkha-vimukti). Since this requires freedom from embodied existence, it also results in the absolute cessation of pleasure. How, then, can agents like us (who habitually seek pleasure) be rationally motivated to seek liberation? Vātsyāyana’s solution depends on what I will call the Pain Principle, i.e., the principle that we should treat all aspects of our embodied existence as pain. If we were to follow this advice, we would come to apply the label of pain (duḥkha-saṃjñā) to all aspects of our embodied existence, including pleasure. This would undermine our attachment to our own embodied existence. I show that this fits with Vātsyāyana’s general theory of motivation. According to this theory, by manipulating the labels (saṃjñā) using which we think about the world and ourselves, we can induce radical shifts in our patterns of motivation.
An important part of the influential Humean doctrine in philosophy is the supervenience principle (sometimes referred to as the principle of separability). This principle asserts that the complete state of the world supervenes on the intrinsic properties of its most fundamental components and their spatiotemporal relations (the so-called Humean mosaic). There are well-known arguments in the literature purporting to show that in quantum mechanics the Humean supervenience principle is violated, due to the existence of entangled states.
philosophy. It is uniquely indubitable and uniquely infallible. It is the one and only source of certain knowledge. It is normatively required for assent in a priori disciplines like metaphysics, mathematics, and logic: you’re not supposed to assent to a proposition unless you perceive it clearly and distinctly. Descartes designed an early work, the Rules, to help readers “acquire the habit of intuiting the truth distinctly and clearly” (AT 10:400–1). Likewise, the chief purpose of his masterpiece, the Meditations, is to teach readers how to perceive things clearly and distinctly. As he writes to Mersenne: “We have to form distinct ideas of the things we want to judge about, and this is what most people fail to do and what I have mainly tried to teach by my Meditations” (AT 3:272).
The paper uses the concept of typicality to spell out an argument against Humean supervenience and the best system account of laws. It proves that, in a very general and robust sense, almost all possible Humean worlds have no Humean laws. They are worlds of irreducible complexity that do not allow for any systematization. After explaining typicality reasoning in general, the implications of this result for the metaphysics of laws are discussed in detail.
The purpose of this paper is to show that the dual notions of elements & distinctions are the basic analytical concepts needed to unpack and analyze morphisms, duality, and universal constructions in Sets, the category of sets and functions. The analysis extends directly to other concrete categories (groups, rings, vector spaces, etc.) where the objects are sets with a certain type of structure and the morphisms are functions that preserve that structure. Then the elements & distinctions-based definitions can be abstracted in a purely arrow-theoretic way for abstract category theory. In short, the language of elements & distinctions is the conceptual language in which the category of sets is written, and abstract category theory gives the abstract arrows version of those definitions.
Throughout the last fifty years, two theories have been championed within the mental imagery debate. On the one side, pictorialists like Fodor (1975) and Kosslyn (1980) defended the view that mental representations resemble non-mental images in that they are both depictive representations. On the other side, descriptionalists such as Dennett (1969) and Pylyshyn (1973) argued that mental images represent propositionally, through descriptive sentences. During those years many arguments were presented, discussed, and refuted. The aim of this paper will be to analyze one of the main arguments wielded against the pictorialist view, namely Dennett’s striped tiger objection. The objection holds that the inherent indeterminacy of mental images with respect to visual properties shows that mental representations could not be pictorial, and thus that the content of mental representation must differ from the content of perception. My purpose will be to show that Dennett’s argument is incorrect and falls prey to what Block (1983) identified as the photographic fallacy. After doing so, I will argue that descriptionalists often overlook fundamental features involved in the exercise of our imaginative faculties, misconceiving the way in which subjects perceive, imagine, and determine phenomenal properties. Eventually I will align with Nanay’s (2014) defense of the similar content view and the determinability thesis.
Truth pluralists say that the nature of truth varies between domains of discourse: while ordinary descriptive claims or those of the hard sciences might be true in virtue of corresponding to reality, those concerning ethics, mathematics, institutions (or modality, aesthetics, comedy…) might be true in some non-representational or “anti-realist” sense. Despite pluralism attracting increasing amounts of attention, the motivations for the view remain underdeveloped. This paper investigates whether pluralism is well-motivated on ontological grounds: that is, on the basis that different discourses are concerned with different kinds of entities. Arguments that draw on six different ontological contrasts are examined: (i) concrete vs. abstract entities; (ii) mind-independent vs. mind-dependent entities; (iii) sparse vs. merely abundant properties; (iv) objective vs. projected entities; (v) natural vs. non-natural entities; and (vi) ontological pluralism (entities that literally exist in different ways). I argue that the additional premises needed to move from such contrasts to truth pluralism are either implausible or unmotivated, often doing little more than to bifurcate the nature of truth when a more theoretically conservative option is available. If there is a compelling motivation for pluralism, I suggest, it’s likely to lie elsewhere.
Many philosophers are attracted to the interventionist slogan: “No causation without manipulability, no manipulability without causation”. Roughly speaking, on an interventionist account, X is a (type-level) cause of Y with respect to a variable set V if and only if an intervention that changes the value of X would also change the value of Y when all other relevant variables in V are held fixed at some value. The interventionist approach captures an important difference between genuine causation and mere correlation: if X causes Y, a proper intervention that changes X would also change Y; if X is merely correlated with Y, Y would not change under suitable manipulation of X.
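The biconditional can be made vivid with a toy structural model. The equations and variable names below are invented for illustration and are not drawn from any particular interventionist account:

```python
def model(x, z):
    # Toy structural equation: Y := 2*X + Z
    return 2 * x + z

def model2(w, z):
    # Here Y := 3*Z; W plays no causal role in determining Y
    return 3 * z

def intervention_changes_y(structural_eq, values, z_fixed):
    """Set the first variable to each value in turn (an intervention)
    while holding Z fixed; report whether Y takes more than one value."""
    outcomes = {structural_eq(v, z_fixed) for v in values}
    return len(outcomes) > 1

# X is a cause of Y: intervening on X changes Y with Z held fixed.
print(intervention_changes_y(model, [0, 1], z_fixed=1))   # True

# W is not a cause of Y: intervening on W leaves Y unchanged.
print(intervention_changes_y(model2, [0, 1], z_fixed=1))  # False
```

The second case models mere correlation: even if W and Y move together in observed data (say, because Z drives both), wiggling W by intervention leaves Y untouched, so W is not counted as a cause.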
The eliminative view of gauge degrees of freedom—the view that they arise solely from descriptive redundancy and are therefore eliminable from the theory—is a lively topic of debate in the philosophy of physics. Recent work attempts to leverage properties of the QCD θ_YM-term to provide a novel argument against the eliminative view. The argument is based on the claim that the QCD θ_YM-term changes under “large” gauge transformations. Here we review geometrical propositions about fiber bundles that unequivocally falsify these claims: the θ_YM-term encodes topological features of the fiber bundle used to represent gauge degrees of freedom, but it is fully gauge-invariant. Nonetheless, within the essentially classical viewpoint pursued here, the physical role of the θ_YM-term shows the physical importance of bundle topology (or superpositions thereof) and thus weighs against (a naive) eliminativism.