Causality plays an important role in medieval philosophical writing:
the dominant genre of medieval academic writing was the commentary on
an authoritative work, very often a work of Aristotle. Of the works of
Aristotle thus commented on, the Physics plays a central
role. Other of Aristotle’s scientific works – On the
Heavens and the Earth, On Generation and Corruption,
and, of course, the Metaphysics – are also significant
for the study of causation: so there is a rather daunting body of work
to survey. One might, though, be tempted to argue that this concentration on
causality is simply an effect of reading Aristotle, but this would be a
mistake.
Yet another tactic was offered the Negro. He was encouraged to seek unity with the millions of disadvantaged whites of the South, whose basic need for social change paralleled his own. Theoretically, this proposal held a measure of logic, for it is undeniable that great masses of Southern whites exist in conditions scarcely better than those which afflict the Negro. …
The ethical task of becoming a better person requires identifying and fairly assessing one’s motivations. Any ethical theory needs to be consistent with the structure of human motivation. Ethics therefore requires an understanding of how self-deception about motivation is possible. The two main theories of self-deception about motivation are Sigmund Freud’s theory of repression and Jean-Paul Sartre’s theory of bad faith. Freud distinguishes between rationally structured and purely mechanistic aspects of the mind, arguing that repression is a process of preventing oneself from becoming conscious of some mechanistic item. Sartre argues that this explanation fails, since the activity of repression would need to be concealed but cannot be mechanistic. Sartre’s alternative rests on his theory of projects as the ground of motivations. Since projects structure conscious experience, they structure our reflective awareness of our own projects, which allows features of our projects to become hidden from our view. Sartre’s theory is internally coherent and consistent with the view of motivation currently emerging from social psychology. But it is inconsistent with his own theory of radical freedom. It requires instead Simone de Beauvoir’s theory of project sedimentation, which in turn entails a nonpurposive form of self-deception.
In the Gospel of John we are told the story of a Samaritan woman who asks Jesus whether the proper place of worship is on the holy mountain of Samaria or in the Temple of Jerusalem. These were two competing, antagonistic religious institutions. Jesus responds: “Woman, believe Me, an hour is coming when neither in this mountain nor in Jerusalem will you worship the Father . . . an hour is coming, and now is, when the true worshippers will worship in spirit and truth; for such people the Father seeks to be His worshippers. God is spirit, and those who worship Him must worship in spirit and truth” (Jn 4:21-24).
This article uses psychological and neural theories to illuminate the use of analogies in literary allegories. It shows how new theories of neural representation, encompassing both cognitive and emotional aspects, have the potential to make sense of many kinds of literary comparisons including allegories. The main text analyzed is George Orwell’s Animal Farm, whose effectiveness is discussed using the multiconstraint theory of analogy supplemented with observations about neural functioning.
Automated geometry theorem provers start with logic-based formulations of Euclid’s axioms and postulates, and often assume the Cartesian coordinate representation of geometry. That is not how the ancient mathematicians started: for them the axioms and postulates were deep discoveries, not arbitrary postulates. What sorts of reasoning machinery could the ancient mathematicians, and other intelligent species (e.g. crows and squirrels), have used for spatial reasoning? “Diagrams in minds” perhaps? How did natural selection produce such machinery?
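The coordinate method mentioned above can be sketched in a few lines. The parallelogram theorem and the helper names below are illustrative choices, not drawn from any particular prover; with exact rational arithmetic the claim reduces to a polynomial identity, which is checked here on random instances rather than by symbolic manipulation:

```python
from fractions import Fraction
from random import randint, seed

# Coordinate-method sketch: the diagonals of a parallelogram bisect each
# other. A real prover would manipulate the symbols; here we verify the
# identity exactly on random rational instances.
def mid(P, Q):
    return tuple((p + q) / 2 for p, q in zip(P, Q))

seed(0)
ok = True
for _ in range(100):
    A, B, D = (tuple(Fraction(randint(-9, 9)) for _ in range(2)) for _ in range(3))
    C = tuple(b + d - a for a, b, d in zip(A, B, D))  # ABCD a parallelogram
    ok &= mid(A, C) == mid(B, D)                      # diagonals share a midpoint

print(ok)  # True
```

The contrast with the passage's question is the point: this style of reasoning presupposes Cartesian coordinates and exact arithmetic, machinery the ancient mathematicians (and crows and squirrels) plainly did not have.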
George Boole (1815–1864) was an English mathematician and a
founder of the algebraic tradition in logic. He worked as a
schoolmaster in England and from 1849 until his death as professor of
mathematics at Queen’s University, Cork, Ireland. He revolutionized
logic by applying methods from the then-emerging field of symbolic
algebra to logic. Where traditional (Aristotelian) logic relied on
cataloging the valid syllogisms of various simple forms, Boole’s
method provided general algorithms in an algebraic language which
applied to an infinite variety of arguments of arbitrary
complexity. These results appeared in two major works,
The Mathematical Analysis of Logic (1847) and
The Laws of Thought (1854).
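Boole's algebraic method can be sketched computationally. He renders "All X are Y" as the class equation x(1 − y) = 0; letting each symbol range over {0, 1} (membership indicators for an arbitrary individual) turns validity-checking into an exhaustive algebraic test. A minimal illustration in Python, not Boole's own notation:

```python
from itertools import product

def all_are(x, y):
    # Boole's equational form of "All X are Y": x(1 - y) = 0
    return x * (1 - y) == 0

# Barbara: All A are B, All B are C  =>  All A are C.
# The syllogism is valid iff the conclusion holds under every 0/1
# assignment that satisfies both premises.
valid = all(
    all_are(a, c)
    for a, b, c in product((0, 1), repeat=3)
    if all_are(a, b) and all_are(b, c)
)
print(valid)  # True
```

The same loop handles any number of premises over any number of terms, which is the sense in which Boole's algorithms apply to "an infinite variety of arguments of arbitrary complexity" where syllogistic cataloging could not.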
Origen (c. 185–c. 253) was a Christian exegete and theologian,
who made copious use of the allegorical method in his commentaries,
and (though later considered a heretic) laid the foundations of
philosophical theology for the church. He was taught by a certain
Ammonius, whom the majority of scholars identify as Ammonius Saccas,
the teacher of Plotinus; many believe, however, that the external
evidence will not allow us to identify him with the Origen whom
Plotinus knew as a colleague. He was certainly well-instructed in
philosophy and made use of it as an ancillary to the exposition and
harmonization of scripture.
Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
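The indeterminacy claimed above can be made vivid with a toy expected-utility calculation. All payoffs, probabilities, and the "rival deity" state below are illustrative assumptions, not Pascal's text or the author's formalism:

```python
import math

def expected_utility(payoffs, probs):
    """Probability-weighted sum over possible states of the world."""
    return sum(p * u for p, u in zip(probs, payoffs))

INF = float('inf')

# Canonical two-state Wager: states = [Christian God exists, no god].
believe    = expected_utility([INF, -1], [0.5, 0.5])   # infinite expected gain
disbelieve = expected_utility([-INF, 1], [0.5, 0.5])   # infinite expected loss

# Grant nonzero probability to one deity that rewards non-belief:
# states = [Christian God, deity rewarding non-believers, no god].
believe_many = expected_utility([INF, -INF, -1], [1/3, 1/3, 1/3])

print(believe, disbelieve, believe_many)  # inf -inf nan
```

Once salvation requirements are themselves uncertain, the infinite payoffs cancel into an undefined sum: the arithmetic analogue of the "fatal indeterminacy" the abstract argues for.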
Rifling through two-hundred-year-old diaries, unfurling bundles of love-letters like flowers, saying every name in an orphanage registry under my breath, getting lost in a farmer’s field, gingerly lifting leaves long folded with perfumey motes, falling asleep in my sunshine chair, drooling spittle puddles onto a crackled map of Nunsmoor. The stories I stumbled across in the archives were often painful, shocking, and occasionally joyous. At first, they seem far away but after a short while they begin to move closer (or maybe it’s we who are moving?) and I begin to comprehend, just barely, a great aliveness.
Jacques Derrida (1930–2004) was the founder of
“deconstruction,” a way of criticizing not only
literary and philosophical texts but also political institutions. Although Derrida at times expressed regret concerning the fate of the
word “deconstruction,” its popularity indicates the
wide-ranging influence of his thought, in philosophy, in literary
criticism and theory, in art and, in particular, architectural theory,
and in political theory. Indeed, Derrida nearly attained
the status of a media star, with hundreds of people filling
auditoriums to hear him speak, with films and television programs
devoted to him, with countless books and articles devoted to his
The philosophy of Epicurus (341–270 B.C.E.) was a complete and
interdependent system, involving a view of the goal of human life
(happiness, resulting from absence of physical pain and mental
disturbance), an empiricist theory of knowledge (sensations, together with
the perception of pleasure and pain, are infallible criteria), a
description of nature based on atomistic materialism, and a
naturalistic account of evolution, from the formation of the world to
the emergence of human societies. Epicurus believed that, on the basis
of a radical materialism which dispensed with transcendent entities
such as the Platonic Ideas or Forms, he could disprove the possibility
of the soul’s survival after death, and hence the prospect of
punishment in the afterlife.
This is a position paper. It presents a cohesive framework that addresses some of the defining issues of contemporary metaethics, notably the nature of moral judgment, moral reality, and moral language. The framework is supposed to appeal to philosophers antecedently attracted, on the one hand, to the idea that there are no such mind-independent entities as values, and on the other hand, to the idea that there is still such a thing as substantive moral truth. §1 introduces three prominent divides in contemporary metaethics: between cognitivism and noncognitivism in moral psychology, between moral realism and antirealism in moral metaphysics, and between descriptivism and expressivism in moral semantics. §2 then presents, rather dogmatically, a comprehensive approach to the mind that I call impure intentionalism, which type-individuates mental states in terms of their intentional character, understood as a combination of content and attitude; it also presents a specific framework for understanding the attitudinal aspect of mental states. Finally, applying impure intentionalism to moral psychology, §3 distinguishes between two kinds of moral judgment, one cognitive and one noncognitive, and crafts a moral metaphysics and a moral semantics around this distinction.
In the framework of Brans-Dicke theory, a cosmological model regarding the expanding universe has been formulated by considering an inter-conversion of matter and dark energy. A function of time has been incorporated into the expression of the density of matter to account for the non-conservation of the matter content of the universe. This function is proportional to the matter content of the universe. Its functional form is determined by using empirical expressions of the scale factor and the scalar field in field equations. This scale factor has been chosen to generate a signature flip of the deceleration parameter with time. The matter content is found to decrease with time monotonically, indicating a conversion of matter into dark energy. This study leads us to the expressions of the proportions of matter and dark energy of the universe. Dependence of various cosmological parameters upon the matter content has been explored.
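The signature flip of the deceleration parameter can be illustrated with a generic ansatz. The exponential form and the symbols $a_0$, $b$, $n$ below are assumptions for illustration; the paper's specific empirical expressions are not reproduced here:

```latex
% Illustrative scale factor with b > 0 and 0 < n < 1 (assumed ansatz)
a(t) = a_0 \exp\!\left(b\,t^{\,n}\right), \qquad
H = \frac{\dot a}{a} = b\,n\,t^{\,n-1}, \qquad
q = -\frac{a\,\ddot a}{\dot a^{2}} = -1 - \frac{\dot H}{H^{2}}
  = -1 + \frac{1-n}{b\,n\,t^{\,n}} .
```

Under these assumptions $q > 0$ at early times (deceleration) and $q \to -1$ at late times, changing sign at $t_f = \left[(1-n)/(b\,n)\right]^{1/n}$: a signature flip of the kind the abstract describes.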
We discuss an article by Steven Weinberg expressing his discontent with the usual ways to understand quantum mechanics. We examine the two solutions that he considers and criticizes and propose another one, which he does not discuss, the pilot wave theory or Bohmian mechanics, to which his criticisms do not apply.
Beall and Murzi (J Philos 110(3):143–165, 2013) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules (over sufficiently expressive base theories). As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (Notre Dame J Form Log 58(1):1–19, 2017) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi (J Philos Log. doi:10.1007/s10992-017-9438-x, 2017), is a generalisation of Kripke’s notion of grounded truth (J Philos 72:690–716, 1975), and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic (nor, more generally, of a revisionary approach to semantic paradox), we take the notion of naïve
In this paper, I make explicit some implicit commitments to realism and conceptualism in recent work in social epistemology exemplified by Miranda Fricker and Charles Mills. I offer a survey of recent writings at the intersection of social epistemology, feminism, and critical race theory, showing that commitments to realism and conceptualism are at once implied yet undertheorized in the existing literature. I go on to offer an explicit defense of these commitments by drawing from the epistemological framework of John McDowell, demonstrating the relevance of the metaphor of the “space of reasons” for theorizing and criticizing instances of epistemic injustice. I then point out how McDowell’s own view requires expansion and revision in light of Mills’ concept of “epistemologies of ignorance.” I conclude that, when their strengths are used to make up for each other’s weaknesses, Mills and McDowell’s positions mutually reinforce one another, producing a powerful model for theorizing instances of systematic ignorance and false belief.
With phenomenal characters, we seem finally to have come face to face with paradigmatic instances of intrinsic properties. The hurtfulness of pain, the acrid smell of sulphur, the taste and flavor of pineapple—these things are intrinsic qualities if anything is.
This article discusses the following issues about space and time: whether they are absolute or relative, whether they depend on minds, what their topological and metrical structures may be, McTaggart’s argument against the reality of time, the ensuing split between static and dynamic theories of time, problems with presentism, and the possibility of time travel. Our opening questions are posed in the following query from Kant: What, then, are space and time? Are they real existences? Are they only determinations or relations of things, yet such as would belong to things even if they were not intuited?
Although Reid never addresses Molyneux’s question by name, he has much to say that bears upon it, particularly in his discussions of the capacities of the blind and the relations of visible to tangible figure. My goal in this essay is to ascertain and evaluate Reid’s answer. On a first reading, it can seem that Reid gives two inconsistent answers. I shall argue, however, that the inconsistency goes away once we distinguish different versions of what is being asked. I shall also argue that Reid’s answer of yes to one important Molyneux question is more plausible than Berkeley’s answer of no.
The rise of medically unexplained conditions like fibromyalgia and chronic fatigue syndrome in the United States looks remarkably similar to the explosion of neurasthenia diagnoses in the late nineteenth century. In this paper, I argue the historical connection between neurasthenia and today’s medically unexplained conditions hinges largely on the uncritical acceptance of naturalism in medicine. I show how this cultural acceptance shapes the way in which we interpret and make sense of nervous distress while, at the same time, neglecting the unique social and historical forces that continue to produce it. I draw on the methods of hermeneutic philosophy to expose the limits of naturalism and forward an account of health and illness that acknowledges the extent to which we are always embedded in contexts of meaning that determine how we experience and understand our suffering.
Antiochus, who was active in the latter part of the second and the
early part of the first centuries B.C.E., was a member of the Academy,
Plato’s school, during its skeptical phase. After espousing skepticism
himself, he became a dogmatist. He defended an epistemological theory
essentially the same as the Stoics’ and an ethical theory which
synthesized elements from the Stoa and Plato and Aristotle. In both
areas he claimed to be reviving the doctrines of the Old Academy of
Plato and his earliest successors and
“Affirmative action” means positive steps taken to
increase the representation of women and minorities in areas of
employment, education, and culture from which they have been
historically excluded. When those steps involve preferential
selection—selection on the basis of race, gender, or
ethnicity—affirmative action generates intense controversy. The development, defense, and contestation of preferential affirmative
action have proceeded along two paths. One has been legal and
administrative as courts, legislatures, and executive departments of
government have made and applied rules requiring affirmative action.
Stoicism was one of the new philosophical movements of the Hellenistic
period. The name derives from the porch (stoa poikilê)
in the Agora at Athens decorated with mural paintings, where the
members of the school congregated, and their lectures were held. Unlike ‘epicurean,’ the sense of the English adjective
‘stoical’ is not utterly misleading with regard to its
philosophical origins. The Stoics did, in fact, hold that emotions
like fear or envy (or impassioned sexual attachments, or passionate
love of anything whatsoever) either were, or arose from, false
judgements and that the sage – a person who had attained moral and
intellectual perfection – would not undergo them.
Taking literally the concept of emotional truth requires breaking the monopoly on truth of belief-like states. To this end, I look to perceptions for a model of non-propositional states that might be true or false, and to desires for a model of propositional attitudes the norm of which is other than the semantic satisfaction of their propositional object. Those models inspire a conception of generic truth, which can admit of degrees for analogue representations such as emotions; belief-like states, by contrast, are digital representations. I argue that the gravest problem—objectivity—is not insurmountable.
You may very well know the Five Books website, where a wide-ranging cast of contributors are asked “to make book recommendations in their area of work and explain their choices in an interview”. The recommendations are often quirky, sometimes even slightly bizarre, but rarely without interest. …
Chinese philosophy was developed on the basis of ontological,
epistemological and metaphysical paradigms that differ from those of
Western theoretical discourses. The concepts and categories used in
Chinese philosophy cannot be easily transferred from one
socio-cultural context into another, and it is often difficult to
understand this philosophy through the lens of traditional Western
thought. The exclusive application of Western methods can thus lead
to severe misunderstandings and false interpretations of Chinese
discourses. It is therefore important to use caution so as not to
diminish the richness and depth of Chinese thought or turn it into
a weak version of Western philosophical thought.
Empirical research into moral decision-making is often taken to have normative implications. For instance, in his recent book, Joshua Greene (2013) relies on empirical findings to establish utilitarianism as a superior normative ethical theory. Kantian ethics, and deontological ethics more generally, is a rival view that Greene attacks. At the heart of Greene’s argument against deontology is the claim that deontological moral judgments are the product of certain emotions and not of reason.
Any successful account of the metaphysics of mechanistic causation must satisfy at least five key desiderata. In this paper, I lay out these five desiderata and explain why existing accounts of the metaphysics of mechanistic causation fail to satisfy them. I then present an alternative account which does satisfy the five desiderata. According to this alternative account, we must resort to a type of ontological entity that is new to metaphysics, but not to science: constraints. In this paper, I explain how a constraints-based metaphysics fits best with the emerging consensus on the nature of mechanistic explanation.
Process theism typically refers to a family of theological ideas
originating in, inspired by, or in agreement with the metaphysical
orientation of the English philosopher-mathematician Alfred North
Whitehead (1861–1947) and the American philosopher-ornithologist
Charles Hartshorne (1897–2000). For both Whitehead and
Hartshorne, it is an essential attribute of God to be fully involved
in and affected by temporal processes. This idea contrasts neatly with
traditional forms of theism that hold God to be, or at least to be
conceived as being, in all respects non-temporal (eternal), unchanging
(immutable), and unaffected by the world (impassible).