Augustine is commonly interpreted as endorsing an extramission theory of perception in De quantitate animae. A close examination of the text shows, instead, that he is committed to its rejection. I end with some remarks about what it takes for an account of perception to be an extramission theory and with a review of the strength of evidence for attributing the extramission theory to Augustine on the basis of his other works.
John Locke (b. 1632, d. 1704) was a British philosopher, Oxford
academic and medical researcher. Locke’s monumental An Essay
Concerning Human Understanding (1689) is one of the first great
defenses of modern empiricism and concerns itself with determining the
limits of human understanding in respect to a wide spectrum of topics. It thus tells us in some detail what one can legitimately claim to
know and what one cannot. Locke’s association with Anthony Ashley
Cooper (later the First Earl of Shaftesbury) led him to become
successively a government official charged with collecting information
about trade and colonies, economic writer, opposition political
activist, and finally a revolutionary whose cause ultimately triumphed
in the Glorious Revolution of 1688.
Antoine Arnauld (1612–1694) was a powerful figure in the
intellectual life of seventeenth-century Europe. He had a long and
highly controversial career as a theologian, and was an able and
influential philosopher. His writings were published and widely read
over a period of more than fifty years and were assembled in
1775–1782 in forty-two large folio volumes. Evaluations of Arnauld’s work as a theologian vary. Ian Hacking,
for example, says that Arnauld was “perhaps the most brilliant
theologian of his time” (Hacking 1975a, 25). Ronald Knox, on the
other hand, says, “It was the fashion among the Jansenists to
represent Antoine Arnauld as a great theologian; he should be
remembered, rather as a great controversialist… A theologian by
trade, Arnauld was a barrister by instinct” (Knox 1950, 196).
Consider this Thomistic-style doctrine:
God’s believing that a contingent entity x exists is the cause of x’s existing. Let B be God’s believing that I exist. Then, either
B exists in all possible worlds, or
B exists in all and only the worlds where I exist. …
Common-sense and traditional metaphysics alike accord shadows a secondary status in the order of things, excluding them from the first rank of genuine substances. Recall, for example, Shirley’s famous lyric: “The Glories of our blood and state / Are shadows, not substantial things”. Or how, in Shakespeare’s Titus Andronicus, Marcus bemoans of his brother Titus that “grief has so wrought on him, / He takes false shadows for true substances” (III.ii.79–80).
Johann Christoph Friedrich Schiller (1759–1805) is best known
for his immense influence on German literature. In his relatively
short life, he authored an extraordinary series of dramas, including
The Robbers, Maria Stuart, and the trilogy
Wallenstein. He was also a prodigious poet, composing perhaps
most famously the “Ode to Joy” featured in the culmination
of Beethoven’s Ninth Symphony and enshrined, some two centuries
later, in the European
Hymn.[1]
In part through his celebrated friendship with Goethe, he edited
epoch-defining literary journals and exerted lasting influence on
German stage production.
Analogical reasoning addresses the question of how evidence from various phenomena can be amalgamated and made relevant for theory development and prediction. In the first part of my contribution, I review some influential accounts of analogical reasoning, both historical and contemporary, focusing in particular on Keynes, Carnap, Hesse, and, more recently, Bartha. In the second part, I sketch a general framework. To this end, a distinction between a predictive and a conceptual type of analogical reasoning is introduced. I then take up a common intuition according to which (predictive) analogical inferences hold if the differences between source and target concern only irrelevant circumstances. I attempt to make this idea more precise by addressing possible objections and, in particular, by specifying a notion of causal irrelevance based on difference-making in homogeneous contexts.
Today’s Virtual Colloquium is “God’s Standing to Forgive” by Brandon Warmke. Dr. Warmke received his PhD in philosophy from the University of Arizona in 2014 and is currently Assistant Professor of Philosophy at Bowling Green State University in Ohio. …
There are four modal paradigms in ancient philosophy: the frequency
interpretation of modality, the model of possibility as a potency, the
model of antecedent necessities and possibilities with respect to a
certain moment of time (diachronic modalities), and the model of
possibility as non-contradictoriness. None of these conceptions, which
were well known to early medieval thinkers through the works of
Boethius, was based on the idea of modality as involving reference to
simultaneous alternatives. This new paradigm was introduced into
Western thought in early twelfth-century discussions influenced by
Augustine’s theological conception of God as acting by choice
between alternative histories.
Here are two technical problems with consciousness-causes-collapse (ccc) interpretations of quantum mechanics. In both, suppose a quantum experiment with two possible outcomes, A and B, of equal probability 1/2. …
An odd dissensus between confident metaphysicians and neo-pragmatist antimetaphysicians pervades early twenty-first century analytic philosophy. Each faction is convinced their side has won the day, but both are mistaken about the philosophical legacy of the twentieth century. More historical awareness is needed to overcome the current dissensus. Lewis and his possible-world system are lionised by metaphysicians; Quine’s pragmatist scruples about heavy-duty metaphysics inspire anti-metaphysicians. But Lewis developed his system under the influence of his teacher Quine, inheriting from him his empiricism, his physicalism, his meta-ontology, and, I will show in this paper, also his Humeanism. Using published as well as never-before-seen unpublished sources, I will make apparent that both heavy-duty metaphysicians and neo-pragmatist anti-metaphysicians are wrong about the roles Quine and Lewis played in the development of twentieth-century philosophy. The two are much more alike than is commonly supposed, and Quine much more instrumental to the pedigree of current metaphysics.
A Greek philosopher of the 1st and early 2nd
centuries C.E., and an exponent of Stoic ethics notable for the
consistency and power of his ethical thought and for effective methods
of teaching. Epictetus’s chief concerns are with integrity,
self-management, and personal freedom, which he advocates by demanding
of his students a thorough examination of two central ideas, the
capacity he terms ‘volition’ (prohairesis) and the
correct use of impressions (chrēsis tōn
phantasiōn). Heartfelt and satirical by turns, Epictetus has
had significant influence on the popular moralistic tradition, but he
is more than a moralizer; his lucid resystematization and challenging
application of Stoic ethics qualify him as an important philosopher in
his own right.
Relational egalitarians hold that what matters for justice is that all members of a society “stand in relations of equality to others.” The idea that all human beings are moral equals is widely shared: it underlies the Universal Declaration of Human Rights and many national constitutions. …
In his discussion of the four causes, Aristotle claims that ‘the hypotheses are material causes of the conclusion’ (Physics 2. 3, Metaphysics Δ 2). This claim has puzzled commentators since antiquity. It is usually taken to mean that the premisses of any deduction are material causes of the conclusion. By contrast, I argue that the claim does not apply to deductions in general but only to scientific demonstrations. For Aristotle, the theorems of a given science are composites consisting of the indemonstrable premisses from which they are demonstrated. Accordingly, these premisses are elements, and hence material causes, of the theorems. In this way, Aristotle’s claim can be shown to be well-motivated and illuminating.
Fatalism is the thesis that human acts occur by necessity and hence
are unfree. Theological fatalism is the thesis that infallible
foreknowledge of a human act makes the act necessary and hence unfree. If there is a being who knows the entire future infallibly, then no
human act is free. Fatalism seems to follow from infallible foreknowledge by the
following informal line of reasoning:
For any future act you will perform, if some being
infallibly believed in the past that the act would occur, there is
nothing you can do now about the fact that he believed what he
believed since nobody has any control over past events; nor can you
make him mistaken in his belief, given that he is infallible.
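A common way to regiment this informal line of reasoning (a standard reconstruction from the literature on theological fatalism, not taken verbatim from the passage) uses an operator N for accidental or “now-” necessity, where Np means that no one now has a choice about p:

```latex
% N(p): "it is now-necessary that p" -- no one now has a choice about p
\begin{align*}
&\text{(1) } N(\text{God believed in the past that you will do } A)
  && \text{the past is fixed}\\
&\text{(2) } \Box\,\bigl(\text{God believed that you will do } A
  \rightarrow \text{you will do } A\bigr)
  && \text{infallibility}\\
&\text{(3) } N(\text{you will do } A)
  && \text{transfer of necessity, from (1), (2)}
\end{align*}
```

Step (3) relies on the transfer principle that from Np and □(p → q) one may infer Nq; debate over the argument typically targets either this principle or the fixity of the past in (1).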
There are different kinds of studies of Berkeley. Some focus on specific areas of his thought; some provide overviews.[1] Of the overviews, some are arranged according to the chronology of his individual works; others are arranged according to topics.[2] Internal, analytic studies examine the cogency of his arguments and show how different interpretations of his texts handle criticisms raised by recent commentators; historical studies describe the background assumptions that inform his thinking.
This entry traces the historical development of the Square of
Opposition, a collection of logical relationships traditionally
embodied in a square diagram. This body of doctrine provided a
foundation for work in logic for over two millennia. For most of this
history, logicians assumed that negative particular propositions
(“Some S is not P”) are vacuously true
if their subjects are empty. This validates the logical laws embodied
in the diagram, and preserves the doctrine against modern
criticisms. Certain additional principles
(“contraposition” and “obversion”) were
sometimes adopted along with the Square, and they genuinely yielded
It is a standard understanding that we live in time. Indeed, the whole physical world as described in the sciences rests on the idea of objective (not absolute) time. For centuries we have defined time ever more minutely, grounding these definitions in finer and finer event measurements (from uncoiling springs to atomic clocks), so that we no longer notice the inductive leap we have made: we can measure time, so we experience time. In the current work I wish to critique this inductive leap and examine what it means to experience time. We are embodied and embedded cognitive agents, constrained by our bodies as well as in continuous interaction with our environment. So another way to ask the question of temporal experience would be: how embodied is time? I posit that the experience of time spoken of in the general literature is a linguistic construct, in that the idea of an experience of time overshadows the actual phenomenal contents of time perception. Moreover, time perception itself arises from a post-facto judgment of events. It has also been observed that the order of events in time can be altered to create an illusion of a violation of causality itself. This points to the possibility that events are arranged in a temporal map that can be read off by higher cognitive substrates. In the current work we go on to explore the nature of such a map as it emerges from an embodied mind.
Chapter 10 of Idealism and Christian Theology is “Idealism and Participating in the Body of Christ” by James Arcadi. This article is very clearly written and handles both philosophy and theology well. …
Karl Marx (1818–1883) is best known not as a philosopher but as
a revolutionary, whose works inspired the foundation of many communist
regimes in the twentieth century. It is hard to think of many who have
had as much influence in the creation of the modern world. Trained as
a philosopher, Marx turned away from philosophy in his mid-twenties,
towards economics and politics. However, in addition to his overtly
philosophical early work, his later writings have many points of
contact with contemporary philosophical debates, especially in the
philosophy of history and the social sciences, and in moral and political philosophy.
At least since Aristotle’s famous ‘sea-battle’ passages in On Interpretation 9, some substantial minority of philosophers has been attracted to what we might call the doctrine of the open future. Open future views (of the sort in question) maintain that future contingent statements—roughly, statements saying of causally undetermined events that they will happen—are never true. Some such views have it that future contingents are neither true nor false; others maintain that they are instead simply false. Both views, however, face a problem: prima facie, they seem inconsistent with what John MacFarlane has called the determinacy intuition—the intuition, roughly, that if something has happened, then (looking backwards) it was the case that it would happen (MacFarlane 2014: Ch. 9). According to MacFarlane, the indeterminacy intuition has it that, looking forwards, future contingents are never true—but the determinacy intuition has it that, looking backwards, they were. This tension forms, in large part, what might be called the problem of future contingents.
The demarcation between science and pseudoscience is part of the
larger task of determining which beliefs are epistemically warranted. This entry clarifies the specific nature of pseudoscience in relation
to other categories of non-scientific doctrines and practices,
including science denial(ism) and resistance to the facts. The major
proposed demarcation criteria for pseudoscience are discussed and
some of their weaknesses are pointed out. In conclusion, it is
emphasized that there is much more agreement on particular cases of
demarcation than on the general criteria that such judgments should be based on.
This paper reflects on metametaphysics and as such develops a metametametaphysical view: that quietist metametaphysics requires dialetheism, and in turn a paraconsistent logic. I demonstrate this using Carnap’s metametaphysical position in his ‘Empiricism, Semantics and Ontology’ (1950) as an example, with regard to how it exhibits self-reference and results in inconsistency. I show how applying Carnap’s position to itself produces a dilemma, both horns of which lead to a contradiction. Such inconsistency commonly arises from meta-theories with global scope, as the ‘meta’ approach aims to transcend the scope of that which it is theorizing about, whilst the global nature will place itself back within the scope of that which it is theorizing about, which together result in the theory referring to itself whilst refuting itself. I argue that any global metametaphysical theory that draws a limit to thought will face self-reference problems leading to contradictory realms. My conclusion is conditional: If we want to meta-philosophize in such a way and treat quietist meta-theories as being true, then we need to be dialetheist and utilize a paraconsistent logic in order to accommodate the contradictions that result from such theorizing.
A theorem from Archimedes on the area of a circle is proved in a setting where some inconsistency is permissible, by using paraconsistent reasoning. The new proof emphasizes that the famous method of exhaustion gives approximations of areas closer than any consistent quantity. This is equivalent to the classical theorem in a classical context, but not in a context where it is possible that there are inconsistent infinitesimals. The area of the circle is taken ‘up to inconsistency’. The fact that the core of Archimedes’s proof still works in a weaker logic is evidence that the integral calculus and analysis more generally are still practicable even in the event of inconsistency.
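For reference, the classical content at issue (standard statements of Archimedes’ theorem and the exhaustion bound, not the paraconsistent versions developed in the paper) can be put as:

```latex
% Archimedes: the circle equals a right triangle with legs r and C = 2\pi r
A_{\mathrm{circle}} \;=\; \tfrac{1}{2}\, r\, C \;=\; \pi r^2,
\qquad
% Exhaustion: inscribed polygons P_n approximate the area within any \varepsilon
\forall \varepsilon > 0 \;\ \exists n :\;
A_{\mathrm{circle}} - A(P_n) < \varepsilon .
```

The paraconsistent reading described in the abstract weakens the second clause: the approximation is guaranteed only up to quantities that behave consistently, leaving room for inconsistent infinitesimals below that threshold.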
Albert Camus (1913–1960) was a journalist, editor and
editorialist, playwright and director, novelist and author of short
stories, political essayist and activist—and, although he more
than once denied it, a philosopher. He ignored or opposed systematic
philosophy, had little faith in rationalism, asserted rather than
argued many of his main ideas, presented others in metaphors, was
preoccupied with immediate and personal experience, and brooded over
such questions as the meaning of life in the face of death. Although
he forcefully separated himself from existentialism, Camus posed one
of the twentieth century’s best-known existentialist questions, which
launches The Myth of Sisyphus: “There is only one
really serious philosophical question, and that is suicide”.
The recent philosophy of Quantum Bayesianism, or QBism, represents an attempt to solve the traditional puzzles in the foundations of quantum theory by denying the objective reality of the quantum state. Einstein had hoped to remove the spectre of nonlocality in the theory by also assigning an epistemic status to the quantum state, but his version of this doctrine was recently proved to be inconsistent with the predictions of quantum mechanics. In this essay, I present plausibility arguments, old and new, for the reality of the quantum state, and expose what I think are weaknesses in QBism as a philosophy of science.
On views on which lying is sometimes permissible, lying to save one’s life from unjust persecution is a paradigm case of permissible lying. But Peter’s lies about his connection to Jesus—his famous three-fold denial of Jesus—fall precisely under that head. …
I argue that perceptual consciousness is constituted by a mental activity. The mental activity in question is the activity of employing perceptual capacities, such as discriminatory, selective capacities. This is a radical view, but I hope to make it plausible. In arguing for this mental activist view, I reject orthodox views on which perceptual consciousness is analyzed in terms of (sensory awareness relations to) peculiar entities, such as phenomenal properties, external mind-independent properties, propositions, sense-data, qualia, or intentional objects.
Mainstream economic models of financial markets have long been criticized on the grounds that they fail to accurately account for the frequency of extreme events, including market crashes. Mandelbrot and Hudson (2004) put the point starkly in their discussion of the August 1998 crash: “By the conventional wisdom, August 1998 simply should never have happened. The standard theories estimate the odds of that final, August 31, collapse, at one in 20 million, an event that, if you traded daily for nearly 100,000 years, you would not expect to see even once. The odds of getting three such declines in the same month were even more minute: about one in 500 billion” (p. 4). Similar critiques have been mounted in connection with the October 1987 “Black Monday” crash and the October 1997 “mini-crash”, as well as other large drawdowns over the last thirty years. By the lights of ordinary economic reasoning, such events simply should not occur.
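The quoted odds can be checked with a short back-of-the-envelope computation. The sketch below assumes a Gaussian daily-return model and about 252 trading days per year; both assumptions are mine, not stated in the quote:

```python
from statistics import NormalDist

# Back-of-the-envelope check of the figures quoted from Mandelbrot and
# Hudson (2004). Assumptions (mine, not in the source): a Gaussian model
# of daily returns and roughly 252 trading days per year.

odds = 1 / 20_000_000  # "one in 20 million" for the August 31 collapse

# Expected waiting time for a once-in-20-million-trading-days event:
years = 20_000_000 / 252
print(round(years))      # about 79,000 years -- "nearly 100,000" in the quote

# Under the Gaussian model, a one-sided tail probability of 5e-8
# corresponds to a daily move of about 5.3 standard deviations:
sigmas = -NormalDist().inv_cdf(odds)
print(round(sigmas, 1))  # about 5.3
```

The point of the sketch is only that, under the thin-tailed Gaussian benchmark, the August 1998 move is effectively impossible on historical timescales, which is exactly the mismatch the critics highlight.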
One of the central logical ideas in Wittgenstein’s Tractatus logico-philosophicus is the elimination of the identity sign in favor of the so-called “exclusive interpretation” of names and quantifiers, requiring different names to refer to different objects and (roughly) different variables to take different values. In this paper, we examine a recent development of these ideas in papers by Kai Wehmeier. We diagnose two main problems of Wehmeier’s account, the first concerning the treatment of individual constants, the second concerning so-called “pseudo-propositions” (Scheinsätze) of classical logic such as a = a or a = b ∧ b = c → a = c. We argue that overcoming these problems requires two fairly drastic departures from Wehmeier’s account: (1) Not every formula of classical first-order logic will be translatable into a single formula of Wittgenstein’s exclusive notation. Instead, there will often be a multiplicity of possible translations, revealing the original “inclusive” formulas to be ambiguous. (2) Certain formulas of first-order logic such as a = a will not be translatable into Wittgenstein’s notation at all, being thereby revealed as nonsensical pseudo-propositions which should be excluded from a “correct” conceptual notation. We provide translation procedures from inclusive quantifier-free logic into the exclusive notation that take these modifications into account and define a notion of logical equivalence suitable for assessing these translations.
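To give a flavor of the exclusive interpretation (a standard illustrative example of the kind at issue, not drawn from Wehmeier’s papers): because exclusive quantifiers require distinct variables to take distinct values, a single inclusive formula typically corresponds to a disjunction over the possible identity patterns among its variables:

```latex
% Inclusive (classical) formula:
\exists x\, \exists y\, (Fx \wedge Fy)
% Exclusive rendering: two distinct F's, or just one F (the case x = y):
\quad\rightsquigarrow\quad
\exists x\, \exists y\, (Fx \wedge Fy) \;\vee\; \exists x\, Fx
```

Identity claims themselves, such as a = a, have no analogue on the right-hand side: their content is absorbed into the notation, which is the sense in which they emerge as pseudo-propositions.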