The sense and role of defaults in the semantics/pragmatics landscape
are changing swiftly and dynamically. First, they are changing due to the
progression in the debates concerning the delimitation of explicit
content (Jaszczolt 2009a, 2016a). Second, it is propelled by the
debates concerning the literal/nonliteral vis-à-vis salient/nonsalient
distinction (Giora & Givoni 2015; Ariel 2016). Next, it is
influenced by computational linguistics that develops statistical
models for learning compositional meaning using ‘big data’
(Jurafsky & Martin 2017 [Other Internet Resources]; Liang &
Causality plays an important role in medieval philosophical writing:
the dominant genre of medieval academic writing was the commentary on
an authoritative work, very often a work of Aristotle. Of the works of
Aristotle thus commented on, the Physics plays a central
role. Other of Aristotle’s scientific works – On the
Heavens and the Earth, On Generation and Corruption,
and, of course, the Metaphysics – are also significant
for the study of causation: so there is a rather daunting body of work
to survey. One might, though, be tempted to argue that this concentration on
causality is simply an effect of reading Aristotle, but this would be
Language gives structure to our thoughts. When I say “I saw that bird”, I convey a different thought than when I say “that bird saw me”. The different order of the words enables me to express the different roles of the players in these similar thoughts. What we see is a tight connection between linguistic ordering of words and mental life. This connection between form and meaning is present in the most common sentences that we use every day. In this way, language helps us to express important parts of our mental life and convey them to others.
Crispin Wright maintains that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this fact doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without endangering his epistemology of perception.
In an old paper, I argued that we do not hallucinate impossibilia: if we perceive something, the thing we perceive is possible, even if it is not actual. Consequently, if anyone has a perception—veridical or not—of a perfect being, a perfect being is possible. …
Yet another tactic was offered the Negro. He was encouraged to seek unity with the millions of disadvantaged whites of the South, whose basic need for social change paralleled his own. Theoretically, this proposal held a measure of logic, for it is undeniable that great masses of Southern whites exist in conditions scarcely better than those which afflict the Negro. …
In Our Knowledge of the Internal World, Robert Stalnaker describes two opposed perspectives on the relation between the internal and the external. According to one, the internal world is taken as given and the external world as problematic, and according to the other, the external world is taken as given and the internal world as problematic. Analytic philosophy moved from the former to the latter, from problems of world-construction to problems of self-locating beliefs. I argue in this paper that these problems are equivalent: both arise because experience and objective, external facts jointly underdetermine their relation. Both can be seen as a problem of expressive completeness; of the internal language in the former case, and of the non-indexical language in the second.
Symposium on Del Pinal and Spaulding, “Conceptual Centrality and Implicit Bias”
Robert Briscoe, April 23, 2018
I’m very glad to announce our latest Mind & Language symposium on Guillermo Del Pinal and Shannon Spaulding’s “Conceptual Centrality and Implicit Bias” from the journal’s February 2018 issue. …
Early on Saturday, 14 April, it was announced that the US, UK and France had conducted targeted strikes on three targets in Syria – a chemical weapons and storage facility, a research centre and a military bunker – in response to Assad’s (alleged) use of chemical weapons in Douma. …
I consider a problem from pragmatics for the radical interpretation project, which relies on the principle of charity. If a speaker X in a context c manifests the attitude of holding a sentence s true, this might be because the speaker believes, not the content of s in c, but what results from a pragmatic enrichment of that content. In this case, the connection between the holding-true attitude and the meaning of s might be too loose for charity to confirm the correct interpretation hypothesis. To solve this problem, I apply the coherence-raising account of pragmatic enrichment developed in Pagin 2014. The result is that in upward entailing linguistic contexts, the enriched content entails the prior content, and so charity prevails: the speaker also believes the prior content. In downward entailing contexts this would not hold, but I argue that enrichments tend not to occur in downward entailing contexts.
Distinguish the following kinds of "offsetting" behaviour:
Preventative offsetting -- when potential harms depend on just the global amount of something (say, greenhouse gas emissions), it seems that one can prevent the potential harm done by one's contributions by "offsetting" or paying to reduce others' contributions, so that the net effect of one's behaviour leaves the global magnitudes unchanged. …
Theories of truth can hardly avoid taking into account how truth is expressed in natural language. Existing theories of truth have generally focused on true occurring with that- clauses. This paper takes a closer look at predicates of truth (and related notions) when they apply to objects as the referents of referential noun phrases, focusing on what I call the ‘core’ of language. It argues that truth predicates and their variants, predicates of correctness, satisfaction and validity, do not apply to propositions (not even with that-clauses), but to a range of attitudinal and modal objects, objects we refer to as ‘claims’, ‘beliefs’, ‘judgments’, ‘demands’, ‘promises’, ‘obligations’ etc. As such, natural language reflects a notion of truth that is primarily a normative notion conveyed by correct. This normative notion, however, is not action-guiding, but rather constitutive of representational objects (in the sense of Jarvis 2012), independently of any actions that may go along with them. The paper furthermore argues that the predicate true is part of a larger class of satisfaction predicates (satisfied, realized, taken up, etc.). The semantic differences among different satisfaction predicates, the paper will argue, are best accounted for in terms of a truthmaker theory along the lines of Fine’s (to appear) truthmaker semantics. Truthmaker semantics also provides a notion of partial content applicable to attitudinal and modal objects, which may exhibit partial correctness, partial satisfaction, and partial validity.
In the posthumously published ‘Truth and Probability’ (1926), Ramsey sets out an influential account of the nature, measurement, and norms of partial belief. The essay is a foundational work on subjectivist interpretations of probability, according to which probabilities can be interpreted as rational degrees of belief (see entry on Interpretations of Probability). Many of its key ideas and arguments have since featured in other foundational works within the subjectivist tradition (e.g., Savage 1954, Jeffrey 1965). Ramsey’s central claim in ‘Truth and Probability’ is that the laws of probability supply us with a ‘logic of partial belief’. That is, the laws specify what would need to be true of any consistent set of partial beliefs, in a manner analogous to how the laws of classical logic might be taken to generate necessary conditions on any consistent set of full beliefs. His case for this is based on a novel account of what partial beliefs are and how they can be measured.
Recent work in the physics literature demonstrates that, in particular classes of rotating spacetimes, physical light rays in general do not traverse null geodesics. Having presented this result, we discuss its philosophical significance, both for the clock hypothesis (and, in particular, a recent purported proof thereof for light clocks), and for the operational meaning of the metric field.
The ethical task of becoming a better person requires identifying and fairly assessing one’s motivations. Any ethical theory needs to be consistent with the structure of human motivation. Ethics therefore requires an understanding of how self-deception about motivation is possible. The two main theories of self-deception about motivation are Sigmund Freud’s theory of repression and Jean-Paul Sartre’s theory of bad faith. Freud distinguishes between rationally structured and purely mechanistic aspects of the mind, arguing that repression is a process of preventing oneself from becoming conscious of some mechanistic item. Sartre argues that this explanation fails, since the activity of repression would need to be concealed but cannot be mechanistic. Sartre’s alternative rests on his theory of projects as the ground of motivations. Since projects structure conscious experience, they structure our reflective awareness of our own projects, which allows features of our projects to become hidden from our view. Sartre’s theory is internally coherent and consistent with the view of motivation currently emerging from social psychology. But it is inconsistent with his own theory of radical freedom. It requires instead Simone de Beauvoir’s theory of project sedimentation, which in turn entails a nonpurposive form of self-deception.
The term ‘contractualism’ can be used in a broad
sense—to indicate the view that morality is based on contract or
agreement—or in a narrow sense—to refer to a particular
view developed in recent years by the Harvard philosopher T. M.
Scanlon, especially in his book What We Owe to Each Other. This essay takes ‘contractualism’ in the narrower sense. We begin with a brief summary of Scanlon’s contractualism, and
then situate his view in relation both to other social contract
theories and to its main rival among impartial accounts of
morality—namely, utilitarianism. Our discussion is then
organised around a series of challenges to the contractualist
In the Gospel of John we are told the story of a Samaritan woman who asks Jesus whether the proper place of worship is on the holy mountain of Samaria or in the Temple of Jerusalem. These referred to two competing, antagonistic, religious institutions. Jesus responds: “Woman, believe Me, an hour is coming when neither in this mountain nor in Jerusalem will you worship the Father . . . an hour is coming, and now is, when the true worshippers will worship in spirit and truth; for such people the Father seeks to be His worshippers. God is spirit, and those who worship Him must worship in spirit and truth” (Jn 4:21-24).
In the obituary of her mentor Bill Hamilton, the American entomologist and evolutionary biologist Marlene Zuk wrote that the difference between Hamilton and everyone else was “not the quality of his ideas, but their sheer abundance” (Zuk 2000). The proportion of his ideas that were actually good was about the same as anyone else’s: “the difference between Bill and most other people was that he had a total of over one hundred ideas, with the result that at least ten of them were brilliant, whereas the rest of us have only four or five ideas as long as we live, with the result that none of them are”. Hamilton indeed had many good ideas. Over the years he made substantial contributions to the study of the origin of sex, genetic conflicts, and the evolution of senescence (Ågren 2013). His best idea, and the one that bears his name, is about the evolution of social behaviour, especially altruism. Hamilton’s Rule, and the related concepts of inclusive fitness and kin selection, have been the bedrock of the study of social evolution for the past half century (Figure 1).
A good surgeon knows how to perform a surgery; a good architect knows how to design a house. We value their know-how. We ordinarily look for it. What makes it so valuable? A natural response is that know-how is valuable because it explains success. A surgeon’s know-how explains her success at performing a surgery. And an architect’s know-how explains his success at designing houses that stand up. We value know-how because of its special explanatory link to success. But in virtue of what is know-how explanatorily linked to success? This essay defends the thesis that know-how’s special link to success is to be explained at least in part in terms of its being, or involving, a doxastic attitude that is epistemically akin to propositional knowledge. If its explanatory link to success is what makes know-how valuable, an upshot of my argument is that the value of know-how is due, to a considerable extent, to its being, or involving, propositional knowledge.
This article uses psychological and neural theories to illuminate the use of analogies in literary allegories. It shows how new theories of neural representation, encompassing both cognitive and emotional aspects, have the potential to make sense of many kinds of literary comparisons including allegories. The main text analyzed is George Orwell’s Animal Farm, whose effectiveness is discussed using the multiconstraint theory of analogy supplemented with observations about neural functioning.
A popular account of luck, with a firm basis in common sense, holds that a necessary condition for an event to be lucky is that it was suitably improbable. It has recently been proposed that this improbability condition is best understood in epistemic terms. Two different versions of this proposal have been advanced.
Automated geometry theorem provers start with logic-based formulations of Euclid’s axioms and postulates, and often assume the Cartesian coordinate representation of geometry. That is not how the ancient mathematicians started: for them the axioms and postulates were deep discoveries, not arbitrary postulates. What sorts of reasoning machinery could the ancient mathematicians, and other intelligent species (e.g. crows and squirrels), have used for spatial reasoning? “Diagrams in minds” perhaps? How did natural selection produce such machinery?
George Boole (1815–1864) was an English mathematician and a
founder of the algebraic tradition in logic. He worked as a
schoolmaster in England and from 1849 until his death as professor of
mathematics at Queen’s University, Cork, Ireland. He revolutionized
logic by applying methods from the then-emerging field of symbolic
algebra to logic. Where traditional (Aristotelian) logic relied on
cataloging the valid syllogisms of various simple forms, Boole’s
method provided general algorithms in an algebraic language which
applied to an infinite variety of arguments of arbitrary
complexity. These results appeared in two major works,
The Mathematical Analysis of Logic (1847) and
The Laws of Thought (1854).
In this paper I consider an argument for the possibility of intending at will, and its relationship to an argument about the possibility of believing at will. I argue that although we have good reason to think we sometimes intend at will, we lack good reason to think this in the case of believing. Instead of believing at will, agents like us often suppose at will.
Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.
You are morally permitted to save your friend at the expense of a few strangers, but not at the expense of very many. However, there seems no number of strangers that marks a precise upper bound here. Consequently, there are borderline cases of groups at the expense of which you are permitted to save your friend. This essay discusses the question of what explains ethical vagueness like this, arguing that there are interesting metaethical consequences of various explanations.
Origen (c. 185–c. 253) was a Christian exegete and theologian,
who made copious use of the allegorical method in his commentaries,
and (though later considered a heretic) laid the foundations of
philosophical theology for the church. He was taught by a certain
Ammonius, whom the majority of scholars identify as Ammonius Saccas,
the teacher of Plotinus; many believe, however, that the external
evidence will not allow us to identify him with the Origen whom
Plotinus knew as a colleague. He was certainly well-instructed in
philosophy and made use of it as an ancillary to the exposition and
harmonization of scripture.
This paper defends a challenge, inspired by arguments drawn from contemporary ordinary language philosophy and grounded in experimental data, to certain forms of standard philosophical practice. There has been a resurgence of philosophers who describe themselves as practicing “ordinary language philosophy”. The resurgence can be divided into constructive and critical approaches. The critical approach to neo-ordinary language philosophy has been forcefully developed by Baz (2012a,b, 2014, 2015, 2016, forthcoming), who attempts to show that a substantial chunk of contemporary philosophy is fundamentally misguided. I describe Baz’s project and argue that while there is reason to be skeptical of its radical conclusion, it conveys an important truth about discontinuities between ordinary uses of philosophically significant expressions (“know”, e.g.) and their use in philosophical thought experiments. I discuss some evidence from experimental psychology and behavioral economics indicating that there is a risk of overlooking important aspects of meaning or misinterpreting experimental results by focusing only on abstract experimental scenarios, rather than employing more diverse and more ecologically valid experimental designs. I conclude by presenting a revised version of the critical argument from ordinary language.
Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
Actualists hold that contrary-to-duty scenarios give rise to deontic dilemmas and provide counterexamples to the transmission principle, according to which we ought to take the necessary means to actions we ought to perform. In an earlier article, I argued, contrary to actualism, that the notion of ‘ought’ that figures in conclusions of practical deliberation does not allow for deontic dilemmas and validates the transmission principle. Here I defend these claims, together with my possibilist account of contrary-to-duty scenarios, against Stephen White’s recent criticism.