A few links
Posted on Wednesday, 25 Apr 2018
I stumbled across a few interesting free books in the last few days. 1. Tony Roy has a 1051-page introduction to logic on his homepage, which proceeds slowly and evenly from formalising ordinary-language arguments all the way to proving Gödel's second incompleteness theorem. …
I think some Aristotelian philosophers are inclined to think that our nature is to be rational animals, so that all rational animals would be of the same metaphysical species. Here is a problem with this. …
The problem of evil consists of three main parts: 1. The problem of suffering. 2. The problem of evil choices. 3. The problem of hiddenness (which is an evil at most conditionally on God’s existing). The theist has trouble explaining why there is so much suffering. …
[Image: Philippe de Champaigne, Vanitas - Life, Death, Time]
[I dedicate this post to the memory of my beloved sister, Sarah Danaher Greene (1974-2018). Sarah was diagnosed with terminal cancer in March 2018 and died, suddenly and unexpectedly quickly, just three weeks later. …
Experimental philosophy is the name for a recent movement whose participants use the methods of experimental psychology to probe the way people think about philosophical issues and then examine how the results of such studies bear on traditional philosophical debates. Given both the breadth of the research being carried out by experimental philosophers and the controversial nature of some of their central methodological assumptions, it is no surprise that their work has recently come under attack. In this paper we respond to some criticisms of experimental philosophy that have recently been put forward by Antti Kauppinen. Unlike the critics of experimental philosophy, we do not think the fledgling movement either will or should fall before it has even had a chance to rise up to explain what it is, what it seeks to do (and not to do), and exactly how it plans to do it. Filling in some of the salient details is the main goal of the present paper.
When a noun is modified by an adnominal, as in blue door, short giraffe, or book on the table, the modifier is often analyzed as applying to an argument of the predicate that the noun denotes. We call this argument the referential argument of the noun (Williams 1981, Higginbotham 1985). Consider for instance (1).
This paper argues that multiple coordinations like tall, thin and happy are interpreted in a “flat” iterative process, but using “nested” recursive application of binary coordination operators in the compositional meaning derivation. Ample motivation for flat interpretation is shown by contrasting such coordinations with nested, syntactically ambiguous, coordinate structures like tall and thin and happy. However, new evidence coming from type shifting and predicate distribution with verb phrases shows motivation for an independent hierarchical ingredient in the compositional semantics of multiple coordination with no parallel hierarchy in the syntax. This establishes a contrast between operations at the syntax-semantics interface and compositional semantic mechanisms. At the same time, such evidence motivates the treatment of operations like type shifting and distributivity as purely semantic.
This chapter overviews Hume’s thoughts on the nature and the role of imagining and how the two are linked to the relevant contemporary discussions, with an almost exclusive focus on the first book of the Treatise of Human Nature. Over the course of this text, Hume draws and discusses three important distinctions among our conscious mental episodes (or what he calls “perceptions”). First, he divides them into “impressions” and “ideas” – or, as he also says, into “feelings” and “thoughts” (1.1.1). The former comprise “sensations, passions and emotions, as they make their first appearance in the soul” (1.1.1.1) – that is, perceptual experiences, bodily sensations, and basic feelings of desire and emotion. By contrast, the latter include “the faint images of [impressions] in thinking and reasoning” (ibid.) – such as memories, occurrent beliefs and, indeed, imaginings.
The sense and role of defaults in the semantics/pragmatics landscape is changing swiftly and dynamically. First, it is changing due to the progression in the debates concerning the delimitation of explicit content (Jaszczolt 2009a, 2016a). Second, it is propelled by the debates concerning the literal/nonliteral vis-à-vis salient/nonsalient distinction (Giora & Givoni 2015; Ariel 2016). Next, it is influenced by computational linguistics that develops statistical models for learning compositional meaning using ‘big data’ (Jurafsky & Martin 2017 [Other Internet Resources]; Liang & …
Causality plays an important role in medieval philosophical writing: the dominant genre of medieval academic writing was the commentary on an authoritative work, very often a work of Aristotle. Of the works of Aristotle thus commented on, the Physics plays a central role. Other of Aristotle’s scientific works – On the Heavens and the Earth, On Generation and Corruption, and, of course, the Metaphysics – are also significant for the study of causation: so there is a rather daunting body of work to survey. One might, though, be tempted to argue that this concentration on causality is simply an effect of reading Aristotle, but this would be …
Language gives structure to our thoughts. When I say “I saw that bird”, I convey a different thought than when I say “that bird saw me”. The different order of the words enables me to express the different roles of the players in these similar thoughts. What we see is a tight connection between linguistic ordering of words and mental life. This connection between form and meaning is present in the most common sentences that we use every day. In this way, language helps us to express important parts of our mental life and convey them to others.
Crispin Wright maintains that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this fact doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without endangering his epistemology of perception.
In an old paper, I argued that we do not hallucinate impossibilia: if we perceive something, the thing we perceive is possible, even if it is not actual. Consequently, if anyone has a perception—veridical or not—of a perfect being, a perfect being is possible. …
Yet another tactic was offered the Negro. He was encouraged to seek unity with the millions of disadvantaged whites of the South, whose basic need for social change paralleled his own. Theoretically, this proposal held a measure of logic, for it is undeniable that great masses of Southern whites exist in conditions scarcely better than those which afflict the Negro. …
In Our Knowledge of the Internal World, Robert Stalnaker describes two opposed perspectives on the relation between the internal and the external. According to one, the internal world is taken as given and the external world as problematic, and according to the other, the external world is taken as given and the internal world as problematic. Analytic philosophy moved from the former to the latter, from problems of world-construction to problems of self-locating beliefs. I argue in this paper that these problems are equivalent: both arise because experience and objective, external facts jointly underdetermine their relation. Both can be seen as a problem of expressive completeness; of the internal language in the former case, and of the non-indexical language in the second.
Symposium on Del Pinal and Spaulding, “Conceptual Centrality and Implicit Bias”
Robert Briscoe, April 23, 2018
I’m very glad to announce our latest Mind & Language symposium on Guillermo Del Pinal and Shannon Spaulding’s “Conceptual Centrality and Implicit Bias” from the journal’s February 2018 issue. …
Early on Saturday, 14 April, it was announced that the US, UK and France had conducted targeted strikes on three targets in Syria – a chemical weapons storage facility, a research centre and a military bunker – in response to Assad’s (alleged) use of chemical weapons in Douma. …
I consider a problem from pragmatics for the radical interpretation project, relying on the principle of charity. If a speaker X in a context c manifests the attitude of holding a sentence s true, this might be because of believing, not the content of s in c, but what results from a pragmatic enrichment of that content. In this case, the connection between the holding-true attitude and the meaning of s might be too loose for charity to confirm the correct interpretation hypothesis. To solve this problem, I apply the coherence raising account of pragmatic enrichment developed in Pagin 2014. The result is that in upward entailing linguistic contexts, the enriched content entails the prior content, and so charity prevails: the speaker also believes the prior content. In downward entailing contexts this would not hold, but I argue that enrichments tend not to occur in downward entailing contexts.
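The abstract’s entailment point can be stated compactly. A sketch in my own notation (not Pagin’s): write p⁺ for a pragmatic enrichment of content p, so that p⁺ entails p.

```latex
% Upward vs. downward entailing contexts (standard definitions),
% and why charity prevails in the upward entailing case.
\begin{align*}
F \text{ is upward entailing} &\iff \big(p \models q \;\Rightarrow\; F(p) \models F(q)\big) \\
F \text{ is downward entailing} &\iff \big(p \models q \;\Rightarrow\; F(q) \models F(p)\big)
\end{align*}
% Since an enrichment satisfies p^{+} \models p, in an upward entailing
% context F we get F(p^{+}) \models F(p): a speaker who believes the
% enriched content F(p^{+}) is thereby committed to the prior content F(p).
% In a downward entailing context the entailment would run the other way,
% which is why the abstract must argue separately that enrichments tend
% not to occur there.
```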
Distinguish the following kinds of "offsetting" behaviour:
Preventative offsetting -- when potential harms depend on just the global amount of something (say, greenhouse gas emissions), it seems that one can prevent the potential harm done by one's contributions by "offsetting" or paying to reduce others' contributions, so that the net effect of one's behaviour leaves the global magnitudes unchanged. …
Theories of truth can hardly avoid taking into account how truth is expressed in natural language. Existing theories of truth have generally focused on true occurring with that-clauses. This paper takes a closer look at predicates of truth (and related notions) when they apply to objects as the referents of referential noun phrases, focusing on what I call the ‘core’ of language. It argues that truth predicates and their variants, predicates of correctness, satisfaction and validity, do not apply to propositions (not even with that-clauses), but to a range of attitudinal and modal objects, objects we refer to as ‘claims’, ‘beliefs’, ‘judgments’, ‘demands’, ‘promises’, ‘obligations’ etc. As such, natural language reflects a notion of truth that is primarily a normative notion conveyed by correct. This normative notion, however, is not action-guiding, but rather constitutive of representational objects (in the sense of Jarvis 2012), independently of any actions that may go along with them. The paper furthermore argues that the predicate true is part of a larger class of satisfaction predicates (satisfied, realized, taken up, etc.). The semantic differences among different satisfaction predicates, the paper will argue, are best accounted for in terms of a truthmaker theory along the lines of Fine’s (to appear) truthmaker semantics. Truthmaker semantics also provides a notion of partial content applicable to attitudinal and modal objects, which may exhibit partial correctness, partial satisfaction, and partial validity.
In the posthumously published ‘Truth and Probability’ (1926), Ramsey sets out an influential account of the nature, measurement, and norms of partial belief. The essay is a foundational work on subjectivist interpretations of probability, according to which probabilities can be interpreted as rational degrees of belief (see entry on Interpretations of Probability). Many of its key ideas and arguments have since featured in other foundational works within the subjectivist tradition (e.g., Savage 1954, Jeffrey 1965). Ramsey’s central claim in ‘Truth and Probability’ is that the laws of probability supply us with a ‘logic of partial belief’. That is, the laws specify what would need to be true of any consistent set of partial beliefs, in a manner analogous to how the laws of classical logic might be taken to generate necessary conditions on any consistent set of full beliefs. His case for this is based on a novel account of what partial beliefs are and how they can be measured.
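For readers unfamiliar with the claim, the ‘laws of probability’ at issue are the usual coherence constraints on a credence function. A minimal sketch in modern notation (not Ramsey’s own formulation):

```latex
% Coherence constraints on a degree-of-belief function P over propositions
\begin{align*}
& 0 \le P(A) \le 1 && \text{(boundedness)} \\
& P(\top) = 1 && \text{(certainty in tautologies)} \\
& P(A \lor B) = P(A) + P(B) \;\; \text{when } A, B \text{ are incompatible} && \text{(finite additivity)}
\end{align*}
```

A set of partial beliefs violating these constraints is ‘inconsistent’ in Ramsey’s sense, much as a set of full beliefs violating classical logic is inconsistent in the ordinary sense.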
Recent work in the physics literature demonstrates that, in particular classes of rotating spacetimes, physical light rays in general do not traverse null geodesics. Having presented this result, we discuss its philosophical significance, both for the clock hypothesis (and, in particular, a recent purported proof thereof for light clocks), and for the operational meaning of the metric field.
The ethical task of becoming a better person requires identifying and fairly assessing one’s motivations. Any ethical theory needs to be consistent with the structure of human motivation. Ethics therefore requires an understanding of how self-deception about motivation is possible. The two main theories of self-deception about motivation are Sigmund Freud’s theory of repression and Jean-Paul Sartre’s theory of bad faith. Freud distinguishes between rationally structured and purely mechanistic aspects of the mind, arguing that repression is a process of preventing oneself from becoming conscious of some mechanistic item. Sartre argues that this explanation fails, since the activity of repression would need to be concealed but cannot be mechanistic. Sartre’s alternative rests on his theory of projects as the ground of motivations. Since projects structure conscious experience, they structure our reflective awareness of our own projects, which allows features of our projects to become hidden from our view. Sartre’s theory is internally coherent and consistent with the view of motivation currently emerging from social psychology. But it is inconsistent with his own theory of radical freedom. It requires instead Simone de Beauvoir’s theory of project sedimentation, which in turn entails a nonpurposive form of self-deception.
The term ‘contractualism’ can be used in a broad sense—to indicate the view that morality is based on contract or agreement—or in a narrow sense—to refer to a particular view developed in recent years by the Harvard philosopher T. M. Scanlon, especially in his book What We Owe to Each Other. This essay takes ‘contractualism’ in the narrower sense. We begin with a brief summary of Scanlon’s contractualism, and then situate his view in relation both to other social contract theories and to its main rival among impartial accounts of morality—namely, utilitarianism. Our discussion is then organised around a series of challenges to the contractualist …
In the Gospel of John we are told the story of a Samaritan woman who asks Jesus whether the proper place of worship is on the holy mountain of Samaria or in the Temple of Jerusalem. These referred to two competing, antagonistic, religious institutions. Jesus responds: “Woman, believe Me, an hour is coming when neither in this mountain nor in Jerusalem will you worship the Father . . . an hour is coming, and now is, when the true worshippers will worship in spirit and truth; for such people the Father seeks to be His worshippers. God is spirit, and those who worship Him must worship in spirit and truth” (Jn 4:21-24).
In the obituary of her mentor Bill Hamilton, the American entomologist and evolutionary biologist Marlene Zuk wrote that the difference between Hamilton and everyone else was “not the quality of his ideas, but their sheer abundance” (Zuk 2000). The proportion of his ideas that were actually good was about the same as anyone else’s; “the difference between Bill and most other people was that he had a total of over one hundred ideas, with the result that at least ten of them were brilliant, whereas the rest of us have only four or five ideas as long as we live, with the result that none of them are”. Hamilton indeed had many good ideas. Over the years he made substantial contributions to the study of the origin of sex, genetic conflicts, and the evolution of senescence (Ågren 2013). His best idea, and the one that bears his name, is about the evolution of social behaviour, especially altruism. Hamilton’s Rule, and the related concepts of inclusive fitness and kin selection, have been the bedrock of the study of social evolution for the past half century (Figure 1).
A good surgeon knows how to perform a surgery; a good architect knows how to design a house. We value their know-how. We ordinarily look for it. What makes it so valuable? A natural response is that know-how is valuable because it explains success. A surgeon’s know-how explains her success at performing a surgery. And an architect’s know-how explains his success at designing houses that stand up. We value know-how because of its special explanatory link to success. But in virtue of what is know-how explanatorily linked to success? This essay defends the thesis that know-how’s special link to success is to be explained at least in part in terms of its being, or involving, a doxastic attitude that is epistemically akin to propositional knowledge. If its explanatory link to success is what makes know-how valuable, an upshot of my argument is that the value of know-how is due, to a considerable extent, to its being, or involving, propositional knowledge.
This article uses psychological and neural theories to illuminate the use of analogies in literary allegories. It shows how new theories of neural representation, encompassing both cognitive and emotional aspects, have the potential to make sense of many kinds of literary comparisons including allegories. The main text analyzed is George Orwell’s Animal Farm, whose effectiveness is discussed using the multiconstraint theory of analogy supplemented with observations about neural functioning.
A popular account of luck, with a firm basis in common sense, holds that a necessary condition for an event to be lucky is that it was suitably improbable. It has recently been proposed that this improbability condition is best understood in epistemic terms. Two different versions of this proposal have been advanced.
Automated geometry theorem provers start with logic-based formulations of Euclid’s axioms and postulates, and often assume the Cartesian coordinate representation of geometry. That is not how the ancient mathematicians started: for them the axioms and postulates were deep discoveries, not arbitrary postulates. What sorts of reasoning machinery could the ancient mathematicians, and other intelligent species (e.g. crows and squirrels), have used for spatial reasoning? “Diagrams in minds” perhaps? How did natural selection produce such machinery?