Augustine famously claims every word is a name. Some readers take Augustine to thereby maintain a purely referentialist semantic account according to which every word is a referential expression whose meaning is its extension. Other readers think that Augustine is no referentialist and is merely claiming that every word has some meaning. In this paper, I clarify Augustine’s arguments to the effect that every word is a name and argue that ‘every word is a name’ amounts to the claim that for any word, there exist tokens of that word which are autonymous nouns. Augustine takes this to be the result of universal lexical ambiguity or equivocity (that is, the fact that every word has more than one literal meaning) and I clarify how Augustine’s account of metalinguistic discourse, which is one of the most detailed to have survived from antiquity, differs from some ancient and modern theories.
In this paper, I motivate the addition of an actuality operator to relevant logics. Straightforward ways of doing this are in tension with standard motivations for relevant logics, but I show how to add the operator in a way that permits one to maintain the intuitions behind relevant logics. I close by exploring some of the philosophical consequences of the addition.
I am indebted to my own teachers, Prof. Peter Koepke and Prof. Stefan Geschke, who taught me everything in these notes. Prof. Geschke’s scriptum for Einführung in die Logik und Modelltheorie (Bonn, Summer 2010) provided an invaluable basis for the compilation of these notes.
The vocabulary of human languages has been argued to support efficient communication by optimizing the trade-off between complexity and informativeness (Kemp & Regier 2012). The argument has been based on cross-linguistic analyses of vocabulary in semantic domains of content words such as kinship, color, and number terms. The present work extends this analysis to a category of function words: indefinite pronouns (e.g. someone, anyone, no-one, cf. Haspelmath 2001). We build on previous work to establish the meaning space and featural make-up for indefinite pronouns, and show that indefinite pronoun systems across languages optimize the complexity/informativeness trade-off. This demonstrates that pressures for efficient communication shape both content and function word categories, thus tying in with the conclusions of recent work on quantifiers by Steinert-Threlkeld (2019). Furthermore, we argue that the trade-off may explain some of the universal properties of indefinite pronouns, thus reducing the explanatory load for linguistic theories.
Utterances of simple sentences containing taste predicates (e.g. delicious, fun, frightening) typically imply that the speaker has had a particular sort of firsthand experience with the object of predication. For example, an utterance of The carrot cake is delicious would typically imply that the speaker had actually tasted the cake in question, and is not, for example, merely basing her judgment on the testimony of others. According to one approach, this acquaintance inference is essentially an implicature, one generated by the Maxim of Quality together with a certain principle concerning the epistemology of taste (Ninan 2014). We first discuss some problems for this approach, problems that arise in connection with disjunction and generalized quantifiers. Then, after stating a conjecture concerning which operators ‘obviate’ the acquaintance inference and which do not, we build on Anand & Korotkova 2018 and Willer & Kennedy Forthcoming by developing a theory that treats the acquaintance requirement as a presupposition, albeit one that can be obviated by certain operators.
The TL;DR summary of what follows is that we should quantify the conventionality of a regularity (David-Lewis-style) as follows:
A regularity R in the behaviour of population P in a recurring situation S is a convention of depth x, breadth y and degree z when there is a recurring situation T that refines S, and in each instance of T there is a subpopulation K of P, such that it’s true and common knowledge among K in that instance that:

(A) BEHAVIOUR CONDITION: everyone in K conforms to R;
(B) EXPECTATION CONDITION: everyone in K expects everyone else in K to conform to R;
(C) SPECIAL PREFERENCE CONDITION: everyone in K prefers that they conform to R conditionally on everyone else in K conforming to R;

where x (depth) is the fraction of S-situations which are T, y (breadth) is the fraction of all Ps involved who are Ks in this instance, and z is the degree to which (A)–(C) obtaining resembles a coordination equilibrium that solves a coordination problem among the Ks. …
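The two fractions in this definition can be written out explicitly (a sketch in my own notation, not Lewis's; # counts instances):

```latex
x \;=\; \frac{\#\{\text{instances of } S \text{ that are also instances of } T\}}{\#\{\text{instances of } S\}},
\qquad
y \;=\; \frac{\#K}{\#P},
\qquad
0 \,\le\, z \,\le\, 1,
```

with z measuring how closely the pattern described by (A)–(C) resembles a coordination equilibrium that solves a coordination problem among the Ks; the classical Lewisian convention is then the limiting case x = y = z = 1.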
In this post I reflect on the failures of nonsense-policing and ordinary language philosophy, and the fact that notwithstanding these failures, paying critical attention to semantic issues is of central importance in philosophy, and in metaphysics as well as philosophy of language. …
Multiple ontology languages have been developed over the years, which brings to the fore two key tasks: how to select the appropriate language for the task at hand, and language design itself. This engineering step entails examining the ontological ‘commitments’ embedded in the language, which, in turn, demands insight into how philosophical viewpoints may affect the design of a representation language. But what are the sorts of commitments, each with an underlying philosophical point of view, that one should be able to choose from, and which philosophical stances have a knock-on effect on the specification or selection of an ontology language? In this paper, we provide a first step towards answering these questions. We identify and analyse ontological commitments that are, or could be, embedded in logics, and show that such commitments have been taken in well-known ontology languages. This contributes to reflecting on the language as an enabler of, or inhibitor to, formally characterising an ontology or an ontological investigation, as well as to the design of new ontology languages following the proposed design process.
(1700 words; 8 minute read.) What rational polarization looks like. It’s September 21, 2020. Justice Ruth Bader Ginsburg has just died. Republicans are moving to fill her seat; Democrats are crying foul. Fox News publishes an op-ed by Ted Cruz arguing that the Senate has a duty to fill her seat before the election. …
Consider an utterance of ‘Fish sticks are tasty’ as made by a speaker who likes fish sticks. How will the speaker assess this claim when, at some later point in her life, she comes to dislike fish sticks? As true or as false? Will she retract her earlier statement or stand by it? More generally, will she use her present taste standard in assessing the claim or the standard she had at the time of the original utterance? The answer to this question is of vital importance for the recent discussion on the semantics and pragmatics of so-called “predicates of personal taste” (e.g. “tasty” and “fun”).
Do representational pictures have propositional contents? The current paper argues that the characteristic contents of pictures are predicative rather than propositional: pictures characterise things as looking certain ways, and they thereby express properties of visual perspectives. The paper argues that the characteristic predicative contents of pictures are nonetheless able to feature in fully-fledged propositional contents once they are combined with contents of other suitable sorts. Various facts about communicative uses of pictures are then explained. The paper concludes by considering the bearing of its conclusions upon questions about the relationships between linguistic representation and pictorial representation.
We are most grateful to Eugene Koonin for having agreed to write for Biology & Philosophy a target paper on such a major topic in current biology as CRISPR-Cas (CRISPR-Cas stands for “Clustered Regularly Interspaced Short Palindromic Repeats”). There is indeed little doubt that the characterization of the CRISPR-Cas systems and their mechanisms constitutes a ground-breaking discovery in recent biological and biomedical sciences, from basic microbiology to technological applications (Doudna and Charpentier 2014). One sign of recognition among many has come from the leading journal Science, which chose CRISPR-Cas as its 2015 “breakthrough of the year” (McNutt 2015), described as “poised to revolutionize research” because of its role in genome editing.
A novel account of semantic information is proposed. The gist is that structural correspondence, analyzed in terms of similarity, underlies an important kind of semantic information. In contrast to extant accounts of semantic information, it does not rely on correlation, covariation, causation, natural laws, or logical inference. Instead, it relies on structural similarity, defined in terms of correspondence between classifications of tokens into types. This account elucidates many existing uses of the notion of information, for example, in the context of scientific models and structural representations in cognitive science. It is poised to open a new research program concerned with various kinds of semantic information, its functions, and its measurement.
Conceptual engineers seek to revise or replace the devices we use to speak and think. If this amounts to an effort to change what natural language expressions mean, conceptual engineers will have a hard time. It is largely unfeasible to change the meaning of e.g. ‘cause’ in English. Conceptual engineers may therefore seem unable to make the changes they aim to make. This is what I call ‘the implementation problem’. In this paper, I argue that the implementation problem dissolves if we expand our view of how conceptual engineers could implement the products of their work. I describe four implementation options: Standing Meaning, Meaning Modulation, Speaker-Meaning and Different Language. I query the feasibility and worth of pursuing these options. Unless each option fails because it is unfeasible or not worthwhile, conceptual engineers do not face an implementation problem worth worrying about. I argue that some of the options are feasible and worthwhile, and therefore, that conceptual engineers do not face an implementation problem worth worrying about.
In ‘The Foundations of Mathematics’ (1925), Ramsey attempted to amend Principia Mathematica’s logicism to meet serious objections raised against it. While Ramsey’s paper is well known, some questions concerning Ramsey’s motivations to write it and its reception still remain. This paper considers these questions afresh. First, an account is provided for why Ramsey decided to work on his paper instead of simply accepting Wittgenstein’s account of mathematics as presented in the Tractatus. Secondly, evidence is given supporting the claim that Wittgenstein was not moved by Ramsey’s objection against the Tractarian account of arithmetic, and a suggestion is made to explain why Wittgenstein reconsidered Ramsey’s account on several occasions in the early thirties. Finally, a reading is formulated to understand the basis on which Wittgenstein argues against Ramsey’s definition of identity in his 1927 letter to Ramsey.
We introduce a general theory of functions called Flow. We prove that ZF, non-well-founded ZF, and ZFC can be immersed within Flow as a natural consequence of our framework. The existence of strongly inaccessible cardinals is entailed by our axioms. Our first important application is the introduction of a model of Zermelo-Fraenkel set theory in which the Partition Principle (PP) holds but the Axiom of Choice (AC) does not. Flow thus allows us to answer one of the oldest open problems in set theory: whether PP entails AC.
This paper defends Priorianism, a theory in the philosophy of time which combines three theses: first, that there is a metaphysical distinction between the present time and non-present times; second, that there are temporary propositions, that is, propositions that change in truth-value simpliciter over time; and third, that there is change over time only if there are temporary propositions. Priorianism is accepted by many Presentists, Growing Block Theorists, and Moving Spotlight Theorists. However, it is difficult to defend the view without appealing to premises that those who reject the view find controversial. My aim in this paper is to defend Priorianism in a way that largely avoids appealing to such premises. I do three things: first (Section 1), I describe the component theses of Priorianism and the relations between them. Next (Section 2), I show how Priorians can respond to the argument that the B-theory implies that there are temporary propositions, and therefore satisfies the Priorian condition for there being change over time. Finally (Section 3), I defend the Priorian thesis that there is change over time only if there are temporary propositions against an alternative principle of change defended by Ross Cameron (The Moving Spotlight, 2015).
The paradox of pain refers to the idea that the folk concept of pain is paradoxical, treating pains as simultaneously mental states and bodily states (e.g. Hill 2005, 2017; Borg et al. 2020). By taking a close look at our pain terms, this paper argues that there is no paradox of pain. The air of paradox dissolves once we recognise that pain terms are polysemous and that there are two separate but related concepts of pain rather than one.
John Hyman insists that Frege-style cases for depiction show that any sound theory of depiction must distinguish between the ‘sense’ and the ‘reference’ of a picture. I argue that this rests on a mistake. Making sense of the cases does not require the distinction. In ‘Depiction’, John Hyman (2012) makes an observation about how people ordinarily ascribe representational content to pictures. Some uses of the verb ‘to depict’ express a relation, whereas other uses do not. He suggests that this distinction is in turn reflected in the more familiar difference between a genre picture (a picture with a generic content) and a portrait (a picture that portrays some individual or other). For example, when I say of a painting that it depicts a queen, I may be saying of the Queen that the painting depicts her (portrayal), or I may be speaking exclusively of the picture itself, meaning to say that it depicts a woman wearing a royal crown (genre). In the first case my use of ‘depicts’ expresses a relation; in the second case it does not. Hyman claims that these superficial differences reveal a distinction of fundamental importance to the theory of depiction: any sound theory of depiction must distinguish between the ‘sense’ and the ‘reference’ of pictures. This is a bluff. I will argue that the superficial differences do not require endorsing the distinction.
This chapter is an introduction to how the combination of two views – semantic minimalism and speech act pluralism (‘SM+SAP’, for short) – can be used to explain some aspects of our practice of making knowledge attributions. SM+SAP wasn’t developed to account for issues in epistemology in particular. It was proposed as a solution to a very general linguistic phenomenon – a phenomenon that also happens to be exhibited by sentences containing ‘knows’. The chapter is structured as follows: • I first outline the general linguistic phenomenon/puzzle: how to resolve a tension between inter‐contextual stability and variability, and I show how that puzzle arises with respect to sentences containing ‘knows’. • The next section outlines speech act pluralism and the arguments for it. • I then outline semantic minimalism and the arguments for it. • I show how SM+SAP explains the data/puzzle we started with. • The final section outlines how SM+SAP has been used to defend skepticism.
A half-truth may be defined as a sentence that is true in one sense, but that fails to be true in another, hence as a sentence only true to some extent. This paper discusses some aspects in which the Liar may be considered a half-truth. Talk of half-truths, like talk of half-full containers, implies that truth is gradable, and moreover that some sentences can be true without being perfectly true. I review some evidence for the view that “true” and “false” are absolute gradable adjectives, and argue that both are moreover systematically ambiguous between a total and a partial interpretation supporting the strict-tolerant distinction. I use this evidence to revisit the strict-tolerant account of truth and the Liar. While the strict-tolerant account was originally conceived for vague predicates, its extension to the semantic paradoxes assumed that assertion, but not truth, comes in different degrees. I reconsider this claim, and argue that we get a more unified picture by treating “true” as a special type of vague predicate.
(2000 words; 9 minute read.) So far, I’ve laid the foundations for a story of rational polarization. I’ve argued that we have reason to explain polarization through rational mechanisms; showed that ambiguous evidence is necessary to do so; and described an experiment illustrating this possibility. Today, I’ll conclude the core theoretical argument. …
Epistemic contextualism is a recent and controversial position, according to which what is expressed by a knowledge attribution – a statement to the effect that someone “knows that p” – depends partly on facts about the speaker’s context. After clarifying just what the position involves, this entry describes the major theoretical motivations for contextualism and some main objections and alternatives to it.
This paper outlines an account of conditionals, the evidential account, which rests on the idea that a conditional is true just in case its antecedent supports its consequent. As we will show, the evidential account exhibits some distinctive logical features that deserve careful consideration. On the one hand, it departs from the material reading of ‘if then’ exactly in the way we would like it to depart from that reading. On the other, it significantly differs from the non-material accounts which hinge on the Ramsey Test, advocated by Adams, Stalnaker, Lewis, and others.
Why should we make our beliefs consistent or, more generally, probabilistically coherent? That it will prevent sure losses in betting and that it will maximize one’s chances of having accurate beliefs are popular answers. However, these justifications are self-centered, focused on the consequences of our coherence for ourselves. I argue that incoherence has consequences for others because it is liable to mislead them, producing false beliefs about one’s beliefs and false expectations about one’s behavior. I argue that the moral obligation of truthfulness thus constrains us either to conform to the logic our audience assumes we use, to educate them in a new logic, or to give notice that one will do neither. This does not show that probabilistic coherence is uniquely suited to making truthful communication possible, but I argue that classical probabilistic coherence is superior to other logics for maximizing efficiency in communication.
Many in quantum foundations seek a principle explanation of Bell state entanglement. While reconstructions of quantum mechanics (QM) have been produced, the community does not find them compelling. Herein we offer a principle explanation for Bell state entanglement, i.e., conservation per no preferred reference frame (NPRF), such that NPRF unifies Bell state entanglement with length contraction and time dilation from special relativity (SR). What makes this a principle explanation is that it’s grounded directly in phenomenology, it is an adynamical and acausal explanation that involves adynamical global constraints as opposed to dynamical laws or causal mechanisms, and it’s unifying with respect to QM and SR.
The term “meaning holism” is generally applied to views that treat the meanings of all of the words in a language as interdependent. Holism draws much of its appeal from the way in which the usage of all our words seems interconnected, and runs into many problems because the resultant view can seem to conflict with (among other things) the intuition that the meanings of individual words are by and large shared and stable. This entry will examine the strengths of the arguments for and against meaning holism.
Forgiving wrongdoers who neither apologized, nor sought to make amends in any way, is controversial. Even defenders of the practice agree with critics that such “unilateral” forgiveness involves giving up on the meaningful redress that victims otherwise justifiably demand from their wrongdoers: apology, reparations, repentance, and so on. Against that view, I argue here that when a victim of wrongdoing sets out to grant forgiveness to her offender, and he in turn accepts her forgiveness, he thereby serves some important ends of apology and reparation, no matter what else he did – or did not do – by way of repair. Although much overlooked, the simple act of accepting forgiveness joins victim and offender in affirming and acting upon some important shared background assumptions, including many of those expressed in standard apologies. Perhaps more surprisingly, I argue that accepting forgiveness also fulfills the essential duty to counteract any concrete harm wrongfully inflicted. The argument helps explain some otherwise puzzling features of forgiveness, including that a victim can change her offender’s normative status, making him a less fitting target of the resentment, indignation and shunning of others, and even his own guilt pangs, simply by forgiving him.
In particular, we show how to replace the file-metaphor with two theses: one semantic and one metasemantic. We argue that the metaphor of mental files can be cashed out in terms of relational representational facts (viz. facts about the coordination of mental representations) and a metasemantic thesis about the role that information-relations to objects play in grounding coordination.
This paper addresses a task in Interactive Task Learning (Laird et al. IEEE Intell Syst 32:6–21, 2017). The agent must learn to build towers which are constrained by rules, and whenever the agent performs an action which violates a rule the teacher provides verbal corrective feedback: e.g. “No, red blocks should be on blue blocks”. The agent must learn to build rule compliant towers from these corrections and the context in which they were given. The agent is not only ignorant of the rules at the start of the learning process, but it also has a deficient domain model, which lacks the concepts in which the rules are expressed. Therefore an agent that takes advantage of the linguistic evidence must learn the denotations of neologisms and adapt its conceptualisation of the planning domain to incorporate those denotations. We show that by incorporating constraints on interpretation that are imposed by discourse coherence into the models for learning (Hobbs in On the coherence and structure of discourse, Stanford University, Stanford, 1985; Asher et al. in Logics of conversation, Cambridge University Press, Cambridge, 2003), an agent which utilizes linguistic evidence outperforms a strong baseline which does not.