In this article, I defend an account of linguistic comprehension on which meaning is not cognized, that is, on which we do not tacitly know our language's semantics. On this view, sentence comprehension is explained instead by our capacity to translate sentences into the language of thought. I show how this view can explain our capacity to correctly interpret novel utterances, and I defend it against several standing objections.
While Classical Logic (CL) used to be the gold standard for evaluating the rationality of human reasoning, certain non-theorems of CL—like Aristotle’s thesis, ∼(A → ∼A), and Boethius’ thesis, (A → B) → ∼(A → ∼B)—appear intuitively rational and plausible. Connexive logics have been developed to capture the underlying intuition that conditionals whose antecedents contradict their consequents should be false. We present the results of two experiments (total N = 72), the first to investigate connexive principles and related formulae systematically. Our data suggest that connexive logics provide more plausible rationality frameworks for human reasoning than CL. Moreover, we experimentally investigate two approaches for validating connexive principles within the framework of coherence-based probability logic. Overall, we observed good agreement between our predictions and the data, especially for Approach 2.
Denić (2021) observes that the availability of distributive inferences — for sentences with disjunction embedded in the scope of a universal quantifier — depends on the size of the domain quantified over as it relates to the number of disjuncts. Based on her observations, she argues that probabilistic considerations play a role in the computation of implicatures. In this paper, we explore a different possibility. We argue for a modification of Denić’s generalization, and provide an explanation that is based on intricate logical computations but is blind to probabilities. The explanation is based on the observation that when the domain size is no larger than the number of disjuncts, universal and existential alternatives are equivalent if distributive inferences are obtained. We argue that under such conditions a general ban on ‘fatal competition’ (Magri 2009a,b; Spector 2014) is activated, thereby predicting distributive inferences to be unavailable.
There are two things called contexts that play important but distinct roles in standard accounts of language and communication. The first—call these compositional contexts—feature in a semantic theory. Compositional contexts are sequences of parameters that play a role in characterizing compositional semantic values for a given language, and in characterizing how such compositional semantic values determine a proposition expressed by a given sentence. The second—call these context sets—feature in a pragmatic theory. Context sets are abstract representations of the conversational states that serve to determine the compositional contexts relevant for interpreting a speech-act and that such speech-acts act upon. In this paper, I’ll consider how, given mutual knowledge of the information codified in a compositional semantic theory, an assertion of a sentence serves to update the context set. There is a standard account of how such conversational updating occurs. However, while this account has much to recommend it, I’ll argue that it needs to be revised in light of certain natural discourses.
In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one another. The second category consists of those deepfakes that direct an illocutionary speech act—such as a request, injunction, invitation, or promise—to an addressee who is located outside of the recording. For instance, fake footage of a company director instructing their employee to make a payment, or of a military official urging the populace to flee for safety. Whereas the former category may deceive an audience by giving rise to false beliefs, the latter can more directly manipulate an agent’s actions: the speech act’s addressee may be moved to accept an invitation or a summons, follow a command, or heed a warning, and in doing so further a deceiver’s unethical ends.
We can divide medieval discussions of the insolubles—logical paradoxes such as the Liar—into two main periods, before and after Bradwardine, who wrote his treatise on Insolubles in Oxford in the early 1320s. Bradwardine’s aim was to develop a solution to the insolubles which, unlike the then-dominant theories, restrictio and cassatio, placed no restriction on self-reference or the theory of truth. He claimed to be able to prove that insolubles signify not only that they are false but also that they are true, and so are false. Few subsequent writers on insolubles followed him completely.
Walter de Segrave was at Merton College, Oxford from 1321 until at least 1338. Segrave’s ‘Insolubles’, his only known work, appears to have been composed at Oxford in the late 1320s or early 1330s. This dating is consistent with the fact that it is clearly a response to Bradwardine’s own ‘Insolubles’, composed while Bradwardine was regent master at Balliol College, from 1321 to 1323, before he moved to Merton in 1323. The dominant theory at the time Bradwardine was writing was restrictivism: the claim that a part cannot supposit for the whole of which it is a part (nor, consequently, for its contradictory or anything convertible with it), at least in the presence of a privative term, in particular privative alethic and epistemic terms such as ‘false’ and ‘unknown’.
Forty years ago, Niels Green-Pedersen listed five different accounts of valid consequence, variously promoted by logicians in the early fourteenth century and discussed by Niels Drukken of Denmark in his commentary on Aristotle’s Prior Analytics, written in Paris in the late 1330s. Two of these arguably fail to give defining conditions: truth preservation was shown by Buridan and others to be neither necessary nor sufficient; incompatibility of the opposite of the conclusion with the premises is merely circular if incompatibility is analysed in terms of consequence. Buridan was perhaps the first to define consequence in terms of preservation of what we might dub verification, that is, signifying as things are. John Mair pinpointed a sophism which threatens to undermine this proposal. Bradwardine turned it around: he suggested that a necessary condition on consequence was that the premises signify everything the conclusion signifies. Dumbleton gave counterexamples to Bradwardine’s postulates in which the conclusion arguably signifies more than, or even something completely different from, the premises. Yet a long-standing tradition held that some species of validity depend on the conclusion being in some way contained in the premises. We explore the connection between signification and consequence and its role in solving the insolubles.
Human languages vary in terms of which meanings they lexicalize, but there are important constraints on this variation. It has been argued that languages are under pressure to be simple (e.g., to have a small lexicon) and to allow for informative (i.e., precise) communication with their lexical items, and that which meanings get lexicalized may be explained by languages finding a good way to trade off between these two pressures ([ ] and much subsequent work). However, in certain semantic domains, it is possible to reach very high levels of informativeness even if very few meanings from that domain are lexicalized. This is due to productive morphosyntax, which may allow for the construction of meanings which are not lexicalized. Consider the semantic domain of natural numbers: many languages lexicalize few natural number meanings as monomorphemic expressions, but can precisely convey any natural number meaning using morphosyntactically complex numerals. In such semantic domains, lexicon size is not in direct competition with informativeness. What explains which meanings are lexicalized in such semantic domains? We will propose that in such cases, languages are (near-)optimal solutions to a different kind of trade-off problem: the trade-off between the pressure to lexicalize as few meanings as possible (i.e., to minimize lexicon size) and the pressure to produce morphosyntactically simple utterances (i.e., to minimize the average morphosyntactic complexity of utterances).
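The proposed trade-off can be illustrated with a toy computation. This is a hypothetical sketch, not the paper's formalism: the cost function, the additive/multiplicative composition rule, and the Zipf-like need distribution are all illustrative assumptions.

```python
# Toy illustration of the lexicon-size vs. utterance-complexity trade-off.
# All modeling choices here are illustrative assumptions, not the paper's:
# - complexity of a numeral = fewest lexicalized morphemes combined,
#   via addition ("twenty" + "three") or multiplication ("three" + "hundred");
# - communicative need for n is Zipf-like, p(n) proportional to 1/n^2.

def avg_complexity(lexicon, N=100):
    """Need-weighted average morphosyntactic complexity of expressing 1..N."""
    INF = float("inf")
    cost = [INF] * (N + 1)  # cost[n] = fewest lexical morphemes to build n
    for w in set(lexicon):
        if 1 <= w <= N:
            cost[w] = 1
    changed = True
    while changed:  # relax until no composition lowers any cost
        changed = False
        for n in range(2, N + 1):
            for m in range(1, n):  # additive composition: n = m + (n - m)
                c = cost[m] + cost[n - m]
                if c < cost[n]:
                    cost[n], changed = c, True
            for m in range(2, n):  # multiplicative composition: n = m * (n/m)
                if n % m == 0:
                    c = cost[m] + cost[n // m]
                    if c < cost[n]:
                        cost[n], changed = c, True
    weights = [1 / n ** 2 for n in range(1, N + 1)]
    return sum(w * cost[n] for n, w in zip(range(1, N + 1), weights)) / sum(weights)

# A richer lexicon buys shorter numerals: the trade-off the abstract describes.
digits = list(range(1, 11))
decades = digits + [20, 30, 40, 50, 60, 70, 80, 90, 100]
assert avg_complexity(decades) <= avg_complexity(digits)
```

On this toy picture, adding decade words lowers average utterance complexity at the price of a larger lexicon; the abstract's proposal is that attested numeral systems sit near the optimal frontier of that exchange.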
I have now had a chance to read the first part of Greg Restall and Shawn Standefer’s Logical Methods, some 113 pages on propositional logic. I enjoyed this well enough but I am, to be frank, a bit puzzled about the intended readership. …
Cynthia rises from the couch to go get that beer. If we accept industrial-strength representationalism, in particular the Kinematics and Specificity theses, then there must be a fact of the matter exactly which representations caused this behavior. …
This paper identifies a type of linguistic phenomenon new to feminist philosophy of language: biased evaluative descriptions. Biased evaluative descriptions (BEDs) are descriptions whose well-intended positive surface meanings are inflected with implicitly biased content. Biased evaluative descriptions are characterized by three main features: (i) they have roots in implicit bias or benevolent sexism, (ii) their application is counterfactually unstable across dominant and subordinate social groups, and (iii) they encode stereotypes. After giving several different kinds of examples of biased evaluative descriptions, I distinguish them from similar linguistic concepts, including backhanded compliments, slurs, insults, epithets, pejoratives, and dog-whistles. I suggest that the framework of traditional Gricean implicature cannot account for BEDs. I discuss some challenges to the distinctiveness and evaluability of BEDs, including intersectional social identities. I conclude by discussing the social significance and moral status of BEDs. Identifying BEDs is important for a variety of social contexts, from the very general and broad (political speeches) to the very particular and small (bias in academic hiring).
Communication can be risky. Like other kinds of actions, it comes with potential costs. For instance, an utterance can be embarrassing, offensive, or downright illegal. In the face of such risks, speakers tend to act strategically and seek ‘plausible deniability’. In this paper, we propose an account of the notion of deniability at issue. On our account, deniability is an epistemic phenomenon. A speaker has deniability if she can make it epistemically irrational for her audience to reason in certain ways. To avoid predictable confusion, we distinguish deniability from a practical correlate we call ‘untouchability’. Roughly, a speaker has untouchability if she can make it practically irrational for her audience to act in certain ways. These accounts shed light on the nature of strategic speech and suggest countermeasures against it.
Sher on the weight of reasons
Posted on Friday, 13 Jan 2023. A few thoughts on Sher (2019), which I found advertised in Nair (2021). This (long and rich) paper presents a formal model of reasons and their weight, with the aim of clarifying how different reasons for or against an act combine. …
Assertions, so Stalnaker’s (1978) familiar narrative goes, express propositions and are made in context; in fact, context and what is said frequently affect each other. Since language has context-sensitive expressions, which proposition some given assertion expresses may depend on the context in which it is made. Assertions, in turn, affect the context, and they do so by adding the proposition expressed by that assertion to the context.
The proper translation of “unless” into intuitionistic formalisms is examined. After a brief examination of intuitionistic writings on “unless”, and on translation in general, and a close examination of Dummett’s use of “unless” in Elements of Intuitionism (1975b), I argue that the correct intuitionistic translation of “A unless B” is no stronger than “¬A → B”. In particular, “unless” is demonstrably weaker than disjunction. I conclude with some observations regarding how this shows that one’s choice of logic is methodologically prior to translation from informal natural language to formal systems.
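The claimed asymmetry can be checked mechanically. Here is a minimal Lean sketch (illustrative, not from the paper): intuitionistically, A ∨ B entails ¬A → B, while recovering A ∨ B from ¬A → B requires the classical excluded middle.

```lean
-- Intuitionistically valid: disjunction entails the "unless" conditional.
example (A B : Prop) : A ∨ B → (¬A → B) :=
  fun h na => h.elim (fun a => absurd a na) id

-- The converse is only classically valid: this proof appeals to excluded middle.
example (A B : Prop) : (¬A → B) → A ∨ B :=
  fun h => (Classical.em A).elim Or.inl (fun na => Or.inr (h na))
```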
A New Vision is the sequel to Soames’ The Analytic Tradition in Philosophy, Volume I: The Founding Giants (Princeton UP, 2014). Founding Giants covered Frege, Moore and Russell. New Vision covers Wittgenstein’s Tractatus, the rise of logical empiricism and its downfall, the advances in logic due to Gödel, Tarski, Church and Turing, Tarski’s theory of truth, and contrasting approaches to ethics and meta-ethics in the 1930s. Soames describes his goal as identifying major insights and achievements, and distinguishing them from major errors or disappointments. His declared focus is the explication and evaluation of arguments in the texts of Wittgenstein, Carnap et al. Soames thus conceives of himself as “arguing with the greats” rather than as a historian of analytic philosophy. He thereby seeks to avoid the antiquarianism that besets the history of philosophy when it is weighed down by too much attention to historical-textual detail, whilst his engagement with the secondary literature is sparse.
A puzzling inversion has taken place in the reception of the work of John Austin. In his own day, he was understood to be Oxford’s counterpart to Ludwig Wittgenstein, and, like Wittgenstein, he was described as an ‘ordinary language philosopher.’ That term itself reflected a confused outsiders’ take on the enterprise that it was meant to label; nevertheless, at the time it would have been no trouble at all to elicit from the philosopher-on-the-street various on-target characterizations of that enterprise: for instance, that it was deeply antitheoretical; that its objective was to bring philosophy to an end by exposing philosophical tenets as grammatical confusions; that if one wanted to find out what came under that heading, one could look to Austin, who had provided a number of exemplary treatments.
This paper provides an account of co-identification with fictional names, the way in which a fictional name can be used to talk about the same fictional character on disparate occasions. I develop a version of the view that fictional characters are roles understood as sets of properties couched within a dynamic understanding of fictional discourse. I argue that this view captures what is right about so-called name-centric approaches to co-identification with fictional names. I show how the dynamic view in addition accounts for a number of ways of using fictional names.
Cargo Cult Quantum Factoring
Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids. …
In The Contradictory Christ, Jc Beall argues that paraconsistent logic provides a way to show how the central claims of Christology can all be true, despite their paradoxical appearances. For Beall, claims such as “Christ is peccable” and “Christ is impeccable” are both true, with no change of subject matter or ambiguity of meaning of any term involved in each claim. Since to say that Christ is impeccable is to say that Christ is not peccable, these two claims are contradictory, and so, for Beall the conjunction “Christ is peccable and Christ is not peccable” is a true contradiction. This is a radical and original view of the incarnation, and a revisionary view of what is permissible for theological reasoning.
Guessing is a familiar activity, one we engage in when we are uncertain of the answer to a question under discussion. It is also an activity that lends itself to normative evaluation: some guesses are better than others. The question that interests me here is what makes for a good guess. In recent work, Dorst and Mandelkern have argued that good guesses are distinguished from bad ones by how well they optimize a tradeoff between accuracy and specificity. Here, I argue that Dorst and Mandelkern’s implementation of this idea fails to satisfy some plausible constraints on good guesses, and I develop an alternative implementation that satisfies the relevant constraints. The result is a new account of good guesses which retains the positive aspects of Dorst and Mandelkern’s proposal, but without the drawbacks.
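One simple way to make the accuracy/specificity trade-off concrete is with a toy scoring rule (a hypothetical illustration, not necessarily Dorst and Mandelkern's implementation or the author's alternative): score a guess G by Pr(G) − c·|G|, so an answer belongs in the optimal guess exactly when its probability exceeds the per-answer cost c.

```python
# Toy scoring rule (illustrative assumption, not the paper's model):
# a guess is a set of candidate answers, scored as Pr(guess) - c * |guess|.
# Each answer a contributes p(a) - c, so the optimal guess contains
# exactly the answers with p(a) > c: raising c forces more specific guesses.

def score(guess, probs, c):
    """Accuracy minus a specificity penalty proportional to guess size."""
    return sum(probs[a] for a in guess) - c * len(guess)

def best_guess(probs, c):
    """Score-maximizing guess: the answers whose probability exceeds c."""
    return {a for a, p in probs.items() if p > c}

probs = {"red": 0.5, "blue": 0.3, "green": 0.2}
print(best_guess(probs, 0.25) == {"red", "blue"})  # True: broad but accurate
print(best_guess(probs, 0.40) == {"red"})          # True: more specific guess
```

The penalty parameter c plays the role of the specificity pressure: as it grows, the optimal guess shrinks toward the single most probable answer.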
There are some names which cannot be spoken and others which cannot be written, at least on certain very natural ways of conceiving of them. Interestingly, this observation proves to be in tension with a range of natural views about what names are. Prima facie, this looks like a problem for predicativists. Ultima facie, it turns out to be equally problematic for Millians. For either sort of theorist, resolving this tension requires embracing a revisionary account of the metaphysics of names. Revisionary Millianism, I argue, offers some important advantages over its predicativist competitor.
Chs 1 to 7 of MLC, as we’ve seen, give us a high-level and often rather challenging introduction to core first-order logic with a quite strongly proof-theoretic flavour. Now moving on, the next three chapters are on arithmetics — Ch. …
How should we understand the Confucian doctrine of the rectification of names (zhengming): what does it mean that an object’s name must be in accordance with its reality, and why does it matter? The aim of this paper is to answer this question by advocating a novel interpretation of the later Confucian Xunzi’s account of the doctrine. Xunzi claims that sage-kings ascribe names and values to objects by convention, and since they are sages, they know the truth. When we misuse names, we are departing from a sagely convention of naming. As sagely convention determines moral truth, departure from the linguistic convention of the sages is a departure from moral truth. On my interpretation of Xunzi, the rectification of names is not a doctrine about what is true, but a doctrine about how we aim at truth. We are aiming at descriptive truth when our language conforms to the correct name of an object according to what I call ‘Confucian conventionalism’. When we correctly aim at descriptive truth, we can aim at moral truth. Therefore, I claim that the doctrine of the rectification of names is concerned with discerning the literal accordance of language with an object (what is descriptively, linguistically true), to determine what is normatively, or morally, true. According to Xunzi, moral truth is grounded in linguistic truth.
If you dissect a square into n similar rectangles, what proportions can these rectangles have? Folks on Mathstodon figured this out for n ≤ 7, and I blogged about it here recently. But I was left feeling that some deeper structure governed this problem. …
Monroe Beardsley (1915–1985) was born and raised in Bridgeport, Connecticut, and educated at Yale University (B.A. 1936, Ph.D. 1939). He taught at a number of colleges and universities, including Mt. Holyoke College and Yale University, but most of his career was spent at Swarthmore College (22 years) and Temple University (16 years). Beardsley is best known for his work in aesthetics—and this article will deal exclusively with his work in that area—but he was an extremely intellectually curious man, and published articles in a number of areas, including the philosophy of history, action theory, and the history of modern philosophy.
Previous works used comparative sentences like Sue has more gold/diamonds than Dan to study the mass/count distinction, observing that mass nouns like gold trigger non-discrete comparative measurement, while count nouns like diamonds trigger counting. These works have not studied comparatives like Sue has more gold than diamonds, which combine a mass noun and a count noun. We show that naturally occurring examples of such ‘mixed comparatives’ usually invoke non-discrete measurement. We analyze the semantics of this effect and other coercions of count nouns into mass-like meanings: pseudo-partitives (20kg of books), degree interpretations of counting-based denominal adjectives (more bilingual), ‘grinding’ contexts (bicycle all over the place), and number-unspecified determiners (most, a lot of). Based on this analysis, we propose a revised system of Rothstein’s context-driven counting. In the proposed account, ‘impure’ semantic atoms take over the role of contextual indices in Rothstein’s account. The effacing/grinding ambiguity in Rothstein’s system is replaced by one general count-to-mass mapping. The common rock-like mass/count polysemy is treated as emblematic of this count-to-mass mapping, instead of the rather rare carpet/ing alternation in Rothstein’s proposal. We show the advantages of this revised system in treating count-to-mass phenomena, including the unacceptability of mixed comparatives like #more rock than rocks.
The goal of this paper is to analyse the role of convention in interpreting physical theories—and, in particular, how the distinction between the conventional and the non-conventional interacts with judgments of equivalence. We will begin with a discussion of what, if anything, distinguishes those statements of a theory that might be dubbed “conventions”. This will lead us to consider the conventions that are not themselves part of a theory’s content, but are rather applied to the theory in interpreting it. Finally, we will consider the idea that what conventions to adopt might, itself, be regarded as a matter of convention.
How exactly does natural language permit reference to properties, and what notions of a property does it permit reference to? These are questions of descriptive metaphysics, more specifically natural language ontology. When such questions are pursued, further, metaontological questions arise, namely how notions of a property that are implicit in the ontology of natural language relate to the ‘technical’ notions of a property that figure in philosophy and formal semantics. We will see that there are significant discrepancies which raise questions about a core-periphery distinction in the ontology of natural language, with core ontology being part of universal grammar.