Self-locating beliefs are beliefs about one’s position or
situation in the world, as opposed to beliefs about how the world is
in itself. Section 1 of this entry introduces self-locating beliefs. Section 2 presents several distinct arguments that self-locating
beliefs constitute a theoretically distinctive category. These
arguments are driven by central examples from the literature; we
categorize the examples by the arguments to which they contribute. (Some examples serve multiple strands of argument at once.) Section 3
examines positive proposals for modeling self-locating belief,
focusing on the two most prominent proposals, due to Lewis and Perry.
Genre discourse is widespread in appreciative practice, whether it concerns hip-hop music, romance novels, or film noir. It should be no surprise, then, that philosophers of art have also been interested in genres. Whether they are giving accounts of genres as such or of particular genres, genre talk abounds in philosophy as much as it does in popular discourse. As a result, theories of genre proliferate as well. However, in their accounts, philosophers have so far concentrated on capturing all of the categories of art that we think of as genres, and have attended less to ensuring that only the categories we think are genres are captured by those theories. Each of these theories populates the world with far too many genres, because each counts a wide class of mere categories of art as genres. I call this the problem of genre explosion. In this paper, I survey the existing accounts of genre and describe the kinds of considerations they employ in determining whether a work is a work of a given genre. After this, I demonstrate the ways in which the problem of genre explosion arises for all of these theories and discuss some solutions those theories could adopt that will ultimately not work. Finally, I argue that the problem of genre explosion is best solved by adopting a social view of genres, which can capture the difference between genres and mere categories of art.
In a series of recent papers, I presented a puzzle and theory of definition. I did not, however, indicate how the theory resolves the puzzle. This was an oversight on my part, and one I hope to correct. My aim here is to provide that resolution: to demonstrate that my theory can consistently embrace the principles I prove to be inconsistent. To the best of my knowledge, this theory is the only one capable of this embrace, which marks yet another advantage it has over competitors.
A metainference is usually understood as a pair consisting of a collection of inferences, called premises, and a single inference, called the conclusion. In the last few years, much attention has been paid to the study of metainferences, and in particular to the question of which metainferences of a given logic are valid. So far, however, this study has been carried out in an expressively poor language. Our usual sequent calculi have no way to represent, e.g., negations, disjunctions, or conjunctions of inferences. In this paper we tackle this expressive issue. We assume some background sentential language as given and define what we call an inferential language, that is, a language whose atomic formulas are inferences. We provide a model-theoretic characterization of validity for this language, relative to some given characterization of validity for the background sentential language, and provide a proof-theoretic analysis of validity. We argue that our novel language has fruitful philosophical applications. Lastly, we generalize some of our definitions and results to arbitrary metainferential levels.
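For illustration (schematic notation of my own, not necessarily the authors'), a metainference, a familiar instance of one, and the expressive gap at issue can be displayed as:

```latex
% A metainference: a collection of inferences (premises) over one inference
% (the conclusion), with sequents \Gamma_i \Rightarrow \varphi_i as premises:
\frac{\Gamma_1 \Rightarrow \varphi_1 \quad \cdots \quad \Gamma_n \Rightarrow \varphi_n}
     {\Delta \Rightarrow \psi}

% A familiar instance is Cut, viewed as a metainference with two premises:
\frac{\Gamma \Rightarrow \varphi \qquad \varphi, \Delta \Rightarrow \psi}
     {\Gamma, \Delta \Rightarrow \psi}

% The expressive gap: ordinary sequent calculi cannot state compounds of
% inferences, such as
\neg(\Gamma \Rightarrow \varphi)
\qquad \text{or} \qquad
(\Gamma \Rightarrow \varphi) \lor (\Delta \Rightarrow \psi),
% whereas an inferential language, whose atomic formulas are inferences,
% can express them directly.
```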
In this chapter we summarize results obtained in five studies (n = 1027) using an open format self-report procedure aimed at collecting naturally occurring inner speech in young adults. We look at existing inner speech measures as well as their respective strengths and limitations, emphasizing the appropriateness of an open format self-report method for our purpose. We describe the coding scheme used to organize inner speech instances produced by our participants. We present results in terms of the most frequently self-reported inner speech topics, which sheds light on the typical perceived content and functions of inner speech use. Some of these are: negative emotions, problem solving/thinking, planning/time management, self-motivating speech, emotional control, and self-reflection. These results are consistent with the self-regulatory and self-reflective functions of inner speech discussed in the literature, as well as with what several existing questionnaires aim to measure. However, our results also show that young adults in our samples talk to themselves about various topics and for multiple functions not captured by current research on inner speech. We conclude with a brief discussion regarding the relevance of our results for education.
I've been teaching Classical Rhetoric this semester, and I have become convinced of something I have long believed. Not just convinced: I have really discovered that, for anyone who studies this stuff, it seems to be an obvious truth (so obvious in the literature that I almost decided not to write this post). …
According to Mercier and Sperber (2009, 2011, 2017), people have an immediate and intuitive feeling about the strength of an argument. These intuitive evaluations are not captured by current evaluation methods of argument strength, yet they could be important to predict the extent to which people accept the claim supported by the argument. In an exploratory study, therefore, a newly developed intuitive evaluation method to assess argument strength was compared to an explicit argument strength evaluation method (the PAS scale; Zhao et al., 2011), on their ability to predict claim acceptance (predictive validity) and on their sensitivity to differences in the manipulated quality of arguments (construct validity). An experimental study showed that the explicit argument strength evaluation performed well on the two validity measures. The intuitive evaluation measure, on the other hand, was not found to be valid. Suggestions for other ways of constructing and testing intuitive evaluation measures are presented.
Peirce’s diagrammatic system of Existential Graphs (EGα) is a logical proof system corresponding to the Propositional Calculus (P L). Most known proofs of soundness and completeness for EGα depend upon a translation of Peirce’s diagrammatic syntax into that of a suitable Frege-style system. In this paper, drawing upon standard results but using the native diagrammatic notational framework of the graphs, we present a purely syntactic proof of soundness, and hence consistency, for EGα, along with two separate completeness proofs that are constructive in the sense that we provide an algorithm in each case to construct an EGα formal proof starting from the empty Sheet of Assertion, given any expression that is in fact a tautology according to the standard semantics of the system.
Many philosophers have suggested that claims of need play a special normative role in ethical thought and talk. But what do such claims mean? What does this special role amount to? Progress on these questions can be made by attending to a puzzle concerning some linguistic differences between two types of ‘need’ sentence: one where ‘need’ occurs as a verb, and where it occurs as a noun. I argue that the resources developed to solve the puzzle advance our understanding of the metaphysics of need, the meaning of ‘need’ sentences, and the function of claims of need in ethical discourse.
Bare numerals present an interesting challenge to formal semantics and pragmatics: they seem to be ambiguous between various readings (‘at least’, ‘exactly’ and ‘at most’ readings), and the choice of a particular reading seems to depend on complex interactions between contextual factors and linguistic structure. The goal of this article is to present and discuss some of the current approaches to the interpretation of bare numerals in formal semantics and pragmatics. It discusses four approaches to the interpretation of bare numerals, which can be summarized as follows: 1. The neo-Gricean view (e.g. Horn 1972; van Rooij & Schulz 2006): the basic, literal meaning of numerals amounts to an ‘at least’ interpretation, and the ‘exactly n’ reading results from a pragmatic enrichment of the literal reading, i.e. is accounted for in terms of scalar implicatures.
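As a schematic illustration of the neo-Gricean derivation in item 1 (standard textbook notation, not the article's own formalism):

```latex
% Literal ('at least') meaning of 'John has three children':
\exists X\,[\,|X| \geq 3 \wedge \mathrm{children}(X) \wedge \mathrm{has}(j, X)\,]

% Scalar implicature: the speaker did not use the stronger alternative
% 'four', so the hearer infers the negation of that alternative:
\neg \exists X\,[\,|X| \geq 4 \wedge \mathrm{children}(X) \wedge \mathrm{has}(j, X)\,]

% Conjoining the literal meaning with the implicature yields the
% 'exactly' reading:
|X| \geq 3 \;\wedge\; \neg(|X| \geq 4) \;\;\equiv\;\; |X| = 3
```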
According to the standard analysis of degree questions (see, among others, Rullmann 1995 and Beck and Rullmann 1997), a degree question’s LF contains a variable that ranges over individual degrees and is bound by the degree-question operator how. In contrast with this, we claim that the variable bound by the degree-question operator how does not range over individual degrees but over intervals of degrees, by analogy with Schwarzschild and Wilkinson’s (2002) proposal regarding the semantics of comparative clauses. Not only does the interval-based semantics predict the existence of certain readings that are not predicted under the standard view, it is also able, together with other natural assumptions, to account for the sensitivity of degree questions to negative islands, as well as for the fact, uncovered by Fox and Hackl (2007), that negative islands can be obviated by some properly placed modals. Like Fox and Hackl (2007), we characterize negative island effects as arising from the fact that the relevant question, due to its meaning alone, can never have a maximally informative answer. Contrary to Fox and Hackl (2007), however, we do not need to assume that scales are universally dense, nor that the notion of maximal informativity responsible for negative islands is blind to contextual parameters.
A sentence such as ‘John has four children’ can be interpreted as meaning either that John has at least four children (weak reading), or that John has exactly four children (strong reading). On the classical neo-Gricean view, this ambiguity is similar to the ambiguity generated by scalar terms such as ‘some’, for which both a weak reading (i.e., some or all) and a strong reading (i.e., some but not all) are available. On this view, the strong reading of numerals, just like the strong reading of ‘some’, is derived as a scalar implicature, taking the weak reading as semantically given. However, more recent studies have found substantial differences between the two phenomena. For instance, the syntactic distribution of the strong reading is not the same in both cases, and young children’s performance in certain specific tasks has suggested that they acquire the strong reading of numerals before they acquire the strong reading of standard scalar items. Using a dual task approach, we provide evidence for another type of difference between numerals and standard scalar items. We show that tapping memory resources has opposite effects on bare numerals and on ‘some’. Under high cognitive load, participants report fewer implicatures for sentences involving ‘some’ (compared to low cognitive load conditions), but they report more strong readings for sentences involving bare numerals. We discuss the implications of this result for current theoretical debates regarding the semantics and pragmatics of numerals.
It is sometimes said there are two ways of formulating Newtonian gravitation theory. On the first, matter gives rise to a gravitational field deflecting bodies from inertial motion within flat spacetime. On the second, matter’s accelerative effects are encoded in dynamical space-time structure exhibiting curvature and the field is ‘geometrized away’. Are these two accounts of Newtonian gravitation theoretically equivalent? Conventional wisdom within the philosophy of physics is that they are, and recently several philosophers have made this claim explicit. In this paper I develop an alternative approach to Newtonian gravitation on which the equivalence claim fails, and in the process identify an important but largely overlooked consideration for interpreting physical theories. I then apply this analysis to (a) put limits on the uses of Newtonian gravitation within the methodology of science, and (b) defend the interpretive approach to theoretical equivalence against formal approaches, including the recently popular criterion of categorical equivalence.
Antirealists who hold the knowability thesis, namely that all truths are knowable, have been put on the defensive by the Church-Fitch paradox of knowability. Rejecting the non-factivity of the concept of knowability used in that paradox, Edgington has adopted a factive notion of knowability, according to which only actual truths are knowable. She has used this new notion to reformulate the knowability thesis. The result has been argued to be immune to the Church-Fitch paradox, but it has encountered several other triviality objections. In a forthcoming paper, Schlöder defends the general approach taken by Edgington, but amends it to save it in turn from the triviality objections. In this paper I will argue, first, that Schlöder’s justification for the factivity of his version of the concept of knowability is vulnerable to criticism, but I will also offer an improved justification in the same spirit as his. To the extent that some philosophers are right that our intuitive concept of knowability is factive, it is important to explore factive concepts of knowability that are made formally precise. I will subsequently argue that Schlöder’s version of the knowability thesis overgenerates knowledge: in other words, it leads to attributions of knowledge where there is ignorance. This fits a general pattern for the research programme initiated by Edgington. This paper also contains preliminary investigations into the internal and logical structure of lines of inquiry, which raise interesting research questions.
Suppose you are visiting a hospital and you see Bob, a nurse,
sneaking into Alice’s hospital room. Unnoticed, you look at what is
going on, and you see that Bob is about to add a lethal drug to Alice’s
IV, a drug that would undetectably kill Alice while leaving her organs
A common objection to the very idea of conceptual engineering is the topic continuity problem: whenever one tries to “reengineer” a concept, one only shifts attention away from one concept to another. Put differently, there is no such thing as conceptual revision: there’s only conceptual replacement. Here, I show that topic continuity is compatible with conceptual replacement. Whether the topic is preserved in an act of conceptual replacement simply depends on what is being replaced (a conceptual tool or a conceptual role) and what the topic under discussion is. Thus, the topic continuity problem only arises from a failure to specify these two things.
This paper considers the debate between teams of skilled contributors and individual geniuses by focusing on a specific case: a team project to overturn some remarks by Wittgenstein on Frazer’s The Golden Bough. In theory, there can be a team which does this, but in actual practice, such a team seems unlikely to arise.
Schlenker 2009, 2010a,b provides an algorithm for deriving the presupposition projection properties of an expression from that expression’s classical semantics. In this paper, we consider the predictions of Schlenker’s algorithm as applied to attitude verbs. More specifically, we compare Schlenker’s theory with a prominent view which maintains that attitudes exhibit belief projection, so that presupposition triggers in their scope imply that the attitude holder believes the presupposition (Karttunen, 1974; Heim, 1992; Sudo, 2014). We show that Schlenker’s theory does not predict belief projection, and discuss several consequences of this result.
Although it has few adherents today, logical atomism was once a
leading movement of early twentieth-century analytic philosophy. Different, though related, versions of the view were developed by
Bertrand Russell and Ludwig Wittgenstein. Russell’s logical
atomism is set forth chiefly in his 1918 work “The Philosophy of
Logical Atomism” (Russell 1956), Wittgenstein’s in his
Tractatus Logico-Philosophicus of 1921 (Wittgenstein 1981). The core tenets of Wittgenstein’s logical atomism may be stated
as follows: (i) Every proposition has a unique final analysis which
reveals it to be a truth-function of elementary propositions
(Tractatus 3.25, 4.221, 4.51, 5); (ii) These elementary
propositions assert the existence of atomic states of affairs (3.25,
4.21); (iii) Elementary propositions are mutually independent —
each one can be true or false independently of the truth or falsity of
the others (4.211, 5.134); (iv) Elementary propositions are immediate
combinations of semantically simple names (4.221).
The success of deep learning in natural language processing raises intriguing questions about the nature of linguistic meaning and ways in which it can be processed by natural and artificial systems. One such question has to do with subword segmentation algorithms widely employed in language modeling, machine translation, and other tasks since 2016. These algorithms often cut words into semantically opaque pieces, such as ‘period’, ‘on’, ‘t’, and ‘ist’ in ‘period|on|t|ist’. The system then represents the resulting segments in a dense vector space, which is expected to model grammatical relations among them. This representation may in turn be used to map ‘period|on|t|ist’ (English) to ‘par|od|ont|iste’ (French). Thus, instead of being modeled at the lexical level, translation is reformulated more generally as the task of learning the best bilingual mapping between the sequences of subword segments of two languages; and sometimes even between pure character sequences: ‘p|e|r|i|o|d|o|n|t|i|s|t|’ → ‘p|a|r|o|d|o|n|t|i|s|t|e’. Such subword segmentations and alignments are at work in highly efficient end-to-end machine translation systems, despite their allegedly opaque nature. The computational value of such processes is unquestionable. But do they have any linguistic or philosophical plausibility? I attempt to cast light on this question by reviewing the relevant details of the subword segmentation algorithms and by relating them to important philosophical and linguistic debates, in the spirit of making artificial intelligence more transparent and explainable.
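The greedy merge procedure underlying such segmentations can be sketched in the style of byte-pair encoding (BPE) inference. The function and the toy merge table below are invented for illustration, not drawn from any trained model; they happen to reproduce the ‘period|on|t|ist’ segmentation mentioned above:

```python
# Toy sketch of BPE-style subword segmentation: split a word into
# characters, then repeatedly apply the highest-priority learned merge.

def segment(word, merges):
    """Segment `word` by greedily applying merges in priority order."""
    symbols = list(word)
    # merges: list of (left, right) pairs; earlier in the list = higher priority
    ranks = {pair: i for i, pair in enumerate(merges)}
    while True:
        # find the adjacent pair with the best (lowest) rank
        best, best_rank = None, None
        for i in range(len(symbols) - 1):
            r = ranks.get((symbols[i], symbols[i + 1]))
            if r is not None and (best_rank is None or r < best_rank):
                best, best_rank = i, r
        if best is None:          # no applicable merge left
            return symbols
        # fuse the winning pair into a single symbol
        symbols[best:best + 2] = [symbols[best] + symbols[best + 1]]

# Invented merge table, for illustration only.
merges = [("i", "s"), ("is", "t"), ("o", "n"), ("p", "e"),
          ("pe", "r"), ("per", "i"), ("peri", "o"), ("perio", "d")]

print(segment("periodontist", merges))  # → ['period', 'on', 't', 'ist']
```

Note that the resulting pieces (‘on’, ‘t’, ‘ist’) are morphologically arbitrary: the merge table is built from corpus statistics, not from a lexicon, which is exactly what makes the segments semantically opaque.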
One of the main criticisms of the theory of collections of indiscernible objects is that once we quantify over one of them, we are quantifying over all of them, since they cannot be discerned from one another. This is what we call the collapse of quantifiers: ‘There exists an x such that P’ would entail ‘All x are P’. In this paper we argue that there are situations (quantum theory is the sample case) where we do refer to a certain quantum entity, saying that it has a certain property, without thereby committing all other indistinguishable entities to the considered property. Mathematically, within the realm of the theory of quasi-sets Q, we can give sense to this claim. We show that the above-mentioned ‘collapse of quantifiers’ depends on the interpretation of the quantifiers and on the mathematical background over which they range. In this way, we hope to strengthen the idea that quantification over indiscernibles, in particular in the quantum domain, does not conform with quantification in the standard sense of classical logic. Keywords: quantification, quantum logic, indiscernibility, identity, indiscernible objects.
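Schematically (my notation, paraphrasing the abstract rather than reproducing the paper's formalism), the worry and the proposed escape look like this:

```latex
% The collapse of quantifiers: if all objects in the domain are mutually
% indiscernible, whatever holds of one holds of any, so the critic claims
\exists x\, P(x) \;\vdash\; \forall x\, P(x)
% for x ranging over a domain of absolutely indiscernible objects.

% The paper's diagnosis: this entailment is not absolute but depends on
% (i) how the quantifiers are interpreted, and (ii) the background theory
% over which they range. Within quasi-set theory Q, one can assert that
% some element of a quasi-set has P,
\exists x\, P(x),
% without licensing the step to \forall x\, P(x), because the elements of
% a quasi-set lack the identity conditions that the classical inference
% tacitly relies on.
```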
The rapid development of natural language processing in the last three decades has drastically changed the way professional translators do their work. Nowadays most of them use computer-assisted translation (CAT) or translation memory (TM) tools whose evolution has been overshadowed by the much more sensational development of machine translation (MT) systems, with which TM tools are sometimes confused. These two language technologies now interact in mutually enhancing ways, and their increasing role in human translation has become a subject of behavioral studies. Philosophers and linguists, however, have been slow in coming to grips with these important developments. The present paper seeks to fill in this lacuna. I focus on the semantic aspects of the highly distributed human–computer interaction in the CAT process which presents an interesting case of an extended cognitive system involving a human translator, a TM tool, an MT engine, and sometimes other human translators or editors. Considered as a whole, such a system is engaged in representing the linguistic meaning of the source document in the target language. But the roles played by its various components, natural as well as artificial, are far from trivial, and the division of linguistic labor between them throws new light on the familiar notions that were initially inspired by rather different phenomena in the philosophy of language, mind, and cognitive science.
Cross-posted here. A true sentence like ‘John is here in this room’, and its Twin Earth counterpart, express different propositions, since they are about distinct people. And that means that propositions sometimes constitutively involve particular external things that they are about. What, in light of this, should we say about how, if at all, what propositions there are—what claims exist—varies across possible worlds? One side of this issue is: could propositions like the ones expressed by a normal true use of ‘John is here in this room’ have failed to exist? …
Gluck & Bower, 1988), stages of learning (Rumelhart & McClelland, 1987), and, in more recent work, patterns in categorization of individuals with autism spectrum disorder (Dovgopoly & Mercado, 2013). Recent work on deep learning, involving ANNs with many layers, shows that there are interesting connections between cognitive processing and the representations formed at various depths in ANNs (Guest & Love, 2019). Most importantly for the present purposes, connectionism predicts a correlation between the effort required to learn a category by ANNs and by humans (Bartos, 2002; Kruschke, 1991).
Some years ago, Charles Petzold published his The Annotated Turing which, as its subtitle tells us, provides a guided tour through Alan Turing’s epoch-making 1936 paper. I was prompted at the time to wonder about putting together a similar book, with an English version of Gödel’s 1931 paper interspersed with explanatory comments and asides. …
In particular, I am convinced that discourse is structured in the way she claims, that prominence is managed within and between sentences by conventional rules, and that pronoun meanings select entities that are at the centre of attention. I agree, too, with the important consequence she and her coauthors draw (2017: 529, fn. 23) that communication with context-sensitive expressions doesn’t have to be a matter of “mind-reading” the intentions of the speaker, and could be accomplished by shallow statistical processing (as in computer models of word sense selection).
In this thesis, I develop and investigate various novel semantic frameworks for deontic logic. Deontic logic concerns the logical aspects of normative reasoning. In particular, it concerns reasoning about what is required, allowed and forbidden. I focus on two main issues: free-choice reasoning and the role of norms in deontic logic. Free-choice reasoning concerns permissions and obligations that offer choices between different actions. Such permissions and obligations are typically expressed by a disjunctive clause in the scope of a deontic operator. For instance, the sentence "Jane may take an apple or a pear" intuitively offers Jane a choice between two permitted courses of action: she may take an apple, and she may take a pear. In the first part of the thesis, I develop semantic frameworks for deontic logic that account for free-choice reasoning. I show that the resulting logics avoid problems that arise for other logical accounts of free-choice reasoning. The main technical contributions are completeness results for axiomatizations of the different logics.
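The difficulty that motivates dedicated frameworks here can be displayed with a short derivation (standard notation; this is the well-known free-choice problem from the deontic logic literature, not the thesis's own proof):

```latex
% The free-choice inference to be captured:
P(a \lor b) \;\Rightarrow\; P(a) \wedge P(b)
% e.g. 'Jane may take an apple or a pear' licenses 'Jane may take an apple'.

% Why it cannot simply be added as an axiom to standard deontic logic (SDL):
% SDL validates monotonicity for P, so from a \vDash a \lor b we get
P(a) \;\Rightarrow\; P(a \lor b).
% Chaining this with the free-choice axiom, any single permission spreads
% to every action b whatsoever:
P(a) \;\Rightarrow\; P(a \lor b) \;\Rightarrow\; P(a) \wedge P(b) \;\Rightarrow\; P(b).
% If anything is permitted, everything is. Non-trivial semantics for
% free choice must therefore block one of these steps.
```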
We assess Meyer’s formalization of arithmetic in his , based on the strong relevant logic R and compare this with arithmetic based on a suitable logic of meaning containment, which was developed in Brady . We argue in favour of the latter as it better captures the key logical concepts of meaning and truth in arithmetic. We also contrast the two approaches to classical recapture, again favouring our approach in . We then consider our previous development of Peano arithmetic including primitive recursive functions, finally extending this work to that of general recursion.
In this paper we introduce a novel way of building arithmetics whose background logic is R. The purpose of doing this is to point in the direction of a novel family of systems that could be candidates for being the infamous R# that Meyer suggested we look for.
On his train ride home from the 1928 Harvard–Yale football game, Eben Byers fell and suffered a minor injury. To return himself to his pre-injury vigor, Byers followed the recommendation of his physician and started taking Radithor, a tonic of radium salts dissolved in water. Like many in the early 20th century, Byers and his physician may have heard assertions of sentences like (1) made by promoters of radioactive cure-alls like Radithor.