According to the Desiderative Lockean Thesis, there are necessary and sufficient conditions, stated in the terms of decision theory, for when one is truly said to want. I advance a new Desiderative Lockean view. My view is distinctive in being doubly context-sensitive. What a person is truly said to want varies by context, a fact that others attempt to capture by positing a single context-sensitive parameter to evaluate want ascriptions; I posit two. Only with a doubly context-sensitive view can we explain a range of facts that go unexplained by all other Desiderative Lockean views.
Inspired by the early Wittgenstein’s concept of nonsense (meaning that which lies beyond the limits of language), we define two different, yet complementary, types of nonsense: formal nonsense and pragmatic nonsense. The simpler notion of formal nonsense is initially defined within Tarski’s semantic theory of truth; the notion of pragmatic nonsense, in turn, is formulated within the context of the theory of pragmatic truth, also known as quasi-truth, as formalized by da Costa and his collaborators. While an expression will be considered formally nonsensical if the formal criteria required for the assignment of any truth-value (whether true, false, pragmatically true, or pragmatically false) to that sentence are not met, a (well-formed) formula will be considered pragmatically nonsensical if the pragmatic criteria (inscribed within the context of scientific practice) required for the assignment of any truth-value to that sentence are not met. Thus, in the context of the theory of pragmatic truth, any (well-formed) formula of a formal language interpreted on a simple pragmatic structure will be considered pragmatically nonsensical if the set of primary sentences of such structure is not well-built, that is, if it does not include the relevant observational data and/or theoretical results, or if it does include sentences that are inconsistent with such data.
It has been argued that moral assertions involve the possession, on the part of the speaker, of appropriate non-cognitive attitudes. Thus, uttering ‘murder is wrong’ invites an inference that the speaker disapproves of murder. In this paper, we present the results of four empirical studies concerning this phenomenon. We assess the acceptability of constructions in which that inference is explicitly canceled, such as ‘murder is wrong but I don’t disapprove of it’; and we compare them to similar constructions involving ‘think’ instead of ‘disapprove’—that is, Moore paradoxes (‘murder is wrong but I don’t think that it is wrong’). Our results indicate that constructions of the former type are largely infelicitous, although not as infelicitous as their Moorean counterparts.
SAT solvers can solve a wide array of problems, and the models and proofs of unsatisfiability emitted by SAT solvers can be checked by verified software. In this way, the SAT toolchain is trustworthy. However, many applications are not expressed natively in SAT and must instead be encoded into SAT. These encodings are often subtle, and implementations are error-prone. Formal correctness proofs are needed to ensure that implementations are bug-free. In this paper, we present a library for formally verifying SAT encodings, written using the Lean interactive theorem prover. Our library currently contains verified encodings for the parity, at-most-one, and at-most-k constraints. It also contains methods of generating fresh variable names and combining sub-encodings to form more complex ones, such as one for encoding a valid Sudoku board. The proofs in our library are general, and so this library serves as a basis for future encoding efforts.
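To make the kind of constraint the abstract mentions concrete, here is a minimal Python sketch of the standard pairwise at-most-one encoding into CNF clauses (DIMACS-style signed integer literals). This is not the paper's Lean library; the function names `at_most_one` and `satisfies` are illustrative inventions.

```python
from itertools import combinations

def at_most_one(variables):
    """Pairwise at-most-one encoding: for every pair of variables,
    emit a clause forbidding both from being true. Clauses are lists
    of DIMACS-style signed integers (-v means "v is false")."""
    return [[-a, -b] for a, b in combinations(variables, 2)]

def satisfies(clauses, assignment):
    """Check a total assignment (dict: var -> bool) against CNF clauses:
    every clause must contain at least one satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

clauses = at_most_one([1, 2, 3])
assert len(clauses) == 3  # C(3,2) binary clauses
# At most one variable true: encoding satisfied.
assert satisfies(clauses, {1: True, 2: False, 3: False})
# Two variables true: encoding falsified.
assert not satisfies(clauses, {1: True, 2: True, 3: False})
```

The pairwise encoding is quadratic in the number of variables; verified libraries like the one described also cover more economical encodings (e.g. for at-most-k), where the correctness proofs become correspondingly subtler.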
The alienation constraint on theories of well-being has been influentially expressed thus: ‘what is intrinsically valuable for a person must have a connection with what he would find in some degree compelling or attractive …. It would be an intolerably alienated conception of someone’s good to imagine that it might fail in any such way to engage him’ (Railton 1986: 9). Many agree this claim expresses something true, but there is little consensus on how exactly the constraint is to be understood. Here, I clarify the sense in which the quote offers a basic constraint on theories of well-being—a constraint that should be adopted by (e.g.) hedonists, desire satisfactionists, and objective list theorists alike. This constraint focuses on affective engagement, or positive affective stances in connection with a proposed good. I show that the constraint explains a near-universal intuition, and rules out a number of well-known theories of well-being.
Conditional statements are ubiquitous, from promises and threats to reasoning and decision making. By now, logicians have studied them from many different angles, both semantic and proof-theoretic. This paper suggests two more perspectives on the meaning of conditionals, one dynamic and one geometric, that may throw yet more light on a familiar and yet in some ways surprisingly elusive and many-faceted notion.
Explicating the concept of coherence and establishing a measure for assessing the coherence of an information set are two of the most important tasks of coherentist epistemology. To this end, several principles have been proposed to guide the specification of a measure of coherence. We depart from this prevailing path by challenging two well-established and prima facie plausible principles: Agreement and Dependence. Instead, we propose a new probabilistic measure of coherence that combines basic intuitions of both principles, but without strictly satisfying either of them. It is then shown that the new measure outperforms alternative measures in terms of its truth-tracking properties. We consider this feature to be central and argue that coherence matters because it is likely to be our best available guide to truth, at least when more direct evidence is unavailable.
I learned a lot from the comments on Part 3 and also this related thread on the Category Theory Community Server:
• Coalgebras, operational semantics and the Giry monad. I’d like to thank Matteo Cappucci, David Egolf, Tobias Fritz, Tom Hirschowitz, David Jaz Myers, Mike Shulman, Nathaniel Virgo and many others for help. …
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
In relevant logic, a conditional is provable only if its antecedent is relevant to its consequent—the provability of conditionals must respect relevance. What exactly respecting relevance amounts to has, however, been up for debate essentially ever since the notion was introduced. One of the more promising approaches to formally explicating respect for relevance is to articulate it in terms of variable sharing results. There are four such results in the extant literature, viz. ordinary variable sharing, strong variable sharing, depth relevance, and strong depth relevance. We’ll have more to say about these below. What’s important to note is that (a) each of these codifies a different way logics can be (or perhaps, can be said to be) relevant and (b) most of the results concerning variable-sharing have roughly the form ‘thus and so logics enjoy such and such a variable-sharing property’. What has been lacking until recently was a serious exploration of the mechanisms by which logics come to have these properties. And—apart from the tantalizing-but-tentative results in the work of Gemma Robles and José Méndez (see e.g. Robles and Méndez 2011; Méndez and Robles 2012)—there’s been little to no exploration of what the broadest possible class of logics enjoying any of these properties might be.
Anna Alexandrova has written a very important book about the philosophy and science of well-being. Many parts are illuminating, but I shall concentrate on what she has to say about philosophy. Alexandrova is highly critical of philosophy of well-being. Yet perhaps surprisingly (because I am a philosopher of well-being), I am generally sympathetic to her concerns. My qualms are with some of the stronger conclusions she draws from philosophy’s failings, conclusions that seem insufficiently motivated. I shall focus exclusively on her discussion of language and concepts in Chapter 1, where she argues that the language of well-being is not nearly as unified or coherent as philosophers assume, and where she ultimately defends what she calls a contextualist account of the meaning of well-being terms.
According to widely accepted views in metasemantics, the outputs of chatbots and other artificial text generators should be meaningless. They aren’t produced with communicative intentions and the systems producing them are not following linguistic conventions. Nevertheless, chatbots have assumed roles in customer service and healthcare, they are spreading information and disinformation and, in some cases, it may be more rational to trust the outputs of bots than those of our fellow human beings. To account for the epistemic role of chatbots in our society, we need to reconcile these observations. This paper argues that our engagement with chatbots should be understood as a form of prop-oriented make-believe; the outputs of chatbots are literally meaningless but fictionally meaningful. With the make-believe approach, we can understand how chatbots can provide us with knowledge of the world through quasi-testimony while preserving our metasemantic theories. This account also helps to connect the study of chatbots with the epistemology of scientific instruments.
Epistemicism associates vagueness with ignorance produced by semantic plasticity: the shiftiness of intensions in our language resulting from small changes in usage. The recent literature (Caie 2012; Magidor 2018; Yli-Vakkuri 2016) points to a missing piece in the epistemicist theory of vagueness, namely a clear account of the semantics of the definiteness operator Δ. The fundamentals of the epistemicist theory are well understood. However, the technical work of defining the definiteness operator has proven difficult. There are several desiderata that we would like Δ to satisfy. For instance, we would like the epistemicist notion of ‘definiteness’ to interact well with modal operators and validate intuitive principles like ‘necessarily, if φ is definitely true, then φ is true’. Providing an account that would meet all such desiderata has eluded the epistemicists so far. In this paper, I present a novel version of a multidimensional model inspired by the work of Robert Stalnaker and David Kaplan. Using this model, I provide an account of epistemicist definiteness that meets our desiderata.
I argue for the Cooperative Warrant Thesis (CWT), according to which the determinants of testimonial contents in communication are given by the practical requirements of cooperative action. This thesis distances itself from conventionalist views, according to which testimony must be strictly bounded by conventions of speech. CWT proves explanatorily better than conventionalism on several accounts. It offers a principled and accurate criterion to distinguish between testimonial and non-testimonial communication. In being goal-sensitive, this criterion captures the role of weak and robust cooperation in determining the contents to which speakers testify or fail to testify. And, finally, it yields a principled explanation of why testimony entails the epistemic commitments that distinguish it as an epistemic source.
I argue that emojis are essentially little pictures, rather than words, gestures, expressives, or diagrams. An emoji means that the world looks like that, from some viewpoint. I flesh out a pictorial semantics in terms of geometric projection with abstraction and stylization. Since such a semantics delivers only very minimal contents I add an account of pragmatic enrichment, driven by coherence and non-literal interpretation. The apparent semantic distinction between emojis depicting entities and those depicting facial expressions I analyze as a difference between truth-conditional and use-conditional pictorial content: the former depict what the world of evaluation looks like, while the latter depict what the utterance context looks like.
In this article, I argue that if tacit knowledge of grammar is analyzable in functional-computational terms, then it cannot ground linguistic meaning, structure, or sound. If to know or cognize a grammar is to be in a certain computational state playing a certain functional role, there can be no unique grammar cognized. Satisfying the functional conditions for cognizing a grammar G entails satisfying those for cognizing many grammars disagreeing with G about expressions’ semantic, phonetic, and syntactic values. This threatens the Chomskyan view that expressions have such values for speakers because they cognize grammars assigning them those values. For if this is true, semantics, syntax, and phonology must be indeterminate, thanks to the indeterminacy of grammar-cognizing (qua functional-computational state). So, the fact that a speaker cognizes a grammar cannot explain the determinate character of their language.
Over the past decade, an inspiring and potent new idea has taken root in philosophy, whose implications for epistemology and for our understanding of the mind we are only just beginning to appreciate. I’m talking about the notion that the mind is not a passive data churner but an active and searching inquirer, driven by curiosity and wonder. The thoughts and views populating this inquiring mind are shaped as much by the questions that give rise to them as by the information that they carry. Information is no longer the sole currency of thought: the mind is abuzz with questions.
Among the huge variety of many-valued logics in the literature, some of the recent developments are related to a family of many-valued logics falling under the label of infectious logics. The typical examples are three-valued logics discussed since the 1930s by Dmitri Bochvar, followed by Søren Halldén and Stephen Cole Kleene, and the label infectious is justified by the fact that these systems share the feature of having one of the truth values be infectious, or contaminating, with respect to negation, conjunction, and disjunction. In this paper, we are interested in and motivated by a new interpretation of Weak Kleene logic (WK, hereafter) advanced by Beall. For the purpose of clarifying the aim of this paper, let us briefly recall Beall’s interpretation.
It has now been robustly and cross-linguistically established that semantically, the future tense is not the dual of the past tense (see, among many others, Enç 1987, Bertinetto 1979, Copley 2009, Mari 2009a,b, 2010, De Saussure and Morency 2012, Giannakidou and Mari 2013b, 2018a, Frana and Menéndez-Benito 2019, Ippolito and Farkas 2019, Escandell-Vidal 2021; pace Prior 1962, Kissine 2008). Unlike the past, the future is open, and, even if we were to consider the future as metaphysically settled, we cannot deny that we cannot know the future. A sentence in the future tense is perceived as a prediction that could turn out to be true or false (Huddleston and Pullum 2005, MacFarlane 2003).
In Part 2 we saw there are some choices required when trying to pick a 12-note scale in just intonation:
| interval | choices |
|---|---|
| major 2nd | 10/9 or 9/8 |
| tritone | 25/18 or 45/32 or 64/45 or 36/25 |
| minor 7th | 16/9 or 9/5 |
Theoretically, there are
A recent view about disagreement (Karczewska 2021) takes it to consist in the tension arising from proposals and refusals of these proposals to impose certain commitments on the interlocutors in a conversation. This view has been proposed with the aim of solving the problem that “faultless disagreement” – a situation in which two interlocutors are intuited to be both in disagreement and not at fault – poses for contextualism about predicates of taste.
I haven’t really come to bury Quine but to praise him. He did have the idea. He did make some unfortunate mistakes which are good examples of mistakes others can make working with this theory. I don’t think NF(U) is so terribly difficult: but it requires a kind of discipline which is not needed in ordinary set theory.
Generic generalizations (a neutral term for what Krifka et al. call characterizing (generic) sentences) are thought in the literature to encode information about essence, definition, normality, function, stability, etc., in relation to a particular population or range of instances. But on the surface, generic generalizations seem more innocuous. They combine (e.g.) a bare plural NP with a predicate that properly applies to atomic individuals in that NP’s extension. They seem to specify (roughly speaking) a set along with a predicate that applies to members of that set. A generalization is intended, but its precise scope or extent is not specified (using all, most, etc.).
In this document, I will reveal the exact equivalence of Russell’s ramified theory of types from Principia to a far simpler system, the simple typed theory of sets with a predicativity restriction, and I will demonstrate that the axiom of reducibility in this context is exactly equivalent to something much simpler, namely, the axiom of set union.
Let me consider the case of disjunction in the antecedent:

(1) a. If I had struck this match and it had been dry or wet, it would have lit.
    b. If I had struck this match, it would have lit.

Assume the contextual domain contains only worlds in which the match is dry, but both worlds in which I strike it and worlds in which I don’t. Then the antecedents in (1a) and (1b), which are after all equivalent, will both be compatible with the contextual domain. So there would be no need for context change in either case. The context-dynamic account, as developed here, seems just as clueless about the apparent non-equivalence of these two conditionals as the static semantic account. But I presented (1) earlier as prima facie evidence that the context is easily shifted. It now seems that it shifts even more easily than predicted by my own account. What to do? I will discuss two options: one cautiously appeals to pragmatics, the other tries to tie things down technically. Let me sketch the technical option first.
Neil Tennant’s core logic is a type of bilateralist natural deduction system based on proofs and refutations. We present a proof system for propositional core logic, explain its connections to bilateralism, and explore the possibility of using it as a type theory, in the same kind of way intuitionistic logic is often used as a type theory. Our proof system is not Tennant’s own, but it is very closely related. The difference matters for our purposes, and we discuss this. We then turn to the question of strong normalization, showing that although Tennant’s proof system for core logic is not strongly normalizing, our modified system is.
Partial reasons are considerations in favor of something that, taken individually, are not sufficient to establish an obligation. I consider to what extent partial reasons are reasons, and why they cannot be reduced to or identified with pro tanto reasons. I lay out two approaches to the content of reasons, the flat theory and the structured theory. I argue that parts of reasons are not partial reasons, by showing that natural ways to represent reason parts in the flat theory and the structured theory lead to overgeneration problems with regard to partial reasons. I then formulate two notions of partial reasons: one based on a notion of partial support, which is in turn captured by the notions of full support and partial content, and one based on the notion of inexact verification. I prove under which conditions the two notions of partial reasons (based on partial content, and based on inexact verification) coincide.
Now let’s dive into the beauties of 5-limit tuning—that is, tuning systems with frequency ratios that are products of powers of only the primes 2, 3 and 5:
We’ve already tackled 3-limit tuning, where we only got to use the primes 2 and 3. …
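The defining property of 5-limit tuning is easy to check mechanically: a frequency ratio qualifies just in case its numerator and denominator factor into powers of 2, 3, and 5 alone. Here is a small Python sketch that verifies this for the interval candidates listed in the earlier table (the helper names `prime_limit_factors` and `is_five_limit` are my own, not from the post):

```python
from fractions import Fraction

def prime_limit_factors(q: Fraction) -> set:
    """Return the set of primes dividing the numerator or denominator
    of q (in lowest terms), by trial division."""
    n = q.numerator * q.denominator
    primes, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            primes.add(p)
            n //= p
        p += 1
    if n > 1:
        primes.add(n)
    return primes

def is_five_limit(q: Fraction) -> bool:
    """A ratio is 5-limit iff only the primes 2, 3, 5 appear."""
    return prime_limit_factors(q) <= {2, 3, 5}

# Every interval candidate from the scale-building table is 5-limit:
for r in ["10/9", "9/8", "25/18", "45/32", "64/45", "36/25", "16/9", "9/5"]:
    assert is_five_limit(Fraction(r))

# A 7-limit ratio such as the harmonic seventh 7/4 is not:
assert not is_five_limit(Fraction(7, 4))
```

The same check with `{2, 3}` in place of `{2, 3, 5}` recovers the 3-limit (Pythagorean) case from the previous part.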
According to a well-established view of desire satisfaction, a desire that p is satisfied iff p obtains. Call this the ‘standard view’. The standard view is purely semantic: a desire’s satisfaction condition consists in the truth of the embedded proposition that gives the desire’s content. This paper aims to defend the standard view against two frequently discussed problems: the problem of underspecification and desires conditional on their own persistence. The former holds that the standard view cannot capture the specific ways of desire satisfaction. The latter holds that the standard view does not provide sufficient conditions for the satisfaction of desires conditional on their own persistence.
This paper is an essay in what J. L. Austin called “linguistic phenomenology”— a philosophical investigation of ordinary language in which we are meant to be “looking not merely at words … but also at the realities we use the words to talk about” (Austin 1956-7, p. 8). Its focus is the nature of causation, and in particular the variety of forms of causation that there can (be said to) be. Here is how I will proceed.