This essay examines the philosophical significance of Ω-logic in Zermelo–Fraenkel set theory with choice (ZFC). The categorical duality between coalgebra and algebra permits Boolean-valued algebraic models of ZFC to be interpreted as coalgebras. The hyperintensional profile of Ω-logical validity can then be countenanced within a coalgebraic logic. I argue that the philosophical significance of the foregoing is twofold. First, because the epistemic, modal, and hyperintensional profiles of Ω-logical validity correspond to those of second-order logical consequence, Ω-logical validity is genuinely logical. Second, the foregoing provides a hyperintensional account of the interpretation of mathematical vocabulary.
Citation: Ellerman, D. A New Logic, a New Information Measure, and a New Information-Based Approach to Interpreting Quantum Mechanics.
In this paper, we discuss J. Michael Dunn’s foundational work on the semantics for First Degree Entailment logic (FDE), also known as Belnap–Dunn logic (or Sanjaya–Belnap–Smiley–Dunn Four-valued Logic, as suggested by Dunn himself). More specifically, building on the framework due to Dunn, we sketch a broad picture aimed at a systematic understanding of contra-classicality. Our focus will be on a simple propositional language with negation, conjunction, and disjunction, and we will systematically explore variants of FDE, K3, and LP by tweaking the falsity condition for negation.
In this paper, we apply a Herzberger-style semantics to deal with the question: is the de Finetti conditional a conditional? The question is pressing, in view of the inferential behavior of the de Finetti conditional: it allows for inferences that seem quite unexpected for a conditional. The semantics we advance here for the de Finetti conditional is simply the classical semantics for the material conditional, with a further dimension whose understanding depends on the kind of application one has in mind. We discuss such possible applications and how they cover ground already advanced in the literature.
Connexive logic is a topic in non-classical logic that is receiving a lot of attention these days; however, that is not to say that it is a recent topic by any means. Some have claimed that it has its historical roots in antiquity; others have disputed this. Much less controversial is the important role of two connexive logical systems in the much more recent history of the subject. The first is the system CC1, due to Richard Angell and Storrs McCall, which marks, in many ways, the modern inception of connexive logic as a unified topic in logical research. While that topic never quite disappeared after the seminal work by Angell and McCall, it was only after Heinrich Wansing started making his contributions some forty years later that it truly started to blossom. For that reason alone, the first connexive logic Wansing introduced, which is called C, has an indisputably important place in the history of connexivity. Moreover, it offered one of the most elegant semantics and proof systems for connexive logics to date (see also [22, p.178]).
In his “The simple argument for subclassical logic,” Jc Beall advances an argument that led him to take FDE as the one true logic (the latter point is made explicit in his “FDE as the One True Logic”). The aim of this article is to point out that, if we follow Beall’s line of reasoning for endorsing FDE, there are at least two additional reasons to think that FDE is too strong for Beall’s purposes. In fact, we claim that Beall should consider another, weaker subclassical logic as the logic adequate for his project. To this end, we first briefly present Beall’s argument for FDE. Then, we discuss two specific topics that seem to motivate weakening FDE. We then introduce a subsystem that enjoys all the benefits of Beall’s suggestion.
Abstract. Sören Halldén’s logic of nonsense is one of the best-known many-valued logics available in the literature. In this paper, we discuss Peter Woodruff’s as yet rather unexplored attempt to advance a version of such a logic built on top of a constructive logical basis. We start by recalling the basics of Woodruff’s system and by bringing to light some of its notable features. We then go on to elaborate on some of the difficulties attached to it; on our way to offering a possible solution to such difficulties, we discuss the relation between Woodruff’s system and two-dimensional semantics for many-valued logics, as developed by Hans Herzberger. Keywords: Peter Woodruff, logic of nonsense, constructive logic, Hans Herzberger, two-dimensional semantics.
It is often assumed that concepts from the formal sciences, such as mathematics and logic, have to be treated differently from concepts from non-formal sciences. This is especially relevant in cases of concept defectiveness, as in the empirical sciences defectiveness is an essential component of larger disruptive or transformative processes such as concept change or concept fragmentation.
This paper explores how, given a proof, we can systematically transform it into a proof that contains no irrelevancies and is as strong as possible. I define a weaker and a stronger notion of what counts as a proof with no irrelevancies, calling them perfect proofs and gaunt proofs, respectively. Using classical core logic to study classical validities and core logic to study intuitionistic validities, I show that every core proof or classical core proof can be transformed into a perfect proof. In a sequel paper, I show how proofs in core logic can also be transformed into gaunt proofs, and I observe that this property fails for classical core logic.
This paper is the second part of a series exploring how, given a proof, we can inductively transform it into a proof that contains no irrelevancies and is as strong as possible. In the prequel paper, I defined a weaker and a stronger notion of what counts as a proof with no irrelevancies, calling them perfect proofs and gaunt proofs, respectively. There, I showed how proofs in core logic and classical core logic can be transformed into perfect proofs. In this paper I study gaunt proofs. I show how proofs in core logic can be inductively transformed into gaunt core proofs, but that this property fails for the natural deduction system of classical core logic.
Set-theoretic potentialism is the view that the universe of sets is potentially infinite: it can always, necessarily, be expanded to a more inclusive universe of sets. One version of this view is that the set-theoretic universe can always be expanded to a forcing extension in particular. This view has primarily been studied from a technical point of view, however; in this chapter I explore what philosophical conceptions of set theory might motivate forcing potentialism. I begin by raising an explanatory challenge for any form of width potentialism based on the iterative conception of set, and then sharpen this challenge to argue that any broadly iterative conception of set is inconsistent with the claim that the possible width extensions of a universe are exactly its forcing extensions. Finally, I suggest one possible way of meeting the explanatory challenge by disentangling the iterative conception of set-formation from the combinatorial conception of sethood. This makes room for what I call the iterative logical conception of set. I sketch a toy model of a potentialist system which is both height- and width-potentialist, and where the width extensions include all (but not only) the forcing extensions.
Being relevant to some topic can be informally understood as making a difference or having something to contribute. Given a sequent ∆ ⇒ Γ, we can say, somewhat schematically, that a component of the sequent is relevant to the sequent when it contributes to the validity of the sequent. Different ways of making precise the idea of contributing to the validity and different understandings of the components of a sequent lead to a hierarchy of explications of relevance. I identify four key explications, called gaunt validity, perfect validity, relevant validity, and perfectibility. Each is shown to enjoy an interesting variable sharing property. Furthermore, if we begin with a standard sequent calculus for classical logic and introduce some simple constraints on the rules, the result is a fragment of classical logic that proves exactly the gauntly valid sequents.
Relevant logics, in the tradition coming out of the work of Anderson and Belnap (1975), are concerned with implication. Relevant logics constitute a large family with great variety, even restricting attention to the comparatively well known logics. Perhaps unsurprisingly, different philosophical motivations have been given for relevant logics, targeted at different subfamilies of the broader group. In this article I will survey the different philosophical views motivating relevant logics, indicating how each secures relevance, which logics it most clearly supports, and which presentation of the logic it most naturally favors.
I shall discuss here the topics of existence and nonexistence, of what it is for an individual to be actual and what it is for an individual not to be actual. What I shall have to say about these matters offers little toward our primordial need to discover the Meaning of Existence, but I hope to say some things that will satisfy the more modest ambition of those of us who wish to know the meaning of ‘existence’. I shall also say some things that bear on issues in the grandest traditions of Philosophy.
My goal in this paper is to tentatively sketch, and try to defend, some observations regarding the ontological dignity of object references, as they may be used from within a formalized language.
I investigate whether Wittgenstein’s “weakly exclusive” Tractarian semantics (as reconstructed by Rogers and Wehmeier 2012) is compositional. In both Tarskian and Wittgensteinian semantics, one has the choice of either working exclusively with total variable assignments or allowing partial assignments; the choice has no bearing on the compositionality of Tarskian semantics, but turns out to make a difference in the Wittgensteinian case. Some philosophical ramifications of this observation are discussed.
Finite set theory FST consists of all of the axioms of ZF without the axiom of infinity and has the model HF of hereditarily finite sets as its canonical model. In this paper, we shall consider all combinatorially possible systems corresponding to subsets of the axioms of finite set theory and develop a general technique called axiom closure of graphs; for each of the 2^6 = 64 combinatorially possible systems we shall either show that it cannot hold in a transitive submodel of the hereditarily finite sets or provide a concrete model in which it holds (cf. Table 1).
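Since each candidate system is just a subset of the six axioms, the 2^6 = 64 count can be checked directly. The sketch below is illustrative only; the axiom labels are hypothetical placeholders, not the paper's actual axiom list.

```python
from itertools import combinations

# Hypothetical labels for six axioms of finite set theory; the paper's
# actual selection may differ.
AXIOMS = ["Extensionality", "Pairing", "Union", "Powerset",
          "Separation", "Foundation"]

# Every combinatorially possible system corresponds to a subset of the axioms,
# from the empty theory up to full FST.
systems = [frozenset(subset)
           for r in range(len(AXIOMS) + 1)
           for subset in combinations(AXIOMS, r)]

print(len(systems))  # 2**6 = 64 candidate systems
```

The paper's contribution is then, for each of these 64 subsets, either a transitive submodel of HF satisfying it or a proof that none exists.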
Identity is a peculiar notion. On the one hand, surely everything is just the thing that it is and nothing else. But on the other hand, we are tempted to think of it as a relation. After all, it appears as a relational predicate in formal languages (e.g. ‘x = y’) and in natural languages (e.g. ‘Eric is identical to George’). However, the idea of identity as a relation that might hold between two or more things is absurd for, after all, the whole idea of identity is that it concerns just one thing. At best, we can think of it as a relation that everything bears only to itself.
Rational agents seem more confident in any possible event than in an impossible event. But if rational credences are real-valued, then there are some possible events that are nonetheless assigned credence zero. How, then, do we differentiate these events from impossible events when we order events? De Finetti (1975), Hájek (2012), and Easwaran (2014) suggest that, when ordering events, conditional credences and subset relations are as relevant as unconditional credences. In this paper I present a counterexample to all of their proposals. While their proposals order possible and impossible events correctly, they deliver the wrong verdict for disjoint possible events assigned equal positive credence.
I address some major critical arguments against a constructive truth concept and intuitionist logic. I put the notions of in-principle possibilities and valid constructions (mathematical proofs) under scrutiny. I argue that the objections against a constructive account of truth miss their target and thus are not decisive. Ultimately, constructivism is at least as cogent and natural a stance as realism.
We present a new notion of mereological sum that is inequivalent to extant ones in the literature and does not fall prey to reasonable complaints that can be raised against some such notions. In light of this notion, we then revisit the relation between mereological universalism and extensionalism. In particular, we argue that Varzi’s claim that universalism entails extensionalism is justified only insofar as one sticks to Varzi’s notion of sum. In effect, we distinguish different versions of extensionalism and argue that universalism—when cashed out in terms of our new notion of sum—entails some versions but not others. Most significantly, it does not entail extensionality of proper parthood. In the light of the above we set forth a new mereological system, Universalist Quasi-Supplemented Mereology, that can be considered a worthy alternative to different mereological systems in the literature.
In a recent paper, the question of determining the fraction of binary trees that contain a fixed pattern known as the snowflake was posed. We show that this fraction goes to 1, providing two very different proofs: a purely combinatorial one that is quantitative and specific to this problem; and a proof using branching process techniques that is less explicit, but also much more general, as it applies to any fixed patterns and can be extended to other trees and networks. In particular, it follows immediately from our second proof that the fraction of d-ary trees (resp. level-k networks) that contain a fixed d-ary tree (resp. level-k network) tends to 1 as the number of leaves grows.
In studies of linguistic meaning, it is often assumed that the relevant expressions exhibit many semantic types: <e> for entity denoters; <t> for truth-evaluable sentences; and the non-basic types <a, b> such that <a> and <b> are types. Expressions of a type <a, b>—e.g., <e, t> or <<e, t>, <<e, t>, t>>—are said to signify functions, from things of the sort associated with expressions of type <a> to things of the sort associated with expressions of type <b>. On this view, children acquire languages that are importantly like the language that Frege invented to study the foundations of arithmetic. I think this conception of human linguistic meaning overgenerates wildly, even distinguishing—as we should—competence from performance. I sketch an alternative, defended elsewhere, to illustrate a broader point: when offering theories of natural languages, we shouldn’t be surprised if vocabulary designed for other purposes is inadequate, and attention to relevant phenomena motivates a spare semantic typology.
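As a rough illustration of how such a typed semantics composes, here is a toy sketch of the standard textbook treatment: the model, the mini-lexicon, and the entry for "every" below are illustrative assumptions, not taken from the paper.

```python
# A toy extensional model: type <e> = entities, type <t> = booleans,
# and <a, b> = functions from <a>-things to <b>-things.

entities = {"Ann", "Bob", "Cat"}

# <e, t>: predicates map entities to truth values.
smokes = lambda x: x in {"Ann", "Bob"}
person = lambda x: x in {"Ann", "Bob"}

# <<e, t>, <<e, t>, t>>: a generalized quantifier takes a restrictor
# predicate and a scope predicate and returns a truth value.
def every(restrictor):
    return lambda scope: all(scope(x) for x in entities if restrictor(x))

print(every(person)(smokes))  # True: every person smokes in this model
```

The point the abstract contests is whether this Frege-style typology, however elegant, is the right model of what children actually acquire.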
The paper intends to contribute towards a more realistic treatment and formalization of the abilities of players to achieve objectives in multi-player games under the incomplete, imperfect, or simply wrong information that they may have about the game and about the course of the play. In particular, we aim to develop a logical formalism for dealing with the interplay between the dynamics of information and the dynamics of abilities. We take into account both the a priori information of players with respect to the game structure and the empirical information that players develop over the course of an actual play, and we associate with these, respectively, information relations and notions of ‘a priori’ and ‘empirical’ strategies and strategic abilities.
Adonai Sant’Anna raised some criticisms of the theory of quasi-sets; in particular, he asked why there is no theory of quasi-sets that does not presuppose the existence of atoms. In this paper we present a sketch of such a theory. Along the way, we comment on some of Sant’Anna’s arguments and try to answer at least part of them.
The question of whether the use of a certain method or axiom is necessary in order to prove a given theorem is widespread in mathematics. Two historical examples are particularly prominent: the parallel postulate in Euclidean geometry, and the axiom of choice in set theory. In both cases, one demonstrates that the axiom is indeed necessary by proving a “reversal” from the theorem to the axiom: that is, by assuming the theorem and deriving the axiom from it, relative to a set of background assumptions known as the base theory. Reverse mathematics is a program in mathematical logic that seeks to give precise answers to the question of which axioms are necessary in order to prove theorems of “ordinary mathematics”: roughly speaking, those concerning structures that are either themselves countable, or which can be represented by countable objects.
Regular resolution is a refinement of the resolution proof system requiring that no variable be resolved on more than once along any path in the proof. It is known that there exist sequences of formulas that require exponential-size proofs in regular resolution while admitting polynomial-size proofs in resolution. Thus, with respect to the usual notion of simulation, regular resolution is separated from resolution. An alternative, and weaker, notion for comparing proof systems is that of an “effective simulation,” which allows the translation of the formula along with the proof when moving between proof systems. We prove that regular resolution is equivalent to resolution under effective simulations. As a corollary, we recover in a black-box fashion a recent result on the hardness of automating regular resolution.
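To make the resolution rule concrete, here is a minimal sketch (an illustrative assumption, not the paper's machinery): clauses are sets of signed literals, and resolving two clauses on a variable removes the complementary pair and merges the rest. In the tiny derivation below, each variable is resolved on at most once along the path, so the derivation is regular.

```python
# Clauses as frozensets of signed literals: ("x", True) stands for x,
# ("x", False) for its negation.

def resolve(c1, c2, var):
    """Resolve clauses c1 and c2 on variable var (c1 must contain var
    positively and c2 negatively)."""
    assert (var, True) in c1 and (var, False) in c2
    return (c1 - {(var, True)}) | (c2 - {(var, False)})

# Derive the empty clause from {x, y}, {not-x, y}, {not-y}:
a = frozenset({("x", True), ("y", True)})
b = frozenset({("x", False), ("y", True)})
c = frozenset({("y", False)})

step1 = resolve(a, b, "x")      # {y}
step2 = resolve(step1, c, "y")  # {}: the empty clause, so the set is unsat
print(step2 == frozenset())     # True
```

Regular resolution forbids resolving on the same variable twice along any root-to-leaf path; the effective-simulation result says this restriction costs nothing once the formula may be translated along with the proof.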
Unrelated Announcement: The Call for Papers for the 2024 Conference on Computational Complexity is now out! Submission deadline is Friday February 16. Every month or so, someone asks my opinion on the simulation hypothesis. …
I just set myself this task: Without moving my middle and index fingers, I would wiggle the ring finger of my right hand twenty times in ten seconds. I then fulfilled this task (holding the middle and index fingers still with my left hand). …
Plural quantification is meant to be a logical way of avoiding some technical and/or conceptual difficulties with sets and second-order quantification. Instead of quantifying over one thing, one quantifies over pluralities. …