In this paper, we consider the Baire property of the eventually different topology as a regularity property for sets of reals and investigate the logical strength of the statements “Every ∆¹₂ set has the Baire property in the eventually different topology” and “Every Σ¹₂ set has the Baire property in the eventually different topology”. The latter statement turns out to be equivalent to “ω1 is inaccessible by reals”.
Sentences about logic are often used to show that certain embedding expressions (attitude verbs, conditionals, etc.) are hyperintensional. Yet it is not clear how to regiment “logic talk” in the object language so that it can be compositionally embedded under such expressions. In this paper, I develop a formal system called hyperlogic that is designed to do just that. I provide a hyperintensional semantics for hyperlogic that doesn’t appeal to logically impossible worlds, as traditionally understood, but instead uses a shiftable parameter that determines the interpretation of the logical connectives. I argue this semantics compares favorably to the more common impossible worlds semantics, which faces difficulties interpreting propositionally quantified logic talk.
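A toy illustration of the shiftable-parameter idea (the encoding, the operator name `according_to`, and the deviant interpretation are illustrative assumptions, not the paper's own system): evaluation is relativized to an interpretation of the connectives, and an "according to" operator shifts that parameter, so the same formula can get different values at the same world under different interpretations.

```python
# Interpretations assign meanings to the connectives; evaluation carries an
# interpretation parameter that embedding operators can shift.
classical = {"and": lambda x, y: x and y, "not": lambda x: not x}
# A hypothetical deviant interpretation that trivializes negation:
trivial_not = {"and": lambda x, y: x and y, "not": lambda x: True}

def ev(phi, world, interp):
    """Evaluate a formula at a world relative to an interpretation."""
    op = phi[0]
    if op == "atom":
        return world[phi[1]]
    if op == "not":
        return interp["not"](ev(phi[1], world, interp))
    if op == "and":
        return interp["and"](ev(phi[1], world, interp),
                             ev(phi[2], world, interp))
    if op == "according_to":  # shifts the interpretation parameter
        return ev(phi[2], world, phi[1])
    raise ValueError(op)

w = {"p": True}
p = ("atom", "p")
contradiction = ("and", p, ("not", p))
print(ev(contradiction, w, classical))                                 # False
print(ev(("according_to", trivial_not, contradiction), w, classical))  # True
```

The two results differ at one and the same possible world, which is the sense in which the parameter, rather than a stock of impossible worlds, carries the hyperintensional distinctions.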
Dynamic Causal Decision Theory (EDC, chs. 7 and 8)
Posted on Thursday, 23 Sep 2021. Pages 201–211 and 226–233 of Evidence, Decision and Causality present two great puzzles showing that CDT appears to invalidate some attractive principles of dynamic rationality. …
Yet we know from syntax and crosslinguistic work that conditionals can also be formed with ‘if’-clauses that modify the verb (‘V if S’), as in (2), or a noun (‘N if S’), as in (4). Tests such as the VP-ellipsis and Condition C data in (3), and the coordination and island data in (5), confirm that the ‘if’-clause is a constituent of the verb phrase and noun phrase, respectively, rather than scoping over the rest of the sentence (e.g., Lasersohn 1996, Bhatt & Pancheva 2006).
This paper aims to shed light on the relation between Boltzmannian statistical mechanics and Gibbsian statistical mechanics by studying the Mechanical Averaging Principle, which says that, under certain conditions, Boltzmannian equilibrium values and Gibbsian phase averages are approximately equal. What are these conditions? We identify three conditions each of which is individually sufficient (but not necessary) for Boltzmannian equilibrium values to be approximately equal to Gibbsian phase averages: the Khinchin condition, and two conditions that result from two new theorems, the Average Equivalence Theorem and the Cancelling Out Theorem. These conditions are not trivially satisfied, and there are core models of statistical mechanics, the six-vertex model and the Ising model, in which they fail.
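Schematically, and in notation assumed here rather than taken from the paper, the Mechanical Averaging Principle relates the Gibbsian microcanonical phase average of a macrovariable f to its Boltzmannian equilibrium value:

```latex
\langle f \rangle_{\mathrm{mic}}
  \;=\; \int_{\Gamma} f(x)\,\rho_{\mathrm{mic}}(x)\,\mathrm{d}x
  \;\approx\; f_{\mathrm{eq}}
```

where Γ is the accessible phase space and ρ_mic the microcanonical distribution; each of the three conditions mentioned above suffices for the approximation to hold.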
I remember the night I first discovered the meaning of the word worship. That morning I had been to church and had gotten into a brief discussion with the Pastor about what he kept calling the 'worthiness of God.' I remember thinking that this phrase seemed odd to me and I wasn’t sure what to make of it. Oh, I had heard it used before. It was the sort of thing one nodded one's head to and then went on one's way. Like talk about the 'glory' of God. I was never sure what that meant either, and given all the violent things God was sometimes said to do for the sake of his 'glory' I wasn't sure I cared to know. But now I began wondering about this phrase. Worthy? Was God worthy? Worthy of what?
We consider a learning agent in a partially observable environment, with which the agent has never interacted before, and about which it learns both what it can observe and how its actions affect the environment. The agent can learn about this domain from experience gathered by taking actions in the domain and observing their results. We present learning algorithms capable of learning as much as possible (in a well-defined sense) both about what is directly observable and about what actions do in the domain, given the learner’s observational constraints. We differentiate the level of domain knowledge attained by each algorithm, and characterize the type of observations required to reach it. The algorithms use dynamic epistemic logic (DEL) to represent the learned domain information symbolically. Our work continues that of Bolander and Gierasimczuk (2015), which developed DEL-based learning algorithms to learn domain information in fully observable domains.
We present nine questions related to the concept of negation and, in passing, we refer to connections with the essays in this special issue. The questions were submitted to one of the most eminent logicians who contributed to the theory of negation, Prof. (Jon) Michael Dunn, but, unfortunately, Prof. Dunn was no longer able to answer them. Michael Dunn passed away on 5 April 2021, and the present special issue of Logical Investigations is dedicated to his memory. The questions concern (i) negation-related topics that have particularly interested Michael Dunn or to which he has made important contributions, (ii) some controversial aspects of the logical analysis of the concept of negation, or (iii) simply properties of negation in which we are especially interested. Though sadly and regrettably unanswered by the distinguished scholar who intended to reply, the questions remain and might stimulate answers by other logicians and further research.
We implement a recent characterization of metaphysical indeterminacy in the context of orthodox quantum theory, developing the syntax and semantics of two propositional logics equipped with determinacy and indeterminacy operators. These logics, which extend a novel semantics for standard quantum logic that accounts for Hilbert spaces with superselection sectors, preserve different desirable features of quantum logic and logics of indeterminacy. In addition to comparing the relative advantages of the two, we also explain how each logic answers Williamson’s challenge to any substantive account of (in)determinacy: For any proposition p, what could the difference between “p” and “it’s determinate that p” ever amount to?
I offer a case that quantum query complexity still has loads of enticing and fundamental open problems—from relativized QMA versus QCMA and BQP versus IP, to time/space tradeoffs for collision and element distinctness, to polynomial degree versus quantum query complexity for partial functions, to the Unitary Synthesis Problem and more.
While evenness is understood to be maximal if all types (species, genotypes, alleles, etc.) are represented equally (via abundance, biomass, area, etc.), its opposite, maximal unevenness, either remains conceptually in the dark or is conceived as the type distribution that minimizes the applied evenness index. The latter approach, however, frequently leads to conceptual inconsistency due to the fact that the minimizing distribution is not specifiable or is monomorphic. The state of monomorphism, however, is indeterminate in terms of its evenness/unevenness characteristics. Indeed, the semantic indeterminacy also shows up in the observation that monomorphism represents a state of pronounced discontinuity for the established evenness indices. This serious conceptual inconsistency is latent in the widely held idea that evenness is an independent component of diversity. As a consequence, the established evenness indices largely appear as indicators of relative polymorphism rather than as indicators of evenness.
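As a toy numerical illustration (using Pielou's evenness J = H/ln S, a standard index not necessarily among those the paper analyzes), the discontinuity at monomorphism is easy to exhibit: J tends to 0 as a two-type community approaches monomorphism, yet the monomorphic state itself has ln S = 0, so J is undefined there and is often conventionally set to 1.

```python
import math

def shannon_H(p):
    """Shannon entropy of an abundance distribution (zero entries ignored)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def pielou_J(p):
    """Pielou's evenness J = H / ln(S), with S the number of types present.
    At monomorphism (S = 1) the ratio is 0/0; we adopt the common
    convention J = 1 there, making the discontinuity visible."""
    S = sum(1 for x in p if x > 0)
    if S == 1:
        return 1.0  # conventional value at monomorphism
    return shannon_H(p) / math.log(S)

# Let one of two types vanish: J -> 0, but the limit state gets J = 1.
for eps in (0.1, 0.01, 0.001, 1e-6):
    print(eps, round(pielou_J([1 - eps, eps]), 4))
print(0.0, pielou_J([1.0, 0.0]))
```

The jump from values near 0 to the conventional value 1 at the monomorphic endpoint is exactly the kind of "pronounced discontinuity" the abstract describes.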
I explore, from a proof-theoretic perspective, the hierarchy of classical and paraconsistent logics introduced by Barrio, Pailos and Szmuc in (Journal of Philosophical Logic, 49, 93–120, 2021). First, I provide sequent rules and axioms for all the logics in the hierarchy, for all inferential levels, and establish soundness and completeness results. Second, I show how to extend those systems with a corresponding hierarchy of validity predicates, each of which is meant to capture “validity” at a different inferential level. Then, I point out two potential philosophical implications of these results. (i) Since the logics in the hierarchy differ from one another on the rules, I argue that each such logic maintains its own distinct identity (contrary to arguments like the one given by Dicher and Paoli in 2019). (ii) Each validity predicate need not capture “validity” at more than one metainferential level. Hence, there are reasons to deny the thesis (put forward in Barrio, E., Rosenblatt, L. & Tajer, D. (Synthese, 2016)) that the validity predicate introduced by Beall and Murzi in (Journal of Philosophy, 110(3), 143–165, 2013) has to express facts not only about what follows from what, but also about the metarules, etc.
In the mid 1980s, I was into ‘semantic automata’, (van Benthem, 1986), classifying linguistic quantifiers in terms of the complexity of their verification procedures on Venn diagrams. The next step in developing this ‘procedural semantics’ was an analysis of linguistic expressions that depend on the underlying structure of the object domain, and so, I developed an interest in tree automata whose computation rule is recursive in a given tree ordering. This led to an intensive and fruitful correspondence with Dick de Jongh about connections with provability logic, where such recursive definitions can be made explicit. In this correspondence, Dick came up with an elegant generalization of the key step in the Fixed-Point Theorem which applied far beyond the modalities, namely, to arbitrary generalized quantifiers satisfying suitable abstract conditions. Dick’s result was included in my somewhat long and meandering paper ‘Toward a Computational Semantics’, (van Benthem, 1987) where it remained hidden. The purpose of this brief note is twofold. I want to advertise Dick’s result by itself, and its elegant level of abstraction. After that, I go further in this spirit and add some simple observations showing how the Fixed-Point Theorem can be seen as an instance of a family of abstract results on generalized well-founded orders. Much of what follows may be present in the folklore or the expert literature (more on this in Section 3), but a compact story may be useful.
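For reference, the Fixed-Point Theorem of provability logic (due independently to de Jongh and Sambin) being generalized here states:

```latex
\text{If } p \text{ occurs in } A(p) \text{ only within the scope of } \Box,
\text{ then there is a formula } H \text{ not containing } p \text{ such that }
\mathrm{GL} \vdash H \leftrightarrow A(H)
```

and H is unique up to provable equivalence in GL.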
Optimization is used extensively in engineering, industry, and finance, and various methods are used to transform problems to the point where they are amenable to solution by numerical methods. We describe progress towards developing a framework, based on the Lean interactive proof assistant, for designing and applying such reductions in reliable and flexible ways.
On the last page of “On Referring” (1950), Strawson proposes to extend his account of singular definite and indefinite descriptions to plurals and complex quantified noun phrases. He mentions in particular some uses of expressions consisting of ‘the’, ‘all the’, ‘all’, ‘some’, ‘some of the’, etc. followed by a noun, qualified or unqualified, in the plural (1950: ) His account of singular noun phrases, in contrast to Russell’s, is of course that they are referential, or used “to mention or refer to some individual person or single object or particular event or place or process” (1950: ). Russell (1905) had proposed a contextual reanalysis of such “denoting phrases,” in first-order logic, that transmuted the sentences in which they occurred into generalizations.
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case).
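For orientation (this is standard quantum mechanics, not a result of the paper): the target of such derivations is the Born-rule measure on the non-distributive lattice of projections, which assigns to each projection P the probability

```latex
p(P) \;=\; \mathrm{tr}(\rho\, P)
```

where ρ is a density operator; by Gleason's theorem, every countably additive probability measure on this lattice (for Hilbert spaces of dimension at least 3) has this form.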
The Kochen-Specker theorem is one of the fundamental no-go theorems in quantum theory. It has far-reaching consequences for all attempts to give an interpretation of the quantum formalism. In this work, we examine the hypotheses that, at the ontological level, lead to the Kochen-Specker contradiction. We emphasize the role of the assumptions about identity and distinguishability of quantum objects in the argument.
Many classically valid meta-inferences fail in a standard supervaluationist framework. This allegedly prevents supervaluationism from offering an account of good deductive reasoning. We provide a proof system for supervaluationist logic which includes supervaluationistically acceptable versions of the classical meta-inferences. The proof system emerges naturally by thinking of truth as licensing assertion, falsity as licensing negative assertion and lack of truth-value as licensing rejection and weak assertion. Moreover, the proof system respects well-known criteria for the admissibility of inference rules. Thus, supervaluationists can provide an account of good deductive reasoning. Our proof system moreover brings to light how one can revise the standard supervaluationist framework to make room for higher-order vagueness. We prove that the resulting logic is sound and complete with respect to the consequence relation that preserves truth in a model of the non-normal modal logic NT. Finally, we extend our approach to a first-order setting and show that supervaluationism can treat vagueness in the same way at every order. The failure of conditional proof and other meta-inferences is a crucial ingredient in this treatment and hence should be embraced, not lamented.
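The failure of conditional proof mentioned here can be checked mechanically in a minimal toy setting (one vague atom p, models as nonempty sets of precisifications, global supertruth-preserving consequence; the encoding is mine, not the paper's): the inference from p to "determinately p" is valid, yet the corresponding conditional is not a logical truth.

```python
from itertools import combinations

# A "model" is a nonempty set of precisifications (assignments to p).
assignments = ({"p": True}, {"p": False})
models = [list(c) for r in (1, 2) for c in combinations(assignments, r)]

def val(phi, prec, precs):
    """Truth at one precisification; 'D' (determinately) quantifies over
    all precisifications of the model."""
    op = phi[0]
    if op == "atom":
        return prec[phi[1]]
    if op == "imp":
        return (not val(phi[1], prec, precs)) or val(phi[2], prec, precs)
    if op == "D":
        return all(val(phi[1], q, precs) for q in precs)
    raise ValueError(op)

def supertrue(phi, precs):
    return all(val(phi, q, precs) for q in precs)

def entails(premises, conclusion):
    """Global consequence: in every model where all premises are supertrue,
    the conclusion is supertrue."""
    return all(supertrue(conclusion, m) for m in models
               if all(supertrue(q, m) for q in premises))

p = ("atom", "p")
print(entails([p], ("D", p)))             # p |= Dp: valid
print(entails([], ("imp", p, ("D", p))))  # yet p -> Dp is not a logical truth
```

In the model containing both precisifications, p holds at one of them while Dp fails, so p → Dp is not supertrue there; conditional proof would wrongly license it from the valid inference.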
The Curry-Howard isomorphism is a proof-theoretic result that establishes a connection between derivations in natural deduction and terms in typed lambda calculus. It is important in its own right, but it also underlies the development of type systems for programming languages. This suggests that the result may be important for a philosophy of code as well.
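A minimal textbook instance of the correspondence: the implication-introduction rule of natural deduction corresponds to lambda-abstraction, so discharging the assumption A in the trivial derivation of A from A yields the identity term as the proof of A → A.

```latex
\dfrac{[x : A]^{1}}{\lambda x.\,x \;:\; A \to A}\;(\to\text{-intro},\,1)
```

Under the isomorphism, normalization of the derivation corresponds to β-reduction of the term.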
As idealized descriptions of mathematical language, there is a sense in which formal systems specify too little, and there is a sense in which they specify too much. They are silent with respect to a number of features of mathematical language that are essential to the communicative and inferential goals of the subject, while many of these features are independent of a specific choice of foundation. This chapter begins to map out the design features of mathematical language without descending to the level of formal implementation, drawing on examples from the mathematical literature and insights from the design of computational proof assistants.
We discuss Lorenzen’s consistency proof for ramified type theory without reducibility, published in 1951, in its historical context and highlight Lorenzen’s contribution to the development of modern proof theory, notably by the introduction of the ω-rule.
Citation: Holik, F.; Massri, C.; Plastino, A.; Sáenz, M. Generalized Probabilities in Statistical Theories.
As a response to the semantic and logical paradoxes, theorists often reject some principles of classical logic. However, classical logic is entangled with mathematics, and giving up mathematics is too high a price to pay, even for nonclassical theorists. The so-called recapture theorems come to the rescue: when reasoning with concepts such as truth, class membership, or property instantiation, if one is interested only in consequences of the theory that contain solely mathematical vocabulary, nothing is lost by reasoning in the nonclassical framework. It is shown below that this claim is highly misleading, if not simply false. Under natural assumptions, recapture claims are incorrect.
This is a draft of the “Yes” side of a proposed debate book, Will AI Match (or Even Exceed) Human Intelligence? (Routledge). The “No” position will be taken by Selmer Bringsjord, and will be followed by rejoinders on each side. AI should be considered the branch of computer science that investigates whether, and to what extent, cognition is computable. Computability is a logical or mathematical notion. So, the only way to prove that something—including (some aspect of) cognition—is not computable is via a logical or mathematical argument. Because no such argument has met with general acceptance (in the way that other proofs of non-computability, such as that of the Halting Problem, have been generally accepted), there is no logical reason to think that AI won’t eventually match human intelligence. Along the way, I discuss the Turing Test as a measure of AI’s success at showing the computability of various aspects of cognition, and I consider the potential roadblocks set by consciousness, qualia, and mathematical intuition.
Background: The ontology authoring step in ontology development involves having to make choices about what subject domain knowledge to include. This may concern sorting out ontological differences and making choices between conflicting axioms due to limitations in the logic or the subject domain semantics. Examples are dealing with different foundational ontologies in ontology alignment and OWL 2 DL’s transitive object property versus a qualified cardinality constraint. Such conflicts have to be resolved somehow. However, only isolated and fragmented guidance for doing so is available, which results in ad hoc decision-making that may not yield the best choice or that may be forgotten about later.
It is standard in set theory to assume that Cantor’s Theorem establishes that the continuum is an uncountable set. A challenge for this position comes from the observation that through forcing one can collapse any cardinal to the countable and that the continuum can be made arbitrarily large. In this paper, we present a different take on the relationship between Cantor’s Theorem and extensions of universes, arguing that they can be seen as showing that every set is countable and that the continuum is a proper class. We examine several principles based on maximality considerations in this framework, and show how some (namely Ordinal Inner Model Hypotheses) enable us to incorporate standard set theories (including ZFC with large cardinals added). We conclude that the systems considered raise questions concerning the foundational purposes of set theory.
According to a standard story, part of what we have in mind when we say that an argument is valid is that it is necessarily truth-preserving (NTP): if the premises are true, the conclusion must also be true. But—the story continues—that’s not enough, since ‘Roses are red, therefore roses are coloured’, for example, while it may be necessarily truth-preserving, is not so in virtue of form. Thus we arrive at a standard contemporary characterisation of validity: an argument is valid when it is NTP in virtue of form. Here I argue that we can and should drop the N; the resulting account is simpler, less problematic, and performs just as well with examples.
A number of philosophers have thought that fair lotteries over countably infinite sets of outcomes are conceptually incoherent by virtue of violating Countable Additivity. In this paper, I show that a qualitative analogue of this argument generalizes to an argument against the conceptual coherence of a much wider class of fair infinite lotteries— including continuous uniform distributions. I argue that this result suggests that fair lotteries over countably infinite sets of outcomes are no more conceptually problematic than continuous uniform distributions. Along the way, I provide a novel argument for a weak qualitative, epistemic version of Regularity.
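The quantitative version of the Countable Additivity argument that gets generalized here is the familiar one: a fair lottery over the natural numbers would assign every outcome the same probability c, but then

```latex
1 \;=\; P(\mathbb{N})
  \;=\; \sum_{n=0}^{\infty} P(\{n\})
  \;=\; \sum_{n=0}^{\infty} c
  \;=\;
  \begin{cases}
    0 & \text{if } c = 0,\\
    \infty & \text{if } c > 0,
  \end{cases}
```

a contradiction either way.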
Obligation-describing language (and so, I assume, obligation) seems to be hooked up with preference, a relation of what-is-better-than-what. But ordinary situations with ordinary normative constraints underdetermine such relations of what-is-better-than-what. Even so, there are plainly true sentences describing our obligations in those situations. My argument will be that this mismatch is troublemaking and that getting out of that trouble requires either giving up the direct link between obligation and preference or rethinking the kind of things preferences can be.
In belief revision theory, conditionals are often interpreted via the Ramsey Test. However, the classical Ramsey Test fails to take into account a fundamental feature of conditionals as used in natural language: typically, the antecedent is relevant to the consequent. Rott has extended the Ramsey Test by introducing so-called difference-making conditionals that encode a notion of relevance. This paper explores difference-making conditionals in the framework of Spohn’s ranking functions. We show that they can be expressed by standard conditionals together with might conditionals. We prove that this reformulation is fully compatible with the logic of difference-making conditionals, as introduced by Rott. Moreover, using c-representations, we propose a method for inductive reasoning with sets of difference-making conditionals and also provide a method for revising ranking functions by a set of difference-making conditionals.
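A small sketch of the ranking-function setting (the encoding and the particular rendering of difference-making via a standard plus a might conditional are my assumptions, following the abstract's claim, not the paper's exact definitions): a ranking function grades worlds by disbelief, a standard conditional A ⇒ B is accepted when A-and-not-B is more disbelieved than A-and-B, and a difference-making conditional additionally requires that B might have failed had A failed.

```python
from itertools import product

# Worlds: truth assignments to two atoms; a ranking function kappa maps
# each world to a grade of disbelief (0 = not disbelieved at all).
worlds = list(product([True, False], repeat=2))  # (a, b) pairs

def kappa_of(kappa, prop):
    """Rank of a proposition: minimal rank of its worlds (inf if empty)."""
    ranks = [kappa[w] for w in worlds if prop(w)]
    return min(ranks) if ranks else float("inf")

def accepts(kappa, A, B):
    """Standard Ramsey-test conditional A => B."""
    return (kappa_of(kappa, lambda w: A(w) and not B(w))
            > kappa_of(kappa, lambda w: A(w) and B(w)))

def might(kappa, A, B):
    """Might conditional 'if A, might B': A => not-B is not accepted."""
    return not accepts(kappa, A, lambda w: not B(w))

def difference_making(kappa, A, B):
    """Sketch of a difference-making conditional A >> B: A => B is accepted,
    and had A failed, B might have failed too."""
    return accepts(kappa, A, B) and might(kappa, lambda w: not A(w),
                                          lambda w: not B(w))

A = lambda w: w[0]
B = lambda w: w[1]

# kappa1: B is believed unconditionally, so A makes no difference to B.
kappa1 = {(True, True): 0, (True, False): 2, (False, True): 0, (False, False): 2}
# kappa2: B is believed given A, but not given not-A.
kappa2 = {(True, True): 0, (True, False): 2, (False, True): 1, (False, False): 0}

print(accepts(kappa1, A, B), difference_making(kappa1, A, B))  # True False
print(accepts(kappa2, A, B), difference_making(kappa2, A, B))  # True True
```

Both ranking functions accept "if A then B", but only the second makes A relevant to B, which is the distinction the difference-making conditional is built to track.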