
424.0123
The aim of this paper is to present a constructive solution to Frege’s puzzle (largely limited to the mathematical context) based on type theory. Two ways in which an equality statement may be said to have cognitive significance are distinguished. One concerns the mode of presentation of the equality, the other its mode of proof. Frege’s distinction between sense and reference, which emphasizes the former aspect, cannot adequately explain the cognitive significance of equality statements unless a clear identity criterion for senses is provided. It is argued that providing a solution based on proofs is more satisfactory from the standpoint of constructive semantics.

110531.012479
This is a report on the project “Axiomatizing Conditional Normative Reasoning” (ANCoR, M 3240N) funded by the Austrian Science Fund (FWF). The project aims to deepen our understanding of conditional normative reasoning by providing an axiomatic study of it at the propositional but also the first-order level. The focus is on a particular framework, the so-called preference-based logic for conditional obligation, whose main strength has to do with the treatment of contrary-to-duty reasoning and reasoning about exceptions. The project considers not only the metatheory of this family of logics but also its mechanization.

173556.012488
According to the ω-rule, it is valid to infer that all natural numbers possess some property, if 0 possesses it, 1 possesses it, 2 possesses it, and so on. The ω-rule is important because its inclusion in certain arithmetical theories results in true arithmetic. It is controversial because it seems impossible for finite human beings to follow, given that it seems to require accepting infinitely many premises. Inspired by a remark of Wittgenstein’s, I argue that the mystery of how we follow the ω-rule subsides once we treat the rule as helping to give meaning to the symbol “…”.

173590.012502
We give a new and elementary construction of primitive positive decomposition of higher arity relations into binary relations on finite domains. Such decompositions come up in applications to constraint satisfaction problems, clone theory and relational databases. The construction exploits functional completeness of 2-input functions in many-valued logic by interpreting relations as graphs of partially defined multivalued ‘functions’. The ‘functions’ are then composed from ordinary functions in the usual sense. The construction is computationally effective and relies on well-developed methods of functional decomposition, but reduces relations only to ternary relations. An additional construction then decomposes ternary into binary relations, also effectively, by converting certain disjunctions into existential quantifications. The result gives a uniform proof of Peirce’s reduction thesis on finite domains, and shows that the graph of any Sheffer function composes all relations there.
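The general idea of a primitive positive decomposition can be illustrated with a toy construction (not the paper’s): each triple of a ternary relation R gets a fresh witness element w, and three binary relations record w’s coordinates, so that R(x, y, z) ↔ ∃w (P1(w, x) ∧ P2(w, y) ∧ P3(w, z)).

```python
# Toy sketch (illustrative only, not the paper's construction):
# decompose a ternary relation R into binary relations P1, P2, P3
# over an extended domain, so that R is recovered by the primitive
# positive formula  R(x,y,z) <-> exists w: P1(w,x) & P2(w,y) & P3(w,z).

def decompose_ternary(R):
    """Return binary relations P1, P2, P3 and the set W of fresh witnesses."""
    P1, P2, P3, W = set(), set(), set(), set()
    for i, (a, b, c) in enumerate(sorted(R)):
        w = ('w', i)                     # fresh witness, disjoint from the base domain
        W.add(w)
        P1.add((w, a))
        P2.add((w, b))
        P3.add((w, c))
    return P1, P2, P3, W

def recompose(P1, P2, P3, W):
    """Evaluate the primitive positive formula by brute-force enumeration."""
    return {(a, b, c)
            for w in W
            for (w1, a) in P1 if w1 == w
            for (w2, b) in P2 if w2 == w
            for (w3, c) in P3 if w3 == w}

R = {(0, 1, 2), (1, 1, 0), (2, 0, 2)}
P1, P2, P3, W = decompose_ternary(R)
assert recompose(P1, P2, P3, W) == R    # the decomposition is lossless
```

The sketch trades domain size for arity; the paper’s construction is far more economical, but the existential quantifier plays the same recombining role.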

173626.012508
We study logical reduction (factorization) of relations into relations of lower arity by Boolean or relative products that come from applying conjunctions and existential quantifiers to predicates, i.e. by primitive positive formulas of predicate calculus. Our algebraic framework unifies natural joins and data dependencies of database theory and relational algebra of clone theory with the bond algebra of C.S. Peirce. We also offer new constructions of reductions, systematically study irreducible relations and reductions to them, and introduce a new characteristic of relations, ternarity, that measures their ‘complexity of relating’ and allows us to refine reduction results. In particular, we refine Peirce’s controversial reduction thesis, and show that reducibility behavior is dramatically different on finite and infinite domains.

173659.012513
We argue that traditional formulations of the reduction thesis that tie it to privileged relational operations do not suffice for Peirce’s justification of the categories, and invite the charge of gerrymandering to make it come out as true. We then develop a more robust invariant formulation of the thesis by explicating the use of triads in any relational operations, which is immune to that charge. The explication also allows us to track how Thirdness enters the structure of higher order relations, and even propose a numerical measure of it. Our analysis reveals new conceptual phenomena when negation or disjunction are used to compound relations.

519788.012518
The formalism of generalized quantum histories allows a symmetrical treatment of space and time correlations, by taking different traces of the same history density matrix. We recall how to characterize spatial and temporal entanglement in this framework. An operative protocol is presented to map a history state into the ket of a static composite system. We show, by examples, how the Leggett–Garg and the temporal CHSH inequalities can be violated in our approach.

605287.012524
Wilhelm (Synthese 199:6357–6369, 2021) has recently defended a criterion for comparing structure of mathematical objects, which he calls Subgroup. He argues that Subgroup is better than SYM, another widely adopted criterion. We argue that this is mistaken; Subgroup is strictly worse than SYM. We then formulate a new criterion that improves on both SYM and Subgroup, answering Wilhelm’s criticisms of SYM along the way. We conclude by arguing that no criterion that looks only to the automorphisms of mathematical objects to compare their structure can be fully satisfactory.

981413.012529
Given synthetic Euclidean geometry, I define the length λ(a, b) (of a segment ab) by taking equivalence classes with respect to the congruence relation ≡: i.e., λ(a, b) = λ(c, d) ↔ ab ≡ cd. By geometric constructions and explicit definitions, one may define the Length structure 𝕃 = (L, 0, ⊕, ⪯), “instantiated by Euclidean geometry”, so to speak. One may show that this structure is isomorphic to the set of non-negative elements of the one-dimensional linearly ordered vector space over ℝ. One may define the notion of a numerical scale (for length) and a unit (for length). One may show how numerical scales for length are determined by Cartesian coordinate systems. One may also obtain a derivation of Maxwell’s quantity formula, Q = {Q}[Q], for lengths.
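Maxwell’s quantity formula Q = {Q}[Q] says a quantity is a numerical value {Q} times a unit [Q]; changing the unit rescales the numerical value but leaves the quantity itself fixed. A minimal sketch (the class and unit names are illustrative, not from the paper):

```python
from fractions import Fraction

# Sketch of Q = {Q}[Q] for lengths: a Length is unit-independent;
# its numerical value {Q} depends on the chosen unit [Q].
class Length:
    def __init__(self, value, unit):
        # store the quantity itself, here canonically in metres
        self.in_metres = Fraction(value) * Fraction(unit)

    def numerical_value(self, unit):
        # {Q} relative to a chosen unit [Q]
        return self.in_metres / Fraction(unit)

metre = Fraction(1)
centimetre = Fraction(1, 100)

q = Length(3, metre)                        # Q = 3 m
assert q.numerical_value(metre) == 3        # {Q} = 3 when [Q] = metre
assert q.numerical_value(centimetre) == 300 # {Q} = 300 when [Q] = centimetre
```

The same quantity has different numerical scales under different units, which is the point of separating {Q} from [Q].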

1096823.012534
Let us consider an acyclic causal model M of the sort that is central to causal modeling (Spirtes et al. 1993/2000, Pearl 2000/2009, Halpern 2016, Hitchcock 2018). Readers familiar with such models can skip this section. M = ⟨S, F⟩ is a causal model if, and only if, S is a signature and F = {F₁, …, Fₙ} represents a set of n structural equations, for a finite natural number n. S = ⟨U, V, R⟩ is a signature if, and only if, U is a finite set of exogenous variables, V = {V₁, …, Vₙ} is a set of n endogenous variables that is disjoint from U, and R assigns to each exogenous or endogenous variable X in U ∪ V its range (not codomain) R(X) ⊆ ℝ. F = {F₁, …, Fₙ} represents a set of n structural equations if, and only if, for each natural number i, 1 ≤ i ≤ n: Fᵢ is a function from the Cartesian product ∏_{X ∈ U ∪ V \ {Vᵢ}} R(X) of the ranges of all exogenous and endogenous variables other than Vᵢ into the range R(Vᵢ) of the endogenous variable Vᵢ. The set of possible worlds of the causal model M is defined as the Cartesian product W = ∏_{X ∈ U ∪ V} R(X) of the ranges of all exogenous and endogenous variables.
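The definitions above can be sketched in code. This is a minimal illustration, with made-up variable names and equations; real causal-modeling libraries are more elaborate.

```python
# Minimal sketch of an acyclic causal model: one exogenous variable U1,
# two endogenous variables V1, V2, each with an explicit range R(X),
# and one structural equation per endogenous variable.

ranges = {'U1': {0, 1}, 'V1': {0, 1}, 'V2': {0, 1, 2}}   # R(X) for each X

# Each Fi maps the values of the other variables to a value for Vi.
equations = {
    'V1': lambda vals: vals['U1'],                # V1 := U1
    'V2': lambda vals: vals['U1'] + vals['V1'],   # V2 := U1 + V1
}

def solve(exogenous):
    """Given a setting of the exogenous variables, evaluate the
    structural equations; acyclicity lets us go in a fixed order."""
    vals = dict(exogenous)
    for v, f in equations.items():    # dict order here respects dependencies
        vals[v] = f(vals)
        assert vals[v] in ranges[v]   # each value must lie in the range R(V)
    return vals

# A possible world is an assignment of values to all variables.
world = solve({'U1': 1})
assert world == {'U1': 1, 'V1': 1, 'V2': 2}
```

The set of possible worlds W is then the Cartesian product of the ranges; `solve` picks out the worlds that satisfy all the structural equations for a given exogenous setting.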

1097141.012539
To analyse contingent propositions, this paper investigates how branching time structures can be combined with probability theory. In particular, it considers assigning infinitesimal probabilities—available in non-Archimedean probability theory—to individual histories. This allows us to introduce the concept of ‘remote possibility’ as a new modal notion between ‘impossibility’ and ‘appreciable possibility’. The proposal is illustrated by applying it to a future contingent and a historical counterfactual concerning an infinite sequence of coin tosses. The latter is a toy model that is used to illustrate the applicability of the proposal to more realistic physical models.

1329891.012546
A wide variety of stochastic models of cladogenesis (based on speciation and extinction) lead to an identical distribution on phylogenetic tree shapes once the edge lengths are ignored. By contrast, the distribution of the tree’s edge lengths is generally quite sensitive to the underlying model. In this paper, we review the impact of different model choices on tree shape and edge length distribution, and their implications for studying the properties of phylogenetic diversity (PD) as a measure of biodiversity, and the loss of PD as species become extinct at the present. We also compare PD with a stochastic model of feature diversity, and investigate some mathematical links and inequalities between these two measures, together with their predictions concerning the loss of biodiversity under extinction at the present.

1676020.012552
We give a new coalgebraic semantics for intuitionistic modal logic with □. In particular, we provide a coalgebraic representation of intuitionistic descriptive modal frames and of intuitionistic modal Kripke frames based on image-finite posets. This gives a solution to an implicit problem in the area of coalgebraic logic for these classes of frames, raised explicitly by Litak (2014) and de Groot and Pattinson (2020). Our key technical tool is a recent generalization of a construction by Ghilardi, in the form of a right adjoint to the inclusion of the category of Esakia spaces in the category of Priestley spaces. As an application of these results, we study bisimulations of intuitionistic modal frames, describe dual spaces of free modal Heyting algebras, and provide a path towards a theory of coalgebraic intuitionistic logics.

1676054.012557
The Goldblatt–Thomason theorem is a classic result of modal definability of Kripke frames. Its topological analogue for the closure semantics has been proved by ten Cate et al. (2009). In this paper we prove a version of the Goldblatt–Thomason theorem for topological semantics via the Cantor derivative. We work with derivative spaces which provide a natural generalisation of topological spaces on the one hand and of weakly transitive frames on the other.

1676096.012562
Polyhedral semantics is a recently introduced branch of spatial modal logic, in which modal formulas are interpreted as piecewise linear subsets of a Euclidean space. Polyhedral semantics for the basic modal language has already been well investigated. However, for many practical applications of polyhedral semantics, it is advantageous to enrich the basic modal language with a reachability modality. Recently, a language with an Until-like spatial modality has been introduced, with demonstrated applicability to the analysis of 3D meshes via model checking. In this paper, we exhibit an axiom system for this logic, and show that it is complete with respect to polyhedral semantics. The proof consists of two major steps: first, we show that this logic, which is built over Grzegorczyk’s system Grz, has the finite model property. Subsequently, we show that every formula satisfied in a finite poset is also satisfied in a polyhedral model, thereby establishing polyhedral completeness.

1736423.012567
Natural language does not express all connectives definable in classical logic as simple lexical items. Coordination in English is expressed by conjunction and, disjunction or, and negated disjunction nor. Other languages pattern similarly. Non-lexicalized connectives are typically expressed compositionally: in English, negated conjunction is typically expressed by combining negation and conjunction (not both). This is surprising: if ∧ and ∨ are duals, and the negation of the latter can be expressed lexically (nor), why not the negation of the former? I present a two-tiered model of the semantics of the binary connectives. The first tier captures the expressive power of the lexicon: it is a bilateral state-based semantics that, under a restriction, can express all and only the distinctions that can be expressed by the lexicon of natural language (and, or, nor). This first tier is characterized by rejection as non-assertion and a Neglect Zero assumption. The second tier is obtained by dropping the Neglect Zero assumption and enforcing a stronger notion of rejection, thereby recovering classical logic and thus definitions for all Boolean connectives. On the two-tiered model, we distinguish the limited expressive resources of the lexicon and the greater combinatorial expressive power of the language as a whole. This gives us a logic-based account of compositionality for the Boolean fragment of the language.
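The duality that makes the asymmetry puzzling can be checked directly: NAND (the non-lexicalized “not both”) is the De Morgan dual of NOR (lexicalized nor), in exactly the way AND and OR are duals. A small illustration:

```python
from itertools import product

# Truth tables for the lexicalized English connectives and for "not both".
pairs = list(product([0, 1], repeat=2))

AND      = {(x, y): int(x and y)          for x, y in pairs}  # "and"
OR       = {(x, y): int(x or y)           for x, y in pairs}  # "or"
NOR      = {(x, y): int(not (x or y))     for x, y in pairs}  # "nor" (lexical)
NOT_BOTH = {(x, y): int(not (x and y))    for x, y in pairs}  # NAND (compositional)

# NOR is the negation of OR, and NOT_BOTH is the negation of AND:
# the two negated connectives are perfectly symmetric truth-functionally,
# yet only NOR is lexicalized.
assert all(NOR[x, y] == 1 - OR[x, y] for x, y in pairs)
assert all(NOT_BOTH[x, y] == 1 - AND[x, y] for x, y in pairs)
```

Truth tables alone therefore cannot explain the lexical gap, which is what motivates moving to the richer bilateral state-based semantics of the first tier.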

1851782.012573
In this article, I try to shed new light on Frege’s envisaged definitional introduction of real and complex numbers in Die Grundlagen der Arithmetik (1884) and the status of cross-sortal identity claims with side glances at Grundgesetze der Arithmetik (vol. I 1893, vol. II 1903). As far as I can see, this topic has not yet been discussed in the context of Grundlagen. I show why Frege’s strategy in the case of the projected definitions of real and complex numbers in Grundlagen is modelled on his definitional introduction of cardinal numbers in two steps, tentatively via a contextual definition and finally and definitively via an explicit definition. I argue that the strategy leaves a few important questions open, in particular one relating to the status of the envisioned abstraction principles for the real and complex numbers and another concerning the proper handling of cross-sortal identity claims.

1909520.012578
In this paper we use proof-theoretic methods, specifically sequent calculi, admissibility of cut within them and the resultant subformula property, to examine a range of philosophically motivated deontic logics. We show that for all of those logics it is a (meta)theorem that the Special Hume Thesis holds, namely that no purely normative conclusion follows non-trivially from purely descriptive premises (nor vice versa). In addition to its interest on its own, this also illustrates one way in which proof theory sheds light on philosophically substantial questions.

2285128.012583
We’ve been hard at work here in Edinburgh. Kris Brown has created Julia code to implement the ‘stochastic C-set rewriting systems’ I described last time. I want to start explaining this code and also examples of how we use it. …

2308766.012588
Judgment-aggregation theory has always focused on the attainment of rational collective judgments. But so far, rationality has been understood in static terms: as coherence of judgments at a given time, defined as consistency, completeness, and/or deductive closure. This paper asks whether collective judgments can be dynamically rational, so that they change rationally in response to new information. Formally, a judgment aggregation rule is dynamically rational with respect to a given revision operator if, whenever all individuals revise their judgments in light of some information (a learnt proposition), then the new aggregate judgments are the old ones revised in light of this information, i.e., aggregation and revision commute. We prove an impossibility theorem: if the propositions on the agenda are non-trivially connected, no judgment aggregation rule with standard properties is dynamically rational with respect to any revision operator satisfying some basic conditions. Our theorem is the dynamic-rationality counterpart of some well-known impossibility theorems for static rationality. We also explore how dynamic rationality might be achieved by relaxing some of the conditions on the aggregation rule and/or the revision operator. Notably, premise-based aggregation rules are dynamically rational with respect to so-called premise-based revision operators.
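A premise-based rule can be illustrated with a toy example (the agenda and profile are made up, not from the paper): the collective accepts each premise by majority and then derives the conclusion from the aggregated premises, rather than taking a majority on the conclusion directly.

```python
# Toy premise-based aggregation: three individuals judge premises p and q;
# the conclusion p&q is derived from the aggregated premises.

profiles = [                       # each individual's judgments on (p, q)
    {'p': True,  'q': True},
    {'p': True,  'q': False},
    {'p': False, 'q': True},
]

def majority(profiles, prop):
    """Strict majority acceptance of a proposition."""
    return sum(j[prop] for j in profiles) > len(profiles) / 2

# Premise-based rule: aggregate the premises, then derive the conclusion.
collective = {prop: majority(profiles, prop) for prop in ('p', 'q')}
collective['p&q'] = collective['p'] and collective['q']
assert collective == {'p': True, 'q': True, 'p&q': True}

# Proposition-wise majority on the conclusion would instead reject p&q
# (only the first individual accepts both premises) -- the classic
# discursive dilemma that premise-based rules avoid.
conclusion_majority = sum(j['p'] and j['q'] for j in profiles) > len(profiles) / 2
assert conclusion_majority is False
```

Prioritizing the premises in this way is also what makes premise-based rules interact well with premise-based revision operators, as the abstract notes.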

2308835.012593
In quantum mechanics, we appeal to decoherence as a process that explains the emergence of a quasi-classical order. Decoherence has no classical counterpart. Moreover, it is an apparently irreversible process [1–7]. In this paper, we investigate the nature and origin of its irreversibility. Decoherence and quantum entanglement are two physical phenomena that tend to go together. The former relies on the latter, but the reverse is not true. One can imagine a simple bipartite system in which two microscopic subsystems are initially unentangled and become entangled at the end of the interaction. Decoherence does not occur, since neither system is macroscopic. Nevertheless, we will still need to quantify entanglement in order to describe the arrow of time associated with decoherence, because it occurs when microscopic systems become increasingly entangled with the degrees of freedom in their macroscopic environments. To do this we need to define entanglement entropy in terms of the sum of the von Neumann entropies of the subsystems.
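The entanglement-entropy measure mentioned at the end can be sketched for the simplest bipartite case, a two-qubit pure state: one traces out each subsystem and sums the von Neumann entropies of the reduced density matrices. A pure-Python illustration (the state and function names are ours):

```python
import math

# Two-qubit pure state with real amplitudes c[a][b] for basis |a>|b>.

def reduced_density_A(c):
    """Partial trace over qubit B: rho_A[a][a'] = sum_b c[a][b] c[a'][b]."""
    return [[sum(c[a][b] * c[a2][b] for b in (0, 1)) for a2 in (0, 1)]
            for a in (0, 1)]

def entropy_2x2(rho):
    """Von Neumann entropy S = -Tr(rho log2 rho) of a 2x2 density matrix,
    via its eigenvalues (roots of the characteristic polynomial)."""
    t = rho[0][0] + rho[1][1]                       # trace
    d = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]  # determinant
    disc = math.sqrt(max(t * t - 4 * d, 0.0))
    evals = [(t + disc) / 2, (t - disc) / 2]
    return -sum(p * math.log2(p) for p in evals if p > 1e-12)  # 0 log 0 = 0

# Bell state (|00> + |11>)/sqrt(2): maximally entangled.
bell = [[1 / math.sqrt(2), 0], [0, 1 / math.sqrt(2)]]
# For a pure bipartite state S(rho_A) = S(rho_B), so the sum is 2*S(rho_A).
assert abs(2 * entropy_2x2(reduced_density_A(bell)) - 2.0) < 1e-9

# Product state |00>: unentangled, so the entanglement entropy is zero.
prod = [[1.0, 0.0], [0.0, 0.0]]
assert abs(entropy_2x2(reduced_density_A(prod))) < 1e-9
```

As the subsystems become more entangled, the reduced states become more mixed and the summed entropy grows, which is the quantity tracking the arrow of time described above.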

2308945.012598
Duality in the Exact Sciences: The Application to Quantum Mechanics.

2539957.012603
In this brief note I will try to develop the following thesis: Gödel’s program includes a rich and exciting task for the philosopher that has been overlooked by the majority of the philosophers of set theory (let alone set theorists). Gödel’s program intends, in a nutshell, to solve Cantor’s Continuum Hypothesis (hereafter, CH) as a legitimate problem by means of the addition of new axioms to ZFC that satisfy some criteria of naturalness and that, moreover, allow us to derive either CH or its negation. Hence, the view encapsulated by such a program clashes violently with other attitudes towards the status of CH, like those defending that CH is a problem but is solved by the independence phenomenon itself, those that argue that CH is a vague statement and therefore is ill-posed as a problem and, finally, those that regard the axiom-adding proposals as incapable of settling the question.

2544108.012609
In this paper we reassess the philosophical foundation of Exactly True Logic (ETL), a competing variant of First Degree Entailment (FDE). In order to do this, we first rebut an argument against it. As the argument appears in an interview with Nuel Belnap himself, one of the fathers of FDE, we believe its provenance to be such that it needs to be taken seriously. We submit, however, that the argument ultimately fails, and that ETL cannot easily be dismissed. We then proceed to give an overview of the research that was inspired by this logic over the last decade, thus providing further motivation for the study of ETL and, more generally, of FDE-related logics that result from semantical analyses alternative to Belnap’s canonical one. We focus, in particular, on philosophical questions that these developments raise.

2586791.012614
Central to certain versions of logical atomism is the claim that every proposition is a truth-functional combination of elementary propositions. Assuming that propositions form a Boolean algebra, we consider five natural formal regimentations of this informal claim, and show that they are equivalent. For a number of reasons, such as the need to accommodate quantifiers, logical atomists might consider only complete Boolean algebras, and take into account infinite truth-functional combinations. We show that in such a variant setting, the five regimentations come apart, and explore how they relate to each other. We also discuss how they relate to the claim that propositions form a double powerset algebra, which has been proposed by a number of authors as a way of capturing the central logical atomist idea.

2586855.012619
Is it possible, for instance, that the ratio between the diameter and circumference of a Euclidean circle be something other than the number π? If so, what would it be like to live in a Euclidean world where that ratio is different — wouldn’t something go horribly wrong? If you think the answer here is obvious, what about the more abstract mathematical claims, such as 4, 5 and 6 above, which also can have interpretations in Euclidean physical space, but are independent of our most widely accepted mathematical and physical theories?

2713052.012624
It seems that from an epistemological point of view comparative probability has many advantages with respect to a probability measure. It is more reasonable as an evaluation of degrees of rational beliefs. It allows the formulation of a comparative indifference principle free from well-known paradoxes. Moreover, it makes it possible to weaken the principal principle, so that it becomes more reasonable. But the logical systems of comparative probability do not admit an adequate probability updating, which on the contrary is possible for a probability measure. Therefore we are faced with a true epistemological dilemma between comparative and quantitative probability.

3053709.012629
Our aim in this paper is to extend the semantics for the kind of logic of ground developed in [deRosset and Fine, 2023]. In that paper, the authors very briefly suggested a way of treating universal and existential quantification over a fixed domain of objects. Here we explore some options for extending the treatment to allow for a variable domain of objects.

3053732.012634
This paper is concerned with the semantics for the logics of ground that derive from a slight variant GG of the logic of [Fine, 2012b] and that have already been developed in [deRosset and Fine, 2023]. Our aim is to outline that semantics and to provide a comparison with two related semantics for ground, given in [Correia, 2017] and [Kramer, 2018a]. This comparison highlights the strengths and difficulties of these different approaches. KEYWORDS: Impure Logic of Ground; Truthmaker Semantics; Logic of Ground; Ground. In particular, the comparison will show how deRosset and Fine’s approach has a greater degree of flexibility in its ability to accommodate different extensions of a basic minimal system of ground. We shall assume that the reader is already acquainted with some of the basic work on ground and on the framework of truthmaker semantics. Some background material may be found in [Fine, 2012b, 2017a,b].

3235462.012639
We propose a framework for the analysis of choice behaviour when the latter is made explicitly in chronological order. We relate this framework to the traditional choice-theoretic setting from which the chronological aspect is absent, and compare it to other frameworks that extend this traditional setting. Then, we use this framework to analyse various models of preference discovery. We characterise, via simple revealed preference tests, several models that differ in terms of (i) the priors that the decision-maker holds about alternatives and (ii) whether the decision-maker chooses period by period or uses her knowledge about future menus to inform her present choices. These results provide novel testable implications for the preference discovery process of myopic and forward-looking agents.