While Classical Logic (CL) used to be the gold standard for evaluating the rationality of human reasoning, certain non-theorems of CL—like Aristotle’s thesis (∼(A → ∼A)) and Boethius’ thesis ((A → B) → ∼(A → ∼B))—appear intuitively rational and plausible. Connexive logics have been developed to capture the underlying intuition that conditionals whose antecedents contradict their consequents should be false. We present results of two experiments (total N = 72), the first to investigate connexive principles and related formulae systematically. Our data suggest that connexive logics provide more plausible rationality frameworks for human reasoning compared to CL. Moreover, we experimentally investigate two approaches for validating connexive principles within the framework of coherence-based probability logic. Overall, we observed good agreement between our predictions and the data, particularly for Approach 2.
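That Aristotle’s and Boethius’ theses are non-theorems of CL is easy to verify mechanically. The following Python sketch (an illustration added here, not part of the experiments) checks both formulae by exhaustive truth-table search under material implication:

```python
from itertools import product

def imp(p, q):
    """Material implication."""
    return (not p) or q

def aristotle(a):
    # Aristotle's thesis: ~(A -> ~A)
    return not imp(a, not a)

def boethius(a, b):
    # Boethius' thesis: (A -> B) -> ~(A -> ~B)
    return imp(imp(a, b), not imp(a, not b))

# Check classical validity by exhaustive truth-table search.
aristotle_valid = all(aristotle(a) for a in (True, False))
boethius_valid = all(boethius(a, b) for a, b in product((True, False), repeat=2))
print(aristotle_valid, boethius_valid)  # False False: neither is a CL theorem
```

Both theses fail precisely when the antecedent A is false: material implication then makes A → ∼A (and A → B, A → ∼B) vacuously true, which is the behaviour connexive logics are designed to rule out.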
Denić (2021) observes that the availability of distributive inferences — for sentences with disjunction embedded in the scope of a universal quantifier — depends on the size of the domain quantified over as it relates to the number of disjuncts. Based on her observations, she argues that probabilistic considerations play a role in the computation of implicatures. In this paper we explore a different possibility. We argue for a modification of Denić’s generalization, and provide an explanation that is based on intricate logical computations but is blind to probabilities. The explanation is based on the observation that when the domain size is no larger than the number of disjuncts, universal and existential alternatives are equivalent if distributive inferences are obtained. We argue that under such conditions a general ban on ‘fatal competition’ (Magri 2009a,b; Spector 2014) is activated thereby predicting distributive inferences to be unavailable.
There are two things called contexts that play important but distinct roles in standard accounts of language and communication. The first—call these compositional contexts—feature in a semantic theory. Compositional contexts are sequences of parameters that play a role in characterizing compositional semantic values for a given language, and in characterizing how such compositional semantic values determine a proposition expressed by a given sentence. The second—call these context sets—feature in a pragmatic theory. Context sets are abstract representations of the conversational states that serve to determine the compositional contexts relevant for interpreting a speech-act and that such speech-acts act upon. In this paper, I’ll consider how, given mutual knowledge of the information codified in a compositional semantic theory, an assertion of a sentence serves to update the context set. There is a standard account of how such conversational updating occurs. However, while this account has much to recommend it, I’ll argue that it needs to be revised in light of certain natural discourses.
Forty years ago, Niels Green-Pedersen listed five different accounts of valid consequence, variously promoted by logicians in the early fourteenth century and discussed by Niels Drukken of Denmark in his commentary on Aristotle’s Prior Analytics, written in Paris in the late 1330s. Two of these arguably fail to give defining conditions: truth preservation was shown by Buridan and others to be neither necessary nor sufficient; incompatibility of the opposite of the conclusion with the premises is merely circular if incompatibility is analysed in terms of consequence. Buridan was perhaps the first to define consequence in terms of preservation of what we might dub verification, that is, signifying as things are. John Mair pinpointed a sophism which threatens to undermine this proposal. Bradwardine turned it around: he suggested that a necessary condition on consequence was that the premises signify everything the conclusion signifies. Dumbleton gave counterexamples to Bradwardine’s postulates in which the conclusion arguably signifies more than, or even completely differently from the premises. Yet a long-standing tradition held that some species of validity depend on the conclusion being in some way contained in the premises. We explore the connection between signification and consequence and its role in solving the insolubles.
Human languages vary in terms of which meanings they lexicalize, but there are important constraints on this variation. It has been argued that languages are under pressure to be simple (e.g., to have a small lexicon size) and to allow for informative (i.e., precise) communication with their lexical items, and that which meanings get lexicalized may be explained by languages finding a good way to trade off between these two pressures ([ ] and much subsequent work). However, in certain semantic domains, it is possible to reach very high levels of informativeness even if very few meanings from that domain are lexicalized. This is due to productive morphosyntax, which may allow for the construction of meanings which are not lexicalized. Consider the semantic domain of natural numbers: many languages lexicalize few natural number meanings as monomorphemic expressions, but can precisely convey any natural number meaning using morphosyntactically complex numerals. In such semantic domains, lexicon size is not in direct competition with informativeness. What explains which meanings are lexicalized in such semantic domains? We will propose that in such cases, languages are (near-)optimal solutions to a different kind of trade-off problem: the trade-off between the pressure to lexicalize as few meanings as possible (i.e., to minimize lexicon size) and the pressure to produce as morphosyntactically simple utterances as possible (i.e., to minimize the average morphosyntactic complexity of utterances).
I have now had a chance to read the first part of Greg Restall and Shawn Standefer’s Logical Methods, some 113 pages on propositional logic. I enjoyed this well enough but I am, to be frank, a bit puzzled about the intended readership. …
In his well-known book Thought Experiments (1992), R.A. Sorensen provides two modal-logical schemata for two different types of ‘destructive’ thought experiments, baptised the Necessity Refuter and the Possibility Refuter. Regarding his schemata, Sorensen (1992, p. 132) advances the following caveat: “Don’t worry about whether this is the uniquely correct scheme. The adequacy of a classification system is more a question of efficiency and suggestiveness. A good scheme consolidates knowledge in a way that minimizes the demand on your memory and expedites the acquisition of new knowledge by raising helpful leading questions.” Both the Necessity Refuter and the Possibility Refuter consist of five premises, which are claimed to be inconsistent (ibid., pp. 135, 153). Besides clarity about the logical structure of thought experiments, another virtue of the modal-logical schemata is, Sorensen submits, the following (ibid., p. 136): “Since the above five premises are jointly inconsistent, one cannot hold all five. This means that there are at most five consistent responses to the set.” Sorensen then discusses the five possible responses, each of which rejects one premise. We concur with Sorensen that systematisations of the arguments accompanying thought experiments should be judged by their usefulness, such as classifying different responses. If the premises are inconsistent, then at least one premise must be given up; but if they are consistent, they can all be held. Indeed, we claim that stricto sensu both these modal-logical schemata are consistent, undermining the usefulness of the systematisation.
We report our first results regarding the automated verification of deontic correspondences (broadly conceived) and related matters in Isabelle/HOL, analogous to what has been achieved for the modal logic cube.
Assertions, so Stalnaker’s (1978) familiar narrative goes, express propositions and are made in context; in fact, context and what is said frequently affect each other. Since language has context-sensitive expressions, which proposition some given assertion expresses may depend on the context in which it is made. Assertions, in turn, affect the context, and they do so by adding the proposition expressed by that assertion to the context.
How should your opinion change in response to the opinion of an epistemic peer? We show that the pooling rule known as “upco” is the unique answer satisfying some natural desiderata. If your revised opinion will influence your opinions on other matters by Jeffrey conditionalization, then upco is the only standard pooling rule that ensures the order in which peers are consulted makes no difference. Popular proposals like linear pooling, geometric pooling, and harmonic pooling cannot boast the same. In fact, no alternative to upco can if it possesses four minimal properties which these proposals share.
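The order-invariance claim can be illustrated in miniature. The sketch below assumes the multiplicative (odds-product) form in which upco is standardly presented—pooling credences p and q yields pq/(pq + (1 − p)(1 − q))—and contrasts iterated upco with iterated linear pooling. It illustrates only the simple commutativity/associativity facet of the claim, not the full result about Jeffrey conditionalization:

```python
def upco(p, q):
    """Multiplicative ("upco") pooling of two credences in a proposition.
    Equivalent to multiplying the odds: O(result) = O(p) * O(q)."""
    return p * q / (p * q + (1 - p) * (1 - q))

def linear(p, q):
    """Straight averaging (linear pooling), for comparison."""
    return (p + q) / 2

p, q, r = 0.9, 0.4, 0.7   # your credence, then two peers' credences

# Consulting peer q before peer r, versus r before q:
a = upco(upco(p, q), r)
b = upco(upco(p, r), q)
print(abs(a - b) < 1e-12)  # True: the order of consultation is irrelevant

c = linear(linear(p, q), r)
d = linear(linear(p, r), q)
print(abs(c - d) < 1e-12)  # False: iterated linear pooling is order-sensitive
```

Because upco multiplies odds, iterated pooling inherits the commutativity and associativity of multiplication; averaging weights the last peer consulted more heavily.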
The proper translation of “unless” into intuitionistic formalisms is examined. After a brief examination of intuitionistic writings on “unless”, and on translation in general, and a close examination of Dummett’s use of “unless” in Elements of Intuitionism (1975b), I argue that the correct intuitionistic translation of “A unless B” is no stronger than “¬B → A”. In particular, “unless” is demonstrably weaker than disjunction. I conclude with some observations regarding how this shows that one’s choice of logic is methodologically prior to translation from informal natural language to formal systems.
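That ¬B → A is intuitionistically weaker than the disjunction A ∨ B can be witnessed by a Kripke countermodel. The following Python sketch (the two-world model and the evaluator are invented here for illustration) evaluates both formulae at the root:

```python
# A two-world intuitionistic Kripke model: root w0 sees w0 and w1; B becomes
# true only at the later world w1, and A is true nowhere.
worlds = ["w0", "w1"]
above = {"w0": ["w0", "w1"], "w1": ["w1"]}   # reflexive order, w0 <= w1
val = {"A": set(), "B": {"w1"}}              # monotone valuation

def forces(w, phi):
    """Kripke forcing for formulas built from atoms, 'or', 'imp', 'not'."""
    kind = phi[0]
    if kind == "atom":
        return w in val[phi[1]]
    if kind == "or":
        return forces(w, phi[1]) or forces(w, phi[2])
    if kind == "imp":   # must hold at every accessible world
        return all(not forces(v, phi[1]) or forces(v, phi[2])
                   for v in above[w])
    if kind == "not":   # ~P: P forced at no accessible world
        return all(not forces(v, phi[1]) for v in above[w])

A, B = ("atom", "A"), ("atom", "B")
print(forces("w0", ("imp", ("not", B), A)))  # True
print(forces("w0", ("or", A, B)))            # False
```

At the root, ¬B fails at every world (B becomes true later), so ¬B → A holds vacuously; yet neither disjunct of A ∨ B is forced at w0, so the disjunction fails.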
There are two overarching aims of the five collated papers that make up my thesis. The first is to demonstrate that making sense of an ineffable Islamic God in virtue of classical logic and various truth theories (under the purview of analytic philosophy) motivates a theological contradiction. The second is to offer a solution to this problem. I spend a substantial part of my thesis establishing the first of these aims. The reason for this is twofold. Firstly, it is to illustrate the incompatibility between an ineffable God of Islam and various modes of logical and metaphysical inquiry that fall under the purview of analytic philosophy. Although it becomes increasingly evident that we cannot philosophically make sense of an absolutely ineffable God, my inquiry still bears relevance. It offers a comprehensive insight into the logical and metaphysical perspectives that are responsible for motivating the theological contradiction in question. Secondly, fleshing out the various logical and metaphysical perspectives helps lay the theoretical groundwork for the solution.
Almost periodic functions form a natural example of a non-separable normed space. As such, it has been a challenge for constructive mathematicians to find a natural treatment of them. Here we present a simple proof of Bohr’s fundamental theorem for almost periodic functions which we then generalize to almost periodic functions on general topological groups.
Like Lewis, many philosophers hold reductionist accounts of chance (on which claims about chance are to be understood as claims that certain patterns of events are instantiated) and maintain that rationality requires that credence should defer to chance (in the sense that under certain circumstances one’s credence in an event must coincide with the chance of that event). It is a shortcoming of an account of chance if it implies that this norm of rationality is unsatisfiable by computable agents. This shortcoming is more common than one might have hoped.
Isaac Wilhelm (2020) compares two answers to the question of what grounds identity facts: (i) the fact that the mug is self-identical (for example) is grounded in the fact that the mug exists; (ii) the fact that the mug is self-identical is grounded in the mug itself. Wilhelm argues that (i) results in a troubling disunity in our account of what grounds identity facts, and concludes that (ii) is the better answer (§§2–3). He takes this conclusion to have a broader significance. For (ii) is not available to fact-grounders, who hold that grounding obtains only between facts; (ii) is only available to entity-grounders, who think that grounding can also obtain between entities of various other kinds: objects, properties, events, and so on. Wilhelm concludes that this gives us a reason to be entity-grounders. Here I rebut Wilhelm’s argument. I show that (i) is not the only answer that commits us to a disunity; (ii) brings with it a disunity of the very same kind. The advocate of (ii) has a natural way of restoring unity – but this maneuver is equally available to the advocate of (i), leaving neither theorist with any particular advantage (§4).
A mixture preorder is a preorder on a mixture space (such as a convex set) that is compatible with the mixing operation. In decision theoretic terms, it satisfies the central expected utility axiom of strong independence. We consider when a mixture preorder has a multi-representation that consists of real-valued, mixture-preserving functions. If it does, it must satisfy the mixture continuity axiom of Herstein and Milnor (1953). Mixture continuity is sufficient for a mixture-preserving multi-representation when the dimension of the mixture space is countable, but not when it is uncountable. Our strongest positive result is that mixture continuity is sufficient in conjunction with a novel axiom we call countable domination, which constrains the order complexity of the mixture preorder in terms of its Archimedean structure. We also consider what happens when the mixture space is given its natural weak topology. Continuity (having closed upper and lower sets) and closedness (having a closed graph) are stronger than mixture continuity. We show that continuity is necessary but not sufficient for a mixture preorder to have a mixture-preserving multi-representation. Closedness is also necessary; we leave it as an open question whether it is sufficient. We end with results concerning the existence of mixture-preserving multi-representations that consist entirely of strictly increasing functions, and a
In The Contradictory Christ, Jc Beall argues that paraconsistent logic provides a way to show how the central claims of Christology can all be true, despite their paradoxical appearances. For Beall, claims such as “Christ is peccable” and “Christ is impeccable” are both true, with no change of subject matter or ambiguity of meaning of any term involved in each claim. Since to say that Christ is impeccable is to say that Christ is not peccable, these two claims are contradictory, and so, for Beall the conjunction “Christ is peccable and Christ is not peccable” is a true contradiction. This is a radical and original view of the incarnation, and a revisionary view of what is permissible for theological reasoning.
Chs 1 to 7 of MLC, as we’ve seen, give us a high-level and often rather challenging introduction to core first-order logic with a quite strongly proof-theoretic flavour. Now moving on, the next three chapters are on arithmetics — Ch. …
If you dissect a square into n similar rectangles, what proportions can these rectangles have? Folks on Mathstodon figured this out for n ≤ 7, and I blogged about it here recently. But I was left feeling that some deeper structure governed this problem. …
A major challenge in the philosophy of mathematics is to explain how mathematical language can pick out unique structures and acquire determinate content. In recent work, Button and Walsh have introduced a view they call ‘internalism’, according to which mathematical content is explained by internal categoricity results formulated and proven in second-order logic. In this paper, we critically examine the internalist response to the challenge and discuss the philosophical significance of internal categoricity results. Surprisingly, as we argue, while internalism arguably explains how we pick out unique mathematical structures, this does not suffice to account for the determinacy of mathematical discourse.
An increasing amount of contemporary philosophy of mathematics posits, and theorizes in terms of, special kinds of mathematical modality. The goal of this paper is to bring recent work on higher-order metaphysics to bear on the investigation of these modalities. The main focus of the paper will be views that posit mathematical contingency or indeterminacy about statements that concern the ‘width’ of the set theoretic universe, such as Cantor’s continuum hypothesis. In the higher-order framework I show that contingency about the width of the set-theoretic universe refutes two orthodoxies concerning the structure of modal reality: the view that the broadest necessity has a logic of S5, and the ‘Leibniz biconditionals’ stating that what is possible, in the broadest sense of possible, is what is true in some possible world. Nonetheless, I argue that the underlying picture of modal set-theory is coherent and has natural models.
Russell famously announced “All mathematics deals exclusively with concepts definable in terms of a very small number of logical concepts and … all its propositions are deducible from a very small number of fundamental logical principles.” That wildly ambitious version of logicism is evidently sabotaged by Gödel’s Theorem, which shows that, fix on a small number of logical principles and definitions as you will, you won’t even be able to derive from them all arithmetical truths, let alone the rest of mathematics. …
Classical logic is the appropriate formal language for describing mathematical structures containing a single universe or domain of discourse. By contrast, many-sorted logic (MSL) allows quantification over a variety of domains (called sorts). For this reason, it is a suitable vehicle for dealing with statements concerning different types of objects, which are ubiquitous in mathematics, philosophy, computer science, and formal semantics. Each sort groups a unique category of objects (for example, points and straight lines are different types of objects in a 2-sorted structure). Despite the addition of this expressive resource, many-sorted logic “stays inside” first-order logic, so the main metatheorems (completeness, interpolation, and so on) can be proved.
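How MSL “stays inside” first-order logic can be made concrete: quantifiers over a sort are relativized to a unary predicate picking out that sort’s objects within a single combined domain. The following Python sketch (the 2-sorted structure and the sentence are invented for illustration) evaluates the same sentence both ways:

```python
# A toy 2-sorted structure: sorts Point and Line with an incidence relation.
points = {"p1", "p2", "p3"}
lines = {"l1", "l2"}
on = {("p1", "l1"), ("p2", "l1"), ("p3", "l2")}

# Many-sorted sentence: for every line y there is a point x with on(x, y).
# Each quantifier ranges only over its own sort's domain.
msl_true = all(any((x, y) in on for x in points) for y in lines)

# One-sorted relativization: a single domain plus unary sort predicates, with
# the sentence read as  forall y (Line(y) -> exists x (Point(x) & on(x, y))).
domain = points | lines
is_point = lambda a: a in points
is_line = lambda a: a in lines
fol_true = all(
    (not is_line(y)) or any(is_point(x) and (x, y) in on for x in domain)
    for y in domain
)
print(msl_true, fol_true)  # True True: the two evaluations agree
```

The relativized one-sorted sentence and the many-sorted original are satisfied by exactly the same structures, which is the engine behind the transfer of the classical metatheorems.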
SAT-based model checking is currently one of the most successful approaches to checking very large systems. In its early days, SAT-based (bounded) model checking was mainly used for bug hunting. The introduction of interpolation and IC3/PDR enabled efficient complete algorithms that can provide full verification as well. In this paper, we survey several approaches to enhancing SAT-based model checking. They are all based on iteratively computing an over-approximation of the set of reachable system states. They use different mechanisms to achieve scalability and faster convergence.
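The iterative reachability computation that these approaches approximate can be sketched in miniature. The toy system below is invented here, and real SAT-based tools represent state sets symbolically as clauses rather than as explicit sets; the sketch just shows the fixpoint iteration of images and a safety check:

```python
# Minimal explicit-state forward reachability by fixpoint iteration.
init = {0}

def post(s):
    """Successor function encoding the transition relation; state 2 is a sink."""
    return {2} if s == 2 else {(s + 1) % 4}

reach = set(init)
frontier = set(init)
while frontier:                  # iterate images until nothing new is found
    new = {t for s in frontier for t in post(s)} - reach
    reach |= new
    frontier = new

bad = {3}                        # safety property: state 3 is never reached
print(reach, reach.isdisjoint(bad))  # {0, 1, 2} True
```

Interpolation and IC3/PDR differ precisely in how they build (and refine) a symbolic over-approximation of `reach` that still excludes `bad`, without enumerating states.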
Wilhelm (2021) has recently defended a criterion for comparing the structure of mathematical objects, which he calls Subgroup. He argues that Subgroup is better than SYM, another widely adopted criterion. We argue that this is mistaken; Subgroup is strictly worse than SYM. We then formulate a new criterion that improves on both SYM and Subgroup, answering Wilhelm’s criticisms of SYM along the way. We conclude by arguing that no criterion that looks only to the automorphisms of mathematical objects to compare their structure can be fully satisfactory.
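Automorphism-based comparisons of structure can be made concrete on small examples. The Python sketch below (a toy illustration only, not Wilhelm’s or the authors’ formal criteria) brute-forces the automorphisms of two graphs and checks a subgroup-style inclusion:

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """Brute-force graph automorphisms: vertex permutations preserving edges."""
    vs = sorted(vertices)
    es = {frozenset(e) for e in edges}
    autos = set()
    for perm in permutations(vs):
        f = dict(zip(vs, perm))
        if {frozenset({f[a], f[b]}) for a, b in edges} == es:
            autos.add(perm)
    return autos

path = automorphisms({0, 1, 2}, [(0, 1), (1, 2)])            # path 0-1-2
triangle = automorphisms({0, 1, 2}, [(0, 1), (1, 2), (0, 2)])

print(len(path), len(triangle), path <= triangle)
# 2 6 True: the path's automorphisms form a subgroup of the triangle's, so an
# automorphism-based criterion counts the path as carrying more structure.
```

The paper’s closing claim is that any criterion working only with such automorphism data, however the comparison is phrased, will misclassify some pairs of objects.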
Jeremy Avigad’s book is turning out to be not quite what I was expecting. The pace and (often) compression can make for rather surprisingly tough going. And there are few pauses for broader reflection on what’s going on. …
Inspired by Cantor’s Theorem (CT), orthodoxy takes infinities to come in different sizes. The orthodox view has had enormous influence in mathematics, philosophy, and science. We will defend the contrary view—Countabilism—according to which, necessarily, every infinite collection (set or plurality) is countable.
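For reference, the diagonal construction behind CT can be sketched computationally (a finite, purely illustrative miniature added here):

```python
# Cantor's diagonal construction in miniature: given any listing of infinite
# binary sequences, build a sequence that differs from the n-th listed
# sequence at position n, and hence appears nowhere in the listing.
def diagonal(listing):
    """listing: function n -> (function k -> bit). Returns the anti-diagonal."""
    return lambda n: 1 - listing(n)(n)

listing = lambda n: (lambda k: (n >> k) & 1)   # an arbitrary sample listing
d = diagonal(listing)

# The anti-diagonal differs from every listed sequence at its own index:
print(all(d(n) != listing(n)(n) for n in range(100)))  # True
```

The construction shows that no listing is exhaustive; the dispute between orthodoxy and Countabilism concerns what this fact entails about sizes of infinity.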
Recent work on the development of a dialogical approach to the logic of fiction stresses the notion of existence as choice. Moreover, this approach to existence has been combined with the notion of ontological dependence as deployed in A. Thomasson's artifactual theory of fiction. In order to implement such a combination within the dialogical frame, several predicates of ontological dependence have been defined. However, the definition of such predicates seems to lean on a model-theoretic semantics for modal logic after all. The main aim of the present paper is to set out a dialogical frame for the study of fictions in the context of the dialogical approach to constructive type theory (CTT) recently developed by S. Rahman and N. Clerbout, in which a fully interpreted language is unfolded. We will herewith develop the idea that in such a setting fictional entities are understood as hypothetical objects, that is, objects (functions) whose existence is dependent upon one or more hypotheses that restrict the scope of choices available. We will finish the paper by suggesting that this provides both a natural and genuinely dialogical way to understand R. Frigg's take on scientific models as fictions and a new perspective on Thomasson’s notion of generic ontological dependence.
A brief examination of the most recent literature in logic shows that a host of research in this area studies the interface between games, logic and epistemology. These studies provide the basis for ongoing enquiries in the history and philosophy of logic, ranging from the Indian, Greek, and Arabic traditions and the Obligationes of the Middle Ages to the most contemporary developments in the fields of theoretical computer science, computational linguistics, artificial intelligence, social sciences and legal reasoning. In fact, a dynamic turn, as Johan van Benthem puts it, is taking place where the epistemic aspects of inference are linked with game theoretical approaches to meaning.
Only recently has it been possible to construct a self-adjoint Hamiltonian that involves the creation of Dirac particles at a point source in 3d space. Its definition makes use of an interior-boundary condition. Here, we develop for this Hamiltonian a corresponding theory of the Bohmian configuration. That is, we construct a Markov jump process (Q_t)_{t∈ℝ} in the configuration space of a variable number of particles that is |ψ_t|²-distributed at every time t and follows Bohmian trajectories between the jumps. The jumps correspond to particle creation or annihilation events and occur either to or from a configuration with a particle located at the source. The process is the natural analog of Bell’s jump process, and a central piece in its construction is the determination of the rate of particle creation. The construction requires an analysis of the asymptotic behavior of the Bohmian trajectories near the source. We find that the particle reaches the source with radial speed 0, but orbits around the source infinitely many times in finite time before absorption (or after emission).