George Boole (1815–1864) was an English mathematician and a
founder of the algebraic tradition in logic. He worked as a
schoolmaster in England and, from 1849 until his death, as professor of
mathematics at Queen’s College, Cork, Ireland. He revolutionized
logic by applying methods from the then-emerging field of symbolic
algebra to logic. Where traditional (Aristotelian) logic relied on
cataloging the valid syllogisms of various simple forms, Boole’s
method provided general algorithms in an algebraic language which
applied to an infinite variety of arguments of arbitrary
complexity. These results appeared in two major works:
The Mathematical Analysis of Logic (1847) and
The Laws of Thought (1854).
Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
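The retraction-minimizing property of the Ockham strategy can be illustrated in the simplest "counting effects" setting. The sketch below is my own toy construction, not the paper's proof: the true hypothesis is the total number of empirical effects that will ever appear, and the Ockham learner always conjectures the count seen so far, retracting exactly once per newly observed effect.

```python
# Toy illustration (my construction): the Ockham learner in the
# effect-counting problem conjectures the simplest hypothesis
# compatible with the data -- the number of effects seen so far --
# and reverses opinion only when a new effect shows up.

def ockham_retractions(stream):
    """Count the opinion reversals of the Ockham learner on a data stream.

    Each datum is 1 if a new effect appears at that stage, else 0.
    """
    conjecture, seen, retractions = 0, 0, 0
    for datum in stream:
        seen += datum
        if seen != conjecture:      # data refute the current conjecture
            conjecture = seen       # move to the new simplest hypothesis
            retractions += 1
    return retractions

# If exactly k effects ever appear, the Ockham learner retracts exactly
# k times -- the worst-case minimum over the complexity class k.
assert ockham_retractions([0, 1, 0, 1, 1, 0]) == 3
```

Any learner that ever leaps ahead of the data (conjecturing more effects than it has seen) can be forced into extra reversals, which is the intuitive content of the worst-case-bound argument.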
Thermodynamics makes definite predictions about the thermal behavior of macroscopic systems in and out of equilibrium. Statistical mechanics aims to derive this behavior from the dynamics and statistics of the atoms and molecules making up these systems. A key element in this derivation is the large number of microscopic degrees of freedom of macroscopic systems. Therefore, the extension of thermodynamic concepts, such as entropy, to small (nano) systems raises many questions. Here we shall reexamine various definitions of entropy for nonequilibrium systems, large and small. These include thermodynamic (hydrodynamic), Boltzmann, and Gibbs-Shannon entropies. We shall argue that, despite its common use, the last is not an appropriate physical entropy for such systems, either isolated or in contact with thermal reservoirs: physical entropies should depend on the microstate of the system, not on a subjective probability distribution. To square this point of view with the experimental results of Bechhoefer, we shall argue that the Gibbs-Shannon entropy of a nanoparticle in a thermal fluid should be interpreted as the Boltzmann entropy of a dilute gas of Brownian particles in the fluid.
An approach to frame semantics is built on a conception of frames as finite automata, observed through the strings they accept. An institution (in the sense of Goguen and Burstall) is formed where these strings can be refined or coarsened to picture processes at various bounded granularities, with transitions given by Brzozowski derivatives.
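The key technical device here is the Brzozowski derivative of a language with respect to a symbol. A minimal sketch of the idea, on a purely illustrative DFA representation (the class and example below are my own, not the paper's formalism): taking the derivative of a DFA's language amounts to advancing the start state.

```python
# Minimal sketch (hypothetical representation): a frame as a DFA,
# observed through the strings it accepts. The Brzozowski derivative
# of its language w.r.t. a symbol is obtained by advancing the start
# state along that symbol.

class DFA:
    def __init__(self, start, accept, delta):
        self.start, self.accept, self.delta = start, accept, delta

    def accepts(self, s):
        q = self.start
        for a in s:
            q = self.delta.get((q, a))
            if q is None:           # no transition: reject
                return False
        return q in self.accept

    def derivative(self, a):
        """Brzozowski derivative: L' = { s : a . s is in L }."""
        q = self.delta.get((self.start, a))
        if q is None:
            return DFA(self.start, set(), {})   # empty language
        return DFA(q, self.accept, self.delta)

# Example: language over {'a', 'b'} of strings ending in 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
m = DFA(0, {1}, delta)
assert m.accepts('ab') and not m.accepts('ba')
d = m.derivative('a')   # strings s such that 'a' + s ends in 'b'
assert d.accepts('b') and not d.accepts('a')
```

Iterating derivatives walks the automaton through a string symbol by symbol, which is what lets strings be refined or coarsened to picture processes at bounded granularities.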
Beall and Murzi (J Philos 110(3):143–165, 2013) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules (over sufficiently expressive base theories). As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (Notre Dame J Form Log 58(1):1–19, 2017) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi (J Philos Log. doi:10.1007/s10992-017-9438-x, 2017), is a generalisation of Kripke’s notion of grounded truth (J Philos 72:690–716, 1975), and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic (nor, more generally, of a revisionary approach to semantic paradox), we take the notion of naïve
It’s no secret that there are many competing views on the semantics of conditionals. One of the tools of the trade is that of any experimental scientist: put the object of study in various environments and see what happens.
The logical systems within which Frege, Schröder, Russell, Zermelo and other early mathematical logicians worked were all higher-order. It was not until the 1910s that first-order logic was even distinguished as a subsystem of higher-order logic. As late as the 1920s, higher-order quantification was still quite generally allowed: indeed, among non-intuitionists, no major logician except Thoralf Skolem seems to have restricted himself to first-order logic. Proofs were sometimes allowed to be infinite, and infinitely long expressions were allowed in the languages that were used.
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely,
Different set theories extending ZF are never bi-interpretable! …
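For readers who want the precise claim, the standard definition can be put roughly as follows (my paraphrase, not a quotation from the post):

```latex
Theories $T$ and $U$ are \emph{mutually interpretable} when each is
interpretable in the other; they are \emph{bi-interpretable} when, in
addition, the interpretations invert one another up to definable
isomorphism: there are interpretations $I$ of $T$ in $U$ and $J$ of
$U$ in $T$ such that $T$ proves that $J \circ I$ is definably
isomorphic to the identity interpretation on $T$, and $U$ proves that
$I \circ J$ is definably isomorphic to the identity interpretation
on $U$.
```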
Failures of supervenience reveal gaps. There is a mental-physical gap if the mental facts fail to supervene on the physical facts. There is a nomic-categorical gap if the nomic facts fail to supervene on the categorical facts. In the same way, there may be macro-micro gaps. Some terminology: let an atom be any object in spacetime without proper parts; let a composite be any object in spacetime with proper parts; let the micro facts be the facts about the atoms, their identities, their intrinsic properties, and their relations to one another; and let the macro facts be the facts about the composites, their identities, their properties, their relations to one another, and their relations to the atoms. There is a macro-micro gap just if the macro facts fail to supervene on the micro facts.
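The supervenience claims at issue admit a uniform schematic statement. The following is my formalization in possible-worlds notation; the passage itself does not commit to this notation:

```latex
% Macro supervenes on micro just if worlds alike in all micro facts
% are alike in all macro facts:
\forall w\,\forall w'\,\bigl(\,\mathrm{Micro}(w)=\mathrm{Micro}(w')
  \;\rightarrow\; \mathrm{Macro}(w)=\mathrm{Macro}(w')\,\bigr)
% A macro-micro gap is the failure of this conditional: two worlds
% agreeing on all the micro facts while differing on some macro fact.
```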
The sorites paradox originated in an ancient puzzle that appears to be
generated by vague terms, viz., terms with unclear
(“blurred” or “fuzzy”) boundaries of
application. ‘Bald’, ‘heap’,
‘tall’, ‘old’, and ‘blue’ are
prime examples of vague terms: no clear line divides people who are
bald from people who are not, or blue objects from green (hence not
blue), or old people from middle-aged (hence not old). Because the
predicate ‘heap’ has unclear boundaries, it seems that no
single grain of wheat can make the difference between a number of
grains that does, and a number that does not, make a heap.
I show that intuitive and logical considerations do not justify introducing Leibniz’s Law of the Indiscernibility of Identicals in more than a limited form, as applying to atomic formulas. Once this is accepted, it follows that Leibniz’s Law generalises to all formulas of the first-order Predicate Calculus but not to modal formulas. Among other things, identity turns out to be logically contingent.
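Schematically, the restricted law defended here can be rendered as follows (my rendering of the claim, not the paper's notation):

```latex
% Restricted Leibniz's Law: substitution licensed only for atomic phi.
s = t \;\rightarrow\; \bigl(\varphi(s) \rightarrow \varphi(t)\bigr),
  \qquad \varphi \text{ atomic}.
% The claim is that this schema provably generalises to every formula
% of the first-order Predicate Calculus, but not to modal formulas:
% from s = t and \Box(s = s) one may not infer \Box(s = t),
% which is how identity comes out logically contingent.
```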
Some recent work has challenged two principles thought to govern the logic of the indicative conditional: modus ponens (Kolodny & MacFarlane 2010) and modus tollens (Yalcin 2012). There is a fairly broad consensus in the literature that Kolodny and MacFarlane’s challenge can be avoided if the notion of logical consequence is understood aright (Willer 2012; Yalcin 2012; Bledin 2014). The viability of Yalcin’s counterexample to modus tollens has meanwhile been challenged on the grounds that it fails to take proper account of context-sensitivity (Stojnić forthcoming). This paper describes a new counterexample to modus ponens and shows that strategies developed for handling extant challenges to modus ponens and modus tollens fail for it. It diagnoses the apparent source of the counterexample: there are bona fide instances of modus ponens that fail to represent deductively reasonable modes of reasoning.
Assuming that the target of theory-oriented empirical science in general, and of nomic truth approximation in particular, is to characterize the boundary or demarcation between nomic possibilities and nomic impossibilities, I presented, in my article entitled “Models, postulates, and generalized nomic truth approximation” (Kuipers, 2016), the ‘basic’ version of generalized nomic truth approximation, starting from ‘two-sided’ theories. Its main claim is that nomic truth approximation can perfectly well be achieved by combining two prima facie opposing views of theories: (1) the traditional (Popperian) view: theories are (models of) postulates that exclude certain possibilities from being realizable, enabling explanation and prediction; and (2) the model view: theories are sets of models that claim to (approximately) represent certain realizable possibilities. Nomic truth approximation, i.e. increasing truth-content and decreasing falsity-content, thus becomes a matter of revising theories by revising their models and/or their postulates in the face of increasing evidence.
Inquiry into the meaning of logical terms in natural language (‘and’, ‘or’, ‘not’, ‘if’) has generally proceeded along two dimensions. On the one hand, semantic theories aim to predict native speaker intuitions about the natural language sentences involving those logical terms. On the other hand, logical theories explore the formal properties of the translations of those terms into formal languages. Sometimes, these two lines of inquiry appear to be in tension: for instance, our best logical investigation into conditional connectives may show that there is no conditional operator that has all the properties native speaker intuitions suggest ‘if’ has.
Good’s Theorem is the apparent platitude that it is always rational to ‘look before you leap’: to gather (reliable) information before making a decision when doing so is free. We argue that Good’s Theorem is not platitudinous and may be false. And we argue that the correct advice is rather to ‘make your act depend on the answer to a question’. Looking before you leap is rational when, but only when, it is a way to do this.
Homotopy type theory and its model theory provide a novel formal semantic framework for representing scientific theories. This framework supports a constructive view of theories, according to which a theory is essentially characterised by its methods. The constructive view of theories was defended earlier by Ernest Nagel and a number of other philosophers, but the logical means then available did not allow them to build formal representational frameworks that implement this view.
Kenny Courser and I have been working hard on this paper for months:
• John Baez and Kenny Courser, Coarse-graining open Markov processes. It may be almost done. So, it would be great if people here could take a look and comment on it! …
I just read something cool:
• Joel David Hamkins, Nonstandard models of arithmetic arise in the complex numbers, 3 March 2018. Let me try to explain it in a simplified way. I think all cool math should be known more widely than it is. …
Take a mathematician of Frege’s generation, accustomed to writing the likes of
(2) If , then or ,
— and fancier things, of course! Whatever unclear thoughts about ‘variables’ people may or may not have had once upon a time, they had surely been dispelled well before the 1870s, if not by Bolzano’s 1817 Rein analytischer Beweis (though perhaps that was not widely enough read? …
According to spacetime state realism (SSR), the fundamental ontology of a quantum mechanical world consists of a state-valued field evolving in 4-dimensional spacetime. One chief advantage it claims over rival wavefunction realist views is its natural compatibility with relativistic quantum field theory (QFT). I argue that the original density operator formulation of SSR cannot be extended to QFTs where the local observables form type III von Neumann algebras. Instead, I propose a new formulation of SSR in terms of a presheaf of local state spaces dual to the net of local observables studied by algebraic QFT.
When theorizing about the a priori, philosophers typically deploy a sentential operator: ‘it is a priori that’. This operator can be combined with metaphysical modal operators, in particular with ‘it is necessary that’ and ‘actually’ (in the standard, rigidifying sense), in a single argument or a single sentence. Arguments and theses that involve such combinations have played a starring role in post-Kripkean metaphysics and epistemology. The phenomena of the contingent a priori and the necessary a posteriori have been organizing themes in post-Kripkean discussions, and these phenomena cannot easily be discussed without using sentences and arguments that involve the interaction of the apriority, necessity, and actuality operators. However, there has been surprisingly little discussion of the logic of the interaction of these operators. In this paper we attempt to make some progress on that topic.
Atomic sentences – or the propositions they express – can be true, as can logically complex sentences composed out of atomic sentences. A comprehensive metaphysics of truth aims to tell us, in an informative way, what the truth of any sentence whatsoever consists in, be it atomic or complex. Monists about truth are committed to truth always consisting in the same thing, no matter which sentence you consider. Pluralists about truth think that the nature of truth is different for different sets of sentences. The received view seems to be that logically complex sentences – and indeed logic itself – somehow impose a monistic constraint on any comprehensive metaphysics of truth. In what follows, I argue that the received view is mistaken.
This paper gives a definition of self-reference on the basis of the dependence relation given by Leitgeb (2005) and the dependence digraph by Beringer & Schindler (2015). Unlike the usual discussion of the self-reference of paradoxes, which centres on Yablo’s paradox and its variants, I focus on paradoxes of finitary character, which are again given by use of Leitgeb’s dependence relation. They are called ‘locally finite paradoxes’: every sentence in such a paradox depends on only finitely many sentences. I prove that all locally finite paradoxes are self-referential in the sense that there is a directed cycle in their dependence digraphs. This paper also studies the ‘circularity dependence’ of paradoxes, which was introduced by Hsiung (2014). I prove that the locally finite paradoxes have circularity dependence in the sense that they are paradoxical only in a digraph containing a proper cycle. The proofs of the two results are based directly on König’s infinity lemma. In contrast, this paper also shows that Yablo’s paradox and its ∀∃-unwinding variant are non-self-referential, and that neither McGee’s paradox nor the ω-cycle liar has circularity dependence.
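The self-reference criterion used here (a directed cycle in the dependence digraph) is mechanically checkable on finite fragments. The encoding below is my own illustration, not the paper's formal apparatus: a locally finite dependence digraph as a dict from each sentence to the finite set of sentences it depends on.

```python
# Sketch (hypothetical encoding): a dependence digraph as a dict
# mapping each sentence to the finite set of sentences it depends on.
# Self-reference, in the paper's sense, is a directed cycle.

def has_cycle(graph):
    """Detect a directed cycle via iterative depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {v: WHITE for v in graph}
    for root in graph:
        if color[root] != WHITE:
            continue
        color[root] = GRAY
        stack = [(root, iter(graph[root]))]
        while stack:
            v, edges = stack[-1]
            advanced = False
            for w in edges:
                if color.get(w, WHITE) == GRAY:
                    return True           # back edge: directed cycle
                if color.get(w, WHITE) == WHITE:
                    color[w] = GRAY
                    stack.append((w, iter(graph.get(w, ()))))
                    advanced = True
                    break
            if not advanced:
                color[v] = BLACK
                stack.pop()
    return False

# The Liar sentence depends on itself: a one-node cycle.
assert has_cycle({'L': {'L'}})
# A finite truncation of Yablo's list (sentence n depends on all
# later sentences) is cycle-free, mirroring its non-self-reference.
assert not has_cycle({n: set(range(n + 1, 5)) for n in range(5)})
```

The full Yablo paradox is of course infinite, so its cycle-freeness is not established by any finite check; the point of the paper's König-lemma argument is precisely that local finiteness forces a cycle, a hypothesis Yablo's construction violates.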
Ruetsche () claims that an abstract C*-algebra of observables will not contain all of the physically significant observables for a quantum system with infinitely many degrees of freedom. This would signal that in addition to the abstract algebra, one must use Hilbert space representations for some purposes. I argue to the contrary that there is a way to recover all of the physically significant observables by purely algebraic methods.
For simplicity, most of the literature introduces the concept of definitional equivalence only for languages with disjoint signatures. In a recent paper, Barrett and Halvorson introduce a straightforward generalization to languages with non-disjoint signatures and show that their generalization is not equivalent to intertranslatability in general. In this paper, we show that their generalization is not transitive and hence is not an equivalence relation. We then introduce the Andréka–Németi generalization as one of the many equivalent formulations for languages with disjoint signatures. We show that the Andréka–Németi generalization is the smallest equivalence relation containing the Barrett–Halvorson generalization and that it is equivalent to intertranslatability even for languages with non-disjoint signatures. Finally, we investigate which definitions of definitional equivalence remain equivalent when we generalize them to theories with non-disjoint signatures.
Ontological arguments like those of Gödel (1995) and Pruss (2009; 2012) rely on premises that initially seem plausible, but on closer scrutiny are not. The premises have modal import that is required for the arguments but is not immediately grasped on inspection, and which ultimately undermines the simpler logical intuitions that make the premises seem plausible. Furthermore, the notion of necessity that they involve goes unspecified, and yet must go beyond standard varieties of logical necessity. This leaves us little reason to believe the premises, while their implausible existential import gives us good reason not to.
It is a striking fact from reverse mathematics that almost all theorems of countable and countably representable mathematics are equivalent to one of just five subsystems of second-order arithmetic. The standard view is that the significance of these equivalences lies in the set existence principles that are necessary and sufficient to prove those theorems. In this article I analyse the role of set existence principles in reverse mathematics, and argue that they are best understood as closure conditions on the powerset of the natural numbers.
This article follows on from the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Strong Types are also extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes.
Weak supplementation says that if x is a proper part of y, then y has a proper part that doesn’t overlap x. Suppose that we are impressed by standard counterexamples to weak supplementation like the following. …