A traditional question in the philosophy of mathematics is whether abstract mathematical objects really exist, just as planets and atoms and giraffes do, independently of the human mind. Views on this question are often grouped into “platonist”, “nominalist”, and “constructivist” camps, depending on whether they state that mathematical objects have their own independent existence, or don’t truly exist at all, or exist but only as constructions of the human mind. This paper will not directly address this question. We expect that, to the extent that each of these views is defensible, none of them will settle the questions that we are interested in (though there may be some affinities between some possible answers to these questions). Rather, we will focus on two other questions of mathematical realism: the question of which mathematical claims can be taken as meaningful and true within mathematics, and the question of which mathematical ideas can be taken as applying to the physical world as part of a scientific theory.
Modern logic emerged in the period from 1879 to the Second World War. In the post-war period what we know as classical first-order logic largely replaced traditional syllogistic logic in introductory textbooks, but the main development has been simply enormous growth: the publications of the Association for Symbolic Logic, the main professional organization for logicians, became ever thicker. While 1950 saw volume 15 of the Journal of Symbolic Logic, about 300 pages of articles and reviews and a 6-page member list, 2000 saw volume 65 of that journal, over 1900 pages of articles, plus volume 6 of the Bulletin of Symbolic Logic, 570 pages of reviews and a 60-page member list. Of so large a field, the present survey will have to be ruthlessly selective, with no coverage of the history of informal or inductive logic, or of the philosophy or historiography of logic, and only slight coverage of applications. Remaining are five branches of pure, formal, deductive logic, four being the branches of mathematical logic recognized in Barwise 1977, the first of many handbooks put out by academic publishers: set theory, model theory, recursion theory, proof theory. The fifth is philosophical logic, in one sense of that label, otherwise called non-classical logic, including extensions of and alternatives to textbook logic. For each branch, a brief review of pre-war background will be followed by a few highlights of subsequent history. The references will mix primary and secondary sources, landmark papers and survey articles.
I return to the material in “Paris-Harrington in an NF context”. Various people had commented that the concept of a relatively large set of natural numbers is unstratified, and in that essay I mused about whether or not the extra strength of PH over finite Ramsey was to do with this failure of stratification. In the present—self-contained—note I shall show that—somewhat to my annoyance—it is not: Paris-Harrington has a stratified formulation.
In computer science, formal methods are used to specify, develop, and verify hardware and software systems. Such methods hold great promise for mathematical discovery and verification of mathematics as well.
The relativized BQP vs. PH problem (1993-2018)
Update (June 4): OK, I think the blog formatting issues are fixed now—thanks so much to everyone who offered their help! True story. A couple nights ago, I was sitting in the Knesset, Israel’s parliament building, watching Gilles Brassard and Charles Bennett receive the Wolf Prize in Physics for their foundational contributions to quantum computing and information. …
Constructive mathematics is distinguished from its traditional
counterpart, classical mathematics, by the strict interpretation of the
phrase “there exists” as “we can construct”. In
order to work constructively, we need to re-interpret not only the
existential quantifier but all the logical connectives and quantifiers
as instructions on how to construct a proof of the statement involving
these logical expressions. In this article we introduce modern constructive mathematics based on
the BHK-interpretation of the logical connectives and quantifiers. We
discuss four major varieties of constructive mathematics, with
particular emphasis on the two varieties associated with Errett Bishop
and Per Martin-Löf, which can be regarded as minimal constructive mathematics.
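The BHK reading described above can be made concrete in a proof assistant. The following is a minimal, illustrative Lean 4 sketch (my own illustration, not from the article): a proof of a conjunction is a pair of proofs, a proof of an implication is a method transforming proofs, and a proof of an existential claim actually constructs a witness.

```lean
-- BHK: a proof of A ∧ B is a pair of proofs, so a proof of A ∧ B → B ∧ A
-- is a method that swaps the components of the pair.
example (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨ha, hb⟩ => ⟨hb, ha⟩

-- “There exists” read as “we can construct”: the proof exhibits the witness 2.
example : ∃ n : Nat, n + n = 4 :=
  ⟨2, rfl⟩
```

Here the constructive content is explicit: the existential proof contains the number 2 itself, not merely an assurance that some witness exists.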
Parthood is used widely in ontologies across subject domains, specified in a multitude of mereological theories, and in even more theories when combined with topology. To complicate the landscape, decidable languages put restrictions on the language features, so that only fragments of the mereo(topo)logical theories can be represented, even though those full features may be needed to check correctness during modelling. We address these issues by specifying a structured network of theories formulated in multiple logics that are glued together by the various linking constructs of the Distributed Ontology Language, DOL. For the KGEMT mereotopology and its five sub-theories, together with the DL-based OWL species and first- and second-order logic, this network in DOL orchestrates 28 ontologies.
What i and -i could not be
Posted on Monday, 28 May 2018
According to realist structuralism, mathematics is the study of
structures. Structures are understood to be special kinds of complex
properties that can be instantiated by particulars together with
relations between these particulars. …
Logical monists and pluralists disagree about how many correct logics there are; the monists say there is just one, the pluralists that there are more. Could it turn out that both are wrong, and that there is no logic at all? Such a view might with justice be called logical nihilism, and here I’ll assume a particular gloss on what that means: nihilism is the view that there are no laws of logic, so that all candidates—e.g. the law of excluded middle, modus ponens, disjunctive syllogism et al.—fail. Nihilism might sound absurd, but the view has come up in recent discussions of logical pluralism. Some pluralists have claimed that different logics are correct for different kinds of case, e.g. classical logic for consistent cases and paraconsistent logics for dialethic ones. Monists have responded by appealing to a principle of generality for logic: a law of logic must hold for absolutely all cases, so that only those principles that feature in all of the pluralist’s systems count as genuine laws of logic. The pluralist replies that the monist’s insistence on generality collapses monism into nihilism, because, they maintain, every logical law fails in some cases.
Berkeley’s ‘master argument’ for idealism has been the subject of extensive criticism. Two of his strongest critics, A.N. Prior and J.L. Mackie, argue that due to various logical confusions on the part of Berkeley, the master argument fails to establish his idealist conclusion. Prior (1976) argues that Berkeley’s argument ‘proves too little’ in its conclusion, while Mackie (1964) contends that Berkeley confuses two different kinds of self-refutation in his argument. In this paper, I put forward a defence of the master argument based on intuitionistic logic. I argue that, analysed along these lines, Prior’s and Mackie’s criticisms fail to undermine Berkeley’s argument.
In the societal tradeoffs problem, each agent perceives certain quantitative tradeoffs between pairs of activities, and the goal is to aggregate these tradeoffs across agents. This is a problem in social choice; specifically, it is a type of quantitative judgment aggregation problem. A natural rule for this problem was axiomatized by Conitzer et al. [AAAI 2016]; they also provided several algorithms for computing the outcomes of this rule. In this paper, we present a significantly improved algorithm and evaluate it experimentally. Our algorithm is based on a tight connection to minimum-cost flow that we exhibit. We also show that our algorithm cannot be improved without breakthroughs on min-cost flow.
We show that combining two different hypothetical enhancements to quantum computation— namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. This complements a related result due to Raz. The proof uses locally decodable codes.
PDQP/qpoly = ALL
I’ve put up a new paper. Unusually for me these days, it’s a very short and simple one (8 pages)—I should do more like this! Here’s the abstract:
We show that combining two different hypothetical enhancements to quantum computation—namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. …
A critical survey of some attempts to define ‘computer’, beginning with some informal ones (from reference books, and definitions due to H. Simon, A.L. Samuel, and M. Davis), then critically evaluating those of three philosophers (J.R. Searle, P.J. Hayes, and G. Piccinini), and concluding with an examination of whether the brain and the universe are computers.
The sustained failure of efforts to design an infinite lottery machine using ordinary probabilistic randomizers is traced back to a problem familiar to set theorists: there are no constructive prescriptions for probabilistically non-measurable sets. Yet construction of such sets is required if we are to be able to read the result of an infinite lottery machine that is built from ordinary probabilistic randomizers. All such designs face a dilemma: they can provide an accessible (readable) result with probability zero; or an inaccessible result with probability greater than zero.
Classical logic is characterized by the familiar truth-value semantics, in which an interpretation assigns one of two truth values to each propositional letter in the language (in the propositional case), and, in the predicate case, assigns to each predicate letter a function from a power of the domain to the set of truth values. Truth values of composite sentences are assigned on the basis of the familiar truth functions. This abstract semantics immediately yields an applied semantics, in the sense that the truth value of an interpreted sentence is given by the truth value of that sentence in an interpretation in which the propositional variables are given the truth values of the statements that interpret them. So if p is interpreted as the statement “Paris is in France” and q as “London is in Italy”, then the truth value of “p ∨ q” is |p ∨ q|, where the interpretation | | is given by |p| = T and |q| = F. And since |A ∨ B| is defined to be T just in case |A| = T or |B| = T, we have |p ∨ q| = T.
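The truth-functional semantics just described is easy to mechanize. Here is a small illustrative sketch (the encoding and names are my own, not the text’s): formulas are nested tuples, and an interpretation is a mapping from propositional letters to truth values.

```python
# Truth-value semantics sketch: an interpretation assigns T/F to letters;
# composite formulas get their values via the familiar truth functions.
# Formulas are tuples: ('p',), ('not', A), ('or', A, B), ('and', A, B).

def val(formula, interp):
    """Return the truth value (True/False) of a formula under interp."""
    op = formula[0]
    if op == 'not':
        return not val(formula[1], interp)
    if op == 'or':
        return val(formula[1], interp) or val(formula[2], interp)
    if op == 'and':
        return val(formula[1], interp) and val(formula[2], interp)
    return interp[op]  # a propositional letter such as 'p'

# The example from the text: |p| = T ("Paris is in France"),
# |q| = F ("London is in Italy"); then |p ∨ q| = T.
interp = {'p': True, 'q': False}
print(val(('or', ('p',), ('q',)), interp))  # True
```

Running the example reproduces the computation in the text: with |p| = T and |q| = F, the disjunction comes out true.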
guest post by Matteo Polettini
Suppose you receive an email from someone who claims “here is the project of a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. …
Why should this be of any more than esoteric interest? The principle, which sometimes gets called plural rigidity, is put to work to important and controversial metaphysical ends. Crucially, Williamson deploys it in arguing both for necessitism (the doctrine that ontology is modally invariant) and for a property-based interpretation of second-order quantification (since he thinks the modal behaviour of plurals rules out a Boolos-style plural interpretation) (Williamson 2013; Williamson 2010; Boolos 1998c). Linnebo develops a foundational programme for set theory in a modal plural logic strengthened by the addition of statements equivalent to the principle, and whilst he himself proposes a non-standard interpretation of the modalities, its acceptability with respect to metaphysical modality would provide a fall-back position for someone sympathetic to the approach (Linnebo 2013).
Discussion of new axioms for set theory has often focussed on conceptions of maximality, and how these might relate to the iterative conception of set. This paper provides a critical appraisal of how certain maximality axioms behave on different conceptions of ontology concerning the iterative conception. In particular, we argue that forms of multiversism (the view that any universe of a certain kind can be extended) and actualism (the view that there are universes that cannot be extended in particular ways) face complementary problems. The latter view is unable to use maximality axioms that make use of extensions, while the former has to contend with the existence of extensions violating maximality axioms. An analysis of two kinds of multiversism, a Zermelian form and a Skolemite form, leads to the conclusion that the kind of maximality captured by an axiom differs substantially according to background ontology.
The dialectical and dynamic dimensions of argumentation have been an object of scrutiny since the inception of Dung’s abstract argumentation theory (cf. [Dung, 1994; 1995]). However, while the definition and analysis of ‘static’ justifiability criteria (i.e., argumentation semantics [Baroni et al., 2011]) has come to form the bulk of abstract argumentation theory, comparatively little work within Dung’s framework has been dedicated to a systematic study of forms of dynamic and multi-agent interaction. Some research has focused on operationalizations of argumentation semantics via two-player games (see [Modgil and Caminada, 2009] for an overview), while other work has attempted an analysis of strategic behavior in abstract forms of argumentation games (in particular [Procaccia and Rosenschein, 2005; Riveret et al., 2010; Thimm and Garcia, 2010]). This paper further pursues the understanding of multi-agent argumentation over abstract argumentation frameworks (AFs), capitalizing on techniques from logic and multi-agent verification.
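For readers unfamiliar with ‘static’ justifiability criteria, here is a minimal illustrative sketch (not the paper’s code) of one standard such criterion: the grounded extension of a Dung-style framework, computed as the least fixed point of the characteristic function F(S) = {a : S defends a}.

```python
# Grounded-extension sketch for a finite Dung abstract argumentation
# framework: iterate the characteristic function from the empty set.
def grounded_extension(args, attacks):
    """args: a set of arguments; attacks: a set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    s = set()
    while True:
        # a is defended by s if every attacker of a is itself attacked by s
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# Example: a attacks b, b attacks c; the grounded extension is {a, c}:
# a is unattacked, and a defends c by attacking c's only attacker b.
print(sorted(grounded_extension({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')})))
# ['a', 'c']
```

The two-player games mentioned above operationalize exactly such criteria: membership of an argument in an extension corresponds to a winning strategy in a dialogue game.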
While the computational complexity of many game-theoretic solution concepts, notably Nash equilibrium, has now been settled, the question of determining the exact complexity of computing an evolutionarily stable strategy has resisted solution since attention was drawn to it in 2004. In this paper, I settle this question by proving that deciding the existence of an evolutionarily stable strategy is Σ₂ᵖ-complete.
The paper generalizes abstract argument games to cope with cases where proponent and opponent argue in front of an audience whose type is known only with uncertainty. The generalization, which makes use of basic tools from probability theory, is motivated by several examples and delivers a class of abstract argument games whose adequacy is proven to be robust against uncertainty.
A central area of current philosophical debate in the foundations of mathematics concerns whether or not there is a single, maximal, universe of set theory. Universists maintain that there is such a universe, while Multiversists argue that there are many universes, no one of which is ontologically privileged. Often forcing constructions that add subsets to models are cited as evidence in favour of the latter. This paper informs this debate by analysing ways the Universist might interpret this discourse that seems to necessitate the addition of subsets to V. We argue that despite the prima facie incoherence of such talk for the Universist, she nonetheless has reason to try to provide an interpretation of this discourse. We analyse extant interpretations of such talk, and argue that while tradeoffs in naturality have to be made, they are not too severe.
There is, I think, a gap between what many students learn in their first course in formal logic, and what they are expected to know for their second. While courses in mathematical logic with metalogical components often cast only the barest glance at mathematical induction or even the very idea of reasoning from definitions, a first course may also leave these untreated, and fail explicitly to lay down the definitions upon which the second course is based. The aim of this text is to integrate material from these courses and, in particular, to make serious mathematical logic accessible to students I teach. The first parts introduce classical symbolic logic as appropriate for beginning students; the last parts build to Gödel’s adequacy and incompleteness results. A distinctive feature of the last section is a complete development of Gödel’s second incompleteness theorem.
This entry is about two kinds of circularity: object
circularity, where an object is taken to be part of itself in
some sense; and definition circularity, where a collection is
defined in terms of itself. Instances of these two kinds of
circularity are sometimes problematic, and sometimes not. We are
primarily interested in object circularity in this entry, especially
instances which look problematic when one tries to model them in set
theory. But we shall also discuss circular definitions. The term non-wellfounded set refers to sets which contain
themselves as members, and more generally which are part of an
infinite sequence of sets each term of which is an element of the previous term.
This paper argues that multiple coordinations like tall, thin and happy are interpreted in a “flat” iterative process, but using “nested” recursive application of binary coordination operators in the compositional meaning derivation. Ample motivation for flat interpretation is shown by contrasting such coordinations with nested, syntactically ambiguous, coordinate structures like tall and thin and happy. However, new evidence coming from type shifting and predicate distribution with verb phrases shows motivation for an independent hierarchical ingredient in the compositional semantics of multiple coordination with no parallel hierarchy in the syntax. This establishes a contrast between operations at the syntax-semantics interface and compositional semantic mechanisms. At the same time, such evidence motivates the treatment of operations like type shifting and distributivity as purely semantic.
George Boole (1815–1864) was an English mathematician and a
founder of the algebraic tradition in logic. He worked as a
schoolmaster in England and from 1849 until his death as professor of
mathematics at Queen’s University, Cork, Ireland. He revolutionized
logic by applying methods from the then-emerging field of symbolic
algebra to logic. Where traditional (Aristotelian) logic relied on
cataloging the valid syllogisms of various simple forms, Boole’s
method provided general algorithms in an algebraic language which
applied to an infinite variety of arguments of arbitrary
complexity. These results appeared in two major works,
The Mathematical Analysis of Logic (1847) and
The Laws of Thought (1854).
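Boole’s algebraic method can be illustrated with a small sketch (an illustration in the spirit of The Laws of Thought, not code from this entry): the universal affirmative “All X are Y” becomes the equation x(1 − y) = 0 over 0/1 values, and a syllogism such as Barbara can be checked by exhaustion over all assignments.

```python
# Boole's equational rendering (a sketch): "All X are Y" as x*(1 - y) = 0.
# We verify Barbara (All A are B; All B are C; therefore All A are C) by
# checking that every 0/1 assignment satisfying the premises also
# satisfies the conclusion.
from itertools import product

def all_are(x, y):
    """Boole's equation for 'All X are Y': x(1 - y) = 0."""
    return x * (1 - y) == 0

valid = all(
    all_are(a, c)
    for a, b, c in product((0, 1), repeat=3)
    if all_are(a, b) and all_are(b, c)
)
print(valid)  # True
```

Because the check is a general algorithm over an algebraic language, it applies unchanged to arguments far beyond the catalogued syllogistic forms, which is precisely the advance the entry describes.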
Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
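The indeterminacy argument can be made vivid with a toy decision-theoretic sketch (all probabilities and utilities below are illustrative assumptions of mine, not Pascal’s or the paper’s):

```python
# Toy expected-utility sketch of the canonical Wager and the many-gods worry.
INF = float('inf')

def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Canonical Wager: infinite reward for belief if God exists, small cost if not.
believe     = expected_utility([(0.5, INF), (0.5, -1)])
not_believe = expected_utility([(0.5, 0),   (0.5, 0)])
print(believe > not_believe)  # True: belief dominates

# Many-gods variant: some possible deity rewards non-belief with salvation,
# so infinite expectations appear on both sides and the comparison collapses.
believe2     = expected_utility([(0.25, INF), (0.75, -1)])
not_believe2 = expected_utility([(0.25, 0), (0.25, INF), (0.50, 0)])
print(believe2 > not_believe2)  # False: the Wager no longer decides
```

The second comparison is the fatal indeterminacy in miniature: once salvation is possible on both sides of the decision table, the Wager’s expected-utility argument cannot single out belief.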
It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
Thermodynamics makes definite predictions about the thermal behavior of macroscopic systems in and out of equilibrium. Statistical mechanics aims to derive this behavior from the dynamics and statistics of the atoms and molecules making up these systems. A key element in this derivation is the large number of microscopic degrees of freedom of macroscopic systems. Therefore, the extension of thermodynamic concepts, such as entropy, to small (nano) systems raises many questions. Here we shall reexamine various definitions of entropy for nonequilibrium systems, large and small. These include thermodynamic (hydrodynamic), Boltzmann, and Gibbs-Shannon entropies. We shall argue that, despite its common use, the last is not an appropriate physical entropy for such systems, either isolated or in contact with thermal reservoirs: physical entropies should depend on the microstate of the system, not on a subjective probability distribution. To square this point of view with the experimental results of Bechhoefer, we shall argue that the Gibbs-Shannon entropy of a nanoparticle in a thermal fluid should be interpreted as the Boltzmann entropy of a dilute gas of Brownian particles in the fluid.
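For reference, the two notions being contrasted can be written in standard notation (the symbols below are the conventional ones, not necessarily the paper’s):

```latex
% Gibbs-Shannon entropy: a functional of a probability distribution {p_i}
S_{\mathrm{GS}} = -k_B \sum_i p_i \ln p_i

% Boltzmann entropy: a function of the individual microstate X, via the
% phase-space volume |\Gamma_{M(X)}| of the macrostate M(X) containing X
S_B(X) = k_B \ln \left| \Gamma_{M(X)} \right|
```

Only S_B(X) is a function of the system’s microstate X; S_GS depends on a probability distribution, which is the abstract’s ground for denying it the status of a physical entropy for such systems.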