Logical monists and pluralists disagree about how many correct logics there are; the monists say there is just one, the pluralists that there are more. Could it turn out that both are wrong, and that there is no logic at all? Such a view might with justice be called logical nihilism, and here I’ll assume a particular gloss on what that means: nihilism is the view that there are no laws of logic, so that all candidates—e.g. the law of excluded middle, modus ponens, disjunctive syllogism, et al.—fail. Nihilism might sound absurd, but the view has come up in recent discussions of logical pluralism. Some pluralists have claimed that different logics are correct for different kinds of case, e.g. classical logic for consistent cases and paraconsistent logics for dialetheic ones. Monists have responded by appealing to a principle of generality for logic: a law of logic must hold for absolutely all cases, so that only those principles that feature in all of the pluralist’s systems count as genuine laws of logic. The pluralist replies that the monist’s insistence on generality collapses monism into nihilism, because, they maintain, every logical law fails in some cases.
Berkeley’s ‘master argument’ for idealism has been the subject of extensive criticism. Two of his strongest critics, A.N. Prior and J.L. Mackie, argue that due to various logical confusions on the part of Berkeley, the master argument fails to establish his idealist conclusion. Prior (1976) argues that Berkeley’s argument ‘proves too little’ in its conclusion, while Mackie (1964) contends that Berkeley confuses two different kinds of self-refutation in his argument. In this paper, I put forward a defence of the master argument based on intuitionistic logic. I argue that, analysed along these lines, Prior’s and Mackie’s criticisms fail to undermine Berkeley’s argument.
In the societal tradeoffs problem, each agent perceives certain quantitative tradeoffs between pairs of activities, and the goal is to aggregate these tradeoffs across agents. This is a problem in social choice; specifically, it is a type of quantitative judgment aggregation problem. A natural rule for this problem was axiomatized by Conitzer et al. [AAAI 2016]; they also provided several algorithms for computing the outcomes of this rule. In this paper, we present a significantly improved algorithm and evaluate it experimentally. Our algorithm is based on a tight connection to minimum-cost flow that we exhibit. We also show that our algorithm cannot be improved without breakthroughs on min-cost flow.
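The paper’s reduction is specific to the tradeoffs rule, but the min-cost-flow primitive it connects to can be illustrated with a textbook successive-shortest-path solver; the toy network and all names below are assumptions for illustration, not taken from the paper:

```python
# Textbook successive-shortest-path min-cost flow (a sketch of the
# primitive the abstract invokes, not the paper's own algorithm).
def min_cost_flow(n, edges, s, t, flow_target):
    """edges: list of (u, v, capacity, cost). Returns the total cost of
    sending flow_target units from s to t, or None if infeasible."""
    # Residual graph with paired forward/backward arcs.
    graph = [[] for _ in range(n)]
    def add_edge(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add_edge(u, v, cap, cost)

    total_cost = 0
    while flow_target > 0:
        # Bellman-Ford shortest path by cost in the residual graph.
        dist = [float('inf')] * n
        dist[s] = 0
        parent = [None] * n            # (node, edge index) on shortest path
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[t] == float('inf'):
            return None                # no augmenting path left: infeasible
        # Bottleneck capacity along the shortest path, then push flow.
        push = flow_target
        v = t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[graph[u][i][0]][graph[u][i][3]][1] += push
            v = u
        total_cost += push * dist[t]
        flow_target -= push
    return total_cost

# Two disjoint s->t paths of costs 2 and 4; sending 2 units costs 6.
edges = [(0, 1, 1, 1), (0, 2, 1, 3), (1, 3, 1, 1), (2, 3, 1, 1)]
print(min_cost_flow(4, edges, 0, 3, 2))   # 6
```

The successive-shortest-path method is the simplest correct baseline; faster min-cost-flow algorithms are exactly what the abstract’s final claim is about.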
We show that combining two different hypothetical enhancements to quantum computation— namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. This complements a related result due to Raz. The proof uses locally decodable codes.
PDQP/qpoly = ALL
I’ve put up a new paper. Unusually for me these days, it’s a very short and simple one (8 pages)—I should do more like this! Here’s the abstract:
We show that combining two different hypothetical enhancements to quantum computation—namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. …
A critical survey of some attempts to define ‘computer’, beginning with some informal ones (from reference books, and definitions due to H. Simon, A.L. Samuel, and M. Davis), then critically evaluating those of three philosophers (J.R. Searle, P.J. Hayes, and G. Piccinini), and concluding with an examination of whether the brain and the universe are computers.
The sustained failure of efforts to design an infinite lottery machine using ordinary probabilistic randomizers is traced back to a problem familiar to set theorists: there are no constructive prescriptions for probabilistically non-measurable sets. Yet construction of such sets is required if we are to be able to read the result of an infinite lottery machine that is built from ordinary probabilistic randomizers. All such designs face a dilemma: they can provide an accessible (readable) result with probability zero; or an inaccessible result with probability greater than zero.
Classical logic is characterized by the familiar truth-value semantics, in which an interpretation assigns one of two truth values to any propositional letter in the language (in the propositional case), and a function from a power of the domain to the set of truth values in the predicate case. Truth values of composite sentences are assigned on the basis of the familiar truth functions. This abstract semantics immediately yields an applied semantics in the sense that the truth value of an interpreted sentence is given by the truth value of that sentence in an interpretation in which the propositional variables are given the truth values of the statements that interpret them. So if p is interpreted as the statement “Paris is in France” and q as “London is in Italy”, then the truth value of “p ∨ q” is |p ∨ q|, where the interpretation | | is given by |p| = T and |q| = F. And since |A ∨ B| is defined to be T just in case at least one of |A| and |B| is T, the truth value of “p ∨ q” is T.
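The truth-functional evaluation just described can be sketched in a few lines of Python; the tuple encoding of formulas and the function name are assumptions made for illustration:

```python
# Propositional truth-value semantics: an interpretation maps letters to
# truth values, and composites are evaluated by the familiar truth functions.

def evaluate(formula, interpretation):
    """Evaluate a formula encoded as a nested tuple, e.g. ('or', 'p', 'q')."""
    if isinstance(formula, str):          # a propositional letter
        return interpretation[formula]
    op, *args = formula
    if op == 'not':
        return not evaluate(args[0], interpretation)
    if op == 'or':
        return evaluate(args[0], interpretation) or evaluate(args[1], interpretation)
    if op == 'and':
        return evaluate(args[0], interpretation) and evaluate(args[1], interpretation)
    if op == 'if':                        # material conditional
        return (not evaluate(args[0], interpretation)) or evaluate(args[1], interpretation)
    raise ValueError(f"unknown connective: {op}")

# |p| = T ("Paris is in France"), |q| = F ("London is in Italy")
I = {'p': True, 'q': False}
print(evaluate(('or', 'p', 'q'), I))   # True: a disjunction with one true disjunct is true
```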
guest post by Matteo Polettini
Suppose you receive an email from someone who claims “here is the project of a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. …
Why should this be of any more than esoteric interest? The principle, which is sometimes called plural rigidity, is put to work to important and controversial metaphysical ends. Crucially, Williamson deploys it in arguing both for necessitism (the doctrine that ontology is modally invariant) and for a property-based interpretation of second-order quantification, since he thinks the modal behaviour of plurals rules out a Boolos-style plural interpretation (Williamson 2010, 2013; Boolos 1998c). Linnebo develops a foundational programme for set theory in a modal plural logic strengthened by the addition of statements equivalent to the principle, and whilst he himself proposes a non-standard interpretation of the modalities, its acceptability with respect to metaphysical modality would provide a fall-back position for someone sympathetic to the approach (Linnebo 2013).
Discussion of new axioms for set theory has often focussed on conceptions of maximality, and how these might relate to the iterative conception of set. This paper provides critical appraisal of how certain maximality axioms behave on different conceptions of ontology concerning the iterative conception. In particular, we argue that forms of multiversism (the view that any universe of a certain kind can be extended) and actualism (the view that there are universes that cannot be extended in particular ways) face complementary problems. The latter view is unable to use maximality axioms that make use of extensions, while the former has to contend with the existence of extensions violating maximality axioms. An analysis of two kinds of multiversism, a Zermelian form and a Skolemite form, leads to the conclusion that the kind of maximality captured by an axiom differs substantially according to background ontology.
The dialectical and dynamic dimensions of argumentation have been objects of scrutiny since the inception of Dung’s abstract argumentation theory (cf. [Dung, 1994; 1995]). However, while the definition and analysis of ‘static’ justifiability criteria (i.e., argumentation semantics [Baroni et al., 2011]) has come to form the bulk of abstract argumentation theory, comparatively little work within Dung’s framework has been dedicated to a systematic study of forms of dynamic and multi-agent interaction. Some research has focused on operationalizations of argumentation semantics via two-player games (see [Modgil and Caminada, 2009] for an overview), while other work has attempted an analysis of strategic behavior in abstract forms of argumentation games (in particular [Procaccia and Rosenschein, 2005; Riveret et al., 2010; Thimm and Garcia, 2010]). This paper pursues further the understanding of multi-agent argumentation over abstract argumentation frameworks (AFs), capitalizing on techniques from logic and multi-agent verification.
While the computational complexity of many game-theoretic solution concepts, notably Nash equilibrium, has now been settled, the question of determining the exact complexity of computing an evolutionarily stable strategy has resisted solution since attention was drawn to it in 2004. In this paper, I settle this question by proving that deciding the existence of an evolutionarily stable strategy is Σ₂ᵖ-complete.
The paper generalizes abstract argument games to cope with cases where proponent and opponent argue in front of an audience whose type is known only with uncertainty. The generalization, which makes use of basic tools from probability theory, is motivated by several examples and delivers a class of abstract argument games whose adequacy is proven robust against uncertainty.
A central area of current philosophical debate in the foundations of mathematics concerns whether or not there is a single, maximal universe of set theory. Universists maintain that there is such a universe, while Multiversists argue that there are many universes, no one of which is ontologically privileged. Often forcing constructions that add subsets to models are cited as evidence in favour of the latter. This paper informs this debate by analysing ways the Universist might interpret this discourse that seems to necessitate the addition of subsets to V. We argue that despite the prima facie incoherence of such talk for the Universist, she nonetheless has reason to try to provide an interpretation of this discourse. We analyse extant interpretations of such talk, and argue that while tradeoffs in naturality have to be made, they are not too severe.
There is, I think, a gap between what many students learn in their first course in formal logic, and what they are expected to know for their second. While courses in mathematical logic with metalogical components often cast only the barest glance at mathematical induction or even the very idea of reasoning from definitions, a first course may also leave these untreated, and fail explicitly to lay down the definitions upon which the second course is based. The aim of this text is to integrate material from these courses and, in particular, to make serious mathematical logic accessible to students I teach. The first parts introduce classical symbolic logic as appropriate for beginning students; the last parts build to Gödel’s adequacy and incompleteness results. A distinctive feature of the last section is a complete development of Gödel’s second incompleteness theorem.
This entry is about two kinds of circularity: object circularity, where an object is taken to be part of itself in some sense; and definition circularity, where a collection is defined in terms of itself. Instances of these two kinds of circularity are sometimes problematic, and sometimes not. We are primarily interested in object circularity in this entry, especially instances which look problematic when one tries to model them in set theory. But we shall also discuss circular definitions. The term non-wellfounded set refers to sets which contain themselves as members, and more generally which are part of an infinite sequence of sets each term of which is an element of the next.
This paper argues that multiple coordinations like tall, thin and happy are interpreted in a “flat” iterative process, but using “nested” recursive application of binary coordination operators in the compositional meaning derivation. Ample motivation for flat interpretation is shown by contrasting such coordinations with nested, syntactically ambiguous, coordinate structures like tall and thin and happy. However, new evidence coming from type shifting and predicate distribution with verb phrases shows motivation for an independent hierarchical ingredient in the compositional semantics of multiple coordination with no parallel hierarchy in the syntax. This establishes a contrast between operations at the syntax-semantics interface and compositional semantic mechanisms. At the same time, such evidence motivates the treatment of operations like type shifting and distributivity as purely semantic.
George Boole (1815–1864) was an English mathematician and a founder of the algebraic tradition in logic. He worked as a schoolmaster in England and from 1849 until his death as professor of mathematics at Queen’s University, Cork, Ireland. He revolutionized logic by applying methods from the then-emerging field of symbolic algebra to logic. Where traditional (Aristotelian) logic relied on cataloging the valid syllogisms of various simple forms, Boole’s method provided general algorithms in an algebraic language which applied to an infinite variety of arguments of arbitrary complexity. These results appeared in two major works, The Mathematical Analysis of Logic (1847) and The Laws of Thought (1854).
Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
Thermodynamics makes definite predictions about the thermal behavior of macroscopic systems in and out of equilibrium. Statistical mechanics aims to derive this behavior from the dynamics and statistics of the atoms and molecules making up these systems. A key element in this derivation is the large number of microscopic degrees of freedom of macroscopic systems. Therefore, the extension of thermodynamic concepts, such as entropy, to small (nano) systems raises many questions. Here we shall reexamine various definitions of entropy for nonequilibrium systems, large and small. These include thermodynamic (hydrodynamic), Boltzmann, and Gibbs-Shannon entropies. We shall argue that, despite its common use, the last is not an appropriate physical entropy for such systems, either isolated or in contact with thermal reservoirs: physical entropies should depend on the microstate of the system, not on a subjective probability distribution. To square this point of view with experimental results of Bechhoefer we shall argue that the Gibbs-Shannon entropy of a nanoparticle in a thermal fluid should be interpreted as the Boltzmann entropy of a dilute gas of Brownian particles in the fluid.
An approach to frame semantics is built on a conception of frames as finite automata, observed through the strings they accept. An institution (in the sense of Goguen and Burstall) is formed where these strings can be refined or coarsened to picture processes at various bounded granularities, with transitions given by Brzozowski derivatives.
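The Brzozowski derivatives that supply the transitions here can be sketched as follows; the construction is the standard one, and the tuple encoding of regular expressions is an assumption for illustration, not the paper’s formalism:

```python
# Brzozowski derivatives for a tiny regular-expression language.
# The derivative of a language L with respect to a character c is
# { w : c.w in L }; a string is accepted iff deriving by each of its
# characters in turn leaves a nullable (empty-string-accepting) regex.

# Regexes as tuples: ('empty',), ('eps',), ('chr', c),
# ('alt', r, s), ('cat', r, s), ('star', r)

def nullable(r):
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag in ('empty', 'chr'):
        return False
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])

def deriv(r, c):
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'cat':
        d = ('cat', deriv(r[1], c), r[2])
        # If the first conjunct can be empty, c may also start the second.
        return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)

def accepts(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

# (ab)* accepts "abab" but not "aba"
r = ('star', ('cat', ('chr', 'a'), ('chr', 'b')))
print(accepts(r, 'abab'), accepts(r, 'aba'))   # True False
```

Taking states to be (equivalence classes of) derivatives yields the automaton view of a frame that the abstract describes.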
Beall and Murzi (J Philos 110(3):143–165, 2013) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules (over sufficiently expressive base theories). As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (Notre Dame J Form Log 58(1):1–19, 2017) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi (J Philos Log. doi:10.1007/s10992-017-9438-x, 2017), is a generalisation of Kripke’s notion of grounded truth (J Philos 72:690–716, 1975), and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic (nor, more generally, of a revisionary approach to semantic paradox), we take the notion of naïve
It’s no secret that there are many competing views on the semantics of conditionals. One of the tools of the trade is that of any experimental scientist: put the object of study in various environments and see what happens.
The logical systems within which Frege, Schröder, Russell, Zermelo and other early mathematical logicians worked were all higher-order. It was not until the 1910s that first-order logic was even distinguished as a subsystem of higher-order logic. As late as the 1920s, higher-order quantification was still quite generally allowed: in fact, it does not seem as if any major logician, among non-intuitionists, except Thoralf Skolem restricted himself to first-order logic. Proofs were sometimes allowed to be infinite, and infinitely long expressions were allowed in the languages that were used.
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely,
Different set theories extending ZF are never bi-interpretable! …
Failures of supervenience reveal gaps. There is a mental-physical gap if the mental facts fail to supervene on the physical facts. There is a nomic-categorical gap if the nomic facts fail to supervene on the categorical facts. In the same way, there may be macro-micro gaps. Some terminology: let an atom be any object in spacetime without proper parts; let a composite be any object in spacetime with proper parts; let the micro facts be the facts about the atoms, their identities, their intrinsic properties, and their relations to one another; and let the macro facts be the facts about the composites, their identities, their properties, their relations to one another, and their relations to the atoms. There is a macro-micro gap just if the macro facts fail to supervene on the micro facts.
The sorites paradox originated in an ancient puzzle that appears to be generated by vague terms, viz., terms with unclear (“blurred” or “fuzzy”) boundaries of application. ‘Bald’, ‘heap’, ‘tall’, ‘old’, and ‘blue’ are prime examples of vague terms: no clear line divides people who are bald from people who are not, or blue objects from green (hence not blue), or old people from middle-aged (hence not old). Because the predicate ‘heap’ has unclear boundaries, it seems that no single grain of wheat can make the difference between a number of grains that does, and a number that does not, make a heap.
I show that intuitive and logical considerations do not justify introducing Leibniz’s Law of the Indiscernibility of Identicals in more than a limited form, as applying to atomic formulas. Once this is accepted, it follows that Leibniz’s Law generalises to all formulas of the first-order Predicate Calculus but not to modal formulas. Among other things, identity turns out to be logically contingent.