
138195.000505
In Part I of this paper, we identified and compared various schemes of trivalent truth conditions for indicative conditionals, most notably the proposals by de Finetti (1936) and Reichenbach (1935, 1944) on the one hand, and by Cooper (Inquiry, 11, 295–320, 1968) and Cantwell (Notre Dame Journal of Formal Logic, 49, 245–260) on the other. Here we provide the proof theory for the resulting logics DF/TT and CC/TT, using tableau calculi and sequent calculi, and prove soundness and completeness results. Then we turn to the algebraic semantics, where both logics have substantive limitations: DF/TT allows for algebraic completeness but not for the construction of a canonical model, while CC/TT does not support the construction of a Lindenbaum–Tarski algebra. With these results in mind, we draw up the balance and sketch future research projects.
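For orientation, the de Finetti and Cooper/Cantwell conditionals are commonly presented by trivalent truth tables along the following lines. This is only an illustrative toy rendering of the tables (values 1 = true, 0.5 = indeterminate, 0 = false), not the paper's tableau or sequent calculi.

```python
# Toy rendering of the two trivalent conditional tables, as standardly
# attributed to de Finetti (DF) and Cooper/Cantwell (CC).
# Values: 1 = true, 0.5 = indeterminate, 0 = false.

def df_conditional(a, b):
    """DF conditional: takes the value of B when A is true,
    and is indeterminate otherwise."""
    return b if a == 1 else 0.5

def cc_conditional(a, b):
    """CC conditional: takes the value of B unless A is false,
    in which case it is indeterminate."""
    return 0.5 if a == 0 else b

VALUES = (0, 0.5, 1)

# The two tables agree except when the antecedent is indeterminate.
disagreements = [(a, b) for a in VALUES for b in VALUES
                 if df_conditional(a, b) != cc_conditional(a, b)]
```

The pairs collected in `disagreements` all have an indeterminate antecedent, which is exactly where the two proposals come apart.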

138219.000575
The notion of grounding is usually conceived as an objective and explanatory relation. It connects two relata if one—the ground—determines or explains the other—the consequence. In the contemporary literature on grounding, much effort has been devoted to the logical characterization of the formal aspects of grounding, but a major problem remains: defining suitable grounding principles for universal and existential formulae. Indeed, several grounding principles for quantified formulae have been proposed, but all of them are exposed to paradoxes in some very natural contexts of application. In this paper we introduce a first-order formal system that captures the notion of grounding and avoids the paradoxes in a novel and non-trivial way. The system formally develops Bolzano's ideas on grounding by employing Hilbert's ε-terms and an adapted version of Fine's theory of arbitrary objects.

138380.000606
In the 1920s, Ackermann and von Neumann, in pursuit of Hilbert's Programme, were working on consistency proofs for arithmetical systems. One proposed method of giving such proofs is Hilbert's epsilon-substitution method. There was, however, a second approach, not reflected in the publications of the Hilbert school in the 1920s, which is a direct precursor of Hilbert's first epsilon theorem and of a certain 'general consistency result' due to Bernays. An analysis of the form of this so-called 'failed proof' sheds further light on an interpretation of Hilbert's Programme as an instrumentalist enterprise whose aim is to show that whenever a 'real' proposition can be proved by 'ideal' means, it can also be proved by 'real', finitary means.

193858.000624
On the basis of a coherently applied physicalist ontology, I will argue that there is nothing conceptual in logic and mathematics. What we usually call “mathematical concepts”—from the most exotic ones to the most “evident” ones—are just names tagged to various elements of mathematical formalism. In fact they have nothing to do with concepts, as they have nothing to do with the actual things; they can be completely ignored by both philosophy and physics.

193997.000638
In this note we provide a concise report on the complexity of the causal ordering problem, originally introduced by Simon to reason about causal dependencies implicit in systems of mathematical equations. We show that Simon's classical algorithm to infer causal ordering is NP-hard, an intractability result previously conjectured but never proved. We then present a detailed account based on Nayak's suggested algorithmic solution (the best available), which is dominated by the computation of transitive closure, bounded in time by O(V·S), where the input system structure S(E, V) consists of a set E of equations over a set V of variables, and S is the number of variable appearances (the density). We also comment on the potential of causal ordering for emerging applications in large-scale hypothesis management and analytics. Keywords: Causal ordering, Causal reasoning, Structural equations, Hypothesis management.
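The transitive-closure step that dominates the running time can be illustrated with a generic breadth-first closure over a dependency graph (one traversal per vertex). This is a minimal sketch, not Nayak's algorithm itself; the vertex names in the toy structure are invented.

```python
from collections import deque

def transitive_closure(adj):
    """adj: dict mapping each vertex to an iterable of direct successors.
    Returns a dict mapping each vertex to the set of vertices reachable
    from it in one or more steps (BFS from each vertex)."""
    closure = {}
    for source in adj:
        seen = set()
        queue = deque(adj[source])
        while queue:
            v = queue.popleft()
            if v not in seen:
                seen.add(v)
                queue.extend(adj.get(v, ()))
        closure[source] = seen
    return closure

# Toy causal structure: x determines y, which determines z.
deps = {"x": ["y"], "y": ["z"], "z": []}
```

Running one BFS per vertex gives the O(V·(V+E))-style behaviour that closure computations typically exhibit; the O(V·S) bound in the note refers to the structure's density S.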

200512.000652
Economic policy evaluations require social welfare functions for variable-size populations. Two important candidates are critical-level generalized utilitarianism (CLGU) and rank-discounted critical-level generalized utilitarianism; the latter was recently characterized by Asheim and Zuber (2014) (AZ). AZ introduce a novel axiom, existence of egalitarian equivalence (EEE). First, we show that, under some uncontroversial criteria for a plausible social welfare relation, EEE suffices to rule out the Repugnant Conclusion of population ethics (without AZ's other novel axioms). Second, we provide a new characterization of CLGU: AZ's set of axioms is equivalent to CLGU when EEE is replaced by the axiom of same-number independence.
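A CLGU value function has the standard form Σ_i [g(u_i) − g(c)], with an increasing transform g and a critical level c. The sketch below uses g = sqrt and c = 1 purely for illustration, to show the mechanism by which a positive critical level blocks the Repugnant Conclusion: lives below the critical level contribute negative value.

```python
import math

def clgu_value(utilities, c=1.0, g=math.sqrt):
    """Critical-level generalized utilitarian value: sum of g(u) - g(c).
    The transform g and critical level c here are illustrative choices."""
    return sum(g(u) - g(c) for u in utilities)

# A small well-off population versus a vast population of lives only
# barely worth living (utility below the critical level c = 1):
small_happy = [100.0] * 10
vast_marginal = [0.25] * 100_000
```

Because each life at utility 0.25 contributes g(0.25) − g(1) < 0, no number of barely-worth-living lives can outrank the small well-off population, which is the anti-Repugnant-Conclusion behaviour discussed above.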

386536.000674
Every beginning real analysis student learns the classic Heine–Borel theorem: the interval [0, 1] is compact. The standard proof involves techniques such as constructing a sequence and appealing to the completeness of the reals (which some may find unsatisfying). In this article, we present a different perspective by showing how the Heine–Borel theorem can be derived from a few fundamental results in mathematical logic. In particular, we put an ultrametric on the space of infinite binary sequences. Compactness of this space can be established from Brouwer's fan theorem, which can in turn be derived either from König's infinity lemma or from Gödel's compactness theorem in model theory. The Heine–Borel theorem is an immediate corollary. This illustrates an interesting connection between the fundamental yet different notions of compactness in analysis and compactness in logic.
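The chain of implications in the abstract can be compressed into the following outline, under the usual definitions (this is a sketch of the standard route, not the article's full argument):

```latex
% 1. Ultrametric on the space of infinite binary sequences:
d(x, y) = 2^{-\min\{n \,:\, x_n \neq y_n\}},
  \qquad x, y \in \{0,1\}^{\mathbb{N}},\ x \neq y.
% 2. K\"onig's infinity lemma (every infinite finitely branching tree
%    has an infinite branch), applied to the tree of finite binary
%    prefixes, yields sequential compactness of (\{0,1\}^{\mathbb{N}}, d).
% 3. Binary expansion is a continuous surjection onto the interval:
f : \{0,1\}^{\mathbb{N}} \to [0,1],
  \qquad f(x) = \sum_{n=1}^{\infty} x_n\, 2^{-n}.
% 4. The continuous image of a compact space is compact, so [0,1] is
%    compact --- the Heine--Borel theorem for [0,1].
```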

386575.000689
For the uninitiated, the dense nature of mathematical language can act as an obscuring force. With this essay we aim to bring two classical results of discrete mathematics into the light. To this end we analyze winning strategies in a certain class of solitaire games. The gains are non-standard proofs of the results of Kőnig [3] and Vizing [7]. For the standard treatment of these results, see [6]. (For a dense and obscure version of the non-standard proofs presented here, see [4].) First, let's introduce the games.

386614.000703
In Rabern and Rabern (2008) we presented a two-question solution to 'the hardest logic puzzle ever' (as presented in Boolos (1996)), which relied on self-referential questions. In this note we respond to several worries related to this solution. We clarify our claim that some yes-no questions cannot be answered by the gods, and thus that asking such questions of the gods will result in head explosion. We argue that the inclusion of exploding-head possibilities is neither cheating nor ad hoc but is instead forced upon us by principles related to Tarski's theorem. We also respond to concerns that have been raised about our use of self-referential questions in support of the two-question solution. In particular, we address the worry that there is a revenge problem lurking, analogous to the revenge problems that arise for purported solutions to the liar paradox. And we make some further observations about the relationship between self-referential questions, truth-telling gods, and the semantic paradoxes. In the appendix we give a two-question solution to the modified puzzle (where Random randomly answers 'ja' or 'da').

386662.000717
The semantic paradoxes are often associated with self-reference or referential circularity. However, Yablo has shown in [2] that there are infinitary versions of the paradoxes that do not involve this form of circularity. It remains an open question which relations of reference between collections of sentences afford the structure necessary for paradoxicality. In [1] we laid the groundwork for a general investigation into the nature of the reference structures that support the semantic paradoxes. The remaining task is to classify the so-called dangerous directed graphs. In Appendix A of [1], we sketched a reformulation of the problem in terms of fixed points of certain functions. Here we expand on this reformulation, removing all syntactic considerations to obtain a purely mathematical problem. It may well be that the solution to the problem depends on the chosen axioms of set theory, which would be an interesting outcome.

391231.000758
On an influential line of thinking tracing back to Ramsey, conditionals are closely linked to the attitude of supposition. When applied to counterfactuals, this view suggests a subjunctive version of the so-called Ramsey test: the probability of a counterfactual If A, would B ought to be equivalent to the probability of B under the subjunctive supposition that A. I present a collapse result for any view that endorses the subjunctive version of the Ramsey test. Starting from plausible assumptions, the result shows that one's rational credence in a would-counterfactual and in the corresponding might-counterfactual must be identical.

563007.000776
It is intuitive to say that persons have infinite value, and recently Rasmussen and Bailey have given some cool arguments for this thesis. But what does it mean to say that humans have infinite value? …

597177.000791
Traditionally, the mechanism design literature has focused primarily on settings where the bidders' valuations are independent. However, in settings where valuations are correlated, much stronger results are possible: for example, the entire surplus of efficient allocations can be extracted as revenue. These stronger results hold, in theory, under generic conditions on parameter values, but in practice they are rarely, if ever, implementable, owing to the stringent requirement that the mechanism designer know the distribution of the bidders' types exactly. In this work, we provide a computationally efficient and sample-efficient method for designing mechanisms that can robustly handle imprecise estimates of the distribution over bidder valuations. This method guarantees that the selected mechanism will perform at least as well as any ex-post mechanism with high probability; with sufficient information and correlation, the mechanism also performs nearly optimally. Further, we show that when the distribution is not known and must be estimated from samples from the true distribution, a sufficiently high degree of correlation is essential to implement optimal mechanisms. Finally, we demonstrate through simulations that this new mechanism design paradigm generates mechanisms that perform significantly better than traditional mechanism design techniques, given sufficient samples.

597313.00081
Gödel's slingshot argument proceeds from a referential theory of definite descriptions and from the principle of compositionality for reference. It outlines a metasemantic proof of Frege's thesis that all true sentences refer to the same object, as do all false ones. Whereas Frege drew from this the conclusion that sentences refer to truth-values, Gödel rejected a referential theory of definite descriptions. By formalising Gödel's argument, it is possible to reconstruct all the premises needed for the derivation of Frege's thesis. For this purpose, a reference-theoretical semantics for a language of first-order predicate logic with identity and referentially treated definite descriptions will be defined. Some of the premises of Gödel's argument can be proven within such a reference-theoretical semantics, whereas others can only be postulated. For example, the principle that logically equivalent sentences refer to the same object cannot be proven but must be assumed in order to derive Frege's thesis. However, different true (or false) sentences can refer to different states of affairs if the latter principle is rejected and the other two premises are maintained. This is shown using an identity criterion for states of affairs according to which two states of affairs are identical if and only if they involve the same objects and have the same necessary and sufficient condition for obtaining.

825982.000852
The Computational Expressiveness of a Model Train Set: A Paperlet
My son Daniel had his fourth birthday a couple weeks ago. For a present, he got an electric train set. …

964088.000884
Theories of truth approximation in terms of truthlikeness (or verisimilitude) almost always deal with (non-probabilistically) approaching deterministic truths, either actual or nomic. This paper deals first with approaching a probabilistic nomic truth, viz. a true probability distribution. It assumes a multinomial probabilistic context, hence a lawlike true, but usually unknown, probability distribution. We first show that this true multinomial distribution can be approached by Carnapian inductive probabilities. Next we deal with the corresponding deterministic nomic truth, that is, the set of conceptually possible outcomes with a positive true probability. We introduce Hintikkian inductive probabilities, based on a prior distribution over the relevant deterministic nomic theories and on conditional Carnapian inductive probabilities, and show that they again enable probabilistic approximation of the true distribution. Finally, we show, in terms of a kind of success theorem based on Niiniluoto's estimated distance from the truth, in what sense Hintikkian inductive probabilities enable the probabilistic approximation of the relevant deterministic nomic truth. In sum, the (realist) truth-approximation perspective on Carnapian and Hintikkian inductive probabilities leads to a unification of the inductive probability field and the field of truth approximation.
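Carnapian inductive probabilities come from Carnap's λ-continuum: with k outcome categories, n observations so far, and n_i of them in category i, the probability that the next observation falls in category i is (n_i + λ/k) / (n + λ). The sketch below illustrates this rule; the counts and the parameter choice λ = k (Laplace's rule) are illustrative, not taken from the paper.

```python
def carnap_probability(counts, i, lam):
    """Carnap lambda-continuum rule: probability that the next
    observation falls in category i, given per-category counts.
    counts: list of observation counts, one per category."""
    k = len(counts)
    n = sum(counts)
    return (counts[i] + lam / k) / (n + lam)

# Illustrative evidence: 3 observations in category 0, 1 in category 1.
counts = [3, 1]
p_next_0 = carnap_probability(counts, 0, lam=2)  # (3 + 1) / (4 + 2)
```

As the evidence grows, these probabilities converge to the true multinomial frequencies, which is the sense in which the true distribution is "approached".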

1057659.000916
We describe the translation-invariant stationary states of the one-dimensional discrete-time facilitated totally asymmetric simple exclusion process (FTASEP). In this system a particle at site j in ℤ jumps, at integer times, to site j + 1, provided site j − 1 is occupied and site j + 1 is empty. This defines a deterministic non-invertible dynamical evolution from any specified initial configuration on {0, 1}^ℤ. When started with a Bernoulli product measure at density ρ the system approaches a stationary state, with phase transitions at ρ = 1/2 and ρ = 2/3. We discuss various properties of these states in the different density regimes 0 < ρ < 1/2, 1/2 < ρ < 2/3, and 2/3 < ρ < 1; for example, we show that the pair correlation g(j) = ⟨η(i) η(i+j)⟩ satisfies, for all n ∈ ℤ, Σ_{j=kn+1}^{k(n+1)} g(j) = kρ², with k = 2 when 0 ≤ ρ ≤ 1/2 and k = 3 when 2/3 ≤ ρ ≤ 1, and we conjecture (on the basis of simulations) that the same identity holds with k = 6 when 1/2 ≤ ρ ≤ 2/3. The ρ < 1/2 stationary state referred to above is also the stationary state of the deterministic discrete-time TASEP at density ρ (with Bernoulli initial state) or, after exchange of particles and holes, at density 1 − ρ.
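The update rule stated above is simple enough to sketch directly. The toy implementation below performs one parallel update on a finite ring (periodic boundaries are a simplifying assumption for illustration; the paper works on ℤ): a particle at site j jumps to j + 1 exactly when site j − 1 is occupied and site j + 1 is empty, with all sites evaluated against the current configuration.

```python
def ftasep_step(config):
    """One deterministic parallel FTASEP update on a ring.
    config: list of 0/1 occupation numbers. A particle at j moves to
    j+1 iff its left neighbour is occupied and its right neighbour is
    empty; eligibility is judged on the pre-update configuration."""
    L = len(config)
    new = list(config)
    for j in range(L):
        left, right = (j - 1) % L, (j + 1) % L
        if config[j] == 1 and config[left] == 1 and config[right] == 0:
            new[j] = 0
            new[right] = 1
    return new
```

Note that no two moves can conflict: a jump target must be empty before the update, and a vacated site cannot be refilled in the same step, so the parallel update is well defined, as the abstract's "deterministic evolution" requires.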

1083531.000941
We will extend the well-known Church encoding of Boolean logic into the λ-calculus to an encoding of McCarthy's 3-valued logic into a suitable infinitary extension of the λ-calculus that identifies all unsolvables with ⊥, where ⊥ is a fresh constant. This encoding refines to n-valued logic for n ∈ {4, 5}. Such encodings also exist for Church's original λI-calculus.
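The classical two-valued Church encoding that serves as the starting point can be rendered in Python's lambda notation as follows (this illustrates only the Boolean base case, not the paper's infinitary 3-valued extension):

```python
# Church booleans: a boolean is a two-argument selector.
TRUE = lambda x: lambda y: x    # selects its first argument
FALSE = lambda x: lambda y: y   # selects its second argument

# Connectives defined purely by application, as in the lambda-calculus.
AND = lambda p: lambda q: p(q)(p)
OR = lambda p: lambda q: p(p)(q)
NOT = lambda p: lambda x: lambda y: p(y)(x)

def to_bool(b):
    """Decode a Church boolean into a native Python bool."""
    return b(True)(False)
```

The 3-valued extension must add a third behaviour for undefinedness, which is where the unsolvable terms identified with ⊥ come in.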

1083721.000956
The initial motivating question for this thesis is what the standard of rigour in modern mathematics amounts to: what makes a proof rigorous, or fail to be rigorous? How is this judged? A new account of rigour is put forward, aiming to go some way to answering these questions. Some benefits of the norm of rigour on this account are discussed. The account is contrasted with other remarks that have been made about mathematical proof and its workings, and is tested and illustrated by considering a case study discussed in the literature.

1129294.000973
It has been argued recently (in Beall (2009) and Beall and Murzi (2013)) that dialetheist theories are unable to express the concept of naive validity. In this paper, we show that LP can be non-trivially expanded with a naive validity predicate. The resulting theory, LP^{Val}, reaches this goal by adopting a weak self-referential procedure. We show that LP^{Val} is sound and complete with respect to the three-sided sequent calculus SLP^{Val}.
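For readers unfamiliar with LP, its propositional semantics can be sketched with the strong Kleene tables over values 1, 0.5, 0, where a formula is satisfied when its value is designated (at least 0.5). This is only an orientation sketch; the validity predicate of LP^{Val} is not modelled here.

```python
# Strong Kleene operations; LP designates both 1 and 0.5.

def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

def designated(a):
    return a >= 0.5

# The glutty value 0.5 is a fixed point of negation, so a sentence and
# its negation can both be designated: LP tolerates gluts non-trivially.
liar_value = 0.5
```

This fixed-point behaviour under negation is what lets LP-based theories accommodate self-referential sentences without collapse.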

1129389.000987
The aim of this paper is to explore the peculiar case of infectious logics, a group of systems obtained by generalizing the semantic behavior characteristic of the {¬, ∧, ∨}-fragment of the logics of nonsense, such as those due to Bochvar and Halldén, among others. Here we extend these logics with classical negations, and we furthermore show that some of the extended systems can properly be regarded as Logics of Formal Inconsistency (LFIs) and Logics of Formal Undeterminedness (LFUs).
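The "infectious" behavior in question is that of Bochvar-style weak Kleene tables: if any subformula takes the third ("nonsense") value, the whole formula does. A minimal sketch (values 1, 0.5, 0, with 0.5 as the infectious value; the classical negations added in the paper are not modelled):

```python
INFECTIOUS = 0.5  # the "nonsense" value: it propagates unconditionally

def neg(a):
    return INFECTIOUS if a == INFECTIOUS else 1 - a

def conj(a, b):
    return INFECTIOUS if INFECTIOUS in (a, b) else min(a, b)

def disj(a, b):
    return INFECTIOUS if INFECTIOUS in (a, b) else max(a, b)
```

Contrast with strong Kleene: there, a true disjunct rescues a disjunction (max(1, 0.5) = 1), whereas here the nonsense value infects it.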

1177982.001001
This paper develops an information-sensitive theory of the semantics and probability of conditionals and statements involving epistemic modals. The theory validates a number of principles linking probability and modality, including the principle that the probability of a conditional If A, then C equals the probability of C updated with A. The theory avoids the so-called triviality results, which are standardly taken to show that principles of this sort cannot be validated. To achieve this, we deny that rational agents update their credences via Conditionalization. We offer a new update rule, Hyperconditionalization, which agrees with Conditionalization whenever non-modal statements are at stake but differs for modal and conditional sentences.
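The probability principle at issue can be illustrated on a toy finite space: on this view, the probability of "If A, then C" equals the probability of C after updating with A, i.e. the conditional probability P(C | A). The worlds and weights below are invented for illustration and carry none of the paper's machinery for modals.

```python
# A four-world toy model assigning A/C truth values and weights.
worlds = {
    "w1": {"A": True,  "C": True,  "p": 0.3},
    "w2": {"A": True,  "C": False, "p": 0.1},
    "w3": {"A": False, "C": True,  "p": 0.2},
    "w4": {"A": False, "C": False, "p": 0.4},
}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(w["p"] for w in worlds.values() if pred(w))

# P(if A then C) = P(C | A) = P(A and C) / P(A) on this view.
p_conditional = prob(lambda w: w["A"] and w["C"]) / prob(lambda w: w["A"])
```

The triviality results show that this identity cannot hold in full generality under Conditionalization, which is what motivates the alternative update rule.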

1716251.001015
Turing's much-debated test has just turned 70 and is still fairly controversial. His seminal 1950 paper is seen as a complex and multilayered text, and key questions are yet to be answered. Why did Turing refer to "can machines think?" as a question that was "too meaningless to deserve discussion" and yet spend the largest section (over 40%) of his text discussing it? Why did he spend several years working with chess-playing as a task to illustrate and test machine intelligence, only to trade it off for conversational question-answering in his 1950 test? Why did Turing refer to gender imitation in a test for machine intelligence? In this paper I address these questions directly by unveiling the social, historical, and epistemological roots of Turing's 1950 test. I show that it came out of a controversy over the cognitive capabilities of digital computers, most notably with the physicist and computer pioneer Douglas Hartree, the chemist and philosopher Michael Polanyi, and the neurosurgeon Geoffrey Jefferson. Turing's 1950 paper is essentially a reply to a series of challenges posed to him by these thinkers against the view that machines can think. My goal is to improve the intelligibility of Turing's test and to help ground it in its history.

1776459.001029
The relation between literal meaning (semantics) and inferences based on language use (pragmatics) has been the subject of a long-standing debate in philosophy and linguistics. Important progress has been made in the development of diagnostics to distinguish semantic from pragmatic inference and in the formal derivation of the latter from general principles of conversation. On the canonical view, pragmatic inference (a.k.a. conversational implicature) is derivable from general conversational principles, is cancellable, and is not embeddable under logical operators. Semantic inference, instead, lacks all of these properties.

1819277.001043
In this paper, we present simple randomized multi-pass streaming algorithms for fundamental computational geometry problems: finding the skyline (maximal) points and the extreme points of the convex hull. For the skyline problem, one of our algorithms occupies O(h) space and performs O(log n) passes, where h is the number of skyline points. This improves the space bound of the best previously known result, by Das Sarma, Lall, Nanongkai, and Xu [VLDB'09], by a logarithmic factor. For the extreme points problem, we present the first non-trivial result for any constant dimension greater than two: an O(h · log^{O(1)} n)-space and O(log n)-pass algorithm, where h is the number of extreme points. Finally, we argue why randomization seems unavoidable for these problems, by proving lower bounds on the performance of deterministic algorithms for the related problem of finding maximal elements in a poset.
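The skyline itself has a simple non-streaming definition worth spelling out: a point is maximal when no other point is at least as large in every coordinate and strictly larger in some coordinate. The quadratic check below is only a reference definition for small inputs, not the multi-pass streaming algorithm of the paper.

```python
def dominates(p, q):
    """True iff p is >= q coordinatewise and strictly > in some coordinate."""
    return all(a >= b for a, b in zip(p, q)) and \
           any(a > b for a, b in zip(p, q))

def skyline(points):
    """Return the maximal (non-dominated) points, by brute force."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative 2-D input.
pts = [(1, 5), (3, 3), (2, 4), (5, 1), (2, 2)]
```

A streaming algorithm must recover exactly this set while storing only O(h) points per pass, where h is the (possibly small) skyline size.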

1819285.001057
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results.

1819292.001071
We answer the question of which conjunctive queries are uniquely characterized by polynomially many positive and negative examples, and how to construct such examples efficiently. As a consequence, we obtain a new efficient exact learning algorithm for a class of conjunctive queries. At the core of our contributions lie two new polynomialtime algorithms for constructing frontiers in the homomorphism lattice of finite structures. We also discuss implications for the unique characterizability and learnability of schema mappings and of description logic concepts.

2157882.001086
We develop a theory of necessity operators within a version of higher-order logic that is neutral about how fine-grained reality is. The theory is axiomatized in terms of the primitive of being a necessity, and we show how the central notions in the philosophy of modality can be recovered from it. Various questions are formulated and settled within the framework, including questions about the ordering of necessities under strength, the existence of broadest necessities satisfying various logical conditions, and questions about their logical behaviour. We also wield the framework to probe the conditions under which completely reductive theories of necessities are possible.

2158038.001101
Prior to Kripke's seminal work on the semantics of modal logic, McKinsey offered an alternative interpretation of the necessity operator, inspired by the Bolzano–Tarski notion of logical truth. According to this interpretation, 'it is necessary that A' is true just in case every sentence with the same logical form as A is true. In our paper, we investigate this interpretation of the modal operator, resolving some technical questions and relating it to the logical interpretation of modality and to some views in modal metaphysics. In particular, we present a hitherto unpublished solution to problems 41 and 42 from Friedman's 102 problems, which uses a method of proof different from that of the solution presented in Tadeusz Prucnal's paper. A common conception of logical truth, often credited to Bolzano, is that of a sentence true in virtue of its logical form alone. In a given interpreted language one might make this precise by stipulating a sentence to be logically true if and only if the result of uniformly substituting any of its non-logical constants with expressions of the same grammatical category is true, and, dually, logically consistent if and only if some substitution instance is true. For instance, 'if John is tall then John is tall' is a logical truth, since substituting any name and predicate for 'John' and 'is tall' respectively results in a truth, whereas 'John is tall' is not a logical truth, because either it is already false or the result of substituting the predicate 'not tall' for 'tall' in it is false.

2476292.001115
This paper formulates a bilateral account of harmony, an alternative to the one proposed by Francez. It builds on an account of harmony for unilateral logic proposed by Kürbis and on the observation that reading some of the rules for the connectives of bilateral logic bottom-up gives the grounds and consequences of formulas under the opposite speech act. Thus the consequences of asserting a formula give grounds for denying it, namely when the opposite speech act is applied to those consequences. Similarly, the consequences of denying a formula give grounds for asserting it. I formulate a process of inversion, which allows the determination of assertive elimination rules from assertive introduction rules, and of rejective elimination rules from rejective introduction rules, and conversely; it corresponds to Francez's notion of vertical harmony. I also formulate a process of conversion, which allows the determination of rejective introduction rules from certain assertive elimination rules and conversely, and of assertive introduction rules from certain rejective elimination rules and conversely; it corresponds to Francez's notion of horizontal harmony.