
Much has been said about Moore’s proof of the external world, but the notion of proof that Moore employs has been largely overlooked. I suspect that most have either found nothing wrong with it, or they have thought it somehow irrelevant to whether the proof serves its antiskeptical purpose. I show, however, that Moore’s notion of proof is highly problematic. For instance, it trivializes in the sense that any known proposition is provable. This undermines Moore’s proof as he conceives it, since it introduces a skeptical regress that he goes to great lengths to resist. I go on to consider various revisions of Moore’s notion of proof and finally settle on one that I think is adequate for Moore’s purposes and faithful to what he says concerning immediate knowledge.

In his 1961 paper, “Irreversibility and Heat Generation in the Computing Process,” Rolf Landauer speculated that there exists a fundamental link between heat generation in computing devices and the computational logic in use. According to Landauer, this heating effect is the result of a connection between the logic of computation and the fundamental laws of thermodynamics. The minimum heat generated by computation, he argued, is fixed by the logic alone: the limits are the same no matter the hardware, or the way in which the logic is implemented. His analysis became the foundation for both a new literature, termed “the thermodynamics of computation” by Charles Bennett, and a new physical law, Landauer’s principle.

Despite initial appearances, the paradoxes of classical logic with unrestricted comprehension do not go away even if the law of excluded middle is dropped, unless the law of noncontradiction is eliminated as well, which makes the logic much less powerful. Is there an alternative way to preserve the unrestricted comprehension of common language while retaining the power of classical logic? The answer is yes, when provability modal logic is utilized. The modal logic NL is constructed for this purpose. Unless a paradox is provable, the usual rules of classical logic follow. The main point of modal logic NL is to tune the law of excluded middle so that we allow a formula φ and its negation ¬φ to be both false in case a paradox provably arises. Curry's paradox is resolved differently from the other paradoxes, but it too is resolved in modal logic NL. The changes allow for unrestricted comprehension and naïve set theory, and allow us to justify the use of common language in a formal sense.

I argue that a general logic of definitions must tolerate ω-inconsistency. I present a semantical scheme, S, under which some definitions imply ω-inconsistent sets of sentences. I draw attention to attractive features of this scheme, and I argue that S yields the minimal general logic of definitions. I conclude that any acceptable general logic should permit definitions that generate ω-inconsistency. This conclusion gains support from the application of S to the theory of truth. Keywords: circular definitions, revision theory, truth, paradox, McGee’s Theorem, omega-inconsistent theories.

Let me introduce to you the topic of modal model theory, injecting some ideas from modal logic into the traditional subject of model theory in mathematical logic. For example, we may consider the class of all models of some first-order theory, such as the class of all graphs, or the class of all groups, or all fields or what have you. …

Gödel's ontological proof is interpreted in a logically clear and sensible way, without empirical or theological implications, rendering it mostly tautological under this interpretation. Gödel's ontological argument thus cannot be said to prove the existence of God. The real value of Gödel's ontological proof lies in its modal collapse consequence.

The paper investigates the relations between Hausdorff and non-Hausdorff manifolds as objects of General Relativity. We show that every non-Hausdorff manifold can be seen as a result of gluing together some Hausdorff manifolds. In the light of this result, we investigate a modal interpretation of a non-Hausdorff differential manifold, according to which it represents a bundle of alternative spacetimes, all of which are compatible with a given initial data set.

The possibility question concerns the status of possibilities: do they form an irreducible category of external reality, or are they merely features of our cognitive framework? If fundamental physics is ever to shed light on this issue, it must be done by some future theory that unifies the insights of general relativity and quantum mechanics. The paper investigates one programme of this kind, namely the causal sets programme, as it apparently considers alternative developments of a given system. To evaluate this claim, we prove some algebraic facts about the sequential growth of causal sets. These facts tell against alternative developments, given that causal sets are understood as particular events. We thus interpret causal sets as multi-realisable objects, like states. This interpretation, however, is undermined by an argument for the probabilistic constraint of General Covariance, which says that the multiple paths along which a causal set is produced are not physically different.

We investigate concepts of past, present, and future that build upon a modal distinction between a settled past and an open future. The concepts are defined in terms of a pre-causal ordering that is determined by the qualitative differences between alternative possible histories. We then examine what an event’s past, present, and future look like in the so-called Minkowskian Branching Structures, in which histories are isomorphic to Minkowski spacetime.

Sensitivity Conjecture: Proof by the book
The Sensitivity Conjecture, which I blogged about here, says that, for every Boolean function f:{0,1}^n→{0,1}, the sensitivity of f—that is, the maximum number of input bits such that flipping them can change the value of f—is polynomially related to a bunch of other complexity measures of f, including its block sensitivity, degree as a real polynomial, and classical and quantum query complexities. …
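For small n, the sensitivity defined above can be computed by brute force over all 2^n inputs. A minimal sketch (function names are mine):

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of single-bit flips that change f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        s = sum(1 for i in range(n)
                if f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x))
        best = max(best, s)
    return best

# OR on 3 bits: flipping any bit of the all-zero input changes the value,
# so its sensitivity is 3 (attained at 000).
OR3 = lambda x: int(any(x))
print(sensitivity(OR3, 3))                    # → 3
# Parity is fully sensitive: every flip changes it, at every input.
print(sensitivity(lambda x: sum(x) % 2, 3))   # → 3
```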

Consider a subjective expected utility preference relation. It is usually held that the representations with which this preference is compatible differ only in one respect, namely, the possible scales for the measurement of utility. In this paper, I discuss the fact that there are, metaphorically speaking, two additional dimensions along which infinitely many more admissible representations can be found. The first additional dimension is that of state-dependence. The second—and, in this context, much lesser-known—additional dimension is that of act-dependence. One major implication of their usually neglected existence is that the standard axiomatizations of subjective expected utility fail to provide the measurement of subjective probability with satisfactory behavioral foundations.

The first task of this lecture is to present a well known problem concerning probabilistic independence that arises whenever elements with extreme probabilities (probabilities of 0 and 1) are of serious interest, to criticize briefly a solution published in 2017 by two leading writers in this area, and to compare it with the solution offered by Karl Popper in 1994 in appendix *XX of Logik der Forschung.

The doctrine that the content of the conclusion of a deductively valid argument is included in the content of its premises, taken jointly, is a familiar one. It has important consequences for the question of what value valid arguments possess, since it indicates the poverty of three traditional answers: that arguments may and should be used as instruments of persuasion; that they may and should be used as instruments of justification; and that they may and should be used to advance knowledge. The truth is, however, that in each of these cases the argument has only a managerial role and, if there is any work done, it is the premises that do it. It will be maintained that this point has little force against the critical rationalist answer, which I shall defend, that the principal purpose of deductive reasoning from an assemblage of premises is the exploration of their content, facilitating their criticism and rejection.

In the axiomatic theory of relative probability expounded in appendices ∗iv and ∗v of The Logic of Scientific Discovery, the relation a ∼ c =Df ∀b[p(a, b) = p(c, b)] of probabilistic indistinguishability on a set S is demonstrably a congruence, and the quotient S/∼ is demonstrably a Boolean algebra. The two-element algebra {0, 1} satisfies the axioms if and only if p(0, 0), p(1, 0), and p(1, 1) are assigned the value 1, and p(0, 1) is assigned the value 0 (where p is the interpretation of the quotient p/∼). The four-element models are almost as straightforwardly described. This note sketches a method of construction and authentication that can, in principle, be applied to larger algebras, and identifies all the eight-element models of Popper's system.
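On the reading that the two-element model assigns the value 1 to p(0, 0), p(1, 0), and p(1, 1), and 0 to p(0, 1), two of Popper's axiom schemata, the product (multiplication) law and the complement law, can be checked mechanically. A sketch of that spot-check, not a verification of the full axiom system:

```python
from itertools import product

# The two-element model: p(0, 1) = 0, all other values 1 (my reading of the text).
p = {(1, 1): 1, (1, 0): 1, (0, 0): 1, (0, 1): 0}

# Product law of Popper's system: p(a∧b, c) = p(a, b∧c) · p(b, c).
for a, b, c in product((0, 1), repeat=3):
    assert p[(a & b, c)] == p[(a, b & c)] * p[(b, c)]

# Complement law p(¬a, b) = 1 − p(a, b), required only for "normal" b;
# here only b = 1 is normal, since p(x, 0) = 1 for every x.
for a in (0, 1):
    assert p[(1 - a, 1)] == 1 - p[(a, 1)]

print("two-element model satisfies the product and complement laws")
```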

ZFC has sentences that quantify over all sets or all ordinals, without restriction. Some have argued that sentences of this kind lack a determinate meaning. We propose a set theory called TOPS, using Natural Deduction, that avoids this problem by speaking only about particular sets.

In 1935, Einstein, Podolsky and Rosen (EPR) argued that quantum mechanics is incomplete. They considered two particles in one dimension moving in opposite directions, whose joint wave function (see 3.2.1 below) was such that a measurement of the position of one particle immediately determined the position of the other and, similarly, a measurement of the momentum of one particle immediately determined the momentum of the other. Since, said EPR, a measurement made on one particle obviously could not possibly influence the physical state of the other particle, situated far away from the first, and since the wave function of both particles specifies neither their positions nor their momenta, the quantum mechanical description of the state of both particles provided by this wave function must be incomplete, in the sense that other variables, such as the values of the positions and momenta of both particles, must be included in a complete description of that physical system.

Ever since its foundations were laid nearly a century ago, quantum theory has provoked questions about the very nature of reality. We address these questions by considering the universe – and the multiverse – fundamentally as complex patterns, or mathematical structures. Basic mathematical structures can be expressed more simply in terms of emergent parameters. Even simple mathematical structures can interact within their own structural environment, in a rudimentary form of self-awareness, which suggests a definition of reality in a mathematical structure as simply the complete structure. The absolute randomness of quantum outcomes is most satisfactorily explained by a multiverse of discrete, parallel universes. Some of these have to be identical to each other, but that introduces a dilemma, because each mathematical structure must be unique. The resolution is that the parallel universes must be embedded within a mathematical structure – the multiverse – which allows universes to be identical within themselves, but nevertheless distinct, as determined by their position in the structure. The multiverse needs more emergent parameters than our universe and so it can be considered to be a superstructure. Correspondingly, its reality can be called a superreality. While every universe in the multiverse is part of the superreality, the complete superreality is forever beyond the horizon of any of its component universes.

We revisit the question (most famously) initiated by Turing: can human intelligence be completely modeled by a Turing machine? We show that the answer is no, assuming a certain weak soundness hypothesis. More specifically, we show that at least some meaningful thought processes of the brain cannot be Turing computable. In particular, some physical processes are not Turing computable, which is not entirely expected. There are some similarities between our argument and the well-known Lucas-Penrose argument, but we work purely on the level of Turing machines, and do not use Gödel’s incompleteness theorem or any direct analogue. Instead we construct directly and use a weak analogue of a Gödel statement for a certain system which involves our human; this allows us to sidestep some (possible) metalogical issues with their argument.

We examine the influence of word choices on mathematical practice, i.e. in developing definitions, theorems, and proofs. As a case study, we consider Euclid’s and Euler’s word choices in their influential development and, in particular, their use of the term ‘polyhedron’. Then, jumping to the 20th century, we look at word choices surrounding the use of the term ‘polyhedron’ in the work of Coxeter and of Grünbaum. We also consider a recent and explicit conflict of approach between Grünbaum and Shephard on the one hand and that of Hilton and Pedersen on the other, elucidating that the conflict was engendered by disagreement over the proper conceptualization, and so also the appropriate word choices, in the study of polyhedra.

In classical deterministic planning, solutions to planning tasks are simply sequences of actions, but that is not sufficient for contingent plans in nondeterministic environments. Contingent plans are often expressed through policies that map states to actions. An alternative is to specify contingent plans as programs, e.g. in the syntax of Propositional Dynamic Logic (PDL). PDL is a logic for reasoning about programs with sequential composition, test and nondeterministic choice. However, as we show in the paper, none of the existing PDL modalities directly captures the notion of a solution to a planning task under nondeterminism.
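The gap between policies and solution concepts can be made concrete. Under the usual "strong plan" reading, a policy solves a nondeterministic planning task only if every outcome it permits eventually reaches the goal, which can be checked by a backward fixpoint. A minimal sketch (the domain and all state and action names are hypothetical):

```python
# effects[s][a] is the set of states that action a may nondeterministically
# lead to from state s.
effects = {
    "start": {"move": {"mid", "goal"}, "spin": {"start", "mid"}},
    "mid":   {"move": {"goal"}},
}
goal = {"goal"}

def is_strong_solution(policy, effects, goal):
    """Backward fixpoint: a state becomes safe once its chosen action can
    only lead to safe states; the policy is a strong solution iff every
    state it covers becomes safe, so no execution can avoid the goal."""
    safe = set(goal)
    changed = True
    while changed:
        changed = False
        for s, a in policy.items():
            if s not in safe and effects[s][a] <= safe:
                safe.add(s)
                changed = True
    return set(policy) <= safe

print(is_strong_solution({"start": "move", "mid": "move"}, effects, goal))  # → True
print(is_strong_solution({"start": "spin", "mid": "move"}, effects, goal))  # → False: "spin" may loop forever
```

The second policy fails precisely because one of its permitted outcomes re-enters "start", the kind of contingency that a bare action sequence cannot express at all.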

Anthropic reasoning refers to a class of arguments that incorporate the information entailed by our own existence to make inferences about the world in which we live. One prominent example is the Doomsday Argument, which makes predictions about the future total population of human observers yet to be born based on the ordinal rank of our birth among humans that have been born so far. A central question in anthropic reasoning is from which distribution we should consider ourselves to be randomly sampled. The Self-Sampling Assumption (SSA) states that we should reason as if we’re a random sample from the set of actual existent observers, while the Self-Indication Assumption (SIA) states that we should reason as if we’re a random sample from the set of all possible observers (see [1]). Effectively, SIA weighs the probability of our actual world by the number of observers, relative to SSA. The distinction is important, as SSA supports the Doomsday Argument, while SIA refutes it. We consider a new thought experiment called the Geometric Incubator and show that SSA implies precognition of coin flips in this hypothetical world. We consider this to be very strong evidence in favor of SIA over SSA and against the Doomsday Argument. We use this observation to develop a more axiomatic mathematical theory of anthropic reasoning. We also introduce an empirical version of the Doomsday Argument.
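The observer-weighting that separates SIA from SSA shows up already in a two-world toy model (the numbers here are mine, not from the paper):

```python
from fractions import Fraction

# Two equiprobable worlds: "short" with 10 observers total, "long" with 1000.
prior = {"short": Fraction(1, 2), "long": Fraction(1, 2)}
observers = {"short": 10, "long": 1000}

# SSA: sample from the actual world's observers, so the posterior over
# worlds is just the prior.
ssa = dict(prior)

# SIA: weigh each world by its number of observers, then renormalise.
total = sum(prior[w] * observers[w] for w in prior)
sia = {w: prior[w] * observers[w] / total for w in prior}

print(sia["long"])  # → 100/101: the observer-rich world dominates
```

Conditioning the SSA posterior on one's early birth rank then shifts weight toward the "short" world (the Doomsday Argument), whereas under SIA that shift is cancelled by the initial observer-weighting.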

The purpose of this paper is to examine in detail a particularly interesting pair of first-order theories. In addition to clarifying the overall geography of notions of equivalence between theories, this simple example yields two surprising conclusions about the relationships that theories might bear to one another. In brief, we see that theories lack both the Cantor-Bernstein and co-Cantor-Bernstein properties.

According to Aquinas, whenever we correctly say something nonnegative of God, we speak analogically. It is correct to say that Socrates is wise and God is wise. But being humanly wise and divinely wise are different—the most fundamental difference being that, by divine simplicity, God doesn’t have his wisdom, but is his wisdom. …

Computational complexity theory offers an indication of the resources needed for a particular computational problem to be solved, as a function of the input size of the problem. These resources – most notably, time and memory – are typically fairly coarse and built on a theoretical abstract model of computation: Turing machines. Here, the ‘time’ resource refers to the number of state transitions in the machine, and the ‘memory’ resource refers to the number of memory cells on the tape that are used. This traditional way of representing computations may not be the most suitable for describing information processing in the brain [BF16]. Most significantly, this model assumes symmetry in states and symbols, e.g., ‘0’ and ‘1’ are in principle interchangeable. This is in contrast to the power-efficient spike signals used in cortical processing; here, there is a clear asymmetry between the presence and absence of spikes: power-efficient coding requires as few spikes as possible. It has been proposed by a working group at the Dagstuhl seminar on Resource-Bounded Problem Solving (seminar 14341, [HvRVW14, p. 66]) to have a more refined, brain-focused model of computation, based on networks of spiking neurons, with complexity measures loosely based on brain resources, such as spiking rates, network size and connectivity [Maa00, Maa14]. In this extended abstract we describe work-in-progress and future work towards that goal.

Some philosophers have proposed intuitionistic logic as the one best suited to provide the foundation for a theory of vagueness. As we remind readers in §1, that logic provides an elegant solution to the Sorites Paradox which avoids the implausible sharp cutoffs in classically based epistemicist theories.

Daniel C. Dennett has long maintained that the Consequence Argument for incompatibilism is confused. In a joint work with Christopher Taylor, he claims to have shown that the argument is based on a failure to understand Logic 101. Given a fairly plausible account of having the power to cause something, they claim that an inference rule that the argument relies on is invalid. In this paper, I show that Dennett and Taylor’s refutation does not work against a better, more standard version of the Consequence Argument. Hence Dennett and Taylor’s alleged refutation fails.

Drawing an analogy between modal structuralism about mathematics and theism, I offer a structuralist account that implicitly defines theism in terms of three basic relations: logical and metaphysical priority, and epistemic superiority. On this view, statements like ‘God is omniscient’ have a hypothetical and a categorical component. The hypothetical component provides a translation pattern according to which statements in theistic language are converted into statements of second-order modal logic. The categorical component asserts the logical possibility of the theism structure on the basis of uncontroversial facts about the physical world. This structuralist reading of theism preserves objective truth-values for theistic statements while remaining neutral on the question of ontology. Thus, it offers a way of understanding theism to which a naturalist cannot object, and it accommodates the fact that religious belief, for many theists, is an essentially relational matter.

We present BIS, a Bayesian Inference Semantics, for probabilistic reasoning in natural language. The current system is based on the framework of Bernardy et al. ( ), but departs from it in important respects. BIS makes use of Bayesian learning for inferring a hypothesis from premises. This involves estimating the probability of the hypothesis, given the data supplied by the premises of an argument. It uses a syntactic parser to generate typed syntactic structures that serve as input to a model generation system. Sentences are interpreted compositionally to probabilistic programs, and the corresponding truth values are estimated using sampling methods. BIS successfully deals with various probabilistic semantic phenomena, including frequency adverbs, generalised quantifiers, generics, and vague predicates. It performs well on a number of interesting probabilistic reasoning tasks. It also sustains most classically valid inferences (instantiation, de Morgan’s laws, etc.). To test BIS we have built an experimental test suite with examples of a range of probabilistic and classical inference patterns.
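The core inference step described above, estimating the probability of a hypothesis given premises by sampling, can be illustrated with a toy rejection sampler. This is an illustrative sketch only, not the authors' system, and the prior probabilities and predicates are invented:

```python
import random

random.seed(0)

def estimate(premise, hypothesis, sample_world, n=100_000):
    """Monte Carlo estimate of P(hypothesis | premise): sample worlds,
    keep those satisfying the premise, count those satisfying the hypothesis."""
    kept = tried = 0
    for _ in range(n):
        w = sample_world()
        if premise(w):          # condition on the premise
            tried += 1
            kept += hypothesis(w)
    return kept / tried

# A world is a bird with two boolean features drawn from an invented prior.
def sample_world():
    is_penguin = random.random() < 0.1
    flies = random.random() < (0.02 if is_penguin else 0.9)
    return {"penguin": is_penguin, "flies": flies}

# Premise: the bird flies.  Hypothesis: it is not a penguin.
p = estimate(lambda w: w["flies"], lambda w: not w["penguin"], sample_world)
print(round(p, 3))  # ≈ 0.998 (the exact value is 0.81/0.812)
```

Compositional interpretation into probabilistic programs, as in BIS, generalises this pattern: each sentence contributes a constraint or a query over the sampled worlds.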

Axioms for the strict order relation < on R. These are:
1. a < b and b < c implies a < c.
2. ¬(a < a).
3. a < b implies a + c < b + c for any c.
4. a < b and 0 < c implies ac < bc.
5. Either 0 < a or a < 1.
6. a ≠ b implies a < b or b < a.
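These axioms can be spot-checked over sample rationals. A minimal sketch; note that the multiplicative reading of axiom 4 (a < b and 0 < c implies ac < bc) is my reconstruction of a garbled line:

```python
from fractions import Fraction as F
from itertools import product

samples = [F(-2), F(-1, 2), F(0), F(1, 3), F(1), F(5, 2)]

for a, b, c in product(samples, repeat=3):
    if a < b and b < c:
        assert a < c                    # 1. transitivity
    if a < b:
        assert a + c < b + c            # 3. translation invariance
    if a < b and 0 < c:
        assert a * c < b * c            # 4. scaling by a positive factor

for a in samples:
    assert not (a < a)                  # 2. irreflexivity
    assert 0 < a or a < 1               # 5. holds since 0 < 1
    for b in samples:
        if a != b:
            assert a < b or b < a       # 6. comparability of distinct elements

print("all axioms hold on the sample set")
```

Axiom 5 is the constructively interesting one: classically it is trivial, but it avoids asserting the decidability of 0 < a for an arbitrary real a.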

In spacetime physics any set C of events—a causal set—is taken to be partially ordered by the relation ≤ of possible causation: for p, q ∈ C, p ≤ q means that q is in p’s future light cone. In her groundbreaking paper The internal description of a causal set: What the universe looks like from the inside, Fotini Markopoulou proposes that the causal structure of spacetime itself be represented by “sets evolving over C”—that is, in essence, by the topos Set^C of presheaves on C^op. To enable what she has done to be the more easily expressed within the framework presented here, I will reverse the causal ordering, that is, C will be replaced by C^op, and the latter written as P—which will, moreover, be required to be no more than a preordered set. Specifically, then: P is a set of events preordered by the relation ≤, where p ≤ q is intended to mean that p is in q’s future light cone—that q could be the cause of p, or, equally, that p could be an effect of q. In that case, for each event p, the set p↓ = {q: q ≤ p} may be identified as the causal future of p, or the set of potential effects of p. In requiring that ≤ be no more than a preordering—in dropping, that is, the antisymmetry of ≤—I am, in physical terms, allowing for the possibility that the universe is of Gödelian type, containing closed timelike lines.
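The set p↓ = {q : q ≤ p} can be computed directly for a finite set of events. A small sketch (event names hypothetical), taking ≤ to be the preorder generated by a given set of pairs:

```python
def downset(p, gen):
    """p↓ = {q : q ≤ p}, where ≤ is the reflexive-transitive closure of the
    generating pairs in gen (each pair (q, s) read as q ≤ s)."""
    frontier, seen = [p], {p}
    while frontier:
        r = frontier.pop()
        for q, s in gen:            # follow each edge s → q (i.e. q ≤ s)
            if s == r and q not in seen:
                seen.add(q)
                frontier.append(q)
    return seen

# b and c are potential effects of a, and d a potential effect of b.
gen = {("b", "a"), ("c", "a"), ("d", "b")}
print(sorted(downset("a", gen)))            # → ['a', 'b', 'c', 'd']

# Dropping antisymmetry permits Gödelian loops: adding a ≤ d puts a and d
# in each other's causal futures, a closed timelike line in miniature.
print("a" in downset("d", gen | {("a", "d")}))  # → True
```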