Some recent work has challenged two principles thought to govern the logic of the indicative conditional: modus ponens (Kolodny & MacFarlane 2010) and modus tollens (Yalcin 2012). There is a fairly broad consensus in the literature that Kolodny and MacFarlane’s challenge can be avoided if the notion of logical consequence is understood aright (Willer 2012; Yalcin 2012; Bledin 2014). The viability of Yalcin’s counterexample to modus tollens has meanwhile been challenged on the grounds that it fails to take proper account of context-sensitivity (Stojnić forthcoming). This paper describes a new counterexample to modus ponens and shows that the strategies developed for handling the extant challenges to modus ponens and modus tollens fail for it. It diagnoses the apparent source of the counterexample: there are bona fide instances of modus ponens that fail to represent deductively reasonable modes of reasoning.
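For reference, the two inference schemas at issue, stated for an indicative conditional with antecedent φ and consequent ψ:

```latex
% Modus ponens and modus tollens for the indicative conditional
\[
\frac{\varphi \to \psi \qquad \varphi}{\psi}
\;(\text{modus ponens})
\qquad\qquad
\frac{\varphi \to \psi \qquad \neg\psi}{\neg\varphi}
\;(\text{modus tollens})
\]
```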
Assuming that the target of theory-oriented empirical science in general, and of nomic truth approximation in particular, is to characterize the boundary or demarcation between nomic possibilities and nomic impossibilities, I have presented, in my article entitled “Models, postulates, and generalized nomic truth approximation” (Kuipers, 2016), the ‘basic’ version of generalized nomic truth approximation, starting from ‘two-sided’ theories. Its main claim is that nomic truth approximation can be achieved perfectly well by combining two prima facie opposing views on theories: (1) the traditional (Popperian) view: theories are (models of) postulates that exclude certain possibilities from being realizable, enabling explanation and prediction; and (2) the model view: theories are sets of models that claim to (approximately) represent certain realizable possibilities. Nomic truth approximation, i.e. increasing truth-content and decreasing falsity-content, thereby becomes a matter of revising theories by revising their models and/or their postulates in the face of increasing evidence.
Inquiry into the meaning of logical terms in natural language (‘and’, ‘or’, ‘not’, ‘if’) has generally proceeded along two dimensions. On the one hand, semantic theories aim to predict native speaker intuitions about natural language sentences involving those logical terms. On the other hand, logical theories explore the formal properties of the translations of those terms into formal languages. Sometimes these two lines of inquiry appear to be in tension: for instance, our best logical investigation into conditional connectives may show that there is no conditional operator that has all the properties native speaker intuitions suggest ‘if’ has.
Good’s Theorem is the apparent platitude that it is always rational to ‘look before you leap’: to gather (reliable) information before making a decision when doing so is free. We argue that Good’s Theorem is not platitudinous, and may be false. The correct advice, we argue, is rather to ‘make your act depend on the answer to a question’. Looking before you leap is rational when, but only when, it is a way of doing this.
Homotopy type theory and its model theory provide a novel formal semantic framework for representing scientific theories. This framework supports a constructive view of theories, according to which a theory is essentially characterised by its methods. The constructive view of theories was defended earlier by Ernest Nagel and a number of other philosophers of the past, but the logical means then available did not allow them to build formal representational frameworks implementing this view.
Kenny Courser and I have been working hard on this paper for months:
• John Baez and Kenny Courser, Coarse-graining open Markov processes. It may be almost done. So, it would be great if people here could take a look and comment on it! …
I just read something cool:
• Joel David Hamkins, Nonstandard models of arithmetic arise in the complex numbers, 3 March 2018. Let me try to explain it in a simplified way. I think all cool math should be known more widely than it is. …
Take a mathematician of Frege’s generation, accustomed to writing the likes
(2) If …, then … or …,
— and fancier things, of course! Whatever unclear thoughts about ‘variables’ people may or may not have had once upon a time, they had surely been dispelled well before the 1870s, if not by Bolzano’s 1817 Rein analytischer Beweis (though perhaps that was not widely enough read? …
According to spacetime state realism (SSR), the fundamental ontology of a quantum mechanical world consists of a state-valued field evolving in 4-dimensional spacetime. One chief advantage it claims over rival wavefunction realist views is its natural compatibility with relativistic quantum field theory (QFT). I argue that the original density operator formulation of SSR cannot be extended to QFTs where the local observables form type III von Neumann algebras. Instead, I propose a new formulation of SSR in terms of a presheaf of local state spaces dual to the net of local observables studied by algebraic QFT.
When theorizing about the a priori, philosophers typically deploy a sentential operator: ‘it is a priori that’. This operator can be combined with metaphysical modal operators, and in particular with ‘it is necessary that’ and ‘actually’ (in the standard, rigidifying sense), in a single argument or a single sentence. Arguments and theses that involve such combinations have played a starring role in post-Kripkean metaphysics and epistemology. The phenomena of the contingent a priori and the necessary a posteriori have been organizing themes in post-Kripkean discussions, and these phenomena cannot easily be discussed without using sentences and arguments that involve the interaction of the apriority, necessity, and actuality operators. However, there has been surprisingly little discussion of the logic of the interaction of these operators. In this paper we shall attempt to make some progress on that topic.
Atomic sentences – or the propositions they express – can be true, as can logically complex sentences composed out of atomic sentences. A comprehensive metaphysics of truth aims to tell us, in an informative way, what the truth of any sentence whatsoever consists in, be it atomic or complex. Monists about truth are committed to truth always consisting in the same thing, no matter which sentence you consider. Pluralists about truth think that the nature of truth is different for different sets of sentences. The received view seems to be that logically complex sentences – and indeed logic itself – somehow impose a monistic constraint on any comprehensive metaphysics of truth. In what follows, I argue that the received view is mistaken.
This paper gives a definition of self-reference on the basis of the dependence relation given by Leitgeb (2005) and the dependence digraph by Beringer & Schindler (2015). Unlike the usual discussion of the self-reference of paradoxes, which centres on Yablo’s paradox and its variants, I focus on paradoxes of finitary character, which are again given by use of Leitgeb’s dependence relation. They are called ‘locally finite paradoxes’, satisfying the condition that any sentence in these paradoxes can depend on only finitely many sentences. I prove that all locally finite paradoxes are self-referential in the sense that there is a directed cycle in their dependence digraphs. This paper also studies the ‘circularity dependence’ of paradoxes, which was introduced by Hsiung (2014). I prove that the locally finite paradoxes have circularity dependence in the sense that they are paradoxical only in a digraph containing a proper cycle. The proofs of the two results are based directly on König’s infinity lemma. In contrast, this paper also shows that Yablo’s paradox and its ∀∃-unwinding variant are non-self-referential, and that neither McGee’s paradox nor the ω-cycle liar has circularity dependence.
Ruetsche () claims that an abstract C*-algebra of observables will not contain all of the physically significant observables for a quantum system with infinitely many degrees of freedom. This would signal that in addition to the abstract algebra, one must use Hilbert space representations for some purposes. I argue to the contrary that there is a way to recover all of the physically significant observables by purely algebraic methods.
For simplicity, most of the literature introduces the concept of definitional equivalence only for languages with disjoint signatures. In a recent paper, Barrett and Halvorson introduce a straightforward generalization to languages with non-disjoint signatures, and they show that their generalization is not equivalent to intertranslatability in general. In this paper, we show that their generalization is not transitive and hence is not an equivalence relation. We then introduce the Andréka–Németi generalization as one of the many equivalent formulations for languages with disjoint signatures. We show that the Andréka–Németi generalization is the smallest equivalence relation containing the Barrett–Halvorson generalization and that it is equivalent to intertranslatability even for languages with non-disjoint signatures. Finally, we investigate which definitions of definitional equivalence remain equivalent when generalized to theories with non-disjoint signatures.
Ontological arguments like those of Gödel (1995) and Pruss (2009; 2012) rely on premises that initially seem plausible, but on closer scrutiny are not. The premises have modal import that is required for the arguments but is not immediately grasped on inspection, and which ultimately undermines the simpler logical intuitions that make the premises seem plausible. Furthermore, the notion of necessity that they involve goes unspecified, and yet must go beyond standard varieties of logical necessity. This leaves us little reason to believe the premises, while their implausible existential import gives us good reason not to.
It is a striking fact from reverse mathematics that almost all theorems of countable and countably representable mathematics are equivalent to one of just five subsystems of second order arithmetic. The standard view is that the significance of these equivalences lies in the set existence principles that are necessary and sufficient to prove those theorems. In this article I analyse the role of set existence principles in reverse mathematics, and argue that they are best understood as closure conditions on the powerset of the natural numbers.
This article follows on from the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Strong Types are also extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes.
Weak supplementation says that if x is a proper part of y, then y has a proper part that doesn’t overlap x. Suppose that we are impressed by standard counterexamples to weak supplementation like the following. …
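For reference, with PP for proper parthood and O for overlap, weak supplementation is standardly formalized as:

```latex
% Weak supplementation: every proper part is 'supplemented' by a disjoint part
\[
PP(x,y) \rightarrow \exists z\,\bigl(PP(z,y) \wedge \neg O(z,x)\bigr)
\]
```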
Comparativism is the position that the fundamental doxastic state consists in comparative beliefs (e.g., believing p to be more likely than q), with partial beliefs (e.g., believing p to degree x) being grounded in and explained by patterns amongst comparative beliefs that exist under special conditions. In this paper, I develop a version of comparativism that originates with a suggestion made by Frank Ramsey in his ‘Probability and Partial Belief’ (1929). By means of a representation theorem, I show how this ‘Ramseyan comparativism’ can be used to weaken the (unrealistically strong) conditions required for probabilistic coherence that comparativists usually rely on, while still preserving enough structure to let us retain the usual comparativists’ account of quantitative doxastic comparisons.
The Kochen-Specker theorem is an important and subtle topic in the foundations of quantum mechanics (QM). The theorem demonstrates the impossibility of a certain type of interpretation of QM in terms of hidden variables (HV) that naturally suggests itself when one begins to consider the project of interpreting QM. We here present the theorem/argument and the foundational discussion surrounding it at different levels. The reader looking for a quick overview should read the following sections and subsections: 1, 2, 3.1, 3.2, 4, and 6. Those who read the whole entry will find proofs of some non-trivial claims in supplementary documents.
The following general attitude to mathematics seems plausible: standard claims, such as ‘there are infinitely many primes’ or ‘every consistent set of sentences has a model’, are true; nevertheless, if one rifles through the fundamental furniture of the universe, one will not find mathematical objects, such as numbers, sets or models. A natural way of making sense of this attitude is to augment it with the following thought: this is possible because such standard claims have paraphrases that make clear that their truth does not require the fundamental existence of such objects. This paper will draw out some surprising consequences of this general approach to mathematics—an approach that I call paraphrase anti-realism. These consequences concern the relationship between logical structure, on the one hand, and explanatory structure, on the other.
There is an ambiguity in the fundamental concept of deductive logic that went unnoticed until the middle of the 20th Century. Sorting it out has led to profound mathematical investigations with applications in complexity theory and computer science. The origins of this ambiguity and the history of its resolution deserve philosophical attention, because our understanding of logic stands to benefit from an appreciation of their details.
I discuss the problem of whether true contradictions of the form “x is P and not P” might be the expression of an implicit relativization to distinct respects of application of one and the same predicate P. Priest rightly claims that one should not mistake true contradictions for an expression of lexical ambiguity. However, he primarily targets cases of homophony in which the lexical meanings do not overlap. There exist more subtle forms of equivocation, such as the relation of privative opposition singled out by Zwicky and Sadock in their study of ambiguity. I argue that this relation, which is basically a relation of the general to the more specific, underlies the logical form of true contradictions. The generalization appears to be that all true contradictions really mean “x is P in some respects/to some extent, but not in all respects/not to the full extent”. I relate this to the strict-tolerant account of vague predicates and outline a variant of the account to cover one-dimensional and multi-dimensional predicates.
We investigate the relative computability of exchangeable binary relational data when presented in terms of the distribution of an invariant measure on graphs, or as a graphon in either L¹ or the cut distance. We establish basic computable equivalences, and show that L¹ representations contain fundamentally more computable information than the other representations, but that 0′ suffices to move between computable such representations. We show that 0′ is necessary in general, but that in the case of random-free graphons, no oracle is necessary. We also provide an example of an L¹-computable random-free graphon that is not weakly isomorphic to any graphon with an a.e. continuous version.
Proof-theoretic semantics is an alternative to truth-condition semantics. It is based on the fundamental assumption that the central notion in terms of which meanings are assigned to certain expressions of our language, in particular to logical constants, is that of proof rather than truth. In this sense proof-theoretic semantics is semantics in terms of proof. Proof-theoretic semantics also means the semantics of proofs, i.e., the semantics of entities which describe how we arrive at certain assertions given certain assumptions. Both aspects of proof-theoretic semantics can be intertwined, i.e. …
A policymaker must decide which intervention to perform in order to change a currently undesirable situation. The policymaker has at her disposal a team of experts, each with their own understanding of the causal dependencies between the different factors contributing to the outcome. The policymaker has varying degrees of confidence in the experts’ opinions. She wants to combine their opinions in order to decide on the most effective intervention. We formally define the notion of an effective intervention, and then consider how experts’ causal judgments can be combined in order to determine the most effective intervention. We define a notion of two causal models being compatible, and show how compatible causal models can be combined. We then use this as the basis for combining the experts’ causal judgments. We illustrate our approach on a number of real-life examples.
Øystein Linnebo and Richard Pettigrew () have recently developed a version of noneliminative mathematical structuralism based on Fregean abstraction principles. They argue that their theory of abstract structures proves a consistent version of the structuralist thesis that positions in abstract structures only have structural properties. They do this by defining a subset of the properties of positions in structures, so-called fundamental properties, and argue that all fundamental properties of positions are structural. In this paper, we argue that the structuralist thesis, even when restricted to fundamental properties, does not follow from the theory of structures that Linnebo and Pettigrew have developed. To make their account work, we propose a formal framework in terms of Kripke models that makes structural abstraction precise. The formal framework allows us to articulate a revised definition of fundamental properties, understood as intensional properties. Based on this revised definition, we show that the restricted version of the structuralist thesis holds.
We consider Geanakoplos and Polemarchakis’s generalization of Aumann’s famous result on “agreeing to disagree”, in the context of imprecise probability. The main purpose is to reveal a connection between the possibility of agreeing to disagree and the interesting and anomalous phenomenon known as dilation. We show that for two agents who share the same set of priors and update by conditioning on every prior, it is impossible to agree to disagree on the lower or upper probability of a hypothesis unless a certain dilation occurs. With some common topological assumptions, the result entails that it is impossible to agree not to have the same set of posterior probabilities unless dilation is present. This result may be used to generate sufficient conditions for guaranteed full agreement in the generalized Aumann setting for some important models of imprecise priors, and we illustrate the potential with an agreement result involving the density-ratio classes. We also provide a formulation of our results in terms of “dilation-averse” agents who ignore information about the value of a dilating partition but otherwise update by full Bayesian conditioning. Keywords: agreeing to disagree; common knowledge; dilation; imprecise probability.
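For orientation, one standard way of stating dilation (notation assumed here, with lower and upper probabilities written as underlined and overlined P): a partition dilates a hypothesis H when conditioning on any cell of the partition strictly widens the probability interval for H:

```latex
% Dilation: every cell B of the partition strictly widens the interval for H
\[
\underline{P}(H \mid B) \;<\; \underline{P}(H)
\;\le\; \overline{P}(H)
\;<\; \overline{P}(H \mid B)
\qquad \text{for every } B \in \mathcal{B}
\]
```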
According to a conventional view, there exists no common-cause model of quantum correlations satisfying locality requirements. Indeed, Bell’s inequality is derived from certain locality conditions together with the assumption that a common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argue that the derivation of the inequality implicitly assumes the existence of a common common cause for multiple correlations, and that this assumption is unreasonably strong. On their view, what is necessary for explaining the quantum correlations is a common cause for each correlation. However, in this paper we show that for almost all entangled states we cannot construct a local model that is consistent with the quantum mechanical predictions even when we require only the existence of a common cause for each correlation.
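For reference, in the CHSH form the locality conditions yield the following bound on the correlations E between measurement settings a, a′ and b, b′; quantum mechanics predicts, and experiment confirms, violations up to 2√2:

```latex
% CHSH form of Bell's inequality; local common-cause models obey the bound 2
\[
\bigl|\, E(a,b) + E(a,b') + E(a',b) - E(a',b') \,\bigr| \;\le\; 2
\]
```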
In this paper I study an epistemic alternating-offers game with a termination option, in which each rational and self-interested player expresses strategic caution (assigns positive probability to the event of the opponent choosing the termination option) and internally coherent concession-proportional beliefs (expects the opponent to be more likely to terminate the game after being offered a division of the resource associated with a larger personal utility concession than after being offered one associated with a smaller personal utility concession). I define the epistemic conditions under which players expressing concession-proportional beliefs converge on a subjective equilibrium, as well as the conditions under which the subjective equilibrium yields an egalitarian distribution of bargaining gains.