
50160.343452
In this series of posts, I will raise some issues for the logical pluralism of Beall & Restall (hereafter 'B&R'), a much-discussed, topic-revivifying view in the philosophy of logic. My study of their view was prompted by Mark Colyvan, whose course on Philosophy of Logic at Sydney Uni I'm helping to teach this year. …

893092.343515
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
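As a concrete anchor (my sketch, not from the paper), deterministic updating by conditionalization over a finite set of worlds can be written in a few lines; the function name and the toy prior are illustrative only:

```python
# Minimal sketch of deterministic updating by conditionalization:
# on learning evidence E (a set of worlds), zero out the worlds
# incompatible with E and renormalize the prior.

def conditionalize(prior, evidence):
    """Return the posterior P(. | E) from a prior over worlds."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0) for w, p in prior.items()}

prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
posterior = conditionalize(prior, {"w1", "w3"})
# Worlds outside the evidence get probability 0; the rest are rescaled.
```

The "deterministic" aspect is that the evidence fixes a unique posterior: there is exactly one output distribution for each evidence set, with no chancy element in the update itself.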

924346.343532
My grad student Christian Williams and I finished this paper just in time for him to talk about it at SYCO:
• John Baez and Christian Williams, Enriched Lawvere theories for operational semantics. Abstract. …

1011253.343547
We demonstrate how deep and shallow embeddings of functional programs can coexist in the Coq proof assistant using metaprogramming facilities of MetaCoq. While deep embeddings are useful for proving metatheoretical properties of a language, shallow embeddings allow for reasoning about the functional correctness of programs.
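The deep/shallow contrast is not specific to Coq; a minimal Python stand-in (illustrative only, nothing here is from the paper) shows the two styles side by side:

```python
from dataclasses import dataclass

# Deep embedding: the object language is reified as syntax trees, so
# metatheoretic properties (size, transformations) can be stated about programs.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

def eval_deep(e):
    """Interpreter giving the deep embedding its meaning."""
    if isinstance(e, Lit):
        return e.value
    return eval_deep(e.left) + eval_deep(e.right)

# Shallow embedding: the same program is just a host-language value,
# convenient for functional-correctness reasoning, but with no syntax to inspect.
def shallow_prog():
    return 1 + (2 + 3)

deep_prog = Add(Lit(1), Add(Lit(2), Lit(3)))
# Both embeddings agree on the program's meaning.
assert eval_deep(deep_prog) == shallow_prog() == 6
```

The paper's point, on this reading, is that metaprogramming (MetaCoq in their setting) lets one mechanically connect the two representations instead of maintaining them by hand.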

1113787.34356
Is it possible to introduce a small number of agents into an environment, in such a way that an equilibrium results in which almost everyone (including the original agents) cooperates almost all the time? This is a compelling question for those interested in the design of beneficial game-theoretic AI, and it may also provide insights into how to get human societies to function better. We investigate this broad question in the specific context of finitely repeated games, and obtain a mostly positive answer. Our main novel technical tool is the use of limited altruism (LA) types, which behave altruistically towards other LA agents but not towards selfish agents. The uncertainty about which type of agent one is facing turns out to be essential in establishing cooperation. We provide characterizations, in several families of games, of which LA types are effective for our purposes.

1211466.343573
According to a conventional view, there exists no common cause model of quantum correlations satisfying locality requirements. Indeed, Bell’s inequality is derived from some locality requirements and the assumption that the common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argued that in the derivation of the inequality, the existence of a common common cause for multiple correlations is implicitly assumed and that the assumption is unreasonably strong. According to their idea, what is necessary for explaining the quantum correlation is a common cause for each correlation. However, Graßhoff et al. showed that when there are three pairs of perfectly correlated events and a common cause of each correlation exists, we cannot construct a common cause model that is consistent with quantum mechanical prediction and also meets several locality requirements. In this paper, first, as a consequence of the fact shown by Graßhoff et al., we will confirm that there exists no local common cause model when a two-particle system is in any maximally entangled state. After that, based on Hardy’s famous argument, we will prove that there exists no local common cause model when a two-particle system is in any non-maximally entangled state. Therefore, it will be concluded that for any entangled state, there exists no local common cause model. It will be revealed that the non-existence of a common cause model satisfying locality is not limited to a particular state like the singlet state.

1668197.343591
Agents make predictions based on similar past cases, while also learning the relative importance of various attributes in judging similarity. We ask whether the resulting "empirically optimal similarity function" (EOSF) is unique, and how easy it is to find it. We show that with many observations and few relevant variables, uniqueness holds. By contrast, when there are many variables relative to observations, non-uniqueness is the rule, and finding the EOSF is computationally hard. The results are interpreted as providing conditions under which rational agents who have access to the same observations are likely to converge on the same predictions, and conditions under which they may entertain different probabilistic beliefs.
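To make the setup concrete (a hypothetical sketch; the weighting scheme, function names, and toy data are mine, not the paper's), one can search for attribute weights that minimize leave-one-out prediction error over past cases:

```python
import math

# Similarity-based prediction with attribute weights w, and a brute-force
# grid search for an "empirically optimal" weight vector, i.e. one minimizing
# leave-one-out squared prediction error on the observed cases.

def similarity(x, y, w):
    return math.exp(-sum(wi * abs(a - b) for wi, a, b in zip(w, x, y)))

def predict(target, cases, w):
    # similarity-weighted average of past outcomes
    num = sum(similarity(target, x, w) * y for x, y in cases)
    den = sum(similarity(target, x, w) for x, y in cases)
    return num / den

def loo_error(cases, w):
    err = 0.0
    for i, (x, y) in enumerate(cases):
        rest = cases[:i] + cases[i + 1:]
        err += (predict(x, rest, w) - y) ** 2
    return err

# Outcome depends only on the first attribute; the second is noise.
cases = [((0, 0), 0.0), ((0, 1), 0.0), ((1, 0), 1.0), ((1, 1), 1.0)]
grid = [(a, b) for a in (0.1, 1.0, 5.0) for b in (0.1, 1.0, 5.0)]
best = min(grid, key=lambda w: loo_error(cases, w))
# The empirically optimal weights emphasize the relevant attribute.
```

With few cases and many attributes, many weight vectors tie on the empirical error, which is one way to picture the paper's non-uniqueness regime.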

1676760.343607
We finish this chapter by turning from illustrations of strategies for proof-construction to consider a basic issue of principle, one which we have so far passed quietly by. (a) Start from an example. Tachyons are, by definition, elementary particles which are superluminal, i.e. which travel faster than the speed of light. So, adopting a QL language quantifying over elementary particles, and with the obvious predicates, the following is true: (1) ∀x(Tx → Sx).

1693406.343621
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the first-order theories of Gödel’s results necessarily leave the mathematical structures ill-defined, e.g., there are necessarily models with infinite integers.

2278838.343633
I saw the following image on Twitter and Reddit, an image suggesting an entire class of infinitary analogues of the game Connect-Four. What fun! Let’s figure it out! The rules will naturally generalize those in Connect-Four. …
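Before generalizing, it helps to fix the finite rule precisely. Here is a small sketch (mine, not from the post) of the "connect-n" winning condition on a board of unbounded extent, stored sparsely so nothing depends on fixed dimensions:

```python
# Checking a "connect-n" line through a given cell on an unbounded board,
# represented as a sparse dict from (column, row) to player symbol.

DIRECTIONS = [(1, 0), (0, 1), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def line_length(board, cell, player, direction):
    """Length of the maximal same-player line through `cell` along `direction`."""
    dx, dy = direction
    x, y = cell
    count = 1
    for sx, sy in ((dx, dy), (-dx, -dy)):  # walk both ways along the line
        cx, cy = x + sx, y + sy
        while board.get((cx, cy)) == player:
            count += 1
            cx, cy = cx + sx, cy + sy
    return count

def wins(board, cell, player, n=4):
    return any(line_length(board, cell, player, d) >= n for d in DIRECTIONS)

board = {(0, 0): "X", (1, 1): "X", (2, 2): "X", (3, 3): "X", (1, 0): "O"}
# The diagonal through (3, 3) carries four X's, so X has a Connect-Four win.
```

The infinitary variants in the image change the board's ordinal dimensions and the target line length; the local win-check above is the part that survives those generalizations.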

2334229.343646
In ‘Essence and Modality’, Kit Fine proposes that instead of explaining the notion of essence in terms of metaphysical necessity, we should understand metaphysical necessity as a special case of essence: For each class of objects, be they concepts or individuals or entities of some other kind, will give rise to its own domain of necessary truths, the truths which flow from the nature of the objects in question. The metaphysically necessary truths can then be identified with the propositions which are true in virtue of the nature of all objects whatever. (Fine, 1994, p.9) Call the view that for a proposition to be metaphysically necessary is for it to be true in virtue of the nature of all objects whatsoever Fine’s Thesis.

2354540.343664
Temporal notions based on a finite set A of properties are represented in strings, on which projections are defined that vary the granularity A. The structure of properties in A is elaborated to describe statives, events and actions, subject to a distinction in meaning (advocated by Levin and Rappaport Hovav) between what the lexicon prescribes and what a context of use supplies. The projections proposed are deployed as labels for records and record types amenable to finite-state methods.

2478197.343677
One argument for Duality is that it makes sense of the ‘inescapable clash’ involved in asserting ‘q if p’ and ‘might not q if p’. Throughout the paper I will restrict attention to a propositional language L, defined as follows: Definition 1. Let L be a language consisting of a set A of atomic formulae α, α′, ..., closed under the connectives ¬, ∨, ∧, the indicative and subjunctive conditionals → and >, and the epistemic and subjunctive possibility modals ♦e and ♦. Say that a claim is boolean if it does not contain →, >, ♦e, ♦, or ∨.

2795723.343694
The inner-model reflection principle asserts that whenever a statement ϕ(a) in the first-order language of set theory is true in the set-theoretic universe V, then it is also true in a proper inner model W ⊊ V. A stronger principle, the ground-model reflection principle, asserts that any such ϕ(a) true in V is also true in some nontrivial ground model of the universe with respect to set forcing. These principles each express a form of width reflection in contrast to the usual height reflection of the Levy–Montague reflection theorem. They are each equiconsistent with ZFC and indeed Π₂-conservative over ZFC, being forceable by class forcing while preserving any desired rank-initial segment of the universe. Furthermore, the inner-model reflection principle is a consequence of the existence of sufficient large cardinals, and lightface formulations of the reflection principles follow from the maximality principle MP and from the inner-model hypothesis IMH. We also consider some questions concerning the expressibility of the principles.

2796239.343708
In 1963 Prior proved a theorem that places surprising constraints on the logic of intentional attitudes, like ‘thinks that’, ‘hopes that’, ‘says that’ and ‘fears that’. Paraphrasing it in English, and applying it to ‘thinks’, it states: If, at t, I thought that I didn’t think a truth at t, then there is both a truth and a falsehood I thought at t. In this paper I explore a response to this paradox that exploits the opacity of attitude verbs, exemplified in this case by the operator ‘I thought at t that’, to block Prior’s derivation. According to this picture, both Leibniz’s law and existential generalization fail in opaque contexts. In particular, one cannot infer from the fact that I’m thinking at t that I’m not thinking a truth at t, that there is a particular proposition such that I am thinking it at t. Moreover, unlike some approaches to this paradox (see Bacon et al. [4]) the failure of existential generalization is not motivated by the idea that certain paradoxical propositions do not exist, for this view maintains that there is a proposition that I’m not thinking a truth at t. Several advantages of this approach over the nonexistence approach are discussed, and models demonstrating the consistency of this theory are provided. Finally, the resulting considerations are applied to the liar paradox, and are used to provide a nonstandard justification of a classical gap theory of truth. One of the main challenges for this sort of theory — to explain the point of assertion, if not to assert truths — can be met within this framework.
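As a compact rendering (my gloss, not the paper's notation), with Q an arbitrary sentential operator, here read as 'I thought at t that', and a propositional quantifier, the theorem can be displayed as:

```latex
% Prior's 1963 theorem: if Q applies to "whatever Q applies to is false",
% then Q applies to some truth and also to some falsehood.
\[
  Q\,\forall p\,(Qp \rightarrow \neg p)
  \;\rightarrow\;
  \exists p\,(Qp \wedge p) \,\wedge\, \exists p\,(Qp \wedge \neg p)
\]
```

The opacity response described above blocks the existential-generalization steps needed to extract the witnessing propositions from within the scope of Q.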

2796285.343721
In this paper two paradoxes of infinity are considered through the lens of counterfactual logic, drawing heavily on a result of Kit Fine [10]. I will argue that a satisfactory resolution of these paradoxes will have wide-ranging implications for the logic of counterfactuals. I then situate these puzzles in the context of the wider role of counterfactuals, connecting them to indicative conditionals, probabilities, rationality and the direction of causation, and compare my own resolution of the paradoxes to alternatives inspired by the theories of Lewis and Fine.

3261225.343735
We obtain the exact solution of the facilitated totally asymmetric simple exclusion process (FTASEP) in 1D. The model is closely related to the conserved lattice gas (CLG) model and to some cellular automaton traffic models. In the FTASEP a particle at site j in Z jumps, at integer times, to site j + 1, provided site j − 1 is occupied and site j + 1 is empty. When started with a Bernoulli product measure at density ρ the system approaches a stationary state. This nonequilibrium steady state (NESS) has phase transitions at ρ = 1/2 and ρ = 2/3. The different density regimes 0 < ρ < 1/2, 1/2 < ρ < 2/3, and 2/3 < ρ < 1 exhibit many surprising properties; for example, the pair correlation g(j) = ⟨η(i)η(i + j)⟩ satisfies, for all n ∈ Z, ∑_{j=kn+1}^{k(n+1)} g(j) = kρ, with k = 2 when 0 ≤ ρ ≤ 1/2, k = 6 when 1/2 ≤ ρ ≤ 2/3, and k = 3 when 2/3 ≤ ρ ≤ 1. The quantity lim_{L→∞} V_L/L, where V_L is the variance in the number of particles in an interval of length L, jumps discontinuously from ρ(1 − ρ) to 0 when ρ → 1/2 and when ρ → 2/3.
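The jump rule is simple enough to simulate directly. Here is a minimal sketch (mine, not from the paper) of the parallel integer-time update on a finite ring standing in for Z:

```python
import random

# FTASEP update as described above: at each integer time, a particle at
# site j jumps to j+1 provided site j-1 is occupied and site j+1 is empty.
# All facilitated particles jump simultaneously; we use a ring of L sites.

def step(config):
    L = len(config)
    movable = [j for j in range(L)
               if config[j] == 1
               and config[(j - 1) % L] == 1
               and config[(j + 1) % L] == 0]
    new = config[:]
    for j in movable:  # targets never collide: a movable site's successor is empty
        new[j] = 0
        new[(j + 1) % L] = 1
    return new

random.seed(0)
rho = 0.6
config = [1 if random.random() < rho else 0 for _ in range(100)]  # Bernoulli start
n0 = sum(config)
for _ in range(50):
    config = step(config)
# The dynamics conserve the particle number while rearranging the profile.
assert sum(config) == n0
```

A simulation like this can only illustrate the relaxation toward the NESS; the paper's contribution is the exact stationary measure, including the sum rules for g(j) quoted above.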

3290486.343747
The thesis is: the orders of infinite smallness which occur in the early calculus can be understood in terms of what we now call a valuation.

3290522.343763
Re. Chapter 4: Probably the best-known application of model theory has been Robinson’s development of nonstandard analysis to resuscitate the historical notion of an infinitesimal. We tried to further this development by showing how it could be used to handle the orders of infinite smallness in the early calculus.

3618811.34378
Just because you think (2) that the animal is a fish and (1) that if it’s a fish, then if it has lungs, it’s a lungfish, doesn’t mean you should think (3) that if the animal has lungs, then it’s a lungfish. Indeed, it seems reasonable to conjecture that if it has lungs, it’s not a fish at all. So it seems reasonable to accept (1) and (2) while judging (3) to be utterly unacceptable.
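Schematically (my notation, not the source's), with f = 'it's a fish', l = 'it has lungs', g = 'it's a lungfish', this is the McGee-style pattern in which the premises of modus ponens seem acceptable while its conclusion does not:

```latex
% (1)  f -> (l -> g)   if it's a fish, then if it has lungs it's a lungfish
% (2)  f               it's a fish
% (3)  l -> g          if it has lungs, it's a lungfish  -- yet (3) seems unacceptable
\[
  f \rightarrow (l \rightarrow g),\;\; f \;\;\not\Rightarrow\;\; l \rightarrow g
  \qquad\text{(as a norm of acceptance for indicative conditionals)}
\]
```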

3897755.343796
Sensorimotor theories of perception hold that action is a constitutive component of perception. Noë, for example, on the first page of his book Action in Perception, claims that “what we perceive is determined by what we do (or what we know how to do)” (2006, p. 1, author’s emphases). But how does action participate constitutively in perception? The basic idea is that perceiving itself involves an understanding of the effects of movements on the flow of our experiences. Perception, on this conception, is not reducible to instantaneous, punctual experience, understood here as the way something appears to us at an instant from a particular perspective, much less to the sensations associated with that experience. Thus, I am able to perceive the water bottle on the table because I have an understanding of how it would appear to me if I moved closer to or farther from it, or of how it would appear to me if I grasped it and turned it around. The bottle as a whole is given to me in perception in virtue of this understanding. Without this horizon of intelligibility connecting actions and variations in an object’s appearances, I could have disconnected experiences caused by the presence of that object, but not perceptions. Perception, therefore, involves an understanding of relations between actions and variations in the flow of experience, or in the way things appear to us.

4120926.343808
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
@ARTICLE{GitmanHamkinsKaragila:KMsettheorydoesnotprovetheclassFodortheorem,
author = {Victoria Gitman and Joel David Hamkins and Asaf Karagila},
title = {Kelley-Morse set theory does not prove the class {F}odor theorem},
journal = {},
year = {},
volume = {},
number = {},
pages = {},
month = {},
note = {manuscript under review},
abstract = {},
keywords = {underreview},
eprint = {1904.04190},
archivePrefix = {arXiv},
primaryClass = {math.LO},
source = {},
doi = {},
url = {http://wp.me/p5M0LV1RD},
}
Abstract. …

4278395.343823
A wide family of many-valued logics—for instance, those based on the weak Kleene algebra—include a nonclassical truth-value that is “contaminating” in the sense that whenever the value is assigned to a formula ϕ, any complex formula in which ϕ appears is assigned that value as well. In such systems, the contaminating value enjoys a wide range of interpretations, suggesting scenarios in which more than one of these interpretations are called for. This calls for an evaluation of systems with multiple contaminating values. In this paper, we consider the countably infinite family of multiple-conclusion consequence relations in which classical logic is enriched with one or more contaminating values whose behavior is determined by a linear ordering between them. We consider some motivations and applications for such systems and provide general characterizations for all consequence relations in this family. Finally, we provide sequent calculi for a pair of four-valued logics including two linearly ordered contaminating values before defining two-sided sequent calculi corresponding to each of the infinite family of many-valued logics studied in this paper.
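To illustrate the idea (a hypothetical sketch, not the paper's formal semantics; the dominance convention is an assumption of mine), here is classical logic enriched with two linearly ordered contaminating values, where the more dominant value wins whenever both occur:

```python
# Classical logic with linearly ordered contaminating values e1, e2,
# where e1 is taken to dominate e2 (an assumed convention): any compound
# containing a contaminating value takes the most dominant one present.

CONTAMINATING = ["e1", "e2"]  # listed from most to least dominant

def dominant(a, b):
    present = [v for v in CONTAMINATING if v in (a, b)]
    return present[0] if present else None

def neg(a):
    return a if a in CONTAMINATING else (not a)

def conj(a, b):
    d = dominant(a, b)
    return d if d is not None else (a and b)

def disj(a, b):
    d = dominant(a, b)
    return d if d is not None else (a or b)

# Contamination overrides classical behavior: even p ∨ ¬p is infected.
assert disj("e2", neg("e2")) == "e2"
assert conj("e2", "e1") == "e1"   # the more dominant value wins
```

With a single contaminating value this reduces to weak Kleene behavior; adding further ordered values generalizes the pattern, which is the family the paper characterizes.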

4383771.343837
In his paper ‘The possibility of vagueness’ (2017), Kit Fine proposes a new logic of vagueness, CL, that promises to provide both a solution to the sorites paradox and a way to avoid the impossibility result from Fine (2008). The present paper presents a challenge to his new theory of vagueness. I argue that the possibility theorem stated in Fine (2017), as well as his solution to the sorites paradox essentially depend on an illegitimate expressive limitation of the language. More specifically, I show that if we extend the language with any negation operator that obeys reductio ad absurdum, we can prove a new impossibility result that makes the kind of indeterminacy that Fine takes to be a hallmark of vagueness impossible. I show that such negation operators can be conservatively added to CL and examine some of the philosophical consequences of this result. Moreover, I demonstrate that we can define a particular negation operator that behaves exactly like intuitionistic negation in a natural propositionally quantified extension of CL. Since intuitionistic negation obeys reductio, the new impossibility result holds in this propositionally quantified extension of CL. In addition, the sorites paradox resurfaces for the new negation. Since this extension of CL is completely unobjectionable, this poses what appears to be a serious problem for Fine’s theory of vagueness.

4830362.343853
Suppose that we are interested in learning, from nonexperimental data, the Markov equivalence class of an unknown causal Bayesian network (CBN) on a given, fixed set of variables. It is well known that no learning algorithm can be so good as to have the property of convergence to the truth for all CBNs (on the given set of variables) (Spirtes et al. 2000). That is, the convergence property can be secured only for some CBNs, and has to be sacrificed for some. In reaction to this result, the standard practice has been to design learning algorithms that secure the convergence property for at least all the CBNs that satisfy the famous causal Faithfulness condition, which implies sacrificing the convergence property for some CBNs that violate Faithfulness (Spirtes et al. 2000). We propose a new approach to justifying this standard design practice without assuming that the true, unknown CBN satisfies the Faithfulness condition or any weaker variant of it. Building on some earlier results (especially Meek 1995), we show that, although no learning algorithm can be so good as to have the convergence property for all CBNs, some learning algorithms are at least this good: having stochastic convergence to the truth (i) for almost all CBNs, (ii) on a maximal domain of CBNs, and (iii) with a kind of locally uniform convergence that guarantees that low error probability can be made stable under small perturbations of the joint probability distribution. We also show that, for any causal learning algorithm, if it is that good, i.e. achieves the joint mode of convergence to the truth (i)–(iii), then it must follow the standard design practice: converging stochastically to the truth for (at least) all CBNs that satisfy Faithfulness and, hence, being forced to sacrifice the convergence property for (at least) some CBNs that violate Faithfulness.
To the best of our knowledge, this is the first theoretical result that explains, without assuming the Faithfulness condition or any of its weaker variants, why it is mandatory rather than merely optional to follow the standard design practice when the available data are nonexperimental. This result is proved for any fixed finite set of categorical variables, under just the standard IID assumption and the assumptions built into the definition of CBNs.

5080587.343869
The Gibbs entropy of a macroscopic classical system is a function of a probability distribution over phase space, i.e., of an ensemble. In contrast, the Boltzmann entropy is a function on phase space, and is thus defined for an individual system. Our aim is to discuss and compare these two notions of entropy, along with the associated ensemblist and individualist views of thermal equilibrium. Using the Gibbsian ensembles for the computation of the Gibbs entropy, the two notions yield the same (leading order) values for the entropy of a macroscopic system in thermal equilibrium. The two approaches do not, however, necessarily agree for nonequilibrium systems. For those, we argue that the Boltzmann entropy is the one that corresponds to thermodynamic entropy, in particular in connection with the second law of thermodynamics. Moreover, we describe the quantum analog of the Boltzmann entropy, and we argue that the individualist (Boltzmannian) concept of equilibrium is supported by the recent works on thermalization of closed quantum systems.
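The contrast can be made concrete in a discrete toy setting (my sketch, with units where the Boltzmann constant is 1; the abstract itself works in continuous phase space):

```python
import math

# Gibbs entropy is a functional of a probability distribution over
# microstates; Boltzmann entropy assigns each individual microstate the
# log-size of the macrostate cell containing it.

def gibbs_entropy(p):
    """S_G = -sum_i p_i log p_i for a discrete distribution p."""
    return -sum(pi * math.log(pi) for pi in p.values() if pi > 0)

def boltzmann_entropy(microstate, macrostate_of, cell_sizes):
    """S_B(x) = log |Gamma_M(x)|, the log-volume of x's macrostate."""
    return math.log(cell_sizes[macrostate_of[microstate]])

# Toy phase space: four microstates, macrostate A with 3 states, B with 1.
macrostate_of = {1: "A", 2: "A", 3: "A", 4: "B"}
cell_sizes = {"A": 3, "B": 1}

# For the uniform ("equilibrium") ensemble over macrostate A, the Gibbs
# entropy of the ensemble equals the Boltzmann entropy of each microstate
# in A, namely log 3 -- mirroring the leading-order agreement in equilibrium.
uniform_A = {1: 1/3, 2: 1/3, 3: 1/3}
assert abs(gibbs_entropy(uniform_A)
           - boltzmann_entropy(1, macrostate_of, cell_sizes)) < 1e-12
```

Out of equilibrium the analogy breaks in the way the abstract indicates: the Gibbs value tracks the ensemble one chooses, while the Boltzmann value tracks the macrostate the individual system actually occupies.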

5223124.343882
In 1926, Mally presented the first formal system of deontic logic. His system had several consequences which Mally regarded as surprising but defensible. It also had a consequence (“A is obligatory if and only if A is the case”) which Menger (1939) and almost all later deontic logicians have regarded as unacceptable. We will not only describe Mally’s system but also discuss how it may be repaired.

5715373.343895
If presentism and most, if not all, other versions of the A-theory are true, then propositions change in truth value. For instance, on presentism, in the time of the dinosaurs it was not true that horses exist, but now it is true. …

5860101.343908
Our main result so far is a characterization of Schnorr randomness and Martin-Löf randomness in terms of Lévy’s classical upward convergence theorem in martingale theory. This is interesting philosophically because it suggests that randomness notions should be brought to bear on the interpretation of convergence-to-the-truth results.
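Lévy's upward theorem is easy to see in a toy case (my sketch, not from the abstract): let X be the value of an infinite fair-coin sequence read as a binary expansion, X = Σₙ bₙ 2⁻ⁿ; then E[X | b₁…bₙ] converges to X along almost every sequence, and the randomness results concern which individual sequences witness this convergence.

```python
import random

# Conditional expectation of X = sum_n b_n 2^{-n} given the first n bits:
# the observed partial sum plus the mean contribution of the unseen tail.

def cond_expectation(bits):
    n = len(bits)
    observed = sum(b * 2 ** -(i + 1) for i, b in enumerate(bits))
    tail_mean = 2 ** -(n + 1)  # unseen fair bits contribute sum_{k>n} (1/2) 2^{-k}
    return observed + tail_mean

random.seed(1)
bits = [random.randint(0, 1) for _ in range(40)]
x = sum(b * 2 ** -(i + 1) for i, b in enumerate(bits))  # finite proxy for X
estimates = [cond_expectation(bits[:n]) for n in range(1, 41)]
# The conditional expectations close in on x as more bits are revealed.
```

The philosophical point gestured at above is that "almost every sequence" can be sharpened: convergence holds exactly on sequences satisfying the relevant algorithmic randomness notion.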

7136415.343924
In light of logic’s historical roots in dialogue and argumentation, games and logic are a natural fit. Argumentation is a game-like activity that involves taking turns, saying the right things at the right time, and, in competitive settings, has clear payoffs in terms of winning and losing. Pursuing this connection, specialized logic games were already used in the Middle Ages as a tool for logic training (Hamblin 1970). The modern era augmented this picture with formal dialogue games as a foundation for logic, relating winning strategies in argumentation to cogent proofs (Kamlah 1973 [1984]).