1.
    A common view is that Charles Peirce influenced Josiah Royce. This paper demonstrates that Josiah Royce influenced Charles Peirce. A chronology is presented, followed by a brief description of a change in Peirce’s thinking that resulted from his study of Royce’s writings.
  2.
    Computer programs are particular kinds of texts. It is therefore natural to ask what the meaning of a program is or, more generally, how we can set up a formal semantical account of a programming language. There are many possible answers to such questions, each motivated by some particular aspect of programs. So, for instance, the fact that programs are to be executed on some kind of computing machine gives rise to operational semantics, whereas the similarity of programming languages to the formal languages of mathematical logic has motivated the denotational approach, which interprets programs and their constituents by means of set-theoretical models.
    Found 1 week, 2 days ago on Stanford Encyclopedia of Philosophy
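    To make the contrast concrete, here is a minimal sketch in Python (an illustration of the two styles, not anything from the entry; the names Lit, Add, step and denote are invented): a toy language of numerals and addition, given a small-step operational semantics alongside a denotational one.

    ```python
    # Toy expression language with two semantics (illustrative sketch).
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Lit:
        n: int

    @dataclass
    class Add:
        left: "Expr"
        right: "Expr"

    Expr = Union[Lit, Add]

    def step(e: Expr) -> Expr:
        """Operational semantics: one small-step machine-like rewrite."""
        if isinstance(e, Add):
            if isinstance(e.left, Lit) and isinstance(e.right, Lit):
                return Lit(e.left.n + e.right.n)   # addition of two values reduces
            if isinstance(e.left, Lit):
                return Add(e.left, step(e.right))  # reduce the right subterm
            return Add(step(e.left), e.right)      # reduce the left subterm
        return e                                   # literals are values

    def denote(e: Expr) -> int:
        """Denotational semantics: map each expression to a mathematical value."""
        if isinstance(e, Lit):
            return e.n
        return denote(e.left) + denote(e.right)

    prog = Add(Lit(1), Add(Lit(2), Lit(3)))
    while isinstance(prog, Add):   # run the operational semantics to completion
        prog = step(prog)
    assert prog.n == denote(Add(Lit(1), Add(Lit(2), Lit(3))))   # both give 6
    ```

    On this fragment the two semantics agree: the operational one describes how a machine rewrites the program, while the denotational one assigns each expression a mathematical object directly.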
  3.
    A small probability space representation of quantum mechanical probabilities is defined as a collection of Kolmogorovian probability spaces, each of which is associated with a context of a maximal set of compatible measurements, that portrays quantum probabilities as Kolmogorovian probabilities of classical events. Bell’s theorem is stated and analyzed in terms of the small probability space formalism.
    Found 1 week, 3 days ago on PhilSci Archive
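    For reference, the standard background the abstract presupposes (textbook material, not quoted from the paper): a Kolmogorovian probability space, and the context-indexed family of such spaces that a small probability space representation comprises.

    ```latex
    % A Kolmogorovian probability space is a triple (\Omega, \Sigma, p), with
    % \Sigma a \sigma-algebra on \Omega, p(\Omega) = 1, and countable additivity:
    p\Big(\bigcup_{i=1}^{\infty} A_i\Big) \;=\; \sum_{i=1}^{\infty} p(A_i)
    \quad \text{for pairwise disjoint } A_i \in \Sigma .
    % A small probability space representation is then a family
    % \{(\Omega_C, \Sigma_C, p_C)\}_{C \in \mathcal{C}}, one such space per
    % context C, i.e., per maximal set of compatible measurements.
    ```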
  4.
    Logicism is typically defined as the thesis that mathematics reduces to, or is an extension of, logic. Exactly what “reduces” means here is not always made entirely clear. (More definite articulations of logicism are explored in section 5 below.) While something like this thesis had been articulated by others (e.g., Dedekind 1888 and arguably Leibniz 1666), logicism only became a widespread subject of intellectual study when serious attempts began to be made to provide complete deductions of the most important principles of mathematics from purely logical foundations. This became possible only with the development of modern quantifier logic, which went hand in hand with these attempts. Gottlob Frege announced such a project in his 1884 Grundlagen der Arithmetik (translated as Frege 1950), and attempted to carry it out in his 1893–1902 Grundgesetze der Arithmetik (translated as Frege 2013). Frege limited his logicism to arithmetic, however, and it turned out that his logical foundation was inconsistent.
    Found 2 weeks, 1 day ago on Kevin C. Klement's site
  5.
    Simplistic accounts of its history sometimes portray logic as having stagnated in the West completely from its origins in the works of Aristotle all the way until the 19th Century. This is of course nonsense. The Stoics and Megarians added propositional logic. Medievals brought greater unity and systematicity to Aristotle’s system and improved our understanding of its underpinnings (see e.g., Henry 1972), and important writings on logic were composed by thinkers from Leibniz to Clarke to Arnauld and Nicole. However, it cannot be denied that an unprecedented sea change occurred in the 19th Century, one that has completely transformed our understanding of logic and the methods used in studying it. This revolution can be seen as proceeding in two main stages. The first dates to the mid-19th Century and is owed most signally to the work of George Boole (1815–1864). The second dates to the late 19th Century and the works of Gottlob Frege (1848–1925). Both were mathematicians primarily, and their work made it possible to bring mathematical and formal approaches to logical research, paving the way for the significant meta-logical results of the 20th Century. Boolean algebra, the heart of Boole’s contributions to logic, has also come to represent a cornerstone of modern computing. Frege had broad philosophical interests, and his writings on the nature of logical form, meaning and truth remain the subject of intense theoretical discussion, especially in the analytic tradition. Frege’s works, and the powerful new logical calculi developed at the end of the 19th Century, influenced many of its most seminal figures, such as Bertrand Russell, Ludwig Wittgenstein and Rudolf Carnap. Indeed, Frege is sometimes heralded as the “father” of analytic philosophy, although he himself would not live to become aware of any such movement.
    Found 2 weeks, 1 day ago on Kevin C. Klement's site
  6.
    In this paper, I will examine the representative halfer and thirder solutions to the Sleeping Beauty problem. Then, by properly applying the event concept in probability theory and examining the similarity of the Sleeping Beauty problem to the Monty Hall problem, it is concluded that the representative thirder solution is wrong and the halfers are right, but that the representative halfer solution also contains a logical error.
    Found 2 weeks, 3 days ago on PhilSci Archive
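    Since the argument leans on the Monty Hall problem, a quick simulation of its uncontroversial part may help; this sketch (illustrative only) bears on Monty Hall itself, not on which Sleeping Beauty camp is right.

    ```python
    # Monty Hall simulation: switching wins about 2/3 of the time.
    import random

    def trial(switch: bool) -> bool:
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that hides a goat and was not picked.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    n = 100_000
    print("stay:  ", sum(trial(False) for _ in range(n)) / n)  # ~ 0.333
    print("switch:", sum(trial(True) for _ in range(n)) / n)   # ~ 0.667
    ```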
  7.
    This paper raises a simple continuous-spectrum issue for the many-worlds (Everettian) interpretation of quantum mechanics. I will assume that the Everettian interpretation refers to the many-worlds understanding based on quantum decoherence. The fact that some operators in quantum mechanics have a continuous spectrum is used to propose a simple thought experiment based on probability theory. The paper then concludes that it is untenable to regard each possibility to which the wavefunction Ψ assigns probability as an actual universe. While one could argue that a continuous spectrum leads to an inconsistency in the cardinality of universes, this paper proposes a different argument, one that does not turn on theoretical mathematics but instead points to practical problems.
    Found 2 weeks, 3 days ago on PhilSci Archive
  8.
    In this paper, I examine the relationship between physical quantities and physical states in quantum theories. I argue against the claim made by Arageorgis (1995) that the approach to interpreting quantum theories known as Algebraic Imperialism allows for “too many states”. I prove a result establishing that the Algebraic Imperialist has very general resources that she can employ to change her abstract algebra of quantities in order to rule out unphysical states.
    Found 2 weeks, 6 days ago on PhilSci Archive
  9.
    Continuing from the previous post, I’ll consider five elementary textbooks aimed at philosophers, all either first published, or with new editions, well after e.g. Edgington’s State of the Art article. …
    Found 3 weeks ago on Peter Smith's blog
  10.
    We present two-dimensional tableau systems for the actuality, fixedly, and ↑ operators. All systems are proved sound and complete with respect to a two-dimensional semantics. In addition, some issues regarding decidability are discussed.
    Found 1 month ago on The Australasian Journal of Logic
  11.
    Before going off to Florence, I was reworking chapters on the material conditional for IFL2 (in fact I posted a couple of draft chapters here, which I then thought I could improve on, and so I rapidly took them down again). …
    Found 1 month ago on Peter Smith's blog
  12.
    We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect to the probability measure. If a state can be learned from another this way, then it is said to be Bayes accessible from the evidence. It is shown that the Bayes accessibility relation is reflexive, antisymmetric and non-transitive. If every state is Bayes accessible from some other state defined on the same set of random variables, then the set of states is called weakly Bayes connected. It is shown that the set of states is not weakly Bayes connected if the probability space is standard. The set of states is called weakly Bayes connectable if, given any state, the probability space can be extended in such a way that the given state becomes Bayes accessible from some other state in the extended space. It is shown that probability spaces are weakly Bayes connectable. Since conditioning using the theory of conditional expectations includes both Bayes’ rule and Jeffrey conditionalization as special cases, the results presented substantially generalize some results obtained earlier for Jeffrey conditionalization.
    Found 1 month ago on Miklos Redei's site
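    The two special cases mentioned in the last sentence are, in their familiar textbook forms (supplied here for reference, not quoted from the paper):

    ```latex
    % Bayes' rule: evidence E is learned with certainty,
    P_{\mathrm{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \cap E)}{P(E)} .
    % Jeffrey conditionalization: the weights on a partition \{E_i\}
    % shift to new values q_i,
    P_{\mathrm{new}}(A) \;=\; \sum_i P(A \mid E_i)\, q_i .
    ```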
  13.
    It is sometimes convenient or useful in mathematics to treat isomorphic structures as the same. The recently proposed Univalence Axiom for the foundations of mathematics elevates this idea to a foundational principle in the setting of Homotopy Type Theory. It states, roughly, that isomorphic structures can be identified. We explore the motivations and consequences, both mathematical and philosophical, of making such a new logical postulate.
    Found 1 month ago on Steve Awodey's site
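    Schematically, in the usual Homotopy Type Theory notation (the standard formulation, not a quotation from the paper): for types A and B in a universe U, univalence says the canonical map from identities to equivalences is itself an equivalence.

    ```latex
    \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B)
    \qquad \text{is an equivalence, so in particular} \qquad
    \mathsf{ua} : (A \simeq B) \longrightarrow (A =_{\mathcal{U}} B) .
    ```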
  14.
    There are, at first blush, two kinds of construction involved: constructions of proofs of some proposition and constructions of objects of some type. But I will argue that, from the point of view of foundations of mathematics, there is no difference between the two notions. A proposition may be regarded as a type of object, namely, the type of its proofs. Conversely, a type A may be regarded as a proposition, namely, the proposition whose proofs are the objects of type A. So a proposition A is true just in case there is an object of type A.
    Found 1 month ago on Steve Awodey's site
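    The correspondence is directly visible in a proof assistant; a minimal Lean 4 sketch (my illustration, not the author’s text):

    ```lean
    -- Propositions as types: a term of type `A → A` is literally a proof
    -- that A implies A; constructing the object is proving the proposition.
    example (A : Prop) : A → A := fun a => a

    -- Conversely, a type is "true" just in case it is inhabited:
    -- exhibiting the term `0` witnesses the "proposition" Nat.
    example : Nat := 0
    ```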
  15.
    Tomorrow, I’ll have something big to announce here. So, just to whet your appetites, and to get myself back into the habit of blogging, I figured I’d offer you an appetizer course: some more miscellaneous non-Trump-related news. …
    Found 1 month, 1 week ago on Scott Aaronson's blog
  16.
    In his remarkable new book, Aboutness (Yablo 2013), Stephen Yablo makes a compelling case that sometimes the felt truth value or content of a sentence is not its semantic content but rather its semantic content minus a related presupposition (the result of logically subtracting a relevant presupposition from its semantic content). Yablo uses this theory to provide a unified account of non-catastrophic presupposition failures, cheap ontological arguments, and cases of unexpected assertive content. This proposed unification is surprising, illuminating, and really quite exciting. But what is the notion of logical subtraction underlying the theory: what is it to logically subtract some content from another? And do we have an expression of ordinary language that picks out logical remainders (what’s left after an operation of logical subtraction has been carried out)? In this paper, I propose that indicative conditionals might be our way of expressing logical remainders. If correct, we’ll have some new resources for thinking about both conditionals and remainders. And even if indicative conditionals are not equivalent to their corresponding logical remainders, I’ll argue that they are pretty closely related: close enough to warrant thinking about what one can learn from one by way of the other.
    Found 1 month, 1 week ago on Justin Khoo's site
  17.
    No single calculus of inductive inference can serve universally. There is even no guarantee that the inductive inferences warranted locally, in some domain, will be regular enough to admit the abstractions that form a calculus. However, in many important cases, when the background facts there warrant it, inductive inferences can be governed by a calculus. By far the most familiar case is the probability calculus. That many alternative calculi other than the probability calculus are possible is easy to see. Norton (2010) identifies a large class of what are there called “deductively definable” logics of induction. Generating a calculus in the class is easy. It requires little more than picking a function from infinitely many choices.
    Found 1 month, 2 weeks ago on John Norton's site
  18.
    Measure theory is the branch of mathematics that investigates how, loosely speaking, sizes may be assigned to sets. Measures are familiar to us geometrically as lengths, areas and volumes. They qualify as measures since they carry the distinctive property of measures: the lengths, areas and volumes of disjoint sets may be added to give the measure of the union of the sets. Since the probabilities of disjoint events may likewise be added, probabilities fall under measure theory. One of the more striking results in measure theory is that one can readily find systems in which many sets are nonmeasurable, that is, no measure can consistently be assigned to them. That translates into the following difficulty for probabilistic analysis. We can have a probabilistic system in which everything seems quite in order. We use the familiar rules of the probability calculus to compute the probability of various outcomes of interest. But then we find some outcomes where the familiar ritual fails. Calculate the probability one way and get one result; do it another way and get a different result. There is, we must conclude, no probability consistently assignable to the outcome. It is nonmeasurable.
    Found 1 month, 2 weeks ago on John Norton's site
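    The “distinctive property” is additivity; in its standard countable form (supplied for reference):

    ```latex
    \mu\Big(\bigcup_{i=1}^{\infty} A_i\Big) \;=\; \sum_{i=1}^{\infty} \mu(A_i)
    \qquad \text{for pairwise disjoint measurable } A_1, A_2, \ldots
    ```

    The two-ways-of-calculating failure Norton describes is exactly what this identity rules out for measurable sets; Vitali’s construction, which uses the Axiom of Choice, is the classic source of sets to which no such μ can consistently be assigned.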
  19.
    The indeterministic systems to be investigated in this chapter share the common characteristic that determining one aspect of the system leaves others open. The most familiar cases are ones in which the present state of the system fails to fix its future state. We shall see several such systems here in Section 3. The most important are systems with infinitely many degrees of freedom, for this sort of indeterminism is generic amongst them. Rather than delve into the details of the physics of such systems, the mechanism that generates the indeterminism will be illustrated by the simplified system of the infinite domino cascade. A different sort of indeterministic system will be explored in Section 4. At the risk of abusing the term, I will also describe as indeterministic those systems in which, at the same moment of time, one component fails to fix others, contrary to normal expectations. The examples will be drawn from Newtonian gravitation theory.
    Found 1 month, 2 weeks ago on John Norton's site
  20.
    Some, notably Peter van Inwagen, in order to avoid problems with free will and omniscience, replace the condition that an omniscient being knows all true propositions with a version of the apparently weaker condition that an omniscient being knows all knowable true propositions. I shall show that the apparently weaker condition, when conjoined with uncontroversial claims and the logical closure of an omniscient being’s knowledge, still yields the claim that an omniscient being knows all true propositions.
    Found 1 month, 2 weeks ago on Alexander R. Pruss's site
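    For comparison, the classic Fitch-style (knowability-paradox) reasoning runs as follows; whether Pruss’s derivation takes exactly this form is not settled by the abstract.

    ```latex
    % Suppose p is true but unknown, and let K be the knowledge operator.
    % Then the conjunction  p \land \lnot K p  is true but unknowable, since
    % K distributes over conjunction and knowledge is factive:
    K(p \land \lnot K p)
      \;\to\; K p \land K \lnot K p
      \;\to\; K p \land \lnot K p
      \;\to\; \bot .
    % So "knows all knowable truths" threatens, once closure is added,
    % to collapse into "knows all truths".
    ```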
  21.
    These are some minor notes and observations related to a paper by Cholak, Jockusch, and Slaman [3]. In particular, if T₁ and T₂ are theories in the language of second-order arithmetic and T₂ is Π¹₁-conservative over T₁, it is not necessarily the case that every countable model of T₁ is an ω-submodel of a countable model of T₂; this answers a question posed in [3]. On the other hand, for n ≥ 1, every countable ω-model of IΣₙ (resp. BΣₙ₊₁) is an ω-submodel of a countable model of WKL + IΣₙ (resp. WKL + BΣₙ₊₁).
    Found 1 month, 2 weeks ago on Jeremy Avigad's site
  22.
    This essay advances and develops a dynamic conception of inference rules and uses it to reexamine a long-standing problem about logical inference raised by Lewis Carroll’s regress. First, I argue that a dynamic conception of inference rules is motivated by a dynamic conception of inference and by a natural dynamic interpretation of inference schemas. Second, I develop the proposal in some detail by appeal to the tools provided by dynamic semantics. The question is then discussed whether a dynamic conception of inference rules is compatible with classical logic and, if not, what sort of revision it demands. Finally, the dynamic conception of inference rules is applied to a discussion of Lewis Carroll’s regress.
    Found 1 month, 2 weeks ago on Carlotta Pavese's site
  23.
    In (Bonanno, 2013), a solution concept for extensive-form games, called perfect Bayesian equilibrium (PBE), was introduced and shown to be a strict refinement of subgame-perfect equilibrium; it was also shown that, in turn, sequential equilibrium (SE) is a strict refinement of PBE. In (Bonanno, 2016), the notion of PBE was used to provide a characterization of SE in terms of a strengthening of the two defining components of PBE (besides sequential rationality), namely AGM consistency and Bayes consistency. In this paper we explore the gap between PBE and SE by identifying solution concepts that lie strictly between PBE and SE; these solution concepts embody a notion of “conservative” belief revision. Furthermore, we provide a method for determining if a plausibility order on the set of histories is choice measurable, which is a necessary condition for a PBE to be a SE.
    Found 1 month, 2 weeks ago on Giacomo Bonanno's site
  24.
    The consequence argument attempts to show that incompatibilism is true by showing that if there is determinism, then we never had, have or will have any choice about anything. Much of the debate on the consequence argument has focused on the “beta” transfer principle, and its improvements. We shall show that on an appropriate definition of “never have had, have or will have any choice”, a version of the beta principle is a theorem given one plausible axiom for counterfactuals (weakening). Instead of being about transfer principles, the debate should be over whether the distant past and laws are up to us.
    Found 1 month, 2 weeks ago on Alexander R. Pruss's site
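    For reference, the beta principle in its familiar form (the standard statement; the paper’s preferred version may differ), where Np abbreviates “p, and no one has, had or will have any choice about whether p”:

    ```latex
    \text{(}\beta\text{)} \qquad \frac{N p \qquad N(p \rightarrow q)}{N q}
    ```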
  25.
    Let Ω be a countably infinite product Ω₁^ℕ of copies of the same probability space Ω₁, and let {Ξₙ} be the sequence of coordinate projection functions from Ω to Ω₁. Let Ψ be a possibly non-measurable function from Ω₁ to ℝ, and let Xₙ(ω) = Ψ(Ξₙ(ω)). Then we can think of {Xₙ} as a sequence of independent but possibly non-measurable random variables on Ω. Let Sₙ = X₁ + · · · + Xₙ. By the ordinary Strong Law of Large Numbers, we almost surely have E̲[X₁] ≤ lim inf Sₙ/n ≤ lim sup Sₙ/n ≤ Ē[X₁], where E̲ and Ē are the lower and upper expectations. We ask if anything more precise can be said about the limit points of Sₙ/n in the non-trivial case where E̲[X₁] < Ē[X₁], and obtain several negative answers. For instance, the set of points of Ω where Sₙ/n converges is maximally nonmeasurable: it has inner measure zero and outer measure one.
    Found 1 month, 2 weeks ago on Alexander R. Pruss's site
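    The lower and upper expectations E̲ and Ē can be read in their standard senses (one common definition, supplied for convenience; the paper’s own definition may be phrased differently):

    ```latex
    \underline{E}[X] \;=\; \sup\{\, E[Y] : Y \text{ measurable},\; Y \le X \,\},
    \qquad
    \overline{E}[X] \;=\; \inf\{\, E[Y] : Y \text{ measurable},\; Y \ge X \,\}.
    ```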
  26.
    Consider the regularity thesis that each possible event has non-zero probability. Hájek challenges this in two ways: (a) there can be nonmeasurable events that have no probability at all and (b) on a large enough sample space, some probabilities will have to be zero. But arguments for the existence of nonmeasurable events depend on the Axiom of Choice (AC). We shall show that the existence of anything like regular probabilities is by itself enough to imply a weak version of AC sufficient to prove the Banach-Tarski Paradox on the decomposition of a ball into two equally sized balls, and hence to show the existence of nonmeasurable events. This provides a powerful argument against unrestricted orthodox Bayesianism that works even without AC. A corollary of our formal result is that if every partial order extends to a total preorder while maintaining strict comparisons, then the Banach-Tarski Paradox holds. This yields an argument that incommensurability cannot be avoided in ambitious versions of decision theory.
    Found 1 month, 2 weeks ago on Alexander R. Pruss's site
  27.
    Popper functions allow one to take conditional probabilities as primitive instead of deriving them from unconditional probabilities via the ratio formula P(A|B) = P(A ∩ B)/P(B). A major advantage of this approach is that it allows one to condition on events of zero probability. I will show that under plausible symmetry conditions, Popper functions often fail to do what they were supposed to do. For instance, suppose we want to define the Popper function for an isometrically invariant case in two dimensions and hence require the Popper function to be rotationally invariant and defined on pairs of sets from some algebra that contains at least all countable subsets. Then it turns out that the Popper function trivializes for all finite sets: P(A|B) = 1 for all A (including A = ∅) if B is finite. Likewise, Popper functions invariant under all sequence reflections can’t be defined in a way that models a bidirectionally infinite sequence of independent coin tosses.
    Found 1 month, 2 weeks ago on Alexander R. Pruss's site
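    The motivation in the second sentence is easy to make concrete; a small Python sketch (invented numbers, purely illustrative):

    ```python
    # The ratio formula P(A|B) = P(A ∩ B)/P(B) breaks down as soon as the
    # conditioning event has probability zero.
    p_ab = 0.0   # P(A ∩ B): e.g. the dart lands on one exact point in A
    p_b = 0.0    # P(B): a single point on a continuous target has measure zero

    def ratio_conditional(p_ab: float, p_b: float) -> float:
        if p_b == 0:
            raise ZeroDivisionError("P(A|B) is undefined by the ratio formula")
        return p_ab / p_b

    try:
        ratio_conditional(p_ab, p_b)
    except ZeroDivisionError as e:
        print(e)  # a Popper function instead takes P(A|B) as primitive
    ```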
  28.
    The contemporary logical orthodoxy has it that, from contradictory premises, anything can be inferred. Let ⊨ be a relation of logical consequence, defined either semantically or proof-theoretically. Call ⊨ explosive if it validates {A, ¬A} ⊨ B for every A and B (ex contradictione quodlibet (ECQ)). Classical logic, and most standard ‘non-classical’ logics too, such as intuitionist logic, are explosive. Inconsistency, according to received wisdom, cannot be coherently reasoned about. Paraconsistent logic challenges this orthodoxy. A logical consequence relation, ⊨, is said to be paraconsistent if it is not explosive.
    Found 1 month, 2 weeks ago on Stanford Encyclopedia of Philosophy
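    One concrete non-explosive consequence relation is Priest’s three-valued Logic of Paradox (LP); the check below is my illustration, not part of the entry.

    ```python
    # LP has truth values t, b, f; both t and b are designated ("true enough").
    # With v(A) = b, both A and ¬A are designated while v(B) = f is not,
    # so {A, ¬A} ⊨ B fails: LP is paraconsistent.
    NEG = {"t": "f", "b": "b", "f": "t"}

    def designated(v: str) -> bool:
        return v in ("t", "b")

    vA, vB = "b", "f"
    premises_ok = designated(vA) and designated(NEG[vA])  # True
    conclusion_ok = designated(vB)                        # False
    print("counterexample to ECQ:", premises_ok and not conclusion_ok)  # True
    ```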
  29.
    In this paper we explore the logic of broad necessity. Definitions of what it means for one modality to be broader than another are formulated, and we prove, in the context of higher-order logic, that there is a broadest necessity, settling one of the central questions of this investigation. We show, moreover, that it is possible to give a reductive analysis of this necessity in extensional language (using truth functional connectives and quantifiers). This relates more generally to a conjecture that it is not possible to define intensional connectives from extensional notions. We formulate this conjecture precisely in higher-order logic, and examine concrete cases in which it fails.
    Found 1 month, 2 weeks ago on Andrew Bacon's site
  30.
    The following is an expanded written version of my reply to Rosanna Keefe’s paper ‘Modelling higher-order vagueness: columns, borderlines and boundaries’ (Keefe 2015), which in turn is a reply to my paper ‘Columnar higher-order vagueness, or Vagueness is higher-order vagueness’ (Bobzien 2015). Both papers were presented at the Joint Session of the Aristotelian Society and the Mind Association in July 2015. At the Joint Session meeting, there was insufficient time to present all of my points in response to Keefe’s paper. In addition, the audio of the session, which is available online, becomes inaudible at the beginning of my reply to Keefe’s comments due to a technical defect. The following is a full version of my remarks.
    Found 1 month, 3 weeks ago on PhilPapers