1.
    A “problem solver” (PS) is an agent who, when interacting with other agents, does not “put himself in their shoes” but rather chooses a best response to a uniform distribution over all possible configurations consistent with the information he receives about the other agents’ moves.
    Found 5 days, 4 hours ago on Ariel Rubinstein's site
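    A minimal Python sketch of the quoted decision rule, under assumed toy inputs (the payoff matrix and the signal are hypothetical, not from the paper):
```python
import numpy as np

# Row player's payoffs: payoff[a, b] for own action a, opponent action b.
payoff = np.array([[3.0, 0.0],
                   [1.0, 1.0]])

def ps_best_response(payoff, consistent_opponent_actions):
    """A 'problem solver' best-responds to the uniform distribution over
    all opponent actions consistent with the information received."""
    cols = sorted(consistent_opponent_actions)
    expected = payoff[:, cols].mean(axis=1)   # uniform average over columns
    return int(expected.argmax())

# A signal that rules nothing out: both opponent actions remain possible.
print(ps_best_response(payoff, {0, 1}))   # action 0: (3+0)/2 > (1+1)/2
```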
  2.
    It is well known how to define the operator Q for the total charge (i.e., positron number minus electron number) on the standard Hilbert space of the second-quantized Dirac equation. Here we ask about operators Q_A representing the charge content of a region A ⊆ ℝ³ in 3d physical space. There is a natural formula for Q_A but, as we explain, there are difficulties about turning it into a mathematically precise definition. First, Q_A can be written as a series but its convergence seems hopeless. Second, we show for some choices of A that if Q_A could be defined then its domain could not contain either the vacuum vector or any vector obtained from the vacuum by applying a polynomial in creation and annihilation operators. Both observations speak against the existence of Q_A for generic A.
    Found 1 week ago on R. Tumulka's site
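    For orientation, the “natural formula” is presumably the integral of the normal-ordered charge density over the region A; a schematic LaTeX rendering, with sign and normal-ordering conventions assumed rather than quoted from the paper:
```latex
% Charge content of a region A (sketch, up to sign conventions):
Q_A \;=\; \int_A d^3x \; {:}\,\psi^\dagger(x)\,\psi(x)\,{:}
```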
  3.
    Standard approaches to ontological simplicity focus either on the number of things or types a theory posits or on the number of fundamental things or types a theory posits. In this paper, I suggest a ground-theoretic approach that focuses on the number of something else. After getting clear on what this approach amounts to, I motivate it, defend it, and complete it.
    Found 1 week ago on Ergo
  4.
    In a recent paper (Found Phys 54:14, 2024), Carcassi, Oldofredi and Aidala concluded that the ψ-ontic models defined by Harrigan and Spekkens cannot be consistent with quantum mechanics, since the information entropy of a mixture of non-orthogonal states is different in these two theories according to their information-theoretic analysis. In this paper, I argue that this no-go theorem for ψ-ontic models is false by explaining the physical origin of the von Neumann entropy in quantum mechanics.
    Found 1 week, 3 days ago on PhilSci Archive
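    Context for the entropy claim at issue: the standard inequality relating the von Neumann entropy of a mixture of pure states to the Shannon entropy of its preparation (a textbook fact, not the paper's argument):
```latex
S\Big(\sum_i p_i\,|\psi_i\rangle\langle\psi_i|\Big)
\;=\; -\,\mathrm{Tr}\,\rho\log\rho
\;\le\; H(p) \;=\; -\sum_i p_i \log p_i ,
```
    with equality iff the |ψ_i⟩ are mutually orthogonal; for non-orthogonal states the two entropies genuinely differ.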
  5.
    For the one-dimensional Facilitated Exclusion Process with initial state a product measure of density ρ = 1/2 − δ, δ ≥ 0, there exists an infinite-time limiting state ν_ρ in which all particles are isolated and hence cannot move. We study the variance V(L), under ν_ρ, of the number of particles in an interval of L sites. Under ν_{1/2} either all odd or all even sites are occupied, so that V(L) = 0 for L even and V(L) = 1/4 for L odd: the state is hyperuniform [21], since V(L) grows more slowly than L. We prove that for densities approaching 1/2 from below there exist three regimes in L, in which the variance grows at different rates: for L ≫ δ^{−2}, V(L) ≃ ρ(1 − ρ)L, just as in the initial state; for A(δ) ≪ L ≪ δ^{−2}, with A(δ) = δ^{−2/3} for L odd and A(δ) = 1 for L even, V(L) ≃ CL^{3/2} with C = 2√(2/π)/3; and for L ≪ δ^{−2/3} with L odd, V(L) ≃ 1/4. The analysis is based on a careful study of a renewal process with a long tail. Our study is motivated by simulation results showing similar behavior in higher dimensions; we discuss this background briefly.
    Found 2 weeks ago on Sheldon Goldstein's site
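    A toy Python simulation of the 1d symmetric facilitated dynamics described above, assuming random-sequential updates on a ring (all parameters are illustrative, and reaching the frozen state exactly requires far longer runs):
```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fep(n_sites=1000, delta=0.05, attempts=500_000):
    """Toy random-sequential dynamics for the symmetric facilitated
    exclusion process on a ring: a particle may hop to an empty
    neighbouring site only if its other neighbour is occupied."""
    eta = (rng.random(n_sites) < 0.5 - delta).astype(np.int8)
    for i in rng.integers(n_sites, size=attempts):
        if eta[i]:
            left, right = (i - 1) % n_sites, (i + 1) % n_sites
            if eta[left] and not eta[right]:
                eta[i], eta[right] = 0, 1      # pushed to the right
            elif eta[right] and not eta[left]:
                eta[i], eta[left] = 0, 1       # pushed to the left
    return eta

def window_variance(eta, L):
    """Sample variance of the particle number over all windows of L sites."""
    counts = np.convolve(eta, np.ones(L, dtype=int), mode="valid")
    return counts.var()

eta = simulate_fep()
for L in (11, 101, 501):
    print(L, window_variance(eta, L))
```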
  6.
    We consider the fluctuations in the number of particles in a box of size L in Z^d, d ⩾ 1, in the (infinite volume) translation invariant stationary states of the facilitated exclusion process, also called the conserved lattice gas model. When started in a Bernoulli (product) measure at density ρ, these systems approach, as t → ∞, a ‘frozen’ state for ρ ⩽ ρ_c, with ρ_c = 1/2 for d = 1 and ρ_c < 1/2 for d ⩾ 2. At ρ = ρ_c the limiting state is, as observed by Hexner and Levine, hyperuniform, that is, the variance of the number of particles in the box grows more slowly than L^d. We give a general description of how the variances at different scales of L behave as ρ ↗ ρ_c. On the largest scale, L ≫ L_2, the fluctuations are normal (in fact the same as in the original product measure), while in a region L_1 ≪ L ≪ L_2, with both L_1 and L_2 going to infinity as ρ ↗ ρ_c, the variance grows faster than normal. For 1 ≪ L ≪ L_1 the variance is the same as in the hyperuniform system. (All results discussed are rigorous for d = 1 and based on simulations for d ⩾ 2.)
    Found 2 weeks, 4 days ago on Sheldon Goldstein's site
  7.
    This paper seeks to determine a rational agent’s evidential constraints given her beliefs. Rationality is here construed as adherence to a principle of entropy maximisation. I determine the rational agent’s set of probability functions compatible with the evidence, E, given the maximum entropy function and given some constraints on the shape of E. I also consider agents employing a centre of mass approach to form their beliefs rather than entropy maximisation.
    Found 2 weeks, 4 days ago on Jürgen Landes's site
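    The entropy-maximisation step the abstract presupposes can be computed directly in a finite toy case; a Python sketch for the classic constrained-die example (not the paper's setting):
```python
import numpy as np
from scipy.optimize import brentq

xs = np.arange(1, 7)              # outcomes of a die
target_mean = 4.5                 # the evidential constraint E[X] = 4.5

def mean_at(lam):
    w = np.exp(lam * xs)          # exponential-family (Gibbs) form
    return (w / w.sum()) @ xs

# Solve for the Lagrange multiplier that matches the constraint.
lam = brentq(lambda l: mean_at(l) - target_mean, -10.0, 10.0)
p = np.exp(lam * xs)
p /= p.sum()
print(p, p @ xs)                  # the maximum-entropy distribution
```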
  8.
    In Michael Sipser’s Introduction to the Theory of Computation textbook, he has one Platonically perfect homework exercise, so perfect that I can reconstruct it from memory despite not having opened the book for over a decade. …
    Found 2 weeks, 4 days ago on Scott Aaronson's blog
  9.
    In any standard reference text on expected utility theory one will find representation theorems (for example, [27], [13], [35]). These theorems link expected utility maximization to a qualitative description of an agent’s choice behavior. Typically an agent’s choice behavior is captured by a preference relation ⪯ on the set of decisions they face (in our above example this is the set of gambles, but preferences might instead be defined on acts which have no intrinsic probabilities). We say that P ⪯ Q if and only if the agent deems Q to be at least as desirable as P. We then prove something of the form: the preference relation satisfies a given set of axioms if and only if there exists a utility function (and, in some cases, a probability measure) such that the agent prefers gambles with greater expected utility ([32], [42]). The most prominent instances of such representation theorems are due to von Neumann and Morgenstern ([38]), Anscombe and Aumann ([1]), and Savage ([35]).
    Found 2 weeks, 4 days ago on PhilSci Archive
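    The canonical shape of such a theorem, in the von Neumann–Morgenstern case (standard statement, given here for orientation):
```latex
P \preceq Q
\quad\Longleftrightarrow\quad
\mathbb{E}_P[u] \;\le\; \mathbb{E}_Q[u],
```
    with u unique up to positive affine transformation, provided ⪯ satisfies completeness, transitivity, continuity, and independence.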
  10.
    We investigate a model of becoming – Classical Sequential Growth (CSG) – that has been proposed within the framework of causal sets (causets), with the latter defined as order types of certain partial orderings. To investigate how causets grow, we introduce special sequences of causets, which we call “csg-paths”. We prove a number of results concerning relations between csg-paths and causets. These results paint a highly non-trivial picture of csg-paths. There are uncountably many csg-paths, all of them sharing the same beginning, after which they branch. Every infinite csg-path achieves in the limit an infinite causet, and vice versa, every infinite causet is achieved in the limit by an infinite csg-path. However, coalescing csg-paths, i.e., ones that achieve the same causet even after forking off at some point, are ubiquitous.
    Found 2 weeks, 4 days ago on PhilSci Archive
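    A small Python sketch of one growth step, assuming labelled causets encoded by their strictly-below sets (actual CSG works with order types and attaches transition probabilities, both omitted here):
```python
from itertools import combinations

def order_ideals(elements, below):
    """All downward-closed subsets of the causet: the possible pasts
    of a newborn element."""
    ideals = []
    for r in range(len(elements) + 1):
        for subset in combinations(sorted(elements), r):
            s = set(subset)
            if all(below[e] <= s for e in s):   # closed under 'below'
                ideals.append(s)
    return ideals

def children(causet):
    """One growth step: the new element is born above each order ideal."""
    elements, below = causet
    new = len(elements)
    kids = []
    for past in order_ideals(elements, below):
        b = {e: set(v) for e, v in below.items()}
        b[new] = set(past)                  # past is already downward closed
        kids.append((elements | {new}, b))
    return kids

root = (frozenset(), {})                    # the empty causet
level1 = children(root)                     # one child: a single element
level2 = [c for k in level1 for c in children(k)]
print(len(level1), len(level2))             # 1 2: the 2-chain and 2-antichain
```
    Iterating `children` generates exactly the branching csg-paths the abstract describes.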
  11.
    Arithmetical pluralism is the view that there is not one true arithmetic but rather many apparently conflicting arithmetical theories, each true in its own language. While pluralism has recently attracted considerable interest, it has also faced significant criticism. One powerful objection, which can be extracted from Parsons (2008), appeals to a categoricity result to argue against the possibility of seemingly conflicting true arithmetics. Another salient objection raised by Putnam (1994) and Koellner (2009) draws upon the arithmetization of syntax to argue that arithmetical pluralism is inconsistent with the objectivity of syntax. First, we review these arguments and explain why they ultimately fail. We then offer a novel, more sophisticated argument that avoids the pitfalls of both. Our argument combines strategies from both objections to show that pluralism about arithmetic entails pluralism about syntax. Finally, we explore the viability of pluralism in light of our argument and conclude that a stable pluralist position is coherent. This position allows for the possibility of rival packages of arithmetic and syntax theories, provided that they systematically co-vary with one another.
    Found 3 weeks, 2 days ago on Daniel Waxman's site
  12.
    The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth. All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! …
    Found 3 weeks, 3 days ago on Scott Aaronson's blog
  13.
    We define a notion of inaccessibility of a decision between two options represented by utility functions, where the decision is based on the order of the expected values of the two utility functions. Inaccessibility expresses that the decision cannot be obtained if the expected values of the utility functions are calculated using the conditional probability defined by a prior and by partial evidence about the probability that determines the decision. Examples of inaccessible decisions are given in finite probability spaces. Open questions and conjectures about the inaccessibility of decisions are formulated. The results are interpreted as showing the crucial role of priors in the Bayesian taming of epistemic uncertainties about the probabilities that determine decisions based on utility maximization.
    Found 1 month ago on PhilSci Archive
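    A loose Python illustration of how a prior-averaged probability can reverse a utility comparison, in the spirit of (but not implementing) the paper's definition; all numbers are hypothetical:
```python
import numpy as np

# The decision-relevant probability p of a binary event is unknown;
# the agent holds a prior over two candidate values of p.
candidates = np.array([0.2, 0.8])
prior      = np.array([0.5, 0.5])

# Utilities of options A and B in the event / its complement (toy numbers).
u_A = np.array([10.0, 0.0])
u_B = np.array([4.0, 4.0])

def expected_utility(p, u):
    return p * u[0] + (1 - p) * u[1]

# Under the true p = 0.2, option B wins (2 < 4) ...
print(expected_utility(0.2, u_A), expected_utility(0.2, u_B))

# ... but under the prior-averaged p = 0.5, option A wins (5 > 4):
# the true decision is not recoverable from the prior-based calculation.
p_bar = prior @ candidates
print(expected_utility(p_bar, u_A), expected_utility(p_bar, u_B))
```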
  14.
    We report on the mechanization of (preference-based) conditional normative reasoning. Our focus is on Åqvist’s system E for conditional obligation, and its extensions. Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first one is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has been previously achieved for the modal logic cube. The equivalence is automatically verified in one direction, leading from the property to the axiom. The second use is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox (or impossibility theorem) in population ethics, Parfit’s repugnant conclusion. While some have proposed overcoming the impossibility theorem by abandoning the presupposed transitivity of “better than,” our formalisation unveils a less extreme approach, suggesting among other things the option of weakening transitivity suitably rather than discarding it entirely. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
    Found 1 month ago on X. Parent's site
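    For orientation, the preference-based truth condition for conditional obligation that systems like E are built on, paraphrased schematically rather than quoted:
```latex
M, w \models \bigcirc(\psi/\varphi)
\quad\Longleftrightarrow\quad
\mathrm{opt}_{\succeq}\big(\|\varphi\|\big) \subseteq \|\psi\| ,
```
    where ‖φ‖ is the set of φ-worlds and opt_⪰ selects the best of them under the betterness relation ⪰.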
  15.
    When is it explanatorily better to adopt a conjunction of explanatory hypotheses as opposed to committing to only some of them? Although conjunctive explanations are inevitably less probable than less committed alternatives, we argue that the answer is not ‘never’. This paper provides an account of the conditions under which explanatory considerations warrant a preference for less probable, conjunctive explanations. After setting out four formal conditions that must be met by such an account, we consider the shortcomings of several approaches. We develop an account that avoids these shortcomings and then defend it by applying it to a well-known example of explanatory reasoning in contemporary science.
    Found 1 month ago on PhilSci Archive
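    The “inevitably less probable” point is simple monotonicity of probability under conjunction:
```latex
P(H_1 \wedge H_2 \mid E) \;\le\; \min\{\,P(H_1 \mid E),\; P(H_2 \mid E)\,\},
```
    so any preference for the conjunctive explanation must be earned on explanatory rather than purely probabilistic grounds.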
  16.
    Standard textbooks on quantum mechanics present the theory in terms of Hilbert spaces over the field of complex numbers and complex linear operator algebras acting on these spaces. What would be lost (or gained) if a different scalar field, e.g. the real numbers or the quaternions, were used? This issue arose with the birthing of the new quantum theory, and over the decades it has been raised over and over again, drawing a variety of different opinions. Here I attempt to identify and to clarify some of the key points of contention, focusing especially on procedures for complexifying real Hilbert spaces and real algebras of observables.
    Found 1 month ago on PhilSci Archive
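    One standard complexification recipe of the kind discussed: pick an orthogonal complex structure J (with J² = −I) on the real Hilbert space (H, ⟨·,·⟩) and set (a sketch, with the linear-in-second-slot convention assumed):
```latex
(\alpha + i\beta)\,x \;:=\; \alpha x + \beta J x,
\qquad
\langle x, y\rangle_{\mathbb{C}} \;:=\; \langle x, y\rangle - i\,\langle x, J y\rangle ,
```
    which turns the real space into a complex Hilbert space; much of the debate concerns what such a choice of J adds.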
  17.
    I hope this is my last post for a while on Integrated Information Theory (IIT), in Aaronson’s simplified formulation. One of the fun and well-known facts is that if you have an impractically large square two-dimensional grid of interconnected logic gates (presumably with some constant time-delay in each gate between inputs and outputs to prevent race conditions) in a fixed point (i.e., nothing is changing), the result can still have a degree of integrated information proportional to the square root of the number of gates. …
    Found 1 month, 1 week ago on Alexander Pruss's Blog
  18.
    I’m still thinking about Integrated Information Theory (IIT), in Aaronson’s simplified formulation. Aaronson’s famous criticisms show pretty convincingly that IIT fails to correctly characterize consciousness: simple but large systems of unchanging logic gates end up having human-level consciousness on IIT. …
    Found 1 month, 1 week ago on Alexander Pruss's Blog
  19.
    The aim of this paper is to present a constructive solution to Frege’s puzzle (largely limited to the mathematical context) based on type theory. Two ways in which an equality statement may be said to have cognitive significance are distinguished. One concerns the mode of presentation of the equality, the other its mode of proof. Frege’s distinction between sense and reference, which emphasizes the former aspect, cannot adequately explain the cognitive significance of equality statements unless a clear identity criterion for senses is provided. It is argued that providing a solution based on proofs is more satisfactory from the standpoint of constructive semantics.
    Found 1 month, 1 week ago on PhilSci Archive
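    The two modes can be glossed in Martin-Löf type theory, where definitional and propositional equality come apart (standard notation, assumed rather than quoted from the paper):
```latex
% Definitional (judgemental) equality, settled by computation:
2 + 2 \equiv 4 : \mathbb{N}
% Propositional equality, a type whose inhabitants are proofs:
p : \mathsf{Id}_{\mathbb{N}}(x + y,\; y + x)
```
    The first holds by mere evaluation; the second demands a proof object p (e.g., constructed by induction), which is the kind of distinction a proof-based account of cognitive significance exploits.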
  20.
    This is a report on the project “Axiomatizing Conditional Normative Reasoning” (ANCoR, M 3240-N) funded by the Austrian Science Fund (FWF). The project aims to deepen our understanding of conditional normative reasoning by providing an axiomatic study of it at the propositional but also first-order level. The focus is on a particular framework, the so-called preference-based logic for conditional obligation, whose main strength has to do with the treatment of contrary-to-duty reasoning and reasoning about exceptions. The project considers not only the meta-theory of this family of logics but also its mechanization.
    Found 1 month, 2 weeks ago on X. Parent's site
  21.
    According to the ω-rule, it is valid to infer that all natural numbers possess some property, if 0 possesses it, 1 possesses it, 2 possesses it, and so on. The ω-rule is important because its inclusion in certain arithmetical theories results in true arithmetic. It is controversial because it seems impossible for finite human beings to follow, given that it seems to require accepting infinitely many premises. Inspired by a remark of Wittgenstein’s, I argue that the mystery of how we follow the ω-rule subsides once we treat the rule as helping to give meaning to the symbol, “…”.
    Found 1 month, 2 weeks ago on PhilSci Archive
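    Schematically, the rule in question:
```latex
\frac{\;\varphi(0) \qquad \varphi(1) \qquad \varphi(2) \qquad \cdots\;}
     {\forall n\,\varphi(n)}
\;(\omega\text{-rule})
```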
  22.
    We give a new and elementary construction of primitive positive decomposition of higher arity relations into binary relations on finite domains. Such decompositions come up in applications to constraint satisfaction problems, clone theory and relational databases. The construction exploits functional completeness of 2-input functions in many-valued logic by interpreting relations as graphs of partially defined multivalued ‘functions’. The ‘functions’ are then composed from ordinary functions in the usual sense. The construction is computationally effective and relies on well-developed methods of functional decomposition, but reduces relations only to ternary relations. An additional construction then decomposes ternary into binary relations, also effectively, by converting certain disjunctions into existential quantifications. The result gives a uniform proof of Peirce’s reduction thesis on finite domains, and shows that the graph of any Sheffer function composes all relations there.
    Found 1 month, 2 weeks ago on PhilSci Archive
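    The snippet below does not reproduce the paper's construction (which goes via graphs of multivalued functions); it only illustrates in Python why the naive reduction of a ternary relation to its binary projections can fail, which is what makes such decompositions non-trivial:
```python
from itertools import product

R = {(0, 1, 2), (1, 0, 2), (1, 1, 0)}   # a toy ternary relation on D = {0,1,2}

# Binary projections of R.
p12 = {(x, y) for x, y, z in R}
p13 = {(x, z) for x, y, z in R}
p23 = {(y, z) for x, y, z in R}

# Natural join of the projections: the tightest relation definable
# from them by conjunction alone.
join = {(x, y, z) for x, y, z in product(range(3), repeat=3)
        if (x, y) in p12 and (x, z) in p13 and (y, z) in p23}

print(join == R)   # False: e.g. (1,1,2) is in the join but not in R
```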
  23.
    We study logical reduction (factorization) of relations into relations of lower arity by Boolean or relative products that come from applying conjunctions and existential quantifiers to predicates, i.e. by primitive positive formulas of predicate calculus. Our algebraic framework unifies natural joins and data dependencies of database theory and relational algebra of clone theory with the bond algebra of C.S. Peirce. We also offer new constructions of reductions, systematically study irreducible relations and reductions to them, and introduce a new characteristic of relations, ternarity, that measures their ‘complexity of relating’ and allows us to refine reduction results. In particular, we refine Peirce’s controversial reduction thesis, and show that reducibility behavior is dramatically different on finite and infinite domains.
    Found 1 month, 2 weeks ago on PhilSci Archive
  24.
    We argue that traditional formulations of the reduction thesis that tie it to privileged relational operations do not suffice for Peirce’s justification of the categories, and invite the charge of gerrymandering to make it come out as true. We then develop a more robust invariant formulation of the thesis by explicating the use of triads in any relational operations, which is immune to that charge. The explication also allows us to track how Thirdness enters the structure of higher order relations, and even propose a numerical measure of it. Our analysis reveals new conceptual phenomena when negation or disjunction are used to compound relations.
    Found 1 month, 2 weeks ago on PhilSci Archive
  25.
    The formalism of generalized quantum histories allows a symmetrical treatment of space and time correlations, by taking different traces of the same history density matrix. We recall how to characterize spatial and temporal entanglement in this framework. An operative protocol is presented to map a history state into the ket of a static composite system. We show, by examples, how the Leggett-Garg and the temporal CHSH inequalities can be violated in our approach.
    Found 1 month, 2 weeks ago on PhilSci Archive
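    A quick numeric check of the Leggett-Garg violation mentioned above, assuming the textbook two-level example with equally spaced measurement times (not the paper's history-state protocol):
```python
import numpy as np

# Two-time correlators for a qubit precessing at frequency w and measured
# projectively at equally spaced times: C(t_i, t_j) = cos(w (t_j - t_i)).
theta = np.linspace(0.0, np.pi, 1001)           # theta = w * (time spacing)
K = 2.0 * np.cos(theta) - np.cos(2.0 * theta)   # K = C12 + C23 - C13

print(K.max())              # ~1.5, exceeding the macrorealist bound K <= 1
print(theta[K.argmax()])    # maximal violation near theta = pi/3
```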
  26.
    Wilhelm (Synthese 199:6357–6369, 2021) has recently defended a criterion for comparing structure of mathematical objects, which he calls Subgroup. He argues that Subgroup is better than SYM, another widely adopted criterion. We argue that this is mistaken; Subgroup is strictly worse than SYM. We then formulate a new criterion that improves on both SYM and Subgroup, answering Wilhelm’s criticisms of SYM along the way. We conclude by arguing that no criterion that looks only to the automorphisms of mathematical objects to compare their structure can be fully satisfactory.
    Found 1 month, 2 weeks ago on James Owen Weatherall's site
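    Roughly, both criteria compare structure via automorphism groups; a schematic gloss of how they are usually stated (a paraphrase, with details as in the cited literature):
```latex
% SYM (roughly):      X has at least as much structure as Y
%                     iff  \mathrm{Aut}(X) \subseteq \mathrm{Aut}(Y)
% Subgroup (roughly): X has at least as much structure as Y
%                     iff  \mathrm{Aut}(X) is isomorphic to a subgroup
%                          of \mathrm{Aut}(Y)
```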
  27.
    Given synthetic Euclidean geometry, I define length λ(a, b) (of a segment ab), by taking equivalence classes with respect to the congruence relation, ≡: i.e., λ(a, b) = λ(c, d) ↔ ab ≡ cd. By geometric constructions and explicit definitions, one may define the Length structure, L = (L, 0, ⊕, ⪯, ·), “instantiated by Euclidean geometry”, so to speak. One may show that this structure is isomorphic to the set of non-negative elements of the one-dimensional linearly ordered vector space over R. One may define the notion of a numerical scale (for length) and a unit (for length). One may show how numerical scales for length are determined by Cartesian coordinate systems. One may also obtain a derivation of Maxwell’s quantity formula, Q = {Q}[Q], for lengths.
    Found 1 month, 3 weeks ago on PhilSci Archive
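    The two displayed equations of the abstract, set in LaTeX together with a worked instance of the quantity formula (the numeric example is mine):
```latex
\lambda(a,b) = \lambda(c,d) \;\leftrightarrow\; ab \equiv cd,
\qquad
Q = \{Q\}\,[Q], \quad\text{e.g.}\quad
\ell \;=\; \underbrace{2.5}_{\{\ell\}}\;\underbrace{\mathrm{m}}_{[\ell]} .
```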
  28.
    Let us consider an acyclic causal model M of the sort that is central to causal modeling (Spirtes et al. 1993/2000, Pearl 2000/2009, Halpern 2016, Hitchcock 2018). Readers familiar with them can skip this section. M = ⟨S, F⟩ is a causal model if, and only if, S is a signature and F = {F_1, . . . , F_n} represents a set of n structural equations, for a finite natural number n. S = ⟨U, V, R⟩ is a signature if, and only if, U is a finite set of exogenous variables, V = {V_1, . . . , V_n} is a set of n endogenous variables that is disjoint from U, and R : U ∪ V → ℝ assigns to each exogenous or endogenous variable X in U ∪ V its range (not co-domain) R(X) ⊆ ℝ. F = {F_1, . . . , F_n} represents a set of n structural equations if, and only if, for each natural number i, 1 ≤ i ≤ n: F_i is a function from the Cartesian product ⨉_{X ∈ U ∪ V ∖ {V_i}} R(X) of the ranges of all exogenous and endogenous variables other than V_i into the range R(V_i) of the endogenous variable V_i. The set of possible worlds of the causal model M is defined as the Cartesian product W = ⨉_{X ∈ U ∪ V} R(X) of the ranges of all exogenous and endogenous variables.
    Found 1 month, 3 weeks ago on PhilSci Archive
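    A minimal Python rendering of the definition above, assuming a toy model with one exogenous and two endogenous variables:
```python
# A minimal acyclic structural causal model in the style described above:
# exogenous U = {u}, endogenous V = {v1, v2}, with ranges mirroring R.
ranges = {'u': {0.0, 1.0}, 'v1': {0.0, 1.0}, 'v2': {0.0, 1.0, 2.0}}

# Structural equations: each F_i maps values of the other variables to a
# value of V_i (here each equation only reads its actual parents).
F = {
    'v1': lambda vals: vals['u'],                 # v1 := u
    'v2': lambda vals: vals['u'] + vals['v1'],    # v2 := u + v1
}

def solve(u_value):
    """Evaluate the endogenous variables given the exogenous setting
    (well-defined because the model is acyclic)."""
    vals = {'u': u_value}
    for v in ('v1', 'v2'):        # an order compatible with the dependencies
        vals[v] = F[v](vals)
    return vals

print(solve(1.0))   # {'u': 1.0, 'v1': 1.0, 'v2': 2.0} -- one possible world
```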
  29.
    To analyse contingent propositions, this paper investigates how branching time structures can be combined with probability theory. In particular, it considers assigning infinitesimal probabilities—available in non-Archimedean probability theory—to individual histories. This allows us to introduce the concept of ‘remote possibility’ as a new modal notion between ‘impossibility’ and ‘appreciable possibility’. The proposal is illustrated by applying it to a future contingent and a historical counterfactual concerning an infinite sequence of coin tosses. The latter is a toy model that is used to illustrate the applicability of the proposal to more realistic physical models.
    Found 1 month, 3 weeks ago on PhilSci Archive
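    Schematically, the kind of assignment considered (notation assumed, not quoted): a single history h among the continuum of fair-coin histories receives a positive infinitesimal probability,
```latex
0 \;<\; P(h) \;<\; \tfrac{1}{2^n} \qquad \text{for every finite } n ,
```
    making h a ‘remote possibility’: neither impossible nor appreciably possible.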
  30.
    A wide variety of stochastic models of cladogenesis (based on speciation and extinction) lead to an identical distribution on phylogenetic tree shapes once the edge lengths are ignored. By contrast, the distribution of the tree’s edge lengths is generally quite sensitive to the underlying model. In this paper, we review the impact of different model choices on tree shape and edge length distribution, and their significance for studying the properties of phylogenetic diversity (PD) as a measure of biodiversity and the loss of PD as species become extinct at the present. We also compare PD with a stochastic model of feature diversity, and investigate some mathematical links and inequalities between these two measures, plus their predictions concerning the loss of biodiversity under extinction at the present.
    Found 1 month, 3 weeks ago on Mike Steel's site
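    A small Python sketch of phylogenetic diversity as total retained branch length, on a hypothetical four-edge rooted tree (using the rooted-PD convention that paths run to the root):
```python
# Tree given as child -> (parent, edge_length); toy values.
tree = {
    'A': ('x', 1.0), 'B': ('x', 1.5),
    'x': ('root', 0.5), 'C': ('root', 2.0),
}

def pd(leaves):
    """Rooted PD: total length of edges in the smallest subtree
    connecting the given leaves to the root."""
    used = set()
    for leaf in leaves:
        node = leaf
        while node != 'root':      # walk up, collecting each edge once
            used.add(node)
            node = tree[node][0]
    return sum(tree[n][1] for n in used)

print(pd({'A', 'B'}))   # 1.0 + 1.5 + 0.5 = 3.0
print(pd({'A', 'C'}))   # 1.0 + 0.5 + 2.0 = 3.5
```
    Losing species to extinction shrinks the retained edge set, which is exactly the loss of PD the abstract refers to.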