Quantum entanglement is a physical resource, like energy, associated
with the peculiar nonclassical correlations that are possible between
separated quantum systems. Entanglement can be measured, transformed,
and purified. A pair of quantum systems in an entangled state can be
used as a quantum information channel to perform computational and
cryptographic tasks that are impossible for classical systems. The
general study of the information-processing capabilities of quantum
systems is the subject of quantum information theory.
R.A. Fisher: February 17, 1890 – July 29, 1962
Continuing with posts in recognition of R.A. Fisher’s birthday, I post one from a few years ago on a topic that had previously not been discussed on this blog: Fisher’s fiducial probability. …
I develop an account of naturalness (that is, approximately: lack of extreme fine-tuning) in physics which demonstrates that naturalness assumptions are not restricted to narrow cases in high-energy physics but are a ubiquitous part of how inter-level relations are derived in physics. After exploring how and to what extent we might justify such assumptions on methodological grounds or through appeal to speculative future physics, I consider the apparent failure of naturalness in cosmology and in the Standard Model. I argue that any such naturalness failure threatens to undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested; I briefly review some currently-popular strategies that might avoid that crisis.
The thermal time hypothesis (TTH) is a proposed solution to the problem of time: a coarse-grained, statistical state determines a thermal dynamics according to which it is in equilibrium, and this dynamics is identified as the flow of physical time in generally covariant quantum theories. This paper raises a series of objections to the TTH as developed by Connes and Rovelli (1994). Two technical challenges concern the relationship between thermal time and proper time conjectured by the TTH and the implementation of the TTH in the classical limit. Three conceptual problems concern the flow of time in non-equilibrium states and the extent to which the TTH is background independent and gauge-invariant. While there are potentially viable strategies for addressing the two technical challenges, the three conceptual problems present a tougher hurdle for the defender of the TTH.
Bayesian inference is limited in scope because it cannot be applied in idealized contexts where none of the hypotheses under consideration is true and because it is committed to always using the likelihood as a measure of evidential favoring, even when that is inappropriate. The purpose of this paper is to study inductive inference in a very general setting where finding the truth is not necessarily the goal and where the measure of evidential favoring is not necessarily the likelihood. I use an accuracy argument to argue for probabilism and I develop a new kind of argument to argue for two general updating rules, both of which are reasonable in different contexts. One of the updating rules has standard Bayesian updating, Bissiri et al.’s (2016) general Bayesian updating, and Vassend’s (2019a) quasi-Bayesian updating as special cases. The other updating rule is novel.
This paper discusses the relevance of supertask computation for the determinacy of arithmetic. Recent work in the philosophy of physics has made plausible the possibility of supertask computers, capable of running through infinitely many individual computations in a finite time. A natural thought is that, if true, this implies that arithmetical truth is determinate (at least for e.g. sentences saying that every number has a certain decidable property). In this paper we argue, via a careful analysis of putative arguments from supertask computations to determinacy, that this natural thought is mistaken: supertasks are of no help in explaining arithmetical determinacy.
Suppose we have a group of perfect Bayesian agents with the same evidence who nonetheless disagree. By definition of “perfect Bayesian agent”, the disagreement must be rooted in differences in priors between these peers. …
Gao (2017) presents a new mentalistic reformulation of the well-known measurement problem affecting the standard formulation of quantum mechanics. According to this author, it is essentially a determinate-experience problem, namely a problem about the compatibility between the linearity of the Schrödinger equation, the fundamental law of quantum theory, and the definite experiences perceived by conscious observers. In this essay I aim to clarify (i) that the well-known measurement problem is a mathematical consequence of quantum theory’s formalism, and (ii) that its mentalistic variant does not grasp the relevant causes which are responsible for this puzzling issue. The first part of this paper will be concluded by claiming that the “physical” formulation of the measurement problem cannot be reduced to its mentalistic version. In the second part of this work it will be shown that, contrary to the case of quantum mechanics, Bohmian mechanics and GRW theories provide clear explanations of the physical processes responsible for the definite localization of macroscopic objects and, consequently, for well-defined perceptions of measurement outcomes by conscious observers. More precisely, the macro-objectification of states of experimental devices is obtained exclusively in virtue of their clear ontologies and dynamical laws, without any intervention of human observers. Hence, it will be argued that in these theoretical frameworks the measurement problem and the determinate-experience problem are logically distinct issues.
De Finetti is one of the founding fathers of the subjective school of probability. He held that probabilities are subjective, coherent degrees of expectation, and he argued that none of the objective interpretations of probability make sense. While his theory has been influential in science and philosophy, it has encountered various objections. I argue that these objections overlook central aspects of de Finetti’s philosophy of probability and are largely unfounded. I propose a new interpretation of de Finetti’s theory that highlights these aspects and explains how they are an integral part of de Finetti’s instrumentalist philosophy of probability. I conclude by drawing an analogy between misconceptions about de Finetti’s philosophy of probability and common misconceptions about instrumentalism.
As part of the week of posts on R.A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012 and 2017. See especially the comments from Feb 2017. ‘Fisher’s alternative to the alternative’
By: Stephen Senn
[2012 marked] the 50th anniversary of RA Fisher’s death. …
This article describes some recent work on ‘direct air capture’ of carbon dioxide—essentially, sucking it out of the air:
• Jon Gertner, The tiny Swiss company that thinks it can help stop climate change, New York Times Magazine, 12 February 2019. …
The logical analysis of agency and games—for an expository introduction to the field see van der Hoek and Pauly’s overview paper 2007—has boomed in the last two decades, giving rise to a plethora of different logics, in particular within the multi-agent systems field. At the heart of these logics are always representations of the possible choices (or actions) of groups of players (or agents) and their powers to force specific outcomes of the game. Some logics take the former as primitives, like STIT (the logic of seeing to it that, [Belnap et al., 2001; Horty, 2001]), while others take the latter, like CL (coalition logic, [Pauly, 2002; Goranko et al., 2013]) and ATL (alternating-time temporal logic, [Alur et al., 2002]). In these formalisms the power of players is modeled in terms of the notion of effectivity. In a strategic game, the α-effectivity of a group of players consists of those sets of outcomes of the game for which the players have some collective action which forces the outcome of the game to end up in that set, no matter what the other players do [Moulin and Peleg, 1982]. So, if a set of outcomes X belongs to the α-effectivity of a set of players J, there exists an individual action for each agent in J such that, for all actions of the other players, the outcome of the game will be contained in X. If we keep the actions of the other agents fixed, then the selection of an individual action for each agent in J corresponds to a choice of J under the assumption that the other agents stick to their choices.
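The ∃∀ pattern in the definition of α-effectivity can be made concrete in a few lines of code. The sketch below is illustrative only (the encoding of games, and every name in it, is an assumption, not taken from any of the cited logics): a coalition is α-effective for a target set of outcomes iff it has some joint action that lands the game in the target no matter what the remaining agents do.

```python
from itertools import product

def alpha_effective(coalition, actions, outcome, target):
    """True iff `coalition` can force the outcome into `target`:
    there EXISTS a joint action for the coalition such that, for ALL
    counter-actions of the remaining agents, the outcome is in `target`.
    `actions[i]` is agent i's action set; `outcome` maps a full action
    profile (one action per agent) to an outcome."""
    agents = range(len(actions))
    others = [i for i in agents if i not in coalition]
    for joint in product(*(actions[i] for i in coalition)):
        fixed = dict(zip(coalition, joint))
        # Check every counter-profile of the other agents.
        forced = True
        for rest in product(*(actions[i] for i in others)):
            moves = {**fixed, **dict(zip(others, rest))}
            if outcome(tuple(moves[i] for i in agents)) not in target:
                forced = False
                break
        if forced:
            return True
    return False

# Toy 2-player game: each player picks H or T; the outcome is the pair.
actions = [["H", "T"], ["H", "T"]]
outcome = lambda profile: profile
# Player 0 alone controls the first coordinate, but not the full outcome.
print(alpha_effective([0], actions, outcome, {("H", "H"), ("H", "T")}))  # True
print(alpha_effective([0], actions, outcome, {("H", "H")}))              # False
print(alpha_effective([0, 1], actions, outcome, {("H", "H")}))           # True
```

Note that the grand coalition is α-effective exactly for the sets containing some achievable outcome, while singleton coalitions are effective only for sets closed under the opponents' choices.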
In this paper we attempt to shed light on the concept of an agent’s knowledge after a non-deterministic action is executed. We start by comparing notions of non-deterministic choice, and notions of sequential composition, across settings with a dynamic and/or epistemic character; namely Propositional Dynamic Logic (PDL), Dynamic Epistemic Logic (DEL), and the more recent logic of Semi-Public Environments (SPE). These logics represent two different approaches for defining the aforementioned actions, and in order to provide unified frameworks that encompass both, we define the logics DELVO (DEL+Vision+Ontic change) and PDLVE (PDL+Vision+Epistemic operators). DELVO is given a sound and complete axiomatisation.
This note clarifies several details about the description of the measurement process in Bohmian mechanics and responds to a recent preprint by Shan Gao, wrongly claiming a contradiction in the theory.
The probability that intervals are related by a particular Allen relation is calculated relative to sample spaces $\Omega_n$ given by the number n of, in one case, points, and, in the other, interval names. In both cases, worlds in the sample space are assumed equiprobable, and Allen relations are classified as short, medium and long. A useful basis for relating intervals is the set of 13 relations described in (Allen, 1983) and widely applied to temporal relations in text and beyond (Liu et al., 2018; Verhagen et al., 2009; Allen and Ferguson, 1994; Kamp and Reyle, 1993, among many others). The present work proceeds from the following question.
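The point-based sample space described above can be computed directly. The sketch below is illustrative (the relation names and the enumeration scheme are my encoding, not the paper's): it enumerates all pairs of intervals whose endpoints are drawn from n ordered points, classifies each pair into one of Allen's 13 relations, and reports the relative frequencies under the equiprobability assumption.

```python
from itertools import combinations
from collections import Counter

def allen_relation(x, y):
    """Classify interval x=(a,b) against y=(c,d), with a<b and c<d,
    into one of Allen's 13 relations."""
    (a, b), (c, d) = x, y
    if b < c: return "before"
    if d < a: return "after"
    if b == c: return "meets"
    if d == a: return "met-by"
    if a == c and b == d: return "equals"
    if a == c: return "starts" if b < d else "started-by"
    if b == d: return "finishes" if a > c else "finished-by"
    if c < a and b < d: return "during"
    if a < c and d < b: return "contains"
    return "overlaps" if a < c else "overlapped-by"

def relation_probabilities(n):
    """Probabilities of each Allen relation over pairs of intervals
    with endpoints drawn from n ordered points, all pairs equiprobable."""
    intervals = list(combinations(range(n), 2))  # (start, end), start < end
    counts = Counter(allen_relation(x, y) for x in intervals for y in intervals)
    total = sum(counts.values())
    return {r: k / total for r, k in counts.items()}

probs = relation_probabilities(10)
# The "long" relations before/after dominate as n grows; relations that
# require endpoint equalities (equals, meets, starts, ...) become rare.
print(sorted(probs.items(), key=lambda kv: -kv[1])[:3])
```

Each relation and its inverse get equal probability by symmetry, which is a quick sanity check on the enumeration.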
I explore the logic of the conditional, using credence judgments to argue against Duality and in favor of Conditional Excluded Middle. I then explore how to give a theory of the conditional which validates the latter and not the former, developing a variant on Kratzer (1981)’s restrictor theory, as well as a proposal which combines Stalnaker (1968)’s theory of the conditional with the theory of epistemic modals I develop in Mandelkern 2019a. I argue that the latter approach fits naturally with a conception of conditionals as referential devices which allow us to talk about particular worlds.
Many philosophical discussions presuppose a picture of reality on which, fundamentally, there are objects which have properties and stand in relations. But if we look to how science describes the world, it might be more natural to bring (partial) functions in at the ground level. …
If you take the entries of Pascal’s triangle mod 2 and draw black for 1 and white for 0, you get a pleasing pattern:
The $2^n$th row consists of all 1’s. If you look at the triangle consisting of the first $2^n$ rows, and take the limit as $n \to \infty$, you get a fractal called the Sierpinski gasket. …
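The pattern is easy to generate. This is a quick illustrative sketch (not from the post): build Pascal's triangle mod 2 row by row and print '#' for 1 and a space for 0; the first $2^k$ rows approximate the Sierpinski gasket.

```python
def pascal_mod2(rows):
    """Return the first `rows` rows of Pascal's triangle mod 2."""
    row, out = [1], []
    for _ in range(rows):
        out.append(row)
        # Each entry is the sum of the two entries above it, mod 2.
        row = [(a + b) % 2 for a, b in zip([0] + row, row + [0])]
    return out

for r in pascal_mod2(16):
    print("".join("#" if v else " " for v in r).center(31))
```

A nice check: the number of 1's in the first $2^k$ rows is $3^k$, which is how the gasket ends up with fractal dimension $\log 3 / \log 2$.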
Safet is considering the proposition $R$, which says that the handkerchief in his pocket is red. Now, suppose we take red to be a vague concept. And suppose we favour a supervaluationist semantics for propositions that involve vague concepts. …
This paper defends the use of quasi-experiments for causal estimation in economics against the widespread objection that quasi-experimental estimates lack external validity. The defence is that quasi-experimental replication of estimates can yield defeasible evidence for external validity. The paper then develops a different objection. The stable unit treatment value assumption (SUTVA), on which quasi-experiments rely, is argued to be implausible due to the influence of social interaction effects on economic outcomes. A more plausible stable marginal unit treatment value assumption (SMUTVA) is proposed, but it is demonstrated to severely limit the usefulness of quasi-experiments for economic policy evaluation.
Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the indeterminacy of computation phenomenon (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but it can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s () recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems to be too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two different enough kinds of physical systems, say, electrical and hydraulic ones.
Semantics of propositional logic can be formulated in terms of 2-player games of perfect information. In the present paper the question is posed of what a generalization of propositional logic to a 3-player setting would look like. Two formulations of such a ‘3-player propositional logic’ are given, denoted PL and PL. An overview of some metalogical properties of these logics is provided.
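The 2-player game semantics mentioned here can be sketched concretely. In the sketch below (an illustrative encoding of mine, not the paper's), Verifier moves at disjunctions and Falsifier at conjunctions, and a formula in negation normal form is true under a valuation exactly when Verifier has a winning strategy.

```python
def verifier_wins(formula, val):
    """Game-semantic evaluation of a formula in negation normal form.
    Formulas are tuples: ("var", p), ("neg", p) for a negated atom,
    ("or", f1, f2, ...), ("and", f1, f2, ...)."""
    op = formula[0]
    if op == "var":
        return val[formula[1]]
    if op == "neg":
        return not val[formula[1]]
    if op == "or":   # Verifier picks a disjunct she can win.
        return any(verifier_wins(sub, val) for sub in formula[1:])
    if op == "and":  # Falsifier picks a conjunct; Verifier must win them all.
        return all(verifier_wins(sub, val) for sub in formula[1:])
    raise ValueError(f"unknown operator: {op}")

# (p or q) and not p, with p false and q true: Verifier wins.
f = ("and", ("or", ("var", "p"), ("var", "q")), ("neg", "p"))
print(verifier_wins(f, {"p": False, "q": True}))  # True
```

The 3-player generalization the paper considers would require changing exactly this move-assignment: with three players, connectives no longer split neatly into "exists" and "forall" moves.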
4.8 All Models Are False
. . . it does not seem helpful just to say that all models are wrong. The very word model implies simplification and idealization. . . . The construction of idealized representations that capture important stable aspects of such systems is, however, a vital part of general scientific analysis. …
A month ago, I excerpted just the very start of Excursion 4 Tour I* on The Myth of the “Myth of Objectivity”. It’s a short Tour, and this continues the earlier post.
4.1 Dirty Hands: Statistical Inference Is Sullied with Discretionary Choices
If all flesh is grass, kings and cardinals are surely grass, but so is everyone else and we have not learned much about kings as opposed to peasants. …
The lesson to be learned from the paradoxical St. Petersburg game and Pascal’s Mugging is that there are situations where expected utility maximizers will needlessly end up (with high probability) poor and on death’s door, and hence we should not be expected utility maximizers. Instead, when it comes to decision-making, for possibilities that have very small probabilities of occurring, we should discount those probabilities down to zero, regardless of the utilities associated with those possibilities.
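The arithmetic behind the St. Petersburg game makes the tension vivid. In the standard setup (the numbers below follow the textbook game, not anything specific to this post), the coin lands heads first on toss k with probability $2^{-k}$ and pays $2^k$, so every toss contributes exactly 1 to the expected value, which therefore diverges, even though large payoffs are individually very improbable.

```python
from fractions import Fraction

def truncated_ev(max_tosses):
    """Expected value of the St. Petersburg game truncated at
    `max_tosses`: each toss k contributes 2**k * 2**-k = 1."""
    return sum(Fraction(2**k, 2**k) for k in range(1, max_tosses + 1))

def prob_payoff_exceeds(m):
    """P(payoff > 2**m) = P(first head comes after toss m) = 2**-m."""
    return Fraction(1, 2**m)

print(truncated_ev(30))         # 30 -- grows without bound in max_tosses
print(prob_payoff_exceeds(20))  # 1/1048576 -- big payoffs are very rare
```

This is the shape of the argument in the passage: an expected utility maximizer prices the game at infinity, yet with probability $1 - 2^{-m}$ the payoff never exceeds $2^m$.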
Let’s start with a puzzle:
Puzzle. You measure the energy and frequency of some laser light trapped in a mirrored box and use quantum mechanics to compute the expected number of photons in the box. Then someone tells you that you used the wrong value of Planck’s constant in your calculation. …
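One ingredient of the puzzle can be sketched numerically. Under the textbook relation $E = n h f$ for $n$ photons of frequency $f$, the inferred photon number is $n = E / (h f)$, so it scales inversely with whatever value of Planck's constant you plug in. All numbers below are illustrative, not from the puzzle.

```python
# Illustrative values (not from the puzzle itself):
h = 6.62607015e-34   # J*s, the defined SI value of Planck's constant
f = 4.74e14          # Hz, a red laser frequency (assumed for illustration)
E = 1.0e-18          # J, a hypothetical measured energy

# Expected photon number from E = n*h*f:
n = E / (h * f)
print(round(n))

# Using a wrong constant h' rescales the inferred count by h/h':
h_wrong = 2 * h
n_wrong = E / (h_wrong * f)
assert abs(n_wrong - n / 2) < 1e-9 * n
```

So halving or doubling the constant halves or doubles the inferred count, which is the hinge the puzzle turns on.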
I came across an interesting letter in response to the ASA’s Statement on p-values that I hadn’t seen before. It’s by Ionides, Giessing, Ritov and Page, and it’s very much worth reading. I make some comments below. …
A prominent objection against the logicality of second-order logic is the so-called Overgeneration Argument. However, it is far from clear how this argument is to be understood. In the first part of the article, we examine the argument and locate its main source, namely, the alleged entanglement of second-order logic and mathematics. We then identify various reasons why the entanglement may be thought to be problematic. In the second part of the article, we take a metatheoretic perspective on the matter. We prove a number of results establishing that the entanglement is sensitive to the kind of semantics used for second-order logic. These results provide evidence that by moving from the standard set-theoretic semantics for second-order logic to a semantics which makes use of higher-order resources, the entanglement either disappears or may no longer be in conflict with the logicality of second-order logic.
One of the more obscure arguments for Rawls’ difference principle, dubbed ‘the Pareto argument for inequality’, has been criticised by G. A. Cohen (1995, 2008) as being inconsistent. In this paper, we examine and clarify the Pareto argument in detail and argue (1) that justification for the Pareto principles derives from rational self-interest and thus the Pareto principles ought to be understood as conditions of individual rationality, (2) that the Pareto argument is not inconsistent, contra Cohen, and (3) that the kind of bargaining model required to arrive at the particular unequal distribution that the difference principle picks out is a model that is not based on bargaining according to one’s threat advantage.
Meat Eating: In the past, Jeff didn’t eat meat, since he was concerned with the harm that meat production does to animals. But then he did some calculations, and figured that the harm that he would do to animals by eating meat for a year equals the harm to animals that he could prevent by donating $200 to animal welfare charities. And he would much rather donate $200 more to charity and eat meat than neither make the extra donation nor eat meat. Given this, Jeff now eats meat, but each year donates $200 more than he otherwise would to animal welfare charities.