The principle of the causal closure of the physical has traditionally been taken to prevent anything outside the physical universe from affecting it, and vice versa. This idea has been shown to be relative to the definition of the principle (Gamper 2017). On a traditional definition, the principle prevents one universe from affecting any other, but on a modified definition, e.g. (ibid.), the causal closure of the physical is consistent with the possibility of one universe affecting another. Gamper (2017) proved this modal property by implementing interfaces between universes. Interfaces are thus possible, but are they realistic? To answer this question, I propose a two-step process in which the second step is scientific research. The first step, however, is to bridge the gap between the principles, or basic assumptions, and science with a consistent theoretical framework that accommodates the modal properties of an ontology matching those basic assumptions.
Robert Batterman and others have argued that certain idealizing explanations have an asymptotic form: they account for a state of affairs or behavior by showing that it emerges “in the limit”. Asymptotic idealizations are interesting in many ways, but is there anything special about them as idealizations? To understand their role in science, must we augment our philosophical theories of idealization? This paper uses simple examples of asymptotic idealization in population genetics to argue for an affirmative answer and proposes a general schema for asymptotic idealization, drawing on insights from Batterman’s treatment and from John Norton’s subsequent critique.
Game-theoretic approaches to social norms have flourished in recent years, and on first inspection theorists seem to agree on the broad lines that such accounts should follow. By contrast, this paper aims to show that the two main interpretations of social norms are at odds over at least one aspect of social norms, and that both fail to account for another.
The primary objective of this paper is to introduce a new epistemic paradox that puts pressure on the claim that justification is closed under multi-premise deduction. The first part of the paper will consider two well-known paradoxes—the lottery and the preface paradox—and outline two popular strategies for solving the paradoxes without denying closure. The second part will introduce a new, structurally related paradox that is immune to these closure-preserving solutions. I will call this paradox the Paradox of the Pill. Seeing that the prominent closure-preserving solutions do not apply to the new paradox, I will argue that it presents a much stronger case against the claim that justification is closed under deduction than its two predecessors. Besides presenting a more robust counterexample to closure, the new paradox also reveals that the strategies previously thought to get closure out of trouble are not sufficiently general to achieve this task, as they fail to apply to similar closure-threatening paradoxes in the same vicinity.
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are regarded as undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence.
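For concreteness, the ratio definition at issue can be stated as follows (a standard formulation, not quoted from the paper):

```latex
P(A \mid B) \;=\; \frac{P(A \cap B)}{P(B)}, \qquad \text{defined only when } P(B) > 0 .
```

When $P(B) = 0$, the ratio is undefined, which is precisely why orthodox conditional probabilities go undefined on probability-zero antecedents.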
The previous two chapters have sought to show that the probability calculus cannot serve as a universally applicable logic of inductive inference. We may well wonder whether there might be some other calculus of inductive inference that can be applied universally. It would, perhaps, arise through a weakening of the probability calculus. The principal source of difficulty addressed in those chapters was the additivity of the probability calculus. Such a weakening seems possible as far as additivity is concerned. Something like it is achieved with the Shafer-Dempster theory of belief functions. However, there is a second, lingering problem. Bayesian analyses require prior probabilities. As we shall see below, these prior probabilities are never benign. They always make a difference to the final result.
Erich Lehmann 20 November 1917 – 12 September 2009
Erich Lehmann was born 100 years ago today! (20 November 1917 – 12 September 2009). Lehmann was Neyman’s first student at Berkeley (Ph.D. 1942), and his framing of Neyman-Pearson (NP) methods has had an enormous influence on the way we typically view them. …
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance.
In Rosencrantz and Guildenstern Are Dead, the title characters are betting on coin throws. Rosencrantz has a standing bet on heads, and he keeps winning, pocketing coin after coin. We soon learn that this has been going on for some time, and that no fewer than 76 consecutive heads have been thrown, and counting — a situation which is making Guildenstern increasingly uneasy. The coins don’t appear to be double-headed or weighted or anything like that — just ordinary coins — leading Guildenstern to consider several unsettling explanations: that he is subconsciously willing the coins to land heads in order to cleanse himself of some repressed sin, that they are both trapped reliving the same moment in time over and over again, that the coins are being controlled by some menacing supernatural force. He then proposes a fourth hypothesis, which suggests a change of heart: that nothing surprising is happening at all and no special explanation is needed. He says, “… each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.” By the time the characters are interrupted, 92 heads have been thrown without a single tail.
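As a back-of-the-envelope check on why Guildenstern is uneasy despite his fourth hypothesis, the probability of a fair coin landing heads on every one of n independent throws is easy to compute (a minimal sketch; the run lengths 76 and 92 are the ones from the play):

```python
def prob_all_heads(n: int, p_heads: float = 0.5) -> float:
    """Probability that a fair (or biased) coin lands heads n times in a row,
    assuming independent throws."""
    return p_heads ** n

# The run Guildenstern has witnessed so far, and the run at the interruption.
for n in (76, 92):
    print(f"P({n} consecutive heads) = {prob_all_heads(n):.3e}")
```

Each individual throw is indeed 50/50, as Guildenstern says; it is only the conjunction of all 92 outcomes that is astronomically improbable.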
The notion of preference has a central role in many disciplines,
including moral philosophy and decision theory. Preferences and their
logical properties also have a central role in rational choice theory,
a subject that in its turn permeates modern economics, as well as
other branches of formalized social science. The notion of preference
and the way it is analysed vary between these disciplines. A treatment
is still lacking that takes into account the needs of all usages and
tries to combine them in a unified approach. This entry surveys the
most important philosophical uses of the preference concept and
investigates their compatibilities and conflicts.
The Problem of Old Evidence is a perennial issue for Bayesian confirmation theory. Garber (1983) famously argues that the problem can be solved by conditionalizing on the proposition that a hypothesis deductively implies the existence of the old evidence. In recent work, Hartmann and Fitelson (2015) and Sprenger (2015) aim for similar, but more general, solutions to the Problem of Old Evidence. These solutions are more general because they allow the explanatory relationship between a new hypothesis and old evidence to be inductive, rather than deductive. In this paper, I argue that these solutions are either unsound or under-motivated, depending on the case of inductive explanation that we have in mind. This lends support to the broader claim that Garber-style Bayesian confirmation cannot capture the sense in which new hypotheses that do not deductively imply old evidence nevertheless seem to be confirmed via old evidence.
Hawking’s area theorem is a fundamental result in black hole theory that is universally associated with the null energy condition. That this condition can be weakened is illustrated by the formulation of a strengthened version of the theorem based on an energy condition that allows for violations of the null energy condition. This result tightens the conventional wisdom that quantum field theoretic violations of the null energy condition account for why the conclusion of the area theorem can be bypassed in the semi-classical context. Shown here is that violations of the null energy condition, though necessary, are not sufficient to violate the conclusion of the area theorem. As an added benefit, the specific form of the energy condition used here suggests that the area non-decrease behavior described by the area theorem is a quasi-local effect that depends, in large measure, on the energetic character of the relevant fields in the vicinity of the event horizon.
This paper considers states on the Weyl algebra of the canonical commutation relations over the phase space R2n. We show that a state is regular iff its classical limit is a countably additive Borel probability measure on R2n. It follows that one can “reduce” the state space of the Weyl algebra by altering the collection of quantum mechanical observables so that all states are ones whose classical limit is physical.
I subject the semantic claims of stage theory to scrutiny and show that it’s unclear how to make them come out true for a simple and deep reason: the stage theorist needs tensed elements to semantically modify the denotations of referring expressions to enable us to talk about past and future stages. But in the syntax of natural language, expressions carrying tense modify verbs and adjectives and not referring expressions. This mismatch between what the stage theorist needs, and what language provides, makes it hard to see how the stage theorist’s semantic claims could be true.
One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. …
Paul Humphreys, Emergence (OUP, 2016). Yesterday we saw, via an example from social psychology, that diachronic approaches to emergence can avoid some of the major problems of synchronic approaches. That motivating example is not wholly convincing as an example of transformational emergence. …
The purpose of this paper is to present a paraconsistent formal system and a corresponding intended interpretation according to which true contradictions are not tolerated. Contradictions are, instead, epistemically understood as conflicting evidence, where evidence for a proposition A is understood as reasons for believing that A is true. The paper defines a paraconsistent and paracomplete natural deduction system, called the Basic Logic of Evidence (BLE), and extends it to the Logic of Evidence and Truth (LETJ). The latter is a logic of formal inconsistency and undeterminedness that is able to express not only preservation of evidence but also preservation of truth. LETJ is anti-dialetheist in the sense that, according to the intuitive interpretation proposed here, its consequence relation is trivial in the presence of any true contradiction. Adequate semantics and a decision method are presented for both BLE and LETJ, as well as some technical results that fit the intended interpretation.
Network analysis needs tools to infer distributions over graphs of arbitrary size from a single graph. Assuming the distribution is generated by a continuous latent space model which obeys certain natural symmetry and smoothness properties, we establish three levels of consistency for non-parametric maximum likelihood inference as the number of nodes grows: (i) the estimated locations of all nodes converge in probability on their true locations; (ii) the distribution over locations in the latent space converges on the true distribution; and (iii) the distribution over graphs of arbitrary size converges.
The problem of the direction of the electromagnetic arrow of time is perhaps the most perplexing of the major unsolved problems of contemporary physics, because the usual tools of theoretical physics cannot be used to investigate it. Even the clues provided by the CP violation of the K₂ meson, which have led to a profound insight into the dominance of matter over antimatter in the universe, have not shed any light on the problem of the origin of the electromagnetic arrow of time.
One response to the problem of logical omniscience in standard possible worlds models of belief is to extend the space of worlds so as to include impossible worlds. It is natural to think that essentially the same strategy can be applied to probabilistic models of partial belief, for which parallel problems also arise. In this paper, I note a difficulty with the inclusion of impossible worlds into probabilistic models. Under weak assumptions about the space of worlds, most of the propositions which can be constructed from possible and impossible worlds are in an important sense inexpressible, leaving the probabilistic model committed to saying that agents in general have at least as many attitudes towards inexpressible propositions as they do towards expressible propositions. If it is reasonable to think that our attitudes are generally expressible, then a model with such commitments looks problematic.
There are various equivalent formulations of the Church-Turing thesis. A common one is that every effective computation can be carried out by
a Turing machine. The Church-Turing thesis is often misunderstood,
particularly in recent writing in the philosophy of mind.
After a brief presentation of Feynman diagrams, we criticize the idea that Feynman diagrams can be considered to be pictures or depictions of actual physical processes. We then show that the best interpretation of the role they play in quantum field theory and quantum electrodynamics is captured by Hughes' Denotation, Demonstration and Interpretation theory of models (DDI), where “models” are to be interpreted as inferential, non-representational devices constructed in given social contexts by the community of physicists.
By perfectly fine I mean: not at all morally blameworthy. By aiming I mean: being ready to calibrate ourselves up or down to hit the target. I would contrast aiming with settling, which does not necessarily involve calibrating down if one is above target. …
Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of ‘actual cause’. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
September’s general elections have brought Germany its own Brexit/Trump moment. For the first time since 1945, a far-right nationalist party is part of the German national parliament. The Alternative for Germany (AfD) gained 12.6% of the German vote. …
This paper is about the putative theoretical virtue of strength, as it might be used in abductive arguments to the correct logic in the epistemology of logic. It argues for three theses. The first is that the well-defined property of logical strength is neither a virtue nor a vice, so that logically weaker theories are not—all other things being equal—worse or better theories than logically stronger ones. The second thesis is that logical strength does not entail the looser characteristic of scientific strength, and the third is that many modern logics are on a par—or can be made to be on a par—with respect to scientific strength.
As Feynman (1982) observed, “we always have had a great deal of difficulty in understanding the world view that quantum mechanics represents” (471). Among the perplexing aspects of quantum mechanics is its seeming, on a wide variety of presently live realist interpretations (including but not limited to the so-called ‘orthodox’ interpretation), to violate the classical supposition of ‘value definiteness’, according to which the properties—a.k.a. ‘observables’—of a given particle or system have precise values at all times. Indeed, value indefiniteness lies at the heart of what is supposed to be distinctive about quantum phenomena, as per the following classic cases:
Facts, philosophers like to say, are opposed to theories and to values
(cf. Rundle 1993) and are to be distinguished from things, in
particular from complex objects, complexes and wholes, and from
relations. They are the objects of certain mental states and acts,
they make truth-bearers true and correspond to truths, they are part
of the furniture of the world. We present and discuss some
philosophical and formal accounts of facts.
“Intuitionistic logic” is a term that unfortunately gains
ever greater currency; it conveys a wholly false view on
intuitionistic mathematics. —Freudenthal 1937
Intuitionistic logic is an offshoot of L.E.J. Brouwer’s
intuitionistic mathematics. A widespread misconception has it that
intuitionistic logic is the logic underlying Brouwer’s
intuitionism; instead, the intuitionism underlies the logic, which is
construed as an application of intuitionistic mathematics to language. Intuitionistic mathematics consists in the act of effecting mental
constructions of a certain kind. These are themselves not linguistic
in nature, but when acts of construction and their results are
described in a language, the descriptions may come to exhibit logical patterns.
The method of explication has been something of a hot topic over the last ten years. Despite the multifaceted research that has been directed at the issue, one may perceive a lack of step-by-step procedural or structural accounts of explication. This paper aims to provide a structural account of the method of explication, in continuation of the work of Geo Siegwart. It is enhanced with a detailed terminology for the assessment and comparison of explications. The aim is to provide the means to talk about explications, including their criticisms and their interrelations. The hope is that this treatment can serve as a foundation for a step-by-step guide for explicators; at the least, it should help to frame and mediate explicative disputes. In closing, the enterprise itself will be considered as an explication of ‘explication’, though consecutive explications improving on this one are undoubtedly conceivable.