The allegation that P-values overstate the evidence against the null hypothesis continues to be taken as gospel in discussions of significance tests. All such discussions, however, assume a notion of “evidence” that is at odds with significance tests: generally Bayesian probabilities of the sort used in the Jeffreys–Lindley disagreement (default or “I’m selecting from an urn of nulls” variety). …
Herbert Spencer (1820–1903) is typically, though quite wrongly,
considered a coarse social Darwinist. After all, Spencer, and not
Darwin, coined the infamous expression “survival of the
fittest”, leading G. E. Moore to conclude erroneously in
Principia Ethica (1903) that Spencer committed the
naturalistic fallacy. According to Moore, Spencer's practical
reasoning was deeply flawed insofar as he purportedly conflated mere
survivability (a natural property) with goodness itself (a non-natural
property). Roughly fifty years later, Richard Hofstadter devoted an entire
chapter of Social Darwinism in American Thought (1955) to
Spencer, arguing that Spencer's unfortunate vogue in late
nineteenth-century America inspired Andrew Carnegie and William Graham
Sumner's visions of unbridled and unrepentant capitalism.
The talk gives a formal analysis of public lies, and explains how public lying is related to public announcement.
Recently, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson published a paper in the British Journal for the Philosophy of Science in which they claim that the Principal Principle entails the Principle of Indifference; indeed, the paper is called 'The Principal Principle Implies the Principle of Indifference'. …
All Bayesian epistemologists agree on two claims. The first — which we might call Precise Credences — says that an agent’s doxastic state at a given time t in her epistemic life can be represented by a single credence function Pt, which assigns to each proposition A about which she has an opinion a precise numerical value Pt(A) that is at least 0 and at most 1. Pt(A) is the agent’s credence in A at t. It measures how strongly she believes A at t, or how confident she is at t that A is true. The second — which is typically called Probabilism — says that an agent’s credence function at a given time should be a probability function — that is, for all times t, Pt(⊤) = 1 for any tautology ⊤, Pt(⊥) = 0 for any contradiction ⊥, and Pt(A ∨ B) = Pt(A) + Pt(B) − Pt(AB) for any propositions A and B.
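Probabilism's three conditions can be checked mechanically on a toy example. A minimal Python sketch (the worlds, weights, and propositions below are invented for illustration; propositions are modelled as sets of worlds):

```python
# Worlds: a toy space of three mutually exclusive possibilities.
worlds = ["rain", "snow", "clear"]

# An invented probability assignment over worlds, generating a
# credence function on all propositions (sets of worlds).
weights = {"rain": 0.5, "snow": 0.2, "clear": 0.3}

def credence(prop):
    """P_t(A): the sum of the weights of the worlds in A."""
    return sum(weights[w] for w in prop)

tautology = frozenset(worlds)   # true in every world
contradiction = frozenset()     # true in no world
A = frozenset({"rain", "snow"})
B = frozenset({"snow", "clear"})

# Probabilism's three conditions (with a tolerance for float rounding):
assert abs(credence(tautology) - 1) < 1e-9
assert credence(contradiction) == 0
# Additivity: P(A or B) = P(A) + P(B) - P(AB)
assert abs(credence(A | B) - (credence(A) + credence(B) - credence(A & B))) < 1e-9
```

Any credence function generated from non-negative weights summing to one satisfies all three conditions by construction; Probabilism says every rational credence function must be of this kind.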
Biologist Steve Jones claims that a piece of research cannot be science if the person who did the research does not communicate their findings. He then dismisses Fermat’s proof of his last theorem as something that Fermat might as well not have done. I give reasons to reject the argument Jones offers for his communication requirement, the requirement itself, and what he says about Fermat’s last theorem.
It is well known that the invocation of ‘equilibrium processes’ in thermodynamics is oxymoronic. However, their prevalence and utility, particularly in elementary accounts, present a problem. We consider a way in which their role can be played by curves carrying the property of accessibility. We also examine the vexed question of whether equilibrium processes can be considered to be reversible, and the revision of this property in relation to curves of accessibility.
Computer programs are particular kinds of texts. It is therefore
natural to ask what the meaning of a program is or, more generally,
how we can set up a formal semantical account of a programming
language. There are many possible answers to such questions, each motivated by
some particular aspect of programs. So, for instance, the fact that
programs are to be executed on some kind of computing machine gives
rise to operational semantics, whereas the similarities of programming
languages with the formal languages of mathematical logic have
motivated the denotational approach, which interprets programs and their
constituents by means of set-theoretical models.
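The contrast between the two approaches can be made concrete with a toy language. A minimal sketch in Python (the term representation and function names are invented for illustration, not drawn from any particular formal system):

```python
# A toy expression language: numerals and addition.
# Terms are nested tuples: ("num", n) or ("add", t1, t2).

# Denotational semantics: each term denotes a mathematical object
# (here an integer), defined compositionally from the denotations
# of its constituents.
def denote(term):
    tag = term[0]
    if tag == "num":
        return term[1]
    if tag == "add":
        return denote(term[1]) + denote(term[2])
    raise ValueError(f"unknown term: {tag}")

# Operational semantics (small-step): a rule describing how an abstract
# machine rewrites a term one step at a time until it reaches a value.
def step(term):
    tag = term[0]
    if tag == "add":
        _, t1, t2 = term
        if t1[0] != "num":
            return ("add", step(t1), t2)   # reduce the left operand first
        if t2[0] != "num":
            return ("add", t1, step(t2))   # then the right operand
        return ("num", t1[1] + t2[1])      # both are values: add them
    raise ValueError("no step from a value")

def run(term):
    while term[0] != "num":
        term = step(term)
    return term[1]

prog = ("add", ("num", 1), ("add", ("num", 2), ("num", 3)))
assert denote(prog) == run(prog) == 6   # the two semantics agree
```

For this tiny language the two semantics provably coincide; the interesting questions arise when the language has features (non-termination, state, concurrency) where establishing such an agreement is non-trivial.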
A small probability space representation of quantum mechanical probabilities is defined as a collection of Kolmogorovian probability spaces, each of which is associated with a context of a maximal set of compatible measurements, that portrays quantum probabilities as Kolmogorovian probabilities of classical events. Bell’s theorem is stated and analyzed in terms of the small probability space formalism.
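As a minimal illustration of the idea (the state, observables, and all names below are mine, chosen for the sketch, not taken from the paper), a qubit measured in two incompatible contexts can be assigned one classical Kolmogorovian probability space per context, with the measure in each context given by the Born rule in that context's eigenbasis:

```python
import math

# State |+> = (|0> + |1>)/sqrt(2), as amplitudes in the z-basis.
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Context 1: a sigma_z measurement. Eigenbasis {|0>, |1>};
# outcomes +1/-1 with classical probabilities |<0|psi>|^2, |<1|psi>|^2.
p_z = {"+1": abs(plus[0]) ** 2, "-1": abs(plus[1]) ** 2}

# Context 2: a sigma_x measurement. Eigenbasis {|+>, |->}.
amp_plus = (plus[0] + plus[1]) / math.sqrt(2)    # <+|psi>
amp_minus = (plus[0] - plus[1]) / math.sqrt(2)   # <-|psi>
p_x = {"+1": abs(amp_plus) ** 2, "-1": abs(amp_minus) ** 2}

# Each context is a perfectly classical probability space: mutually
# exclusive outcomes, non-negative probabilities summing to one.
for space in (p_z, p_x):
    assert all(p >= 0 for p in space.values())
    assert abs(sum(space.values()) - 1) < 1e-12
```

The point of the formalism is that no single classical space need exist covering all contexts at once; each maximal set of compatible measurements gets its own.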
Inspired by possible connections between gravity and foundational questions in quantum theory, we consider an approach for the adaptation of objective collapse models to a general relativistic context. We apply these ideas to a list of open problems in cosmology and quantum gravity, such as the emergence of seeds of cosmic structure, the black hole information issue, the problem of time in quantum gravity and, in a more speculative manner, the nature of dark energy and the origin of the very special initial state of the universe. We conclude that objective collapse models offer a rather promising path to deal with all of these issues.
Suppose a Newtonian universe where an elastic and perfectly round ball is dropped. At some point in time, the surface of the ball will no longer be spherical. If an object is F at one time and not F at another, while existing all the while, at least normally the object changes in respect of being F. I am not claiming that that is what change in respect of F is (as I said recently in a comment, I think there is more to change than that), but only that normally this is a necessary and sufficient condition for it. …
Roger White has drawn my attention to an interesting problem, having to do with what to believe in a situation in which you have evidence that the world is infinite. I will build up to the situation in stages.
On a naive Humean picture of action, we have beliefs and desires and together these yield our actions. But how do beliefs and desires yield actions? There are many (abstractly speaking, infinitely many, but perhaps only a finite subset is physically possible for us) maps from beliefs and desires to actions. …
Logicism is typically defined as the thesis that mathematics reduces to, or is an extension of, logic. Exactly what “reduces” means here is not always made entirely clear. (More definite articulations of logicism are explored in section 5 below.) While something like this thesis had been articulated by others (e.g., Dedekind 1888 and arguably Leibniz 1666), logicism only became a widespread subject of intellectual study when serious attempts began to be made to provide complete deductions of the most important principles of mathematics from purely logical foundations. This became possible only with the development of modern quantifier logic, which went hand in hand with these attempts. Gottlob Frege announced such a project in his 1884 Grundlagen der Arithmetik (translated as Frege 1950), and attempted to carry it out in his 1893–1902 Grundgesetze der Arithmetik (translated as Frege 2013). Frege limited his logicism to arithmetic, however, and it turned out that his logical foundation was inconsistent.
Simplistic accounts of its history sometimes portray logic as having stagnated in the West completely from its origins in the works of Aristotle all the way until the 19th Century. This is of course nonsense. The Stoics and Megarians added propositional logic. Medievals brought greater unity and systematicity to Aristotle’s system and improved our understanding of its underpinnings (see e.g., Henry 1972), and important writings on logic were composed by thinkers from Leibniz to Clarke to Arnauld and Nicole. However, it cannot be denied that an unprecedented sea change occurred in the 19th Century, one that has completely transformed our understanding of logic and the methods used in studying it. This revolution can be seen as proceeding in two main stages. The first dates to the mid-19th Century and is owed most signally to the work of George Boole (1815–1864). The second dates to the late 19th Century and the works of Gottlob Frege (1848–1925). Both were mathematicians primarily, and their work made it possible to bring mathematical and formal approaches to logical research, paving the way for the significant meta-logical results of the 20th Century. Boolean algebra, the heart of Boole’s contributions to logic, has also come to represent a cornerstone of modern computing. Frege had broad philosophical interests, and his writings on the nature of logical form, meaning and truth remain the subject of intense theoretical discussion, especially in the analytic tradition. Frege’s works, and the powerful new logical calculi developed at the end of the 19th Century, influenced many of its most seminal figures, such as Bertrand Russell, Ludwig Wittgenstein and Rudolf Carnap. Indeed, Frege is sometimes heralded as the “father” of analytic philosophy, although he himself would not live to become aware of any such movement.
In a choice between saving five people or saving another person, is it better to save the five, other things being equal? According to utilitarianism, it would be better to save the five if the combined gain in well-being for them would be greater than the loss for the one. A standard objection is that adding up the gains or losses of different people in this manner is a problematic form of interpersonal aggregation. It is far from clear, however, what more precisely is supposed to be problematic about utilitarian aggregation. The aggregation sceptics—that is, among others, John Rawls, Robert Nozick, Thomas Nagel, John M. Taurek, and T. M. Scanlon—have not offered a clear criterion for what counts as a morally problematic form of aggregation and what does not. Hence it is hard to know what to make of this objection.
Parsons’ characterization of structuralism makes it, roughly, the view that (i) mathematical objects come in structures, and (ii) the only properties we may attribute to mathematical objects are those pertaining to their places in their structures. The chief motivation for (ii) appears to be the observation that in the case of those mathematical objects that most clearly come in structures, mathematical practice generally attributes to them no properties other than those pertaining to their places in structures. I argue that in mathematical practice there are exceptions to (i), though how many depends on how strictly one takes (i), and that there is an alternative interpretation available for the facts about mathematical practice motivating (ii).
In this paper, I will examine the representative halfer and thirder solutions to the Sleeping Beauty problem. Then, by properly applying the concept of an event in probability theory and examining the similarity of the Sleeping Beauty problem to the Monty Hall problem, it is concluded that the representative thirder solution is wrong and the halfers are right, but that the representative halfer solution also contains a flawed logical step.
This paper raises a simple continuous-spectrum issue for the many-worlds (Everettian) interpretation of quantum mechanics. I will assume that the Everettian interpretation refers to a many-worlds understanding based on quantum decoherence. The fact that some operators in quantum mechanics have a continuous spectrum is used to propose a simple thought experiment based on probability theory. The paper then concludes that it is untenable to regard each possibility to which the wavefunction Ψ assigns probability as an actual universe. While one could argue that a continuous spectrum leads to inconsistency in the cardinality of universes, this paper proposes a different argument, one that does not rely on theoretical mathematics and that raises practical problems.
The Pusey–Barrett–Rudolph (PBR) theorem claims that the ψ-epistemic understanding of quantum mechanics is in trouble. Setting aside whether the theorem applies only to realist understandings of quantum theory, this paper instead shows that the actual issue the theorem exposes is whether every quantum state should be interpreted as representing all sub-ensemble possibilities. For example, if |+⟩ = (|0⟩ + |1⟩)/√2 was “measured” at time t = 0, should we consider this quantum state as being solely |+⟩, or as representing all possible sub-ensembles such as (+, 0) and (+, 1)? This question suggests that the PBR theorem does not rule out realist or non-realist ψ-epistemic theories.
David Deutsch provided one possible solution to the grandfather paradox: Deutsch’s closed timelike curves, or simply Deutsch CTCs. Deutsch states that this gives us a tool to test the many-worlds (Everettian) hypothesis, since a Deutsch CTC requires an Everettian understanding. This paper explores the possibility that Deutsch CTCs can coexist with a contextual/epistemic understanding of quantum mechanics. It then presents the irrelevance hypothesis and a hypothetical application to quantum complexity theory.
Is it ever rational to calculate expected utilities? Posted on Wednesday, 04 Jan 2017
Decision theory says that faced with a number of options, one
should choose an option that maximizes expected utility. …
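The prescription can be sketched in a few lines of Python (the states, probabilities, utilities, and option names below are invented purely for illustration):

```python
# Expected utility of an option: sum over states of
#   P(state) * utility(outcome of the option in that state).

states = {"rain": 0.3, "shine": 0.7}   # invented credences over states

# utilities[option][state]: invented utility assignments
utilities = {
    "take umbrella": {"rain": 5, "shine": 4},
    "leave umbrella": {"rain": 0, "shine": 6},
}

def expected_utility(option):
    return sum(p * utilities[option][s] for s, p in states.items())

# Decision theory's recommendation: pick an option maximizing expected utility.
best = max(utilities, key=expected_utility)
# take umbrella:  0.3*5 + 0.7*4 = 4.3
# leave umbrella: 0.3*0 + 0.7*6 = 4.2
assert best == "take umbrella"
```

Of course, the question the post raises is about the rationality of performing this calculation itself, not about the arithmetic, which is as trivial as the sketch suggests.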
Just as I have in each of the past 5 years since I began blogging, I revisit that spot in the road at 11 p.m., just outside the Elbar Room, get into a strange-looking taxi, and head to “Midnight With Birnbaum”. (The pic on the left is the only blurry image I have of the club I’m taken to.) …
In this paper, I examine the relationship between physical quantities and physical states in quantum theories. I argue against the claim made by Arageorgis (1995) that the approach to interpreting quantum theories known as Algebraic Imperialism allows for “too many states”. I prove a result establishing that the Algebraic Imperialist has very general resources that she can employ to change her abstract algebra of quantities in order to rule out unphysical states.
Social scientists use many different methods, and there are often substantial disagreements about which method is appropriate for a given research question. In response to this uncertainty about the relative merits of different methods, W. E. B. Du Bois advocated for and applied “methodological triangulation”: the use of multiple methods simultaneously, in the belief that, where one is uncertain about the reliability of any given method, if multiple methods yield the same answer, that answer is confirmed more strongly than it could have been by any single method. Against this, methodological purists believe that one should choose a single appropriate method and stick with it.
When logical fallacies of statistics go uncorrected, they are repeated again and again…and again. And so it is with the limb-sawing fallacy I first posted in one of my “Overheard at the Comedy Hour” posts. …
What is the relationship between the ordinary language conditional and the material conditional which standard first-order logic uses as its counterpart, surrogate, or replacement? Let’s take it as agreed for present purposes that there is a distinction to be drawn between two kinds of conditional, traditionally “indicative” and “subjunctive” (we can argue the toss about the aptness of these labels for the two kinds, and argue further about where the boundary between the two kinds is to be drawn: but let’s set such worries aside). …
It is argued that if the non-unitary measurement transition, as codified by Von Neumann, is a real physical process, then the ‘probability assumption’ needed to derive the Second Law of Thermodynamics naturally enters at that point. The existence of a real, indeterministic physical process underlying the measurement transition would therefore provide an ontological basis for Boltzmann’s Stosszahlansatz and thereby explain the unidirectional increase of entropy against a backdrop of otherwise time-reversible laws. It is noted that the Transactional Interpretation (TI) of quantum mechanics provides such a physical account of the non-unitary measurement transition, and TI is brought to bear in finding a physically complete, non-ad hoc grounding for the Second Law.
Quantum mechanics arguably provides the best evidence we have for strong emergence. Entangled pairs of particles apparently have properties that fail to supervene on the properties of the particles taken individually. But at the same time, quantum mechanics is a terrible place to look for evidence of strong emergence: the interpretation of the theory is so contested that drawing any metaphysical conclusions from it is risky at best. I run through the standard argument for strong emergence based on entanglement, and show how it rests on shaky assumptions concerning the ontology of the quantum world. In particular, I consider two objections: that the argument involves Bell’s theorem, whose premises are often rejected, and that the argument rests on a contested account of parts and wholes. I respond to both objections, showing that, with some important caveats, the argument for emergence based on quantum mechanics remains intact.
The perfectly natural properties and relations are special – they are all and only those that “carve nature at its joints”. They act as reference magnets; form a minimal supervenience base; figure in fundamental physics and in the laws of nature; and never divide duplicates within or between worlds. If the perfectly natural properties are the (metaphysically) important ones, we should expect being a perfectly natural property to itself be one of the (perfectly) natural properties. This paper argues that being a perfectly natural property is not a very natural property, and examines the consequences.