A new “voucher” program aims to shrink the US waiting list for kidney transplants (Veale, 2016). The waiting list is long, hovering in 2017 at around 95,000 (United Network for Organ Sharing, 2017). During 2016, approximately 19,000 kidney transplants took place, meeting only about one fifth of the demand. For patients with end-stage renal disease (ESRD), transplantation has greater health benefits than dialysis, in terms of both length and quality of life (Tonelli et al., 2011). Transplantation from living donors is optimal: it tops both dialysis and transplantation from deceased donors in terms of health outcomes and cost-effectiveness (LaPointe Rudow et al., 2015, 914). The new voucher program involves live donation.
Computer simulations of an epistemic landscape model, modified to include an explicit representation of a centralised funding body, show that the method of funding allocation has significant effects on the communal trade-off between exploration and exploitation, with consequences for the community’s ability to generate significant truths. The results show this effect is contextual and depends on the size of the landscape being explored, with funding that includes explicit random allocation performing significantly better than peer review on large landscapes. The paper proposes a way of incorporating external institutional factors into formal social epistemology, and offers a way of bringing such investigations to bear on current research-policy questions.
Continuing with my Egon Pearson posts in honor of his birthday, I reblog a post by Aris Spanos: “Egon Pearson’s Neglected Contributions to Statistics”. Egon Pearson (11 August 1895 – 12 June 1980) is widely known today for his contribution in recasting Fisher’s significance testing into the Neyman-Pearson (1933) theory of hypothesis testing. …
It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:
• Complex adaptive system design (part 1), Azimuth, 2 October 2016. …
There is an apple in front of me. I can see it, but I can’t touch it. The reason is that the apple is actually a 3-D rendered model of an apple. It looks like an apple, but exists only within a virtual environment — one that is projected onto the computer screen in front of me. …
The topic of unity in the sciences can be explored through the following questions: Is there one privileged, most basic or fundamental concept or kind of thing, and if not, how are the different concepts or kinds of things in the universe related? Can the various natural sciences (e.g., physics, astronomy, chemistry, biology) be unified into a single overarching theory, and can theories within a single science (e.g., general relativity and quantum theory in physics, or models of evolution and development in biology) be unified? Are theories or models the relevant connected units? What other connected or connecting units are there?
As Harvey Brown emphasizes in his book Physical Relativity, inertial motion in general relativity is best understood as a theorem, and not a postulate. Here I discuss the status of the “conservation condition”, which states that the energy-momentum tensor associated with non-interacting matter is covariantly divergence-free, in connection with such theorems.
In this paper I discuss the delayed choice quantum eraser experiment by giving a straightforward account in standard quantum mechanics. At first glance, the experiment suggests that measurements on one part of an entangled photon pair (the idler) can be employed to control whether the measurement outcome of the other part of the photon pair (the signal) produces interference fringes at a screen after being sent through a double slit. Significantly, the choice whether there is interference or not can be made long after the signal photon encounters the screen. The results of the experiment have been alleged to invoke some sort of ‘backwards in time influences’. I argue that in the standard collapse interpretation the issue can be eliminated by taking into account the collapse of the overall entangled state due to the signal photon. Likewise, in the de Broglie-Bohm picture the particle’s trajectories can be given a well-defined description at any instant of time during the experiment. Thus, there is no need to resort to any kind of ‘backwards in time influence’. As a matter of fact, the delayed choice quantum eraser experiment turns out to resemble a Bell-type measurement, and so there really is no mystery.
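A minimal sketch of the collapse account described above, in notation of my own choosing (the paper’s own conventions may differ): model the signal-idler pair after the double slit as entangled over the two slit paths,

```latex
% Signal-idler pair entangled over the two slit paths 1 and 2
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|s_1\rangle|i_1\rangle + |s_2\rangle|i_2\rangle\bigr)

% Detecting the signal photon at screen position x collapses the pair onto
|\Psi_x\rangle \propto \langle x|s_1\rangle\,|i_1\rangle + \langle x|s_2\rangle\,|i_2\rangle
```

On this picture the idler’s measurement statistics are fixed as soon as the signal photon hits the screen; the later “choice” only sorts the already-recorded signal data into interfering and non-interfering subensembles, so no backwards-in-time influence is needed.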
This is a belated birthday post for E.S. Pearson (11 August 1895-12 June, 1980). It’s basically a post from 2012 which concerns an issue of interpretation (long-run performance vs probativeness) that’s badly confused these days. …
There’s a new paper on the arXiv that claims to solve a hard problem:
• Norbert Blum, A solution of the P versus NP problem.

Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. …
Schupbach and Sprenger (2011) introduce a novel probabilistic approach to measuring the explanatory power that a given explanans exerts over a corresponding explanandum. Though we are sympathetic to their general approach, we argue that it does not (without revision) adequately capture the way in which the causal explanatory power that c exerts on e varies with background knowledge. We then amend their approach so that it does capture this variance. Though our account of explanatory power is less ambitious than Schupbach and Sprenger’s in the sense that it is limited to causal explanatory power, it is also more ambitious because we do not limit its domain to cases where c genuinely explains e. Instead, we claim that c causally explains e if and only if our account says that c explains e with some positive amount of causal explanatory power.
In this chapter, I will discuss what it takes for a dynamical collapse theory to provide a reasonable description of the actual world. I will start with discussions of what is required, in general, of the ontology of a physical theory, and then apply it to the quantum case. One issue of interest is whether a collapse theory can be a quantum state monist theory, adding nothing to the quantum state and changing only its dynamics. Although this was one of the motivations for advancing such theories, its viability has been questioned, and it has been argued that, in order to provide an account of the world, a collapse theory must supplement the quantum state with additional ontology, making such theories more like hidden-variables theories than would first appear. I will make a case for quantum state monism as an adequate ontology, and, indeed, the only sensible ontology for collapse theories. This will involve taking dynamical variables to possess, not sharp values, as in classical physics, but distributions of values.
I discuss a game-theoretic model in which scientists compete to finish the intermediate stages of some research project. Banerjee et al. (2014) have previously shown that if the credit awarded for intermediate results is proportional to their difficulty, then the strategy profile in which scientists share each intermediate stage as soon as they complete it is a Nash equilibrium. I show that the equilibrium is both unique and strict. Thus rational credit-maximizing scientists have an incentive to share their intermediate results, as long as this is sufficiently rewarded.
Last week a team of 72 scientists released the preprint of an article attempting to address one aspect of the reproducibility crisis, the crisis of conscience in which scientists are increasingly skeptical about the rigor of our current methods of conducting scientific research. …
A core question of contemporary social morality concerns how we ought to handle racial categorization. By this we mean, for instance, classifying or thinking of a person as Black, Korean, Latino, White, etc. While it is widely agreed that racial categorization played a crucial role in past racial oppression, there remains disagreement among philosophers and social theorists about the ideal role for racial categorization in future endeavors. At one extreme of this disagreement are short-term eliminativists who want to do away with racial categorization relatively quickly (e.g. Appiah, 1995; D’Souza, 1996; Muir, 1993; Wasserstrom, 2001/1980; Webster, 1992; Zack, 1993, 2002), typically because they view it as mistaken and oppressive. At the opposite end of the spectrum, long-term conservationists hold that racial identities and communities are beneficial, and that racial categorization, suitably reformed, is essential to fostering them (e.g. Outlaw, 1990, 1995, 1996). While extreme forms of conservationism have fewer proponents in academia than the most radical eliminativist positions, many theorists advocate more moderate positions. In between the two poles, there are many who believe that racial categorization is valuable (and perhaps necessary) given the continued existence of racial inequality and the lingering effects of past racism (e.g. Haslanger, 2000; Mills, 1998; Root, 2000; Shelby, 2002, 2005; Sundstrom, 2002; Taylor, 2004; Young, 1989). Such authors agree on the short-term need for racial categorization in at least some domains, but they often differ with regard to its long-term value.
Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero. What if I throw an infinite number of independent darts at the target? …
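A quick numerical illustration of the zero-probability claim (a sketch of my own, not from the post): for a dart uniform on the unit disk, the chance of landing within radius r of the center is r², which vanishes as r shrinks toward the exact center.

```python
import random

def hit_within(radius, trials=100_000, seed=0):
    """Estimate the chance that a dart thrown uniformly at the unit disk
    lands within `radius` of the center, via rejection sampling."""
    rng = random.Random(seed)
    hits = total = 0
    while total < trials:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:  # keep only points inside the disk
            total += 1
            hits += (x * x + y * y <= radius * radius)
    return hits / total

for r in (0.5, 0.1, 0.01):
    print(r, hit_within(r))  # estimates approach r**2
```

As the radius goes to zero the estimate goes to zero as well, which is the sense in which hitting the exact center has chance zero, even though some point is always hit.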
It is suggested that the apparently disparate cosmological phenomena attributed to so-called ‘dark matter’ and ‘dark energy’ arise from the same fundamental physical process: the emergence, from the quantum level, of spacetime itself. This creation of spacetime results in metric expansion around mass points in addition to the usual curvature due to stress-energy sources of the gravitational field. A recent modification of Einstein’s theory of general relativity by Chadwick, Hodgkinson, and McDonald incorporating spacetime expansion around mass points, which accounts well for the observed galactic rotation curves, is adduced in support of the proposal. Recent observational evidence corroborates a prediction of the model that the apparent amount of ‘dark matter’ increases with the age of the universe. In addition, the proposal leads to the same result for the small but nonvanishing cosmological constant, related to ‘dark energy,’ as that of the causet model of Sorkin et al.
Psychophysical supervenience requires that the mental properties of a system cannot change without the change of its physical properties. For a system with many minds, the principle requires that the mental properties of each mind of the system cannot change without the change of the physical properties of the system. In this paper, I argue that Everett’s theory seems to violate this principle of psychophysical supervenience. The violation results from the three key assumptions of the theory: (1) the completeness of the physical description by the wave function, (2) the linearity of the dynamics for the wave function, and (3) multiplicity. For a post-measurement state with two decoherent result branches, multiplicity means that each result branch corresponds to a mindful observer, whose mental properties supervene on the branch, and in particular, whose mental content contains a definite record corresponding to the result branch. Under a certain unitary evolution which swaps the two result branches, the post-measurement state does not change, and the completeness of the physical description by the wave function then means that the physical state of the composite system does not change. Yet the linearity of the dynamics for the wave function requires that each result branch changes, and correspondingly that the mental properties of the observer which supervene on the branch also change. Thus the principle of psychophysical supervenience as defined above is violated by Everett’s theory.
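The swap argument can be made concrete with a toy post-measurement state (my notation, assuming two equal-amplitude branches):

```latex
% Post-measurement state with two decoherent result branches
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle|M_\uparrow\rangle
             + |{\downarrow}\rangle|M_\downarrow\rangle\bigr)

% A unitary U that swaps the branches,
%   U\,|{\uparrow}\rangle|M_\uparrow\rangle = |{\downarrow}\rangle|M_\downarrow\rangle
% (and vice versa), satisfies U|\Psi\rangle = |\Psi\rangle.
```

Since U leaves the total state invariant, completeness says the physical state is unchanged; yet by linearity U maps each branch to the other, so the record carried by each branch, and the mind supervening on it, is swapped.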
Questions about the value of the humanities and the relationship between the sciences and humanities have been very much in the news recently. Even a brief review of the public press shows scientists and humanists weighing in and responding to one another. Public opinion is shifting in favor of science and technological education. There are two related challenges that have been leveled against the value of the humanities.
The essay begins with a taxonomy of the major contexts in which the notion of ‘style’ in mathematics has been appealed to since the early twentieth century. These include the use of the notion of style in comparative cultural histories of mathematics, in characterizing national styles, and in describing mathematical practice. These developments are then related to the more familiar treatment of style in the history and philosophy of the natural sciences, where one distinguishes ‘local’ and ‘methodological’ styles. It is argued that the natural locus of ‘style’ in mathematics falls between the ‘local’ and the ‘methodological’ styles described by historians and philosophers of science.
The claim of inflationary cosmology to explain certain observable facts, which the Friedmann-Robertson-Walker models of ‘Big-Bang’ cosmology were forced to assume, has already been the subject of significant philosophical analysis. However, the principal empirical claim of inflationary cosmology, that it can predict the scale-invariant power spectrum of density perturbations, as detected in measurements of the cosmic microwave background radiation, has hitherto been taken at face value by philosophers. The purpose of this paper is to expound the theory of density perturbations used by inflationary cosmology, to assess whether inflation really does predict a scale-invariant spectrum, and to identify the assumptions necessary for such a derivation. The first section of the paper explains what a scale-invariant power spectrum is, and the requirements placed on a cosmological theory of such density perturbations. The second section explains and analyses the concept of the Hubble horizon, and its behaviour within an inflationary space-time. The third section expounds the inflationary derivation of scale-invariance, and scrutinises the assumptions within that derivation. The fourth section analyses the explanatory role of ‘horizon-crossing’ within the inflationary scenario.
In the context of superintelligent AI systems, the term “oracle” has two meanings. One refers to modular systems queried for domain-specific tasks. Another usage, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles. On the other hand, we identify computer algebra systems (CASs) as being primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate computer algebra systems with theorem provers, systems which have largely been developed independently of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified with CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety.
Scientists and philosophers frequently speak about levels of description, levels of explanation, and ontological levels. This paper proposes a unified framework for modelling levels. I give a general definition of a system of levels and show that it can accommodate descriptive, explanatory, and ontological notions of levels. I further illustrate the usefulness of this framework by applying it to some salient philosophical questions: (1) Is there a linear hierarchy of levels, with a fundamental level at the bottom? And what does the answer to this question imply for physicalism, the thesis that everything supervenes on the physical? (2) Are there emergent properties? (3) Are higher-level descriptions reducible to lower-level ones? (4) Can the relationship between normative and non-normative domains be viewed as one involving levels? Although I use the terminology of “levels”, the proposed framework can also represent “scales”, “domains”, or “subject matters”, where these are not linearly but only partially ordered by relations of supervenience or inclusion.
A central proposition of this book is that there are no universal rules for inductive inference. The chapters so far have sought to argue for this proposition and to illustrate it by showing how several popular accounts of inductive inference fail to provide universally applicable rules. Many in an influential segment of the philosophy of science community will judge these efforts to be mistaken and futile. In their view, the problem has been solved, finally and irrevocably.
What is an epiphenomenal property? This question needs to be settled before we get to decide whether higher-level properties are epiphenomenal or not. In this paper, I offer an account of what it is for a property to have some causal power. From this, I derive a characterisation of the notion of an epiphenomenal property. I then argue that physically realized higher-level properties are not epiphenomenal because laws of nature impose causal similarities on the bearers of such properties, and these similarities figure as powers in the causal profiles of these properties.
In 1986 David Gauthier proposed an arbitration scheme for two-player cardinal bargaining games based on interpersonal comparisons of players’ relative concessions. In Gauthier’s original arbitration scheme, players’ relative concessions are defined in terms of Raiffa-normalized cardinal utility gains, and so it cannot be directly applied to ordinal bargaining problems. In this paper I propose a relative benefit equilibrating bargaining solution (RBEBS) for two- and n-player ordinal and quasiconvex ordinal bargaining problems with finite sets of feasible basic agreements, based on the measure of players’ ordinal relative individual advantage gains. I provide an axiomatic characterization of this bargaining solution and discuss the conceptual relationship between RBEBS and the ordinal egalitarian bargaining solution (OEBS) proposed by Conley and Wilkie (2012). I show the relationship between the measurement procedure for ordinal relative individual advantage gains and the measurement procedure for players’ ordinal relative concessions, and argue that the proposed arbitration scheme for ordinal games can be interpreted as an ordinal version of Gauthier’s arbitration scheme.
Algebra is a branch of mathematics sibling to geometry, analysis (calculus), number theory, combinatorics, etc. Although algebra has its roots in numerical domains such as the reals and the complex numbers, in its full generality it differs from its siblings in serving no specific mathematical domain. Whereas geometry treats spatial entities, analysis continuous variation, number theory integer arithmetic, and combinatorics discrete structures, algebra is equally applicable to all these and other mathematical domains. Elementary algebra, in use for centuries and taught in secondary school, is the arithmetic of indefinite quantities or variables \(x, y,\ldots\).
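The “arithmetic of indefinite quantities” can be illustrated in a few lines with Python’s sympy library (my choice of tool, not the entry’s): identities established on symbols hold for every numerical value of those symbols.

```python
from sympy import symbols, expand, factor

# Indefinite quantities: x and y stand for any numbers at all
x, y = symbols('x y')

# Expanding a product yields an identity valid for all x, y
square = expand((x + y)**2)
print(square)                 # x**2 + 2*x*y + y**2

# Factoring inverts expansion, recovering the product form
print(factor(x**2 - y**2))    # (x - y)*(x + y)
```

Because the identities are derived symbolically, substituting any concrete values for x and y afterwards preserves their truth, which is precisely what distinguishes elementary algebra from arithmetic on particular numbers.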
Suppose a blind man can tell by touch the difference between a sphere and a cube: Suppose then the cube and sphere placed on a table, and the blind man to be made to see. Quaere, whether by his sight, before he touched them, he could now distinguish, and tell, which is the globe, which the cube.
University of Massachusetts Amherst

1. Introduction. Humeans have a problem with quantities. A core principle of any Humean account of modality is that fundamental entities can freely recombine. But determinate quantities, if fundamental, seem to violate this core principle: determinate quantities belonging to the same determinable necessarily exclude one another. Call this the problem of exclusion. Prominent Humeans have responded in various ways. Wittgenstein (1929), when he resurfaced to philosophy, gave the problem of exclusion as a reason to abandon the logical atomism of the Tractatus with its free recombination of elementary propositions. Armstrong (1978) and (1989) promoted a mereological solution to the problem of exclusion; but his account fails in manifold ways to provide a general solution to the problem. Lewis studiously avoided committing to any one solution, trusting simply that, since Humeanism was true, there had to be some solution. Abandonment; failure; avoidance: we Humeans need to do better. It is high time we Humeans confronted and dispatched this elephant in the room.
Philosophy is always going to be the default home of non-naturalists and antinaturalists. Since no other discipline will take them seriously, they gravitate toward philosophy and find each other. Antinaturalism is like the tide; you can try to beat it back, but another wave will arrive with each new crop of thinkers. And each generation tries to find a flaw in naturalism and raises one banner or another before retiring, literally, in defeat with honor. I view this the same way I view Las Vegas: it’s actually a very "green" installation, like the red-light district in Amsterdam. Every society has a subpopulation that loves trashy, glittery entertainment, porn, and gambling, and it would be foolish to despoil some beautiful area with it. Plunk it in the middle of some otherwise irredeemably inhospitable and infertile desert, and concentrate the glitz and sleaze in one place where it can be indulged in with a minimal impact on the rest of the world. What happens in Vegas stays in Vegas! It can be policed efficiently, so that most of the "evil" is just make-believe evil, carnival evil.