There is a familiar philosophical position – sometimes called the doctrine of the open future – according to which future contingents (claims about underdetermined aspects of the future) systematically fail to be true. For instance: supposing that there are ways things could develop from here in which Trump is impeached, and in which he is not, it is not now true that Trump will be impeached, and not now true that Trump will not be impeached. For well over 2000 years, however, open futurists have been accused of denying certain logical laws – bivalence, excluded middle, or both – for entirely ad hoc reasons, most notably, that their denials are required for the preservation of something we hold dear. In a recent paper, however, I sought to argue that this deeply entrenched narrative ought to be overturned. My thought was this: given a popular, plausible approach to the semantics of future contingents, we can reduce the question of their status to the Russell/Strawson debate concerning presupposition failure, definite descriptions, and bivalence. In that case, we will see that open futurists in fact needn’t deny bivalence (Russell), or, if they do, they will do so for perfectly general (Strawsonian) reasons – reasons for which we all must deny bivalence. Of course, the metaphysical objections to the open futurist’s model of the future will remain just as they were. However, the millennia-old “semantic” or “logical” objections to the doctrine would be answered.
A new “voucher” program aims to shrink the US waiting list for kidney transplants (Veale, 2016). The waiting list is long, hovering in 2017 at around 95,000 (United Network for Organ Sharing, 2017). During 2016, approximately 19,000 kidney transplants took place, meeting only about one fifth of the demand. For patients with end stage renal disease (ESRD), transplantation has greater health benefits than dialysis, both in terms of length and quality of life (Tonelli et al, 2011). Transplantation from living donors is optimal: it tops both dialysis and transplantation from deceased donors in terms of health outcomes and cost-effectiveness (LaPointe Rudow et al, 2015, 914). The new voucher program involves live donation.
Carl Tollef Solberg and Espen Gamlund have recently suggested that in allocating scarce, life-saving resources we ought to consider how bad death would be for those who would die if left untreated (Solberg and Gamlund 2016, 8). We have moral reason, they intimate, to prioritize persons for whom death would be very bad over persons for whom it would be less bad (or not bad at all). In particular, we should in our allocation decisions consider how bad death would be for persons according to the “Time-Relative Interest Account,” developed by Jeff McMahan (Solberg and Gamlund 2016, 2).
Computer simulations of an epistemic landscape model, modified to include an explicit representation of a centralised funding body, show that the method of funding allocation has significant effects on the communal trade-off between exploration and exploitation, with consequences for the community’s ability to generate significant truths. The results show that this effect is contextual and depends on the size of the landscape being explored, with funding that includes explicit random allocation performing significantly better than peer review on large landscapes. The paper proposes a way of incorporating external institutional factors in formal social epistemology, and offers a way of bringing such investigations to bear on current research policy questions.
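To make the set-up concrete, here is a minimal toy sketch of this kind of model. It is my own illustrative construction, not the paper’s actual model: the function name `run_community`, the one-dimensional landscape, and the agent rules are all assumptions made for the sake of the example.

```python
import random

def run_community(landscape_size, n_agents=20, n_rounds=200,
                  allocation="lottery", seed=0):
    """Toy 1-D epistemic landscape: agents search for a hidden peak.

    Each round, half the agents receive funding.  Funded agents explore
    (jump to a random site); unfunded agents exploit (local hill-climbing).
    Returns the significance of the best site occupied at the end.
    """
    rng = random.Random(seed)
    peak = rng.randrange(landscape_size)

    def value(x):
        # Significance falls off linearly with distance from the peak.
        return max(0.0, 1.0 - abs(x - peak) / landscape_size)

    positions = [rng.randrange(landscape_size) for _ in range(n_agents)]
    for _ in range(n_rounds):
        if allocation == "lottery":
            # Explicit random allocation: fund half the agents by lot.
            funded = set(rng.sample(range(n_agents), n_agents // 2))
        else:
            # "Peer review": fund the agents whose current results look best.
            ranked = sorted(range(n_agents),
                            key=lambda i: value(positions[i]), reverse=True)
            funded = set(ranked[: n_agents // 2])
        for i in range(n_agents):
            if i in funded:
                positions[i] = rng.randrange(landscape_size)  # explore
            else:
                step = rng.choice([-1, 1])                    # exploit
                candidate = (positions[i] + step) % landscape_size
                if value(candidate) >= value(positions[i]):
                    positions[i] = candidate
    return max(value(p) for p in positions)
```

Comparing `allocation="lottery"` against `allocation="peer_review"` across different values of `landscape_size` gives a rough feel for the exploration/exploitation trade-off the abstract describes, though any quantitative conclusions would of course require the paper’s full model.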
In this paper I investigate whether certain substructural theories are able to dodge paradox while at the same time containing what might be viewed as a naive validity predicate. To this end I introduce the requirement of internalization, roughly, that an adequate theory of validity should prove that its own metarules are validity-preserving. The main point of the paper is that substructural theories fail this requirement in various ways.
In this paper I defend anti-realism about race and a new theory of racialization. I argue that there are no races, only racialized groups. Many social constructionists about race have adopted racial formation theory to explain how ‘races’ are formed. However, anti-realists about race cannot adopt racial formation theory, because it assumes the reality of race. I introduce interactive constructionism about racialized groups as a theory of racialization for anti-realists about race. Interactive constructionism moves the discussion away from the dichotomous (social vs. biological) metaphysics that has marred this debate, and posits that racialized groups are the joint products of a broad range of non-racial factors, which interact.
What ought you believe? According to a traditional view, it depends on your evidence: you ought to believe (only) what your evidence supports. Recently, however, some have claimed that what you ought to believe depends not on your evidence but simply on what is true: you ought to believe (only) the truth. This disagreement parallels one in ethics, between so-called perspectivists and objectivists. Perspectivists in ethics hold that how you ought to act depends on your epistemic position, whereas objectivists hold that it depends on all the facts, regardless of your epistemic position with respect to them. The view that what you ought to believe depends on your evidence can be thought of as a version of perspectivism about the epistemic ought; the view that what you ought to believe depends only on what is true can be thought of as a version of objectivism about the epistemic ought.
The author of this book is a professor of philosophy and of the classics; the book is a classicist literary history of sorts. Its novelty is in its author’s invitation to readers to argue with him on the Internet through an e-link that he provides. The book’s other novelty is its choice to view Plato more as a writer than as a philosopher—with a philosophical purpose in mind, of course. Until recently, discussions of the greatness of Plato as a philosopher eclipsed discussions of his artistic greatness as a writer. Thus, though his Symposium is a major literary masterpiece of almost unequalled loveliness, commentators on it discuss its philosophy, tending to ignore it as art. The book at hand discusses some works of Plato as literary masterpieces while discussing a famous historical problem, namely, the Socratic problem: what part of Plato’s output expresses the opinions of his teacher Socrates? Unfortunately, the book is apologetic, and so its value is more that of a pioneering work than of a serious contribution. Its apologetic aspect shows when it skirts the unpleasant fact that whereas Socrates was a staunch defender of democracy, Plato was an elitist who preferred meritocracy.
Normative non-naturalism is the view that normativity has its source in irreducible, non-natural matters of fact. Here I use ‘normativity’ broadly to include phenomena like rationality, reasons, oughts and shoulds, good and bad, right and wrong, etc. Thus, if we interpret G. E. Moore as proposing that the property of goodness is sui generis in the sense that it’s irreducible and isn’t identical to any natural property, he would count as a normative non-naturalist. And Scanlon (2014) recently defended a non-naturalist view on which the relation of being a reason for is sui generis in the same sense. Non-naturalism has also been defended recently by Oddie (2005), Parfit (2006, 2011), Wedgwood (2007), FitzPatrick (2008, 2014), and Enoch (2011).
Continuing with my Egon Pearson posts in honor of his birthday, I reblog a post by Aris Spanos: “Egon Pearson’s Neglected Contributions to Statistics”. Egon Pearson (11 August 1895 – 12 June 1980) is widely known today for his contribution to recasting Fisher’s significance testing into the Neyman–Pearson (1933) theory of hypothesis testing. …
In a number of posts over the past several years, I’ve explored various ways to make a countably infinite fair lottery machine (assuming causal finitism is false), typically using supertasks in some way. …
It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:
• Complex adaptive system design (part 1), Azimuth, 2 October 2016. …
There is an apple in front of me. I can see it, but I can’t touch it. The reason is that the apple is actually a 3-D rendered model of an apple. It looks like an apple, but exists only within a virtual environment — one that is projected onto the computer screen in front of me. …
The topic of unity in the sciences can be explored through the following questions: Is there one privileged, most basic or fundamental concept or kind of thing, and if not, how are the different concepts or kinds of things in the universe related? Can the various natural sciences (e.g., physics, astronomy, chemistry, biology) be unified into a single overarching theory, and can theories within a single science (e.g., general relativity and quantum theory in physics, or models of evolution and development in biology) be unified? Are theories or models the relevant connected units? What other connected or connecting units are there?
As Harvey Brown emphasizes in his book Physical Relativity, inertial motion in general relativity is best understood as a theorem, and not a postulate. Here I discuss the status of the “conservation condition”, which states that the energy-momentum tensor associated with non-interacting matter is covariantly divergence-free, in connection with such theorems.
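In standard notation (a sketch of the textbook statement, not Brown’s own formulation), the conservation condition referred to here reads:

```latex
% Conservation condition: the stress-energy tensor T^{ab} of
% non-interacting matter is covariantly divergence-free with respect
% to the Levi-Civita derivative operator \nabla of the metric g_{ab}:
\[
  \nabla_a T^{ab} = 0 .
\]
```

Theorems of the Geroch–Jang type then show, roughly, that a small body whose stress-energy satisfies this condition together with a suitable energy condition must follow a timelike geodesic; this is one precise sense in which inertial motion comes out as a theorem rather than a postulate.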
I once gave an argument against euthanasia where the controversial center of the argument could be summarized as follows:
Euthanasia would at most be permissible in cases of valid consent and great suffering. …
guest post by
'Dial 888,' Rick said as the set warmed. 'The desire to watch TV, no matter what's on it.' 'I don't feel like dialling anything at all now,' Iran said. 'Then dial 3,' he said. …
This article attempts to revise solidarity from its primary historical meaning as a relationship binding all the members of a single cohesive group or society toward a conception more suitable for the new forms of transnational interrelationships that mark contemporary globalization. It considers the supportive relations we can come to develop with people at a distance, given the interconnections that are being established through work or other economic ties, through participation in Internet forums and other new media, or indirectly through environmental impacts. Solidarity relations will be reconceptualized here as potentially contributing to the emergence of more democratic forms of transnational interaction within regional or more fully global frameworks of human rights, for which I have argued previously. Beyond this, I will also argue that affective relations of solidarity are in fact an essential complement to the recognition of these human rights themselves. This new notion of solidarity is understood here as one of overlapping solidarity networks. It will be seen that this conception also engages the idea of justice, and indeed perhaps of global justice, in an important way.
The emergence of cross-border communities and transnational associations requires new ways of thinking about the norms involved in democracy in a globalized world. Given the significance of human rights fulfillment, including social and economic rights, I argue here for giving weight to the claims of political communities while also recognizing the need for input by distant others into the decisions of global governance institutions that affect them. I develop two criteria for addressing the scope of democratization in transnational contexts – common activities and impact on basic human rights – and argue for their compatibility. I then consider some practical implications for institutional transformation and design, including new forms of transnational representation.
This article argues that Thomas Pogge’s important theory of global justice does not adequately appreciate the relation between interactional and institutional accounts of human rights, along with the important normative role of care and solidarity in the context of globalization. It also suggests that more attention needs to be given critically to the actions of global corporations and positively to introducing democratic accountability into the institutions of global governance. The article goes on to present an alternative approach to global justice based on a more robust conception of human rights grounded in a conception of equal positive freedom, in which these rights are seen to apply beyond the coercive political institutions to which Pogge primarily confines them (e.g. to prohibiting domestic violence), and in which they can guide the development of economic, social and political forms to enable their fulfillment.
The practical context for the theoretical reflections in this article is set by two apparently conflicting tendencies: On one side, we have the progression of global economic, technological, and, to a degree, legal and political integration, where this entails a certain diminution of sovereignty. Sovereign nation-states of the so-called Westphalian paradigm, possessing ultimate authority within a territory, are increasingly overwhelmed by the cross-border interconnections or networks that escape their purview; or they are legitimately constrained by new human rights regimes across borders. On the other side, especially in view of the hegemonic activities of the United States, but also in the European Union, new calls for the reestablishment of the sovereignty of nation-states can be heard. This may take the form of a reassertion of a right of states against military interference and a retreat from ideas of humanitarian intervention; or again, it may take the form of an assertion of the priority of nation-states from the standpoint of the administration of welfare or that of the distinctiveness of particular cultures that they sometimes embody. Indeed, a third tendency can also be discerned in present practice: In the face of economic globalization of the first sort, diagnosed as U.S.-led and one-sidedly serving the interests of large industrial societies, but also with an understandable fear of the power of coercive and sometimes violent sovereign nation-states, some actors in the global justice movement seek what they call autonomy, as a self-organization of societies or communities in a diversity of more local forms.
The spectrum argument purports to show that the better-than relation is not transitive, and consequently that orthodox value theory is built on dubious foundations. The argument works by constructing a sequence of increasingly less painful but more drawn-out experiences, such that each experience in the spectrum is worse than the previous one, yet the final experience is better than the experience with which the spectrum began. Hence the betterness relation admits cycles, threatening either transitivity or asymmetry of the relation. This paper examines recent attempts to block the spectrum argument, using the idea that it is a mistake to affirm that every experience in the spectrum is worse than its predecessor: an alternative hypothesis is that adjacent experiences may be incommensurable in value, or that due to vagueness in the underlying concepts, it is indeterminate which is better. While these attempts formally succeed as responses to the spectrum argument, they have additional, as yet unacknowledged costs that are significant. In order to effectively block the argument in its most typical form, in which the first element is radically inferior to the last, it is necessary to suppose that the incommensurability (or indeterminacy) is particularly acute: what might be called radical incommensurability (radical indeterminacy). We explain these costs, and draw some general lessons about the plausibility of the available options for those who wish to save orthodox axiology from the spectrum argument.
The need for expressing temporal constraints in conceptual models is well-known, but it is unclear which representation is preferred and what would be easier to understand by modellers. We assessed five different modes of representing temporal constraints: the formal semantics, Description Logics notation, a coding-style notation, temporal EER diagrams, and (pseudo-)natural language sentences. The same information was presented to 15 participants in an experimental evaluation. Principally, it showed that 1) there was a clear preference for diagrams and natural language versus a dislike for other representations; 2) diagrams were preferred for simple constraints, but the natural language rendering was preferred for more complex temporal constraints; and 3) a multi-modal modelling tool will be needed for the data analysis stage to be effective.
According to priority monism there are many concrete entities and there is one, the cosmos, that is ontologically prior to all the others. I begin by clarifying this thesis as well as its main rival, priority atomism. I show how the disagreement between the priority monist and atomist ultimately turns on how the thesis of concrete foundationalism is implemented. While it’s standard to interpret priority monism as being metaphysically non-contingent, I show that there are two competing, prima facie plausible conceptions of metaphysical necessity—the essence-based and law-based conceptions—on which it is reasonable to view its modal status differently. This, I suggest, is good for the priority monist—various objections to the thesis presuppose that it’s metaphysically non-contingent, while there are arguments for the thesis that don’t make the presupposition.
In this paper I discuss the delayed choice quantum eraser experiment by giving a straightforward account in standard quantum mechanics. At first glance, the experiment suggests that measurements on one part of an entangled photon pair (the idler) can be employed to control whether the measurement outcome of the other part of the photon pair (the signal) produces interference fringes at a screen after being sent through a double slit. Significantly, the choice whether there is interference or not can be made long after the signal photon encounters the screen. The results of the experiment have been alleged to invoke some sort of ‘backwards in time influences’. I argue that in the standard collapse interpretation the issue can be eliminated by taking into account the collapse of the overall entangled state due to the signal photon. Likewise, in the de Broglie-Bohm picture the particle’s trajectories can be given a well-defined description at any instant of time during the experiment. Thus, there is no need to resort to any kind of ‘backwards in time influence’. As a matter of fact, the delayed choice quantum eraser experiment turns out to resemble a Bell-type measurement, and so there really is no mystery.
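The core of the standard account can be sketched in toy notation (my own simplified labels, not the states of the actual experimental set-up):

```latex
% Entangled signal-idler state: |L>, |R> are the signal components
% passing the left/right slit, |l>, |r> the correlated which-path
% states of the idler:
\[
  |\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(\,|L\rangle_s\,|l\rangle_i
                 + |R\rangle_s\,|r\rangle_i\,\bigr).
\]
% Tracing out the idler leaves the signal in the mixed state
\[
  \rho_s = \tfrac{1}{2}\bigl(\,|L\rangle\langle L|
           + |R\rangle\langle R|\,\bigr),
\]
% which produces no fringes at the screen.
```

Conditioning the very same signal hits on the “erasing” idler outcomes \(|\pm\rangle_i = (|l\rangle \pm |r\rangle)/\sqrt{2}\) sorts them into fringe and anti-fringe subensembles that sum back to the fringe-free total pattern, which is why no backwards-in-time influence is required.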
E.S. Pearson (11 Aug, 1895-12 June, 1980)
This is a belated birthday post for E.S. Pearson (11 August 1895-12 June, 1980). It’s basically a post from 2012 which concerns an issue of interpretation (long-run performance vs probativeness) that’s badly confused these days. …
In the comments on the previous post I was alerted, by Matthias Michel, to a couple of papers that I had not yet read. The first was a paper in Neuroscience Research which came out in 2016:
Using category theory to assess the relationship between consciousness and integrated information theory by Naotsugu Tsuchiya, Shigeru Taguchi, and Hayato Saigo
And the second was a paper in Philosophy Compass that came out in March 2017:
“What is it like to be a bat?”—a pathway to the answer from the integrated information theory by Naotsugu Tsuchiya
After reading these I realized that I had heard an early version of this stuff when I was part of a plenary session with Tsuchiya in Tucson back in April of 2016. …
There’s a new paper on the arXiv that claims to solve a hard problem:
• Norbert Blum, A solution of the P versus NP problem.

Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. …
It is widely accepted that you cannot force someone to make a valid promise. If a robber, after finding that I have no valuables with me, puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void. …
It is valuable, especially for philosophers, to learn languages in order to learn to see things from a different point of view, to think differently. This is usually promoted with respect to natural languages. …