The no-go result applies to thermodynamically reversible processes implemented in molecular-scale systems. In ordinary thermodynamics, a reversible process is one that passes through a sequence of states that come arbitrarily close to states in equilibrium with one another. These limiting equilibrium states all have equal thermodynamic entropy S. The process can proceed only if there is a very slight entropy increase along the sequence of actual states. If the states were to realize the equality of entropy exactly, the process would be frozen: there would be no entropic forces to advance it.
William Edmundson has written a very necessary book. John Rawls: Reticent Socialist makes the case for Rawlsian socialism in light of Rawls’s complete corpus, and manages to do it thoroughly in under 200 pages. Of course there have been many discussions and interrogations of Rawls’s socialism, though none which have attempted to tie everything together so neatly in light of everything Rawls has written. One reason may be that interest in Rawls’s socialism has roughly tracked the political fashions in the US and UK, with a higher interest in defending socialism in the 1970s and 80s and then a waning interest through the 90s and aughts, and only now picking up steam again. Another reason may be that all of our interpretive energy about how to fit together the later Rawls and the earlier Rawls was so taken up with the questions of global justice that we were too exhausted for anything else. In this vacuum the defenses of Rawlsian capitalism have flourished, perhaps culminating in John Tomasi’s Free Market Fairness. In light of this, Edmundson has provided a very welcome counterweight, which I hope to see gain a wide audience. For better or worse, Rawls has served as the lingua franca for significant sectors of academic political philosophy, meaning that the question of whether Rawls was a socialist is also the question of whether there can be a legitimate argument for socialism at all, at least in some circles.
It is said that if an agent has inconsistent credences, she is Dutch Bookable. Whether this is true depends on how the agent calculates expected utilities. After all, expected utilities normally are Lebesgue integrals over a probability measure, but the inconsistent agent’s credences are not a probability measure, so strictly speaking there is no such thing as a Lebesgue integral over them. …
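The standard Dutch Book construction behind the opening claim can be made numerically explicit. The following minimal Python sketch is my own illustration, not anything from the paper: it assumes the agent prices a $1-stake bet on each proposition at her credence in it, and shows that credences P(A) = P(¬A) = 0.6 guarantee a loss however the world turns out (sidestepping, rather than settling, the question of how she integrates):

```python
# A minimal sketch of the standard Dutch Book construction, assuming the
# agent prices a $1-stake bet on a proposition at her credence in it.

def net_payoff(credences, world):
    """Agent buys a $1-stake bet on each proposition at price = credence.
    Returns her net gain once `world` settles which propositions are true."""
    total = 0.0
    for prop, cred in credences.items():
        payout = 1.0 if world[prop] else 0.0
        total += payout - cred  # wins $1 if true, paid `cred` up front
    return total

# Inconsistent credences: P(A) + P(not-A) = 1.2 > 1.
credences = {"A": 0.6, "not-A": 0.6}

for world in [{"A": True, "not-A": False}, {"A": False, "not-A": True}]:
    print(world, net_payoff(credences, world))  # about -0.2 in both worlds
```

The guaranteed loss of 0.2 is exactly the amount by which the two credences exceed 1; the question raised above is whether the inconsistent agent, evaluating these bets by her own (non-measure) lights, must accept them.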
There is long-standing agreement among both philosophers and linguists that the term ‘counterfactual conditional’ is misleading, if not a misnomer. Speakers who utter non-past subjunctive (or ‘would’) conditionals and past subjunctive (or ‘would have’) conditionals need not convey counterfactuality. The relationship between the conditionals in question and the counterfactuality of their antecedents is thus not one of presupposition but of conversational implicature. This paper provides a thorough examination of the arguments against the presupposition view as applied to past subjunctive conditionals and finds none of them conclusive. All the relevant linguistic data, it is shown, are compatible with the assumption that past subjunctive conditionals presuppose the falsity of their antecedents. This finding is not only interesting in its own right. It is of vital importance both to whether we should consider antecedent counterfactuality to be part of the conventional meaning of the conditionals in question and to whether there is a deep difference between indicative and subjunctive conditionals.
Marking one year since the appearance of my book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP), let’s continue to the second stop (1.2) of Excursion 1 Tour 1. …
Human social intelligence includes a remarkable power to evaluate what people know and believe, and to assess the quality of well- or ill-formed beliefs. Epistemic evaluations emerge in a great variety of contexts, from moments of deliberate private reflection on tough theoretical questions, to casual social observations about what other people know and think. We seem to be able to draw systematic lines between knowledge and mere belief, to distinguish justified and unjustified beliefs, and to recognize some beliefs as delusional or irrational. This article outlines the main types of epistemic evaluations, and examines how our capacities to perform these evaluations develop, how they function at maturity, and how they are deployed in the vital task of sorting out when to believe what others say.
One of the most striking features of Classical Indian skepticism is the degree to which it provides intellectual delight. Ethan Mills offers an insightful treatment of each of the three pivotal figures he locates in this tradition, but he also succeeds in conveying that sense of delight, both in his sympathetic depictions of the tradition’s great skeptical arguments, and in his own creative interpretations of their significance.
The lottery paradox exposes some tensions in our natural ways of thinking about probabilities, and in how we think about belief itself. This chapter explores the paradox from a psychological angle, arguing that it arises from the flexibility of our cognitive capacities to represent (and reason about) the empirical realm. A better understanding of these capacities can give us a clearer sense of our theoretical options. Ultimately, I take a broad view of the paradox: in my view, it can be triggered not only by discussion of games with stipulated odds, but by topics of all sorts. However, it will be simplest to start with an example inspired by Kyburg’s classic (1961) discussion, in which you hold one ticket in a fair lottery, with odds of (let us say) a million to one, in which the draw has been held but the single winner not yet announced. It is very likely that your ticket has lost, but what is the significance of this high likelihood for the rationality of believing that your ticket has lost? If we insist that a threshold of .999999 is not high enough for rational belief, it may seem we are trapping ourselves in skepticism: surely many of the ordinary things we rationally believe about the world are less certain than logical truths. On the other hand, if we do believe that this ticket has lost, by symmetry we should say the same for any of the other tickets in the lottery, and as long as conjunction of rational beliefs is a rational operation, it seems we would be rational to deduce that all the tickets have lost, in contradiction to our other beliefs about this fair lottery.
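The arithmetic driving the paradox is easy to make explicit. A small Python sketch, with the numbers from the example above (the naive-independence product is included only to show how far below any threshold the conjunction falls, even before we use the fact that exactly one ticket wins):

```python
# The arithmetic behind the lottery paradox, assuming a fair lottery with
# N tickets and exactly one winner.

N = 1_000_000

# For each ticket i, P("ticket i lost") is overwhelmingly high:
p_lost = 1 - 1 / N               # = 0.999999, clearing the stated threshold

# But the N propositions are not independent. Even treating them as if they
# were, the conjunction's probability collapses:
naive_conjunction = p_lost ** N  # about 1/e, roughly 0.368

# And by the setup of the lottery, the true joint probability is zero:
true_conjunction = 0.0           # exactly one ticket wins, by stipulation

print(p_lost, naive_conjunction, true_conjunction)
```

So each conjunct individually clears the .999999 bar, while their conjunction is not merely improbable but inconsistent with the description of the lottery, which is exactly the tension the chapter examines.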
[The following is the text of a talk I delivered at the World Summit AI on the 10th October 2019. The talk is essentially a nugget taken from my new book Automation and Utopia. It's not an excerpt per se, but does look at one of the key arguments I make in the book]
The science fiction author Arthur C. Clarke once formulated three “laws” for thinking about the future. …
Some people, most notably Robin Collins, have run teleological arguments from the discoverability of the laws of nature. But I doubt that we know that the laws of nature are discoverable. After all, it seems we haven’t discovered the laws of physics yet. …
This week marks one year since the general availability of my book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). Here’s how it begins (Excursion 1 Tour 1 (1.1)). …
Humean accounts of modality, like Sider’s, work as follows. We first take some privileged truths, including all the mathematical ones, and an appropriate collection of others (e.g., ones about natural kind membership or the fundamental truths of metaphysics). …
I detect at least two unspoken assumptions in Birch’s project, and I question, indeed, reject both of them. One is that welfare is the primary concern of animal ethics. I think liberation is. Birch’s other assumption is that the scientific investigation of animal sentience is key to promoting animal ethics. I think science is largely irrelevant to progress on this front and can even be counterproductive.
In ethics, we seek a theory of obligation whose predictions match our best intuitions. Suppose that explorers on the moon find a booklet with pages of platinum that contains an elegant collection of moral precepts that match our best intuitions to an incredible degree, better than anything that has been seen before. …
Book Review: ‘The AI Does Not Hate You’ by Tom Chivers
A couple weeks ago I read The AI Does Not Hate You: Superintelligence, Rationality, and the Race to Save the World, the first-ever book-length examination of the modern rationalist community, by British journalist Tom Chivers. …
We argue that definite noun phrases give rise to uniqueness inferences characterized by a pattern we call definiteness projection. Definiteness projection says that the uniqueness inference of a definite projects out unless there is an indefinite antecedent in a position that filters presuppositions. We argue that definiteness projection poses a serious puzzle for e-type theories of (in)definites; on such theories, indefinites should filter existence presuppositions but not uniqueness presuppositions. We argue that definiteness projection also poses challenges for dynamic approaches, which have trouble generating uniqueness inferences and predicting some filtering behavior, though unlike the challenge for e-type theories, these challenges have mostly been noted in the literature, albeit in a piecemeal way. Our central aim, however, is not to argue for or against a particular view, but rather to formulate and motivate a generalization about definiteness which any adequate theory must account for.
A referee pointed out this paper to me:
• Uffe Engberg and Glynn Winskel, Petri nets as models of linear logic, Colloquium on Trees in Algebra and Programming, Springer, Berlin, 1990. It contains a nice observation: we can get a commutative quantale from any Petri net. …
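To make the objects involved concrete, here is a small Python sketch. It is my own illustration with a made-up two-place net, not the Engberg–Winskel construction itself: markings of a Petri net form a commutative monoid under place-wise addition, and reachability respects that addition, which is the structure the quantale multiplication exploits:

```python
# Markings of a Petri net as a commutative monoid, with a reachability
# relation compatible with the monoid operation. Illustrative sketch only;
# the net (two places, one transition) is made up.

from itertools import product

# Places: (a, b); one transition consuming one a to produce two b's.
TRANSITIONS = [((1, 0), (0, 2))]

def fire(marking, consume, produce):
    """Fire a transition if enabled, else return None."""
    if all(m >= c for m, c in zip(marking, consume)):
        return tuple(m - c + p for m, c, p in zip(marking, consume, produce))
    return None

def add(m, n):
    """Monoid operation: place-wise sum of markings (commutative)."""
    return tuple(x + y for x, y in zip(m, n))

def reach(marking):
    """All markings reachable from `marking` by firing transitions."""
    seen, frontier = {marking}, [marking]
    while frontier:
        m = frontier.pop()
        for c, p in TRANSITIONS:
            n = fire(m, c, p)
            if n is not None and n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen

m, n = (2, 0), (1, 1)
assert add(m, n) == add(n, m)  # commutativity of the monoid operation
# Reachability respects addition: sums of markings reachable from m and
# from n separately are all reachable from m + n.
sums = {add(x, y) for x, y in product(reach(m), reach(n))}
assert sums <= reach(add(m, n))
```

The quantale then lives one level up: its elements are suitable reachability-closed sets of markings, with union as join and an addition-induced multiplication, which is roughly the observation the paper develops.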
One leading approach to justification comes from the reliabilist tradition, which maintains that a belief is justified provided that it is reliably formed. Another comes from the ‘Reasons First’ tradition, which claims that a belief is justified provided that it is based on reasons that support it. These two approaches are typically developed in isolation from each other; this essay motivates and defends a synthesis. On the view proposed here, justification is understood in terms of an agent’s reasons for belief, which are in turn analyzed along reliabilist lines: an agent’s reasons for belief are the states that serve as inputs to their reliable processes. I show that this ‘Reasons First Reliabilism’ allows each tradition to profit from the other’s explanatory resources. It enables reliabilists to explain epistemic defeat, and it enables Reasons Firsters to give a predictive and naturalistic epistemology. I go on to compare Reasons First Reliabilism with other hybrid versions of reliabilism that have been proposed in the literature.
Here is a tension in the views of some theistic Aristotelian philosophers. On the one hand, we argue:
That the mathematical elegance and discoverability of the laws of physics is evidence for the existence of God
but we also think:
There are higher-level (e.g., biological and psychological) laws that do not reduce to the laws of physics. …
What is it reasonable to hope for from a philosophical argument? Soundness would be nice -- a true conclusion that logically follows from true premises. But soundness isn't enough. Also, in another way, soundness is sometimes too much to demand. …
I introduce a new method for validating models – including stochastic models – that gets at the reliability of a model’s predictions under intervention or manipulation of its inputs and not merely at its predictive reliability under passive observation. The method is derived from philosophical work on natural kinds, and turns on comparing the dynamical symmetries of a model with those of its target, where dynamical symmetries are interventions on model variables that commute with time evolution. I demonstrate that this method succeeds in testing aspects of model validity for which few other tools exist.
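The commutation condition at the heart of the method is easy to state in code. The following toy Python sketch is my own illustrative reconstruction, not the paper's implementation: it uses a forward-Euler integrator and two made-up one-variable models, and checks whether a candidate intervention sigma commutes with time evolution, i.e. whether evolve(sigma(x0), t) equals sigma(evolve(x0, t)):

```python
# Checking whether an intervention is a dynamical symmetry of a model:
# it is one when it commutes with time evolution. Toy illustration only.

def evolve(x0, t, step, dt=1e-4):
    """Evolve state x0 for time t under dx/dt = step(x), by forward Euler."""
    x = x0
    for _ in range(round(t / dt)):
        x += step(x) * dt
    return x

exp_growth = lambda x: 0.5 * x            # dx/dt = r x
logistic   = lambda x: 0.5 * x * (1 - x)  # dx/dt = r x (1 - x)
sigma      = lambda x: 2.0 * x            # candidate intervention: rescaling

def commutes(step, x0=0.1, t=1.0, tol=1e-3):
    """Does sigma commute with time evolution under this model?"""
    return abs(evolve(sigma(x0), t, step) - sigma(evolve(x0, t, step))) < tol

print(commutes(exp_growth))  # rescaling is a symmetry of exponential growth
print(commutes(logistic))    # the carrying capacity breaks the symmetry
```

The point of the comparison in the method is then to ask whether model and target share the same such symmetries, not merely whether the model predicts well under passive observation.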
Detecting causality between variables in a time series is a challenge, particularly when the relationship is nonlinear and the dataset is noisy. Here, we present a novel tool for detecting causality that leverages the properties of symmetry transformations. The aim is to develop an algorithm with the potential to detect both unidirectional and bidirectional coupling for nonlinear systems in the presence of significant sampling noise. Most of the existing tools for detecting causality can make determinations of directionality, but those determinations are relatively fragile in the presence of noise. The novel algorithm developed in the present study is robust and very conservative in that it reliably detects causal structure with a very low rate of error even in the presence of high sampling noise. We demonstrate the performance of our algorithm and compare it with two popular …
From quantum supremacy to classical fallacy
Maybe I should hope that people never learn to distinguish for themselves which claimed breakthroughs in building new forms of computation are obviously serious, and which ones are obviously silly. …
I am glad that David Chalmers has now come round to the view that explaining the ‘problem intuitions’ about consciousness is the key to a satisfactory philosophical account of the topic. I find it surprising, however, given his previous writings, that Chalmers does not simply attribute these intuitions to the conceptual gap between physical and phenomenal facts. Still, it is good that he doesn’t, given that this was always a highly implausible account of the problem intuitions. Unfortunately, later in his paper Chalmers slides back into his misguided previous emphasis on the conceptual gap, in his objections to orthodox a posteriori physicalism. Because of this he fails to appreciate how this orthodox physicalism offers a natural solution to the challenges posed by consciousness.
The criminal law is broadly retributive insofar as it predicates censure and sanction on culpable or responsible wrongdoing. Wrongdoing for which the agent is not responsible and, hence, not culpable (in this sense) is excused. Responsibility and excuse are scalar phenomena, because the capacities constitutive of the normative competence required for responsibility can be had to different degrees and their impairment can be a matter of degree. Ideally, the criminal law would aim to deliver just deserts in cases of partial responsibility, making censure and sanction proportional to the degree of culpable wrongdoing. However, with some qualifications, American criminal law is bivalent about responsibility and excuse. It treats responsibility as all or nothing, and it is very stingy with excuse, in effect treating many cases of partial responsibility as if the individuals were fully responsible. It is normatively problematic to treat responsibility and excuse as bivalent when the underlying facts about them are scalar in nature. In this essay, I want to explain this concern about …

* It is my pleasure to contribute an essay honoring Larry Alexander, who has been a friend for two decades and from whom I have learned so much about the philosophy of law, especially legal interpretation and criminal jurisprudence. My debt is all the greater because of our frequent disagreements.
Part I of the dissertation argues for the production view of mental representation, on which mental representation is a product of the mind rather than a relation to things in the world. I argue that the production view allows us to make best sense of cases of reliable misrepresentation. I also argue that there are various theoretical benefits of distinguishing representation from the tracking and other relations that representations might enter into. Part II is about the relationship between representational content and phenomenal character, the “what it’s like” to be in certain states. I argue for what I call the phenomenal-intentional identity theory (PIIT), the view that phenomenal character is identical with representational content. In the course of arguing for PIIT, I argue that we need to distinguish representational content from what we might call “computational content,” the type of content a state might be said to have solely in virtue of its role in a computational system.
The recent Brexit referendum in the UK, Donald Trump’s election as President of the United States, the rise of populism and far-right politics in several European countries, and the current wave of Islamophobia across Europe, prompted by migratory pressure and an unstable Middle East, have brought hate speech to the forefront of both public and academic debate. Offensive speech, especially that directed at members of religious minorities, also continues to elicit debate, as shown by the 2006 Jyllands-Posten Muhammad cartoons controversy and, more recently, by the Charlie Hebdo controversies and attacks.
So, if my arm is a proper part of 'me', then my arm is informed by the form that informs 'me', and my arm's being informed by that form is derivative of 'me' being informed by it. But if my arm is informed by the same form that informs 'me', wouldn't it follow that my arm has a form that's informed by the same form that informs 'me'? …
Alice is a two-dimensional object. Suppose Alice’s simple parts fill a round region of space. Then Alice is round, right? Perhaps not! Imagine that Alice started out as an extended simple in the shape of a solid square and inside the space occupied by her there was an extended simple, Barbara, in the shape of a circle. …
In this paper, we present a Bayesian approach to natural language semantics. Our main focus is on the inference task in an environment where judgments require probabilistic reasoning. We treat nouns, verbs, adjectives, etc. as unary predicates, and we model them as boxes in a bounded domain. We apply Bayesian learning to satisfy constraints expressed as premises. In this way we construct a model, by specifying boxes for the predicates. The probability of the hypothesis (the conclusion) is evaluated against the model that incorporates the premises as constraints.
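To illustrate the box representation, here is a minimal Python sketch. It is my own reconstruction with made-up boxes and a uniform measure on the domain; it does not reproduce the paper's actual models or Bayesian learning procedure, only the idea that predicates denote boxes and that graded judgments fall out of box overlap:

```python
# Predicates as axis-aligned boxes in a bounded domain; the probability
# that an arbitrary A is a B is the fraction of A's box lying inside B's.
# Boxes below are invented for illustration.

def volume(box):
    """Volume of an axis-aligned box given as ((lo1, hi1), (lo2, hi2))."""
    v = 1.0
    for lo, hi in box:
        v *= max(0.0, hi - lo)
    return v

def intersect(a, b):
    """Axis-aligned intersection of two boxes (may be empty)."""
    return tuple((max(al, bl), min(ah, bh)) for (al, ah), (bl, bh) in zip(a, b))

def prob(a, b):
    """P(x is B | x is A), reading boxes as uniform measures on the domain."""
    return volume(intersect(a, b)) / volume(a)

# Made-up boxes in the unit square; the premise "dogs are animals" is
# satisfied by placing the dog box inside the animal box:
dog    = ((0.1, 0.3), (0.1, 0.3))
animal = ((0.0, 0.5), (0.0, 0.5))
bites  = ((0.2, 0.6), (0.2, 0.6))

print(prob(dog, animal))  # 1.0: the premise holds exactly in this model
print(prob(dog, bites))   # about 0.25: a graded conclusion
```

In the paper's setting, the boxes themselves are what gets learned: Bayesian updating adjusts them until the premises are satisfied as constraints, and the conclusion's probability is then read off the resulting model.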