My grad student Christian Williams and I finished this paper just in time for him to talk about it at SYCO:
• John Baez and Christian Williams, Enriched Lawvere theories for operational semantics. Abstract. …
At first glance there does not seem to be anything philosophically
problematic about human enhancement. Activities such as physical
fitness routines, wearing eyeglasses, taking music lessons, and prayer
are routinely used to enhance human capacities. This
entry is not concerned with every activity and intervention that might
improve people’s embodied lives. The focus of this entry is a
cluster of debates in practical ethics that is conventionally labeled
as “the ethics of human enhancement”. These debates include
clinicians’ concerns about the limits of legitimate health care,
parents’ worries about their reproductive and rearing
obligations, and the efforts of competitive institutions like sports to
combat cheating, as well as more general questions about distributive
justice, science policy, and the public regulation of medical …
In politics, representation is as representation does. Or – it is the contingent product of what is done with it, or in its name. Against this background, efforts by theorists to extract representation’s essence from its contexts and functions do not necessarily advance our understanding (Derrida 1982, 301). Likewise, neat distinctions between (e.g.) two or more types, forms or qualities of representation are common in democratic theory, but the practices which produce representation often traverse and disrupt static and neat distinctions. Consider the example of “self-appointed representation” (SAR) (Montanaro 2012) and its implied opposite “other-appointed representation” (OAR). SAR, to be representation, depends in some form on recognition by others. OAR, to be representation, depends on a presentation of a self adequate to representation. This is one instance of representation’s diverse and common liminal qualities, which see it traversing and complicating neat categorisations.
The SSL Certificate of Damocles
Ever since I “upgraded” this website to use SSL, it’s become completely inaccessible once every three months, because the SSL certificate expires. …
We demonstrate how deep and shallow embeddings of functional programs can coexist in the Coq proof assistant using meta-programming facilities of MetaCoq. While deep embeddings are useful for proving meta-theoretical properties of a language, shallow embeddings allow for reasoning about the functional correctness of programs.
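The deep/shallow distinction in this abstract is not Coq-specific. As a minimal illustration (in Python rather than MetaCoq; all names here are my own, not the paper's), a deep embedding represents programs as data that can be inspected for meta-theoretical reasoning, while a shallow embedding represents them directly as host-language values:

```python
from dataclasses import dataclass

# Deep embedding: programs are syntax trees we can inspect and transform.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

def interp(e):
    """Interpreter assigning meaning to the deep embedding."""
    if isinstance(e, Lit):
        return e.value
    if isinstance(e, Add):
        return interp(e.left) + interp(e.right)
    raise TypeError(e)

# Shallow embedding: programs are host-language values directly.
def lit(v):
    return v

def add(x, y):
    return x + y

deep = Add(Lit(1), Add(Lit(2), Lit(3)))     # syntax, available for meta-theory
shallow = add(lit(1), add(lit(2), lit(3)))  # meaning, directly computable

assert interp(deep) == shallow == 6
```

The paper's contribution, as the abstract describes it, is using MetaCoq's meta-programming to let both views of the same functional program coexist; the sketch above only shows why one would want both.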
In his On the Genealogy of Morality Nietzsche famously discusses a psychological condition he calls ressentiment, a form of toxic, vengeful anger. In this paper, I offer a free-standing theory in philosophical psychology of what is characteristic of this state. My view takes some inspiration from Nietzsche, but this paper will not be a work of exegesis. In the process of developing my account, I will try to chart the terrain around ressentiment and closely-related and sometimes overlapping states (ordinary moral resentment, envy, vengefulness, anger, and the like) and also seek to explain what’s ethically objectionable as well as psychologically pernicious about ressentiment. Ressentiment, I shall contend in this paper, is not simply a ten dollar word substitutable for ‘resentment,’ though it is indeed a species of that genus. On the account I develop, the perception of being slighted, insulted, or demeaned figures centrally in cases of ressentiment.
The Four Ages of Man - Nicolas Lancret
There’s an oft-repeated ‘fact’ thrown around in debates about retirement and old age. The details can vary but it’s something to the effect that when the pension entitlement age was set at 65 in the early part of the 20th century, very few people could expect to collect it, and those that did could only expect to collect for a few years (probably no more than 5). …
Sometimes theists wonder how God’s beliefs track particular portions of reality, e.g. contingent states of affairs, or facts regarding future free actions. In this article I sketch a general model for how God’s beliefs track reality. God’s beliefs track reality in much the same way that propositions track reality, namely via grounding. Just as the truth values of true propositions are generally or always grounded in their truthmakers, so too God’s true beliefs are grounded in the subject matters of those beliefs (i.e. God believes that p in virtue of the fact that p). This is not idle speculation, since my proposal allows the theist to account for God’s true beliefs regarding causally inert portions of reality.
Paul Busch has emphasized on various occasions the importance for physics of going beyond a merely instrumentalist view of quantum mechanics. Even if we cannot be sure that any particular realist interpretation describes the world as it actually is, the investigation of possible realist interpretations helps us to develop new physical ideas and better intuitions about the nature of physical objects at the micro level. In this spirit, Paul Busch himself pioneered the concept of “unsharp quantum reality”, according to which there is an objective non-classical indeterminacy—a lack of sharpness—in the properties of individual quantum systems. We concur with Busch’s motivation for investigating realist interpretations of quantum mechanics and with his willingness to move away from classical intuitions. In this article we try to take some further steps on this road. In particular, we pay attention to a number of prima facie implausible and counter-intuitive aspects of realist interpretations of unitary quantum mechanics. We shall argue that from a realist viewpoint, quantum contextuality naturally leads to “perspectivalism” with respect to properties of spatially extended quantum systems, and that this perspectivalism is important for making relativistic covariance possible.
Cultural evolutionary theory has been alternatively compared to a theory of forces, such as Newtonian mechanics, or the kinetic theory of gases. In this article, I clarify the scope and significance of these metatheoretical characterisations. First, I discuss the kinetic analogy, which has been recently put forward by Tim Lewens. According to it, cultural evolutionary theory is grounded on a bottom-up methodology, which highlights the additive effects of social learning biases on the emergence of large-scale cultural phenomena. Lewens supports this claim by arguing that it is a consequence of cultural evolutionists’ widespread commitment to population thinking. While I concur with Lewens that cultural evolutionists often actually conceive cultural change in aggregative terms, I think that the kinetic framework does not properly account for the explanatory import of population-level descriptions in cultural evolutionary theory. Starting from a criticism of Lewens’ interpretation of population thinking, I argue that the explanatory role of such descriptions is best understood within a dynamical framework – that is, a framework according to which cultural evolutionary theory is a theory of forces. After having spelled out the main features of this alternative interpretation, I elucidate in which respects it helps to outline a more accurate characterisation of the overarching structure of cultural evolutionary theory.
Is it possible to introduce a small number of agents into an environment, in such a way that an equilibrium results in which almost everyone (including the original agents) cooperates almost all the time? This is a compelling question for those interested in the design of beneficial game-theoretic AI, and it may also provide insights into how to get human societies to function better. We investigate this broad question in the specific context of finitely repeated games, and obtain a mostly positive answer. Our main novel technical tool is the use of limited altruism (LA) types, which behave altruistically towards other LA agents but not towards selfish agents. The uncertainty about which type of agent one is facing turns out to be essential in establishing cooperation. We provide characterizations in several families of games of which LA types are effective for our purposes.
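The mechanism the abstract describes, agents who weigh fellow LA agents' payoffs but not selfish agents', can be illustrated with a toy one-shot prisoner's dilemma. This numerical sketch is my own construction, not the paper's model, and the prior `p_la` that the opponent is an LA type is a hypothetical parameter:

```python
# One-shot prisoner's dilemma payoffs for the row player: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def la_expected_utility(action, p_la):
    """Expected utility for a limited-altruism (LA) agent.

    Toy assumptions: LA opponents cooperate, selfish opponents defect;
    against a fellow LA agent the agent values the SUM of both players'
    payoffs, against a selfish agent only its own payoff.
    """
    if action == "C":
        vs_la = R + R       # mutual cooperation, both payoffs counted
        vs_selfish = S      # exploited by a defector, own payoff only
    else:
        vs_la = T + S       # defecting on a fellow LA agent, both counted
        vs_selfish = P      # mutual defection
    return p_la * vs_la + (1 - p_la) * vs_selfish

def la_best_action(p_la):
    if la_expected_utility("C", p_la) >= la_expected_utility("D", p_la):
        return "C"
    return "D"

# With enough confidence the opponent is LA, cooperation becomes optimal,
# even though defection dominates for a purely selfish agent.
print(la_best_action(0.9), la_best_action(0.1))
```

This only illustrates why uncertainty about the opponent's type matters; the paper's actual results concern equilibria of finitely repeated games, which the sketch does not model.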
Let Gₚ be the law of gravitation that states that F = G·m₁m₂/rᵖ, for some real number p. There was a time when it was rational to believe G₂. But here is a problem. When 0 < |p − 2| < 10⁻¹⁰⁰ (say), Gₚ is practically empirically indistinguishable from G₂, in the sense that within the accuracy of our instruments it predicts exactly the same observations. …
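To see just how small the predicted differences are, here is a quick numeric sketch (the separation range and the size of |p − 2| are illustrative choices of mine, not the paper's):

```python
import math

# Gp claims F = G*m1*m2 / r**p.  For |p - 2| = eps tiny, the relative
# deviation of Gp's prediction from G2's at separation r is
#     |r**(2 - p) - 1|  ≈  eps * |ln r|     (first order in eps),
# so we bound it directly in terms of eps rather than forming p = 2 + eps:
# in double-precision arithmetic, 2 + 1e-100 rounds straight back to 2.0,
# itself a vivid instance of the indistinguishability.
def relative_deviation(eps, r):
    return eps * abs(math.log(r))

eps = 1e-100
# separations from sub-nuclear (1e-15 m) to cosmological (1e27 m) scales
worst = max(relative_deviation(eps, r) for r in (1e-15, 1.0, 1e27))
print(worst)  # about 6e-99: astronomically below any instrument's accuracy
assert 0 < worst < 1e-97
```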
John Stuart Mill famously wrote:
We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience. …
What are the epistemic benefits of democracy? According to the ‘epistemic democrats’, democratic procedures such as deliberation and voting are valuable in part because they produce epistemically valuable outcomes. Indeed, epistemic democrats claim the legitimacy of democracy depends, at least in part, on the epistemic quality of the outcomes of political decision-making processes. In this paper, I want to consider two epistemic factors that might figure into the value of democracy, namely, veritistic and non-veritistic epistemic goals.
According to a conventional view, there exists no common cause model of quantum correlations satisfying locality requirements. Indeed, Bell’s inequality is derived from some locality requirements and the assumption that the common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argued that in the derivation of the inequality, the existence of a common common cause for multiple correlations is implicitly assumed and that the assumption is unreasonably strong. According to their idea, what is necessary for explaining the quantum correlation is a common cause for each correlation. However, Graßhoff et al. showed that when there are three pairs of perfectly correlated events and a common cause of each correlation exists, we cannot construct a common cause model that is consistent with quantum mechanical prediction and also meets several locality requirements. In this paper, first, as a consequence of the fact shown by Graßhoff et al., we will confirm that there exists no local common cause model when a two-particle system is in any maximally entangled state. After that, based on Hardy’s famous argument, we will prove that there exists no local common cause model when a two-particle system is in any non-maximally entangled state. Therefore, it will be concluded that for any entangled state, there exists no local common cause model. It will be revealed that the non-existence of a common cause model satisfying locality is not limited to a particular state like the singlet state.
Absolutism about mass within Newtonian Gravity claims that mass ratios obtain in virtue of absolute masses. Comparativism denies this. Defenders of comparativism promise to recover all the empirical and theoretical virtues of absolutism, but at a lower ‘metaphysical cost’. This paper develops a Machian form of comparativism about mass in Newtonian Gravity, obtained by replacing Newton’s constant in the law of Universal Gravitation by another constant divided by the sum over all masses. Although this form of comparativism is indeed empirically equivalent to the absolutist version of Newtonian Gravity—thereby meeting the challenge posed by the comparativist’s bucket argument—it is argued that the explanatory power and metaphysical parsimony of comparativism (and especially its Machian form) are highly questionable.
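The Machian replacement described above can be written schematically (writing G₀ for the paper's "other constant"; the notation is mine):

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
\qquad\longrightarrow\qquad
F \;=\; \frac{G_0}{\sum_k m_k}\,\frac{m_1 m_2}{r^2}.
```

Under a uniform rescaling of all masses, $m_i \to \lambda m_i$, the factor $\lambda$ cancels in the resulting accelerations $a_1 = \bigl(G_0/\sum_k m_k\bigr)\, m_2/r^2$, so the modified law is sensitive only to mass ratios, which is what the comparativist requires.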
There’s been a lot of excitement about the new gene-editing tool CRISPR-Cas9. Discussion of the technology has largely focused on its precision, accuracy, customizability, and affordability. But the CRISPR-Cas system from which the technology was derived has a fascinating life of its own. The work of Eugene V. Koonin’s lab is mapping the rich histories of CRISPR-Cas systems in microbial populations. In “CRISPR: A New Principle of Genome Engineering Linked to Conceptual Shifts in Evolutionary Biology,” Koonin argues that fundamental research studying adaptive immune mechanisms has (among other things) illuminated “fundamental principles of genome manipulation.” I think Koonin’s discussion provides important philosophical insights for how we should understand the significance of CRISPR-Cas systems, and the technologies derived from them. Yet the analysis he provides is only part of a larger story that fully captures the biological significance that CRISPR-Cas systems represent. There is also a human element to the CRISPR-Cas story that concerns its development as a technology. Accounting for the human history of CRISPR-Cas reveals that the story Koonin provides requires greater nuance. I’ll show how CRISPR-Cas technologies are not “natural” genome editing systems but are partly artifacts of human ingenuity. Furthermore, I’ll argue that when it comes to the story of CRISPR-Cas, fundamental and applied research are importantly intertwined.
In this paper, I develop and defend a new adverbial theory of perception. I first present a semantics for direct-object perceptual reports that treats their object-positions as supplying adverbial modifiers, and I show how this semantics definitively solves the many-property problem for adverbialism. My solution is distinctive in that it articulates adverbialism from within a well-established formal semantic framework and ties adverbialism to a plausible semantics for perceptual reports in English. I then go on to present adverbialism as a theory of the metaphysics of perception. The metaphysics I develop treats adverbial perception as a directed activity: it is an activity with success conditions. When perception is successful, the agent bears a relation to a concrete particular, but perception need not be successful; this allows perception to be fundamentally non-relational. The result is a novel formulation of adverbialism that eliminates the need for representational contents, but also treats successful and unsuccessful perceptual events as having a fundamental common factor.
Suppose something bad happens to my friend, and while I am properly motivated in the right degree to alleviate the bad, I just don’t feel bad about it (nor do I feel good about it). Common sense says I am morally defective. …
We present an inferentialist account of the epistemic modal operator ‘might’. Our starting point is the bilateralist programme. A bilateralist explains the operator ‘not’ in terms of the speech act of rejection; we explain the operator ‘might’ in terms of weak assertion, a speech act whose existence we argue for on the basis of linguistic evidence. We show that our account of ‘might’ provides a solution to certain well-known puzzles about the semantics of modal vocabulary whilst retaining classical logic. This demonstrates that an inferentialist approach to meaning can be successfully extended beyond the core logical constants.
According to rationalists, synthetic a priori propositions convey new knowledge, whereas analytic propositions are non-informative or vacuous conceptual truths. However, as we argue in this article, each a priori proposition is necessarily true because of its semantic constituents and the way they are combined, and hence can be transformed into its equivalent analytic form. So each synthetic a priori proposition conveys only non-informative conceptual truths like analytic propositions.
Richard Hare left behind at his death a long essay titled “A
Philosophical Autobiography”, which was published
posthumously. Its opening is striking:
I had a strange dream, or half-waking vision, not long ago. I found
myself at the top of a mountain in the mist, feeling very pleased with
myself, not just for having climbed the mountain, but for having
achieved my life’s ambition, to find a way of answering moral
questions rationally. But as I was preening myself on this
achievement, the mist began to clear, and I saw that I was surrounded
on the mountain top by the graves of all those other philosophers,
great and small, who had had the same ambition, and thought they had …
In recent years there has been an explosion of philosophical work on blame. Much of this work has focused on explicating the nature of blame or on examining the norms that govern it, and the primary motivation for theorizing about blame seems to derive from blame’s tight connection to responsibility. However, very little philosophical attention has been given to praise and its attendant practices. In this paper, I identify three possible explanations for this lack of attention. My goal is to show that each of these lines of thought is mistaken and to argue that praise is deserving of careful, independent analysis by philosophers interested in theorizing about responsibility.
What are restaurants, and what is their relationship to the buildings they occupy? I will explore two puzzles that arise when trying to answer these questions. The first puzzle is that, while there is good reason to think that restaurants are constituted by the buildings they occupy, there also is good reason to think that they can exist without being constituted by anything and that nothing that’s constituted can ever become unconstituted. The second is that, while there is good reason to think that restaurants are material objects, there also is good reason to think that they exhibit a certain kind of mind-dependence that no material object can have.
I maintain that intrinsic value is the fundamental concept of axiology. Many contemporary philosophers disagree; they say the proper object of value theory is final value. I examine three accounts of the nature of final value: the first claims that final value is non-instrumental value; the second claims that final value is the value a thing has as an end; the third claims that final value is ultimate or non-derivative value. In each case, I argue that the concept of final value described is either identical with the classical notion of intrinsic value or is not a plausible candidate for the primary concept of axiology.
In research on action explanation, philosophers and developmental psychologists have recently proposed a teleological account according to which we typically don’t explain an agent’s action by appealing to her mental states but by referring to the objective, publicly accessible facts of the world that count in favor of performing the action. Advocates of the teleological account claim that this strategy is our main way of understanding people’s actions. I argue that common motivations mentioned to support the teleological account are insufficient to sustain its generalization from children to adults. Moreover, social psychological studies, combined with theoretical considerations, suggest that we do not explain actions mainly by invoking publicly accessible, reason-giving facts alone but by ascribing mental states to the agent. The point helps advance the theorizing on the teleological account and on the nature of action explanation.
In this article, it is argued that, for a closed classical Hamiltonian system, the ergodic theorem emerges from the Gibbs-Liouville theorem in the limit that the system has evolved for an infinitely long period of time. In this limit, from the perspective of an ignorant observer, who does not have perfect knowledge of the complete set of degrees of freedom of the system, the distinctions between the possible states of the system, i.e. the information content, are lost, leading to the notion of statistical equilibrium, where states are assigned equal probabilities. Finally, by linking the concept of entropy, which gives a measure of the amount of uncertainty, with the concept of information, the second law of thermodynamics is expressed in terms of the tendency of an observer to lose information over time.
In this article, it is argued that the Gibbs-Liouville theorem is a mathematical representation of the statement that closed classical systems evolve deterministically. From the perspective of an observer of the system, whose knowledge about the degrees of freedom of the system is complete, the statement of deterministic evolution is equivalent to the notion that the physical distinctions between the possible states of the system, or, in other words, the information possessed by the observer about the system, are never lost. Thus, it is proposed that the Gibbs-Liouville theorem is a statement about the dynamical evolution of a closed classical system, valid in situations where information about the system is conserved in time. Furthermore, in this article it is shown that the Hamilton equations and the Hamilton principle on phase space follow directly from the differential representation of the Gibbs-Liouville theorem, i.e. from the statement that the divergence of the Hamiltonian phase-flow velocity vanishes. Thus, considering that the Lagrangian and Hamiltonian formulations of classical mechanics are related via the Legendre transformation, it follows that these two standard formulations are both logical consequences of the statement of deterministic evolution, or, equivalently, of information conservation.
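The vanishing-divergence condition invoked above is a one-line computation in the familiar direction (Hamilton's equations imply zero divergence of the phase-flow velocity $v = (\dot{q}, \dot{p})$):

```latex
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad
\nabla \cdot v
\;=\; \sum_i \left( \frac{\partial \dot{q}_i}{\partial q_i}
                  + \frac{\partial \dot{p}_i}{\partial p_i} \right)
\;=\; \sum_i \left( \frac{\partial^2 H}{\partial q_i\,\partial p_i}
                  - \frac{\partial^2 H}{\partial p_i\,\partial q_i} \right)
\;=\; 0,
```

by the equality of mixed partial derivatives. The article's claim runs in the converse direction, recovering Hamilton's equations from this condition.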
The human being is a paradox. We, a result of evolution, have developed the theory of evolution. Namely, the evolutionary process, in an unprecedented attempt, has been thought by one of its products — the bootstrapping is in place: the explanandum nominates itself as the explanans. Yet, the concept of evolution is one thing, while evolution itself is another. Upfront, this is an attempt to rescue Bergson’s intuitions on heterogeneous continuity, his notion of multiplicity, so as to recover that which, being at the core of evolution, has been lost by our habitual ways of thinking about it.
Say that a chain C is a collection of nodes with the following properties:
• Each node is connected to at most two other nodes.
• If x is connected to y then y is connected to x (symmetry).
• C is globally connected, in the sense that for any nonempty proper subset S of C, there is a node in S and a node outside of S that are connected to each other. …
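The definition can be checked mechanically. A minimal Python sketch (my own construction; it reads "globally connected" as the crossing-edge condition for every nonempty proper subset, which for a finite graph coincides with ordinary connectedness):

```python
def is_chain(nodes, edges):
    """Check the chain properties for an undirected graph.

    nodes: a set of hashable labels.
    edges: a set of frozenset pairs {x, y}; representing edges as
    unordered pairs builds the symmetry requirement in directly.
    """
    neighbours = {n: set() for n in nodes}
    for e in edges:
        x, y = tuple(e)
        neighbours[x].add(y)
        neighbours[y].add(x)
    # Property 1: each node is connected to at most two other nodes.
    if any(len(ns) > 2 for ns in neighbours.values()):
        return False
    # Global connectedness: every nonempty proper subset has an edge
    # leaving it -- for a finite graph, equivalent to ordinary
    # connectedness, checked here by a traversal from one node.
    if not nodes:
        return True
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        n = frontier.pop()
        if n not in seen:
            seen.add(n)
            frontier.extend(neighbours[n])
    return seen == nodes

# A path a - b - c is a chain; a pair plus an isolated node is not.
assert is_chain({"a", "b", "c"}, {frozenset("ab"), frozenset("bc")})
assert not is_chain({"a", "b", "c"}, {frozenset("ab")})
```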