Assume a Thomistic metaphysics, including the primary/secondary causation model from Aquinas. Thus, whenever a created cause has an effect, it has the effect it does only because God, through primary causation, cooperates with the created cause. …
How do you make decisions under ignorance, that is, when you are ignorant of any probabilities for different states of nature? According to the Laplace Rule, you should assign an equal probability to each state of nature (that is, use the Principle of Insufficient Reason), and then maximize expected utility. The most influential objection to this rule is that it is sensitive to the individuation of states of nature. This is problematic since the individuation of states seems arbitrary. In this paper, I show that this objection proves too much: all plausible rules for decisions under ignorance will be sensitive to the individuation of states of nature.
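The partition-sensitivity at issue can be made concrete with a small sketch (my illustration, with made-up acts and payoffs, not the paper's own example): the same pair of acts gets opposite rankings under a coarse and a fine individuation of the states.

```python
# Illustration of how the Laplace Rule's verdict can depend on how states
# of nature are individuated. All payoffs here are hypothetical.

def laplace_expected_utility(payoffs):
    """Assign equal probability to each state, then average the payoffs."""
    return sum(payoffs) / len(payoffs)

# Act A vs. act B under a coarse partition of states: {rain, no-rain}.
coarse_a = laplace_expected_utility([10, 0])   # A: 10 if rain, 0 if not
coarse_b = laplace_expected_utility([4, 4])    # B: 4 either way

# The same acts under a finer partition: {rain, overcast-no-rain, clear}.
fine_a = laplace_expected_utility([10, 0, 0])
fine_b = laplace_expected_utility([4, 4, 4])

print(coarse_a > coarse_b)  # True: A preferred on the coarse partition
print(fine_a > fine_b)      # False: B preferred on the finer partition
```

Nothing about the decision problem itself changed between the two runs; only the description of the states did, which is exactly the arbitrariness the objection exploits.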
Recently, attention has returned to the now-famous 1932 thought experiment in which John von Neumann establishes the form of the quantum mechanical von Neumann entropy S_VN = −Tr ρ ln ρ, supposedly by arguing for its correspondence with the phenomenological thermodynamic entropy S_TD. Hemmo and Shenker (2006) reconstruct von Neumann’s thought experiment and argue that it fails to establish this desired correspondence. Prunkl (2019) and Chua (2019) challenge Hemmo and Shenker’s result in turn. This paper aims to provide a new foundation for the current debate by revisiting the original text (von Neumann (1996, 2018)). A thorough exegesis of von Neumann’s cyclical gas transformation is put forth, along with a reconstruction of two additional thought experiments from the text. This closer look reveals that von Neumann’s goal is not to establish a link between S_VN and S_TD, as is assumed throughout the current debate, but rather to establish a correspondence between S_VN and the Gibbs statistical mechanical entropy S_G. On these grounds I argue that the existing literature misunderstands and misrepresents his goals. A revised understanding is required before the success of von Neumann’s reversible gas transformation can be definitively granted or denied.
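For readers who want the formula S_VN = −Tr ρ ln ρ in executable form, here is a minimal numerical sketch (my illustration; the example states are assumptions, not drawn from the paper): since a density matrix ρ is Hermitian with eigenvalues p_i, the entropy reduces to −Σ_i p_i ln p_i.

```python
# Minimal sketch: the von Neumann entropy S_VN = -Tr(rho ln rho),
# computed via the eigenvalues of the density matrix.
import numpy as np

def von_neumann_entropy(rho):
    """S_VN = -Tr(rho ln rho), in nats."""
    eigvals = np.linalg.eigvalsh(rho)      # rho is Hermitian
    eigvals = eigvals[eigvals > 1e-12]     # convention: 0 ln 0 = 0
    return float(-np.sum(eigvals * np.log(eigvals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure qubit state: zero entropy
mixed = np.eye(2) / 2                      # maximally mixed qubit: ln 2

print(von_neumann_entropy(pure))
print(von_neumann_entropy(mixed))
```

A pure state yields 0 and the maximally mixed qubit yields ln 2 ≈ 0.693, matching the behavior the formula is meant to capture.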
The concept of “representation” is used broadly and uncontroversially throughout neuroscience, in contrast to its highly controversial status within the philosophy of mind and cognitive science. In this paper I first discuss the way that the term is used within neuroscience, in particular describing the strategies by which representations are characterized empirically. I then relate the concept of representation within neuroscience to one that has developed within the field of machine learning (in particular through recent work in deep learning or “representation learning”). I argue that the recent success of artificial neural networks on certain tasks such as visual object recognition reflects the degree to which those systems (like biological brains) exhibit inherent inductive biases that reflect the structure of the physical world. I further argue that any system that is going to behave intelligently in the world must contain representations that reflect the structure of the world; otherwise, the system must perform unconstrained function approximation, which is destined to fail due to the curse of dimensionality, in which the number of possible states of the world grows exponentially with the number of dimensions in the space of possible inputs. An analysis of these concepts in light of philosophical debates regarding the ontological status of representations suggests that the representations identified within both biological and artificial neural networks qualify as first-class representations.
The Lies that Bind is a moving and humbling book. It demonstrates incredible erudition, depth of insight, and command of narrative. Its philosophical points are powerful and subtle, but it also speaks to a broad public about the challenges of identity, social inclusion, and social conflict. The strategy of the book is to offer a general account of social identity, situated within a history explaining the growing importance of identity; it then uses this account to question a kind of essentialism about five forms of identity: creed (religion), country (nationality), color (race), class, and culture. The project is ambivalent about identity: identity is necessary for us as social beings, but at least these particular identities are confused, mistaken, even incoherent. (xvi) By the end, it is tempting to wonder what identities would be sufficient to situate us each in society and also be free of such confusions.
I discuss three aspects of the notion of agency from the standpoint of physics: (i) what makes a physical system an agent; (ii) the reason for agency’s time orientation; (iii) the source of the information generated in choosing an action. I observe that agency is the breaking of an approximation under which dynamics appears closed. I distinguish different notions of agency, and observe that the answers to the questions above differ in different cases. I notice a structural similarity between agency and memory, which allows us to model agency, trace its time asymmetry to thermodynamical irreversibility, and identify the source of the information generated by agency in the growth of entropy. Agency is therefore a physical mechanism that transforms low entropy into information. This may be the general mechanism at the source of the whole information on which biology builds.
In 1951, Leonid Hurwicz, a Polish-American economist who would go on to share the Nobel prize for his work on mechanism design, published a series of short notes as part of the Cowles Commission Discussion Paper series, where he introduced a new decision rule for choice in the face of massive uncertainty. …
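The rule Hurwicz introduced in those notes is now known as the Hurwicz criterion: score each act by a weighted mix of its best- and worst-case payoffs, with an “optimism” weight α chosen by the agent. A minimal sketch (the acts and payoffs below are hypothetical, not from the notes):

```python
# Hurwicz criterion: alpha * best case + (1 - alpha) * worst case,
# where 0 <= alpha <= 1 is the agent's optimism index.
# Payoffs are made up for illustration.

def hurwicz_score(payoffs, alpha):
    """Weighted mix of an act's best- and worst-case payoffs."""
    return alpha * max(payoffs) + (1 - alpha) * min(payoffs)

acts = {"risky": [100, -20], "safe": [30, 10]}

# A pessimist (alpha = 0.2) prefers the safe act...
print(max(acts, key=lambda a: hurwicz_score(acts[a], 0.2)))  # safe
# ...while an optimist (alpha = 0.8) prefers the risky one.
print(max(acts, key=lambda a: hurwicz_score(acts[a], 0.8)))  # risky
```

At α = 0 the rule collapses to maximin (pure pessimism), and at α = 1 to maximax, which is why the criterion is often read as interpolating between the two.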
In Ashes of Our Fathers: Racist Monuments and the Tribal Right, Dan Demetriou makes a novel tribalist case for the preservation of racist monuments. He and I arrived at radically different conclusions in our respective chapters and we may be further apart on this issue than most “opponents” in this text. For this reason, I want to first emphasize some points where our positions overlap.
Philosophical discussions of mental illness fall into three families. First, there are topics that arise when we treat psychiatry as a special science and deal with it using the methods and concepts of philosophy of science. This includes discussion of such issues as explanation, reduction and classification. Second, there are conceptual issues that arise when we try to understand the very idea of mental illness and its ethical and experiential dimensions. Third, there are interactions between psychopathology and the philosophy of mind; philosophers have used clinical phenomena to illuminate issues in the philosophy of mind, and philosophical findings to try to understand psychopathology.
While most people believe the best possible life they could lead would be an immortal one, so-called “immortality curmudgeons” disagree. Following Bernard Williams, they argue that, at best, we have no prudential reason to live an immortal life, and at worst, an immortal life would necessarily be bad for creatures like us. In this article, we examine Bernard Williams' seminal argument against the desirability of immortality and the subsequent literature it spawned. We first reconstruct and motivate Williams' somewhat cryptic argument in three parts. After that, we elucidate and motivate the three best (and most influential) counterarguments to Williams' seminal argument. Finally, we review, and critically examine, two further distinct arguments in favor of the anti-immortality position.
In attempting to do the most good, should you, at a given time, perform the act that is part of the best series of acts you can perform over the course of your life, or should you perform the act that would be best, given what you would actually do later? Possibilists say you should do the former, whereas actualists say you should do the latter. In this chapter, Travis Timmerman explores the debate between possibilism and actualism, and its implications for effective altruism. Each of these two alternatives, he argues, is implausible in its own right as well as at odds with typical effective altruist commitments. Timmerman argues that the best way out of this dilemma is to adopt a hybrid view. Timmerman’s preferred version of hybridism is possibilist at the level of criterion of right action but actualist at the level of decision procedure.
Agnieszka Jaworska and Julie Tannenbaum recently developed the ingenious and novel person-rearing account of moral status, which preserves the commonsense judgment that humans have a higher moral status than nonhuman animals. It aims to vindicate speciesist judgments while avoiding the problems typically associated with speciesist views. We argue, however, that there is good reason to reject person-rearing views. Person-rearing views have to be coupled with an account of flourishing, which will (according to Jaworska and Tannenbaum) be either a species norm or an intrinsic potential account of flourishing. As we show, however, person-rearing accounts generate extremely implausible consequences when combined with the accounts of flourishing Jaworska and Tannenbaum need for the purposes of their view.
A standard argument for one-boxing in Newcomb’s Problem is ‘Why Ain’cha Rich?’, which emphasizes that one-boxers typically make a million dollars compared to the thousand dollars that two-boxers can expect. A standard reply is the ‘opportunity defence’: the two-boxers who made a thousand never had an opportunity to make more. The paper argues that the opportunity defence is unavailable to anyone who grants that in another case—a Frankfurt case—the agent is deprived of opportunities in the way that advocates of Frankfurt cases typically claim.
Do computer simulations advance our knowledge and if so, how? This paper approaches these questions by drawing on distinctions and insights from the philosophical study of knowledge. I focus on propositional knowledge obtained by simulations and address two key issues: How do computer simulations give rise to propositional content? And how can we be justified in believing the corresponding propositions? To answer these questions, I describe schematically how propositional content may be constructed from the inputs and outputs of computer simulations. I further argue that this propositional content has an inferential justification. I provide the premises and the conclusion of the inference. But in the end, this inference proves insufficient for knowledge from computer simulation. What is needed too is that there are reasons to believe that the right sort of inference is carried out. This is compatible with a variety of internalism regarding justification and also makes sense of the practice of verification.
What is it like to be a bat? What is it like to be sick? These two questions are much closer to one another than has hitherto been acknowledged. Indeed, both raise a number of related, albeit very complex, philosophical problems. In recent years, the phenomenology of health and disease has become a major topic in bioethics and the philosophy of medicine, owing much to the work of Havi Carel (2007, 2011, 2018). Surprisingly little attention, however, has been given to the phenomenology of animal health and suffering. This omission shall be remedied here, laying the groundwork for the phenomenological evaluation of animal health and suffering.
The question whether a constitutive linguistic norm can be prescriptive is central to the debate on the normativity of meaning. Recently, the author has attempted to defend an affirmative answer, pointing to how speakers sporadically invoke constitutive linguistic norms in the service of linguistic calibration. Such invocations are clearly prescriptive. However, they are only appropriate if the invoked norms are applicable to the addressed speaker. But that can only be the case if the speaker herself generally accepts them. This qualification has led critics to argue that if an addressed speaker’s acceptance is a necessary condition for legitimate prescriptions (and reproach for failure to adhere to them), then the account becomes unable to underwrite actual normativity. Moreover, critics argue, a danger of vicious circularity arises from the calibration account. This paper shows that once a vantage point within the calibration practice is accepted, the criticisms lose their force. It then explores why a theorist might reject such a perspective and suggests, as a plausible candidate, implicit Humean assumptions about the proper explanation of (linguistic) action. The paper ends by sketching a way forward for the debate on the normativity of meaning in light of this diagnosis.
Truth pluralists say that the nature of truth varies between domains of discourse: while ordinary descriptive claims or those of the hard sciences might be true in virtue of corresponding to reality, those concerning ethics, mathematics, institutions (or modality, aesthetics, comedy…) might be true in some non-representational or “anti-realist” sense. Despite pluralism attracting increasing amounts of attention, the motivations for the view remain underdeveloped. This paper investigates whether pluralism is well-motivated on ontological grounds: that is, on the basis that different discourses are concerned with different kinds of entities. Arguments that draw on six different ontological contrasts are examined: (i) concrete vs. abstract entities; (ii) mind-independent vs. mind-dependent entities; (iii) sparse vs. merely abundant properties; (iv) objective vs. projected entities; (v) natural vs. non-natural entities; and (vi) ontological pluralism (entities that literally exist in different ways). I argue that the additional premises needed to move from such contrasts to truth pluralism are either implausible or unmotivated, often doing little more than to bifurcate the nature of truth when a more theoretically conservative option is available. If there is a compelling motivation for pluralism, I suggest, it’s likely to lie elsewhere.
This is a critical exploration of the relation between two common assumptions in anti-computationalist critiques of Artificial Intelligence: The first assumption is that at least some cognitive abilities are specifically human and non-computational in nature, whereas the second assumption is that there are principled limitations to what machine-based computation can accomplish with respect to simulating or replicating these abilities. Against the view that these putative differences between computation in humans and machines are closely related, this essay argues that the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing and on an inquiry into the scope and nature of human invention in mathematics, and their respective bearing on theories of computation.
This chapter presents a typology of the different kinds of inductive inferences we can draw from our evidence, based on the explanatory relationship between evidence and conclusion. Drawing on the literature on graphical models of explanation, I divide inductive inferences into (a) downwards inferences, which proceed from cause to effect, (b) upwards inferences, which proceed from effect to cause, and (c) sideways inferences, which proceed first from effect to cause and then from that cause to an additional effect. I further distinguish between direct and indirect forms of downwards and upwards inferences. I then show how we can subsume canonical forms of inductive inference mentioned in the literature, such as inference to the best explanation, enumerative induction, and analogical inference, under this typology.
Many writers have recently urged that the epistemic rationality of beliefs can depend on broadly pragmatic (as opposed to truth-directed) factors. Taken to an extreme, this line of thought leads to a view on which there is no such thing as a distinctive epistemic form of rationality. A series of papers by Susanna Rinard develops the view that something like our traditional notion of pragmatic rationality is all that is needed to account for the rationality of beliefs. This approach has undeniable attractions. But examining different versions of the approach uncovers problems. The problems help reveal why epistemic rationality is an indispensable part of understanding rationality—not only of beliefs, but of actions. We may or may not end up wanting to make a place, in our theories of epistemic rationality, for factors such as the practical or moral consequences of having beliefs. But a purely pragmatic notion of rationality—one that’s stripped of any component of distinctively epistemic evaluation—cannot do all the work that we need done.
According to an influential view that I call agentialism, our capacity to believe and intend directly on the basis of reasons—our rational agency—has a normative significance that distinguishes it from other kinds of agency. Agentialists maintain that insofar as we exercise rational agency, we bear a special kind of responsibility for our beliefs and intentions, and those attitudes are truly our own. In this paper I challenge these agentialist claims. My argument centers on a case in which a thinker struggles to align her belief to her reasons, and succeeds only by resorting to non-rational methods. I argue that she is responsible for the attitude generated by this struggle; that this process expresses her capacities for rationality and agency; and that the belief she eventually arrives at is truly her own. So rational agency is not distinctive in the ways that agentialists contend.
Many philosophers are attracted to the interventionist slogan: “No causation without manipulability, no manipulability without causation”. Roughly speaking, on an interventionist account, X is a (type-level) cause of Y with respect to a variable set V if and only if an intervention that changes the value of X would also change the value of Y when all other relevant variables in V are held fixed at some value. The interventionist approach captures an important difference between genuine causation and mere correlation: if X causes Y, a proper intervention that changes X would also change Y; if X is merely correlated with Y, Y would not change under suitable manipulation of X.
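The contrast drawn in the last sentence can be sketched with a toy structural model (my illustration; the variables and equations are made up): C is a common cause of both Z and X, and X causes Y, so Z is merely correlated with Y. Intervening on X shifts Y, while intervening on Z does not.

```python
# Toy structural causal model contrasting causation with mere correlation
# under interventions.  Structure:  Z <- C -> X -> Y.
import random

def sample(do_x=None, do_z=None):
    """One draw from the model, with optional interventions do(X=x), do(Z=z)."""
    c = random.gauss(0, 1)
    z = c if do_z is None else do_z        # Z := C   (unless intervened on)
    x = c if do_x is None else do_x        # X := C   (unless intervened on)
    y = 2 * x + random.gauss(0, 0.1)       # Y := 2X + noise
    return y

def mean_y(n=20000, **kw):
    """Average value of Y over n draws, under the given intervention."""
    return sum(sample(**kw) for _ in range(n)) / n

random.seed(0)
# Intervening on the cause X shifts Y's mean (to about 2.0);
# intervening on the mere correlate Z leaves it near 0.0.
print(round(mean_y(do_x=1.0), 1))
print(round(mean_y(do_z=1.0), 1))
```

Observationally, Z and Y covary just as tightly as X and Y do (both track C), so only the interventional difference, not the correlational data, separates the two relationships.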
Suppose you harm, offend, or otherwise wrong another person. Confronted with the possibility of sanction, you might say any of the following in an attempt to avoid blame: “I couldn’t help it.” “Someone made me do it.” “I had no choice.” “It was unavoidable.” “There was no other option.” There’s a natural reading of such defenses on which they appeal to the principle at the center of this entry, the “Principle of Alternative Possibilities” (cp. Frankfurt):

Principle of Alternative Possibilities (PAP): a person is morally responsible for what she has done only if she could have done otherwise.
The Stoics, the Academic Sceptics and the Epicureans all, to various degrees, agreed—or at least largely lived as if they agreed—that happiness was ataraxia: imperturbable calm and tranquility. …
Until recently, armchair philosophy was in a state of innocence. According to standard philosophical practice (SPP), one can use one’s own intuitions about hypothetical cases as evidence for or (more frequently) against philosophical definitions of important philosophical categories, e.g. knowledge, justification, truth, freedom of will, responsibility, personal identity, causation etc. If the definition’s implications about particular cases are in line with one’s intuitive judgments, the definition is taken to be confirmed; and if its implications are in conflict with one’s intuitions, the definition is considered to be refuted. In general, SPP centrally involves testing philosophical theories about the nature of philosophical categories against one’s intuitions about particular cases. This method is deeply entrenched in our current philosophical practice (but see Deutsch 2010, Cappelen 2012).
Intellectual humility has attracted attention in both philosophy and psychology. Philosophers have clarified the nature of intellectual humility as an epistemic virtue; and psychologists have developed scales for measuring people’s intellectual humility. Much less attention has been paid to the potential effects of intellectual humility on people’s negative attitudes and to its relationship with prejudice-based epistemic vices. Here we fill these gaps by focusing on the relationship between intellectual humility and prejudice. To clarify this relationship, we conducted four empirical studies. The results of these studies show three things. First, people are systematically prejudiced towards members of groups perceived as dissimilar. Second, intellectual humility weakens the association between perceived dissimilarity and prejudice. Third, more intellectual humility is associated with more prejudice overall. We show that this apparently paradoxical pattern of results is consistent with the idea that it is both psychologically and rationally plausible that one person is at the same time intellectually humble, epistemically virtuous and strongly prejudiced.
The eliminative view of gauge degrees of freedom—the view that they arise solely from descriptive redundancy and are therefore eliminable from the theory—is a lively topic of debate in the philosophy of physics. Recent work attempts to leverage properties of the QCD θYM-term to provide a novel argument against the eliminative view. The argument is based on the claim that the QCD θYM-term changes under “large” gauge transformations. Here we review geometrical propositions about fiber bundles that unequivocally falsify this claim: the θYM-term encodes topological features of the fiber bundle used to represent gauge degrees of freedom, but it is fully gauge-invariant. Nonetheless, within the essentially classical viewpoint pursued here, the physical role of the θYM-term shows the physical importance of bundle topology (or superpositions thereof) and thus weighs against (a naive) eliminativism.
The concept of “fact” has a history. Over the past centuries, physicists have appropriated it in various ways. In this article, we compare Ernst Mach’s and Albert Einstein’s interpretations of the concept. Mach, like most nineteenth-century German physicists, contrasted fact and theory. He understood facts as real and complex combinations of natural events. Theories, in turn, only served to order and communicate facts efficiently. Einstein’s concept of fact was incompatible with Mach’s, since Einstein believed facts could be theoretical too, just as he ascribed a leading role to mathematical theorizing in representing reality. For example, he used the concept of fact to refer to a generally valid result of experience. The differences we disclose between Mach and Einstein were emblematic of broader tensions in the German physics discipline. Furthermore, they underline the historically fluid character of the category of the fact, both within physics and beyond.
My daughter Kate eats desserts slowly -- has done so as long as I can remember. She is what I'll call an extreme savorer. In other words, she is a completely irrational moral monster, as I will now endeavor to show. …
Policymaking during a pandemic can be extremely challenging. As COVID-19 is a new disease and its global impacts are unprecedented, decisions need to be made in a highly uncertain, complex and rapidly changing environment. In such a context, in which human lives and the economy are at stake, we argue that using ideas and constructs from modern decision theory, even informally, will make policymaking a more responsible and transparent process.