In this discussion, we look at three potential problems that arise for Whiting’s account of normative reasons. The first has to do with the idea that objective reasons might have a modal dimension. The second and third concern the idea that there is some sort of direct connection between sets of reasons and the deliberative ought or the ought of rationality. We can see that, in characterising any ought distinct from the objective ought, we might be better served by credences about reasons (i.e., creasons) than by possessed or apparent reasons.
Some dualists say that the soul is a fundamental entity. I think we’re not in a position to think that. Compare this. We have no reason to think electrons are not elementary particles. They certainly aren’t made of any of the other particles we know of, so they are, we might say, “relatively elementary” with respect to the particles we know. …
In a number of recent posts, I argued that mid-sized objects like ourselves start microscopic. All my arguments so far have relied on relativity. Here is one that doesn’t. Biological entities are unlikely to have a perfectly flat macroscopic geometrical face (biological things tend to be rounded, rough, pointy, but not perfectly flat). …
Consider this fairly standard version of the argument from hiddenness:
If God exists, he produces everything that is necessary for a personal relationship with every nonresisting person. Belief in the existence of x is necessary for a personal relationship with x. …
From purely geometrical facts, it follows that every spatially extended entity is arbitrarily small at its beginning and at its end in almost every reference frame. A stronger result is possible in the special case of the beginnings of substances in simple Aristotelian substantial change. …
Aristotle’s definition of syllogism in his Prior Analytics is usually charged with being too vague and, more specifically, with not being adequate to its supposed definiendum. Aristotle is supposed to define the stricter notion of “syllogism” (a sort of argument in which the premises are an appropriate pair of sentences formulated in predicative form and the conclusion is a different sentence in predicative form), but he seems to produce a definition of some broader notion of valid argument or deduction in general. I believe that this charge, however plausible, is not correct. Aristotle’s definition of syllogism is far from clear, since it is phrased in his jargon and with his usual laconism. However, once we are in a better position to understand what he means by his peculiar phrasing, we can see that the definition he offers is appropriate for the very notion of syllogism. I mean that Aristotle is really defining that form of argument in which the conclusion is a predicative (or categorical) sentence attained by means of a premise-pair of the appropriate sort (with a middle term relating to each extreme in each premise).
Perturbative expansions have played a peculiarly central role in quantum field theory, not only in extracting empirical predictions but also in investigations of the theory’s mathematical and conceptual foundations. This paper brings the special status of QFT perturbative expansions into focus by tracing the history of mathematical physics work on perturbative QFT and situating a contemporary approach, perturbative algebraic QFT, within this historical context. Highlighting the role that perturbative expansions have played in foundational investigations helps to clarify the relationships between the formulations of QFT developed in mathematical physics and high-energy phenomenology.
[Editor’s Note: The following new entry by Carl Knight replaces the former entry
on this topic by the previous author.]
If you believe that conduct in some case is right or wrong, you have a moral judgment or intuition. Perhaps you have many such judgments about different cases. You might, nevertheless, consider that judgments alone do not justify the moral views they express. You and your moral interlocutors might be concerned that “what we actually accept is fraught with idiosyncrasy and vulnerable to vagaries of history and personality” (Elgin 1996: 108) or displays “irregularities and distortions” (Rawls 1971:
In his recent The Parmenidean Ascent, Michael Della Rocca develops a regress-theoretic case, reminiscent of F.H. Bradley’s famous argument in Appearance and Reality, against the intelligibility of relations and in favor of a monistic conception of reality. I argue that Della Rocca illicitly supposes that “internal” relations – in one sense of that word – lead to a “chain” regress, a regress of relations relating relations and relata. In contrast, I contend that if “internal” or grounded relations lead to a regress at all, it is a kind of “fission” regress within the relata themselves, and that a chain regress arises, if at all, only for so-called “external” relations, relations not grounded in their relata. Della Rocca thus pursues, against internal relations, a regress that at most threatens external ones. I compare Della Rocca’s case against relations with Bradley’s reasoning in Appearance and Reality, and suggest in this context that Bradley may, perhaps, have the upper hand.
In the last couple of decades, the most prominent argument from evil has been based on the idea that God couldn’t allow a gratuitous evil. Here is one way to define a gratuitous evil, paraphrasing Rowe:
- E is gratuitous if and only if there is no greater or equal good G that is only obtainable by God if God permits E or something equal or worse. …
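As a rough formalisation of the biconditional above (the notation is ours, and “only obtainable if” is read as a necessity claim), Rowe-style gratuitousness might be rendered:

```latex
% E is gratuitous iff there is no good G, at least as great as E is bad,
% that God can obtain only by permitting E or something equally bad or worse.
\mathrm{Grat}(E) \;\leftrightarrow\; \neg\exists G \Bigl[\, G \succeq E
  \;\wedge\; \Box\bigl(\mathrm{Obtains}(G) \rightarrow \mathrm{Permits}(E)
  \vee \exists E'\,(E' \succeq_{\mathrm{bad}} E \wedge \mathrm{Permits}(E'))\bigr) \Bigr]
```

Here $\succeq$ compares the value of $G$ against the disvalue of $E$, and $\succeq_{\mathrm{bad}}$ ranks evils by badness; this is only one way of cashing out Rowe’s wording.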
There are two main kinds of arguments against abortion: Those based on the idea that we begin existing at conception and those based on the idea that personhood begins at conception. One of the main objections to thinking that our existence begins at conception is the incredulous stare: How can that single cell be me?! …
This paper introduces the axiom of Negative Dominance, stating that, if a lottery f is strictly preferred to a lottery g, then some outcome in the support of f is strictly preferred to some outcome in the support of g. It is shown that, if preferences are incomplete on a sufficiently rich domain, then this plausible axiom, which holds for complete preferences, is incompatible with an array of otherwise plausible axioms for choice under uncertainty. In particular, in this setting, Negative Dominance conflicts with the standard Independence axiom. A novel theory, which includes Negative Dominance, and rejects Independence, is developed and shown to be consistent.
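To illustrate the axiom, here is a minimal Python sketch (the setup and names are ours, not the paper’s) of the claim that Negative Dominance holds for complete expected-utility preferences: if EU(f) > EU(g), the best outcome in f’s support must be strictly preferred to the worst outcome in g’s support.

```python
import random

def eu(lottery, utility):
    """Expected utility of a lottery represented as {outcome: probability}."""
    return sum(p * utility[x] for x, p in lottery.items())

def negative_dominance(f, g, strictly_better):
    """Some outcome in the support of f is strictly preferred to some in g's."""
    return any(strictly_better(x, y) for x in f for y in g)

def random_lottery(outcomes):
    """A random lottery over a random nonempty subset of outcomes."""
    support = random.sample(outcomes, random.randint(1, len(outcomes)))
    weights = [random.random() for _ in support]
    total = sum(weights)
    return {x: w / total for x, w in zip(support, weights)}

random.seed(0)
outcomes = ["a", "b", "c", "d"]
utility = {x: random.uniform(0, 1) for x in outcomes}
better = lambda x, y: utility[x] > utility[y]

violations = 0
for _ in range(1000):
    f, g = random_lottery(outcomes), random_lottery(outcomes)
    if eu(f, utility) > eu(g, utility):   # f strictly preferred to g
        if not negative_dominance(f, g, better):
            violations += 1

print(violations)
```

Since the maximum utility in f’s support is at least EU(f) and the minimum in g’s support is at most EU(g), no violation can occur here; the paper’s point is that this compatibility breaks down once preferences are allowed to be incomplete.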
I respond to Tim Smartt’s (2023) skepticism about epistemic blame. Smartt’s skepticism is based on the claims that i) mere negative epistemic evaluation can better explain everything proponents of epistemic blame say we need epistemic blame to explain; and ii) no existing account of epistemic blame provides a plausible account of the putative force that any response deserving the label “blame” ought to have. He focuses primarily on the prominent “relationship-based” account of epistemic blame to defend these claims, arguing that the account is explanatorily idle, and cannot distinguish between epistemically excused and epistemically blameworthy agents. I argue that Smartt mischaracterizes the account’s role for judgments of epistemic relationship impairment, leading to mistaken claims about the account’s predictions. I also argue that the very feature of the account that Smartt mischaracterizes is key to understanding what epistemic blame does for our epistemic responsibility practices that mere negative epistemic evaluation cannot.
Atomic and close-to-atomic scale manufacturing (ACSM) is the core competence of Manufacturing III. Unlike other conceptions or terminologies that focus only on atomic-level precision, ACSM defines a new realm of manufacturing in which quantum mechanics plays the dominant role in atom/molecule addition, migration, and removal, taking into account the uncertainty principle and the discrete nature of particles. As ACSM is still in its infancy, little of its core proposition has so far been systematically elaborated, so there is a need to understand its concept and vision. This article elucidates the development of ACSM and clarifies its proposition, aiming to achieve a clearer understanding of ACSM and to direct more effective efforts toward this promising area.
For the many friends who’ve asked me to comment on the OpenAI drama: while there are many things I can’t say in public, I can say I feel relieved and happy that OpenAI still exists. This is simply because, when I think of what a world-leading AI effort could look like, many of the plausible alternatives strike me as much worse than OpenAI, a company full of thoughtful, earnest people who are at least asking the right questions about the ethics of their creations, and who—the real proof that they’re my kind of people—are racked with self-doubts (as the world has now spectacularly witnessed). …
SAT solvers can solve a wide array of problems, and the models and proofs of unsatisfiability they emit can be checked by verified software. In this way, the SAT toolchain is trustworthy. However, many applications are not expressed natively in SAT and must instead be encoded into SAT. These encodings are often subtle, and implementations are error-prone. Formal correctness proofs are needed to ensure that implementations are bug-free. In this paper, we present a library for formally verifying SAT encodings, written using the Lean interactive theorem prover. Our library currently contains verified encodings for the parity, at-most-one, and at-most-k constraints. It also contains methods of generating fresh variable names and combining sub-encodings to form more complex ones, such as one for encoding a valid Sudoku board. The proofs in our library are general, and so the library serves as a basis for future encoding efforts.
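The paper’s library is written in Lean; as a language-neutral illustration of the kind of claim such a library verifies, here is a Python sketch (ours, not the library’s API) of the standard pairwise at-most-one encoding, with its correctness checked exhaustively on four variables:

```python
from itertools import combinations, product

def at_most_one(variables):
    """Pairwise at-most-one encoding: for each pair, at least one is false.

    Clauses are lists of integer literals, negative meaning negated
    (DIMACS convention).
    """
    return [[-a, -b] for a, b in combinations(variables, 2)]

def satisfies(clauses, assignment):
    """assignment maps variable -> bool; each clause needs one true literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

xs = [1, 2, 3, 4]
clauses = at_most_one(xs)   # 6 binary clauses for 4 variables
for bits in product([False, True], repeat=len(xs)):
    assignment = dict(zip(xs, bits))
    # Correctness: the encoding accepts exactly the assignments
    # with at most one true variable.
    assert satisfies(clauses, assignment) == (sum(bits) <= 1)
print(len(clauses))
```

A Lean proof generalises this brute-force check to all n, which is what makes the verified encodings reusable.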
This is an English translation of the records of Lu Cheng (Lu Cheng lu 陸澄錄) in the first volume (juan shang 卷上) of the Record of Instructions for Practice (Chuan xi lu 傳習錄). Wang Yangming’s followers kept records of statements he made and conversations he held when discussing his Ruist learning with them. During and after his lifetime, these records were compiled in one or more volumes and titled Record of Instructions for Practice (or something similar). Many versions, each with different content, were published over the course of the sixteenth and seventeenth centuries. Some editions included a volume with a compilation of important correspondence and pedagogical writings. Among these, the three-juan version included in the
The evolution of complex life forms, such as multicellular organisms, is the result of a number of evolutionary transitions in individuality (ETIs). Several attempts have been made to explain their origins, many of which have been internalist (i.e., based largely on internal properties of these life forms’ ancestors). Here, we show how an externalist perspective, via the ecological scaffolding model in which properties of complex life forms arise from an external scaffold, can shed new light on the question of ETIs. Ultimately, we anticipate that progress in the field will come from recognizing the importance of both the internalist and externalist modes of explanation for ETIs. We illustrate this by considering an extension of the ecological scaffolding model by niche construction, in which particles modify the environment that later becomes the scaffold giving rise to collective-level individuality.
In the first volume of Law, Legislation and Liberty (1973, Chaps. 1 and 2), F.A. Hayek presents his famous criticism of the constructivist (or rationalist) approach to human history. As Hayek puts it, this approach assumes that humans are fully rational and thus can construct perfect social institutions, because reason can advise them on how to do so impeccably. In this regard, Hayek (1973, Chap. 1, p. 12) writes: “Complete rationality of action in the Cartesian sense demands complete knowledge of all the relevant facts. A designer or engineer needs all the data and full power to control or manipulate them if he is to organize the material objects to produce the intended result. But the success of action in society depends on more particular facts than anyone can possibly know. And our whole civilization in consequence rests, and must rest, on our believing much that we cannot know [Hayek’s italics] to be true in the Cartesian sense.”
If there is any consensus about knowledge in contemporary epistemology, it is that there is one primary kind: knowledge-that. I put forth a view, one I find in the works of Aristotle, on which knowledge-of – construed in a fairly demanding sense, as being well-acquainted with things – is the primary, fundamental kind of knowledge. As to knowledge-that, it is not distinct from knowledge-of, let alone more fundamental, but instead a species of it. To know that such-and-such, just like to know a person or place, is to be well-acquainted with a portion of reality – in this case a fact. In part by comparing classic Gettier cases to cases in which one has true impressions of but fails to know a person, I argue that this account not only respects our intuitions about knowledge-that – in particular that it is or entails non-accidentally true justified belief – but also explains them, providing a compelling analysis.
Conflict over who belongs in women-only spaces is now part of mainstream political debate. Some think women-only spaces should exclude on the basis of sex, and others think they should exclude on the basis of a person’s self-determined gender identity. Many who take the latter view appear to believe that the only reason for taking the former view could be antipathy towards men who identify as women. In this paper, we’ll revisit the second-wave feminist literature on separatism, in order to uncover the reasons for women-only spaces as feminists originally conceived them. Once these reasons are understood, those participating in debates over women-only spaces will be in a better position to adjudicate on whether shifting from sex to gender identity puts any significant interests at stake.
The alienation constraint on theories of well-being has been influentially expressed thus: ‘what is intrinsically valuable for a person must have a connection with what he would find in some degree compelling or attractive …. It would be an intolerably alienated conception of someone’s good to imagine that it might fail in any such way to engage him’ (Railton 1986: 9). Many agree this claim expresses something true, but there is little consensus on how exactly the constraint is to be understood. Here, I clarify the sense in which the quote offers a basic constraint on theories of well-being—a constraint that should be adopted by (e.g.) hedonists, desire satisfactionists, and objective list theorists alike. This constraint focuses on affective engagement, or positive affective stances in connection with a proposed good. I show that the constraint explains a near-universal intuition, and rules out a number of well-known theories of well-being.
Conditional statements are ubiquitous, from promises and threats to reasoning and decision making. By now, logicians have studied them from many different angles, both semantic and proof-theoretic. This paper suggests two more perspectives on the meaning of conditionals, one dynamic and one geometric, that may throw yet more light on a familiar and yet in some ways surprisingly elusive and many-faceted notion.
Supersymmetry (SUSY) has long been considered an exceptionally promising theory. A central role in this promise has been played by naturalness arguments. Yet, given the absence of experimental findings, it is questionable whether the promise will ever be fulfilled. Here, I provide an analysis of the promises associated with SUSY, employing a concept of pursuitworthiness. A research program like SUSY is pursuitworthy if (1) it has the plausible potential to provide high epistemic gain and (2) that gain can be achieved with manageable research efforts. Naturalness arguments have been employed to support both conditions (1) and (2). First, SUSY has been motivated by way of analogy: the proposed symmetry between fermions and bosons is supposed to ‘protect’ the small Higgs mass from large quantum corrections, just as the electron mass is protected by chiral symmetry. Thus, SUSY held the promise of solving a major problem of the Standard Model of particle physics. Second, naturalness arguments have been employed to indicate that such gain is achievable at relatively low cost: SUSY discoveries seemed to be well within reach of upcoming high-energy experiments. While the first part of the naturalness argument may have the right form to support considerations of pursuitworthiness, the second part of the argument has been problematically overstated.
Records of the past are a pervasive feature of our world, referred to in many areas of both physics and philosophy, but there is no widespread consensus on what records are. I will present a new account of records in terms of robustness against noise. This account highlights previously overlooked features of records: their use of redundancy and the fact that they must be treated macroscopically. These features have implications for the use of records in quantum mechanics and thermodynamics, and they show how the misuse of records has caused problems in the philosophy of physics literature.
Records frequently appear in explanations of the emergence of classicality from quantum mechanics. Quantum Darwinism in particular argues that the interaction between a system and its environment produces redundant records in the environment. The records allow the state of the system to be determined independently by many observers, which is identified as a key criterion for classicality. This models the emergence of the classical world in an information-theoretic framework. It differs from the more commonly used standard for emergence in the philosophy of physics literature, which relies on the instantiation of classical dynamics, motivated by the focus on the dynamics of the reduced density matrix in quantum decoherence. The goal of this paper is to examine the use of records in Quantum Darwinism and to show how understanding what a record is allows us to relate the information-theoretic emergence described by Quantum Darwinism to accounts of emergence that focus on dynamical laws. This tells us why records play such a central role in emergent classicality.
A number of authors, including me, have argued that the output of our most complex climate models, that is, of global climate models and Earth system models, should be assessed possibilistically. Worries about the viability of doing so have also been expressed. I examine the assessment of the output of relatively simple climate models in the context of discovery and point out that this assessment is of epistemic possibilities. At the same time, I show that the concept of epistemic possibility used in the relevant studies does not fit available analyses of this concept. I then provide an alternative analysis that does fit the studies and broad climate modelling practices, and that meshes with my existing view that climate model assessment should typically be of real possibilities. On my analysis, to assert that a proposition is epistemically possible is to assert that it is not known to be false and is consistent with at least approximate knowledge of the basic way things are. Finally, I consider some of the implications of my discussion for available possibilistic views of climate model assessment and for worries about such views. I conclude that my view helps to address worries about such assessment and permits using the full range of climate models in it.
A formal theory of causal reasoning is presented that encompasses both Pearl’s approach to causality and several key formalisms of nonmonotonic reasoning in Artificial Intelligence. This theory will be derived from a single rationality principle of causal acceptance for propositions. However, this principle will also set the theory of causal reasoning apart from common representational approaches to reasoning formalisms.
In the last ten years there has been an increase in the use of artificial neural networks to model brain mechanisms, giving rise to a deep learning revolution in neuroscience. This chapter focuses on the ways deep convolutional neural networks (DCNNs) have been used in visual neuroscience. A particular challenge in this developing field is the measurement of similarity between DCNNs and the brain. We survey the similarity measures neuroscientists use and analyse their merit for the goals of causal explanation, prediction, and control. In particular, we focus on two recent intervention-based methods of comparing DCNNs and the brain that are based on linear mapping (Bashivan et al., 2019; Sexton and Love, 2022), and analyse whether these represent an improvement. While we conclude that explanation has not been reached, for reasons of underdetermination, progress has been made with regard to prediction and control.
Well-being, happiness, and quality of life are now established objects of social and medical research. Does this science produce knowledge that is properly about well-being? I call this The Question of Value-Aptness and, over the course of the book (Alexandrova 2017), defend the following answers to it.