Nietzsche characterizes the Third Essay of On the Genealogy of Morality as “offer[ing] the answer to the question whence the ascetic ideal […] derives its tremendous power although it is the harmful ideal par excellence” (EH GM). What draws people to ideals of self-denial and self-punishment? In short, I will argue that, for Nietzsche, it is the same thing that draws many to physical self-harm: the need to stop feeling like you’re going to burst out of your skin. “The ascetic ideal,” in Nietzsche’s sense, is an ideal of categorically denying certain desires (instincts, impulses, etc.). What distinguishes this type of ideal is a certain “valuation[al]” (GM III:11) stance — a stance of condemnation (demonization, mistrust) of certain of one’s desires, and correspondingly of oneself for having them or “giving in” to them (cf. III:8, 10). Perhaps one feels a pang of guilt at the first glimpse of ill-will or unforgiveness in oneself. Or one is moved in confession to include simply that one “felt lust,” jealousy, anger. Merely having the desire is treated as problematic, something to feel bad about, reason for punishment.
We’re always reading about how the pandemic has created a new emphasis on preprints, so it stands to reason that non-reviewed preposts would now have a place in blogs. Maybe then I’ll “publish” some of the half-baked posts languishing in draft on errorstatistics.com. …
When is it legitimate for a government to ‘nudge’ its citizens, in the sense described by Richard Thaler and Cass Sunstein (2008)? In their original work on the topic, Thaler and Sunstein developed the ‘as judged by themselves’ (or AJBT) test to answer this question (Thaler & Sunstein, 2008, 5). In a recent paper, L. A. Paul and Sunstein (ms) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a personally transformative experience, that is, one that results in our values changing (Paul, 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my aggregate utility account of how to choose in the face of what Edna Ullmann-Margalit (2006) calls big decisions, that is, decisions that lead to these personally transformative experiences (Pettigrew, 2019, Chapters 6 and 7).
Anicius Manlius Severinus Boethius (born: circa 475–7 C.E.,
died: 526? C.E.) has long been recognized as one of the most important
intermediaries between ancient philosophy and the Latin Middle Ages
and, through his Consolation of Philosophy, as a talented
literary writer, with a gift for making philosophical ideas dramatic
and accessible to a wider public. He had previously translated
Aristotle’s logical works into Latin, written commentaries on
them as well as logical textbooks, and used his logical training to
contribute to the theological discussions of the time. All these
writings, which would be enormously influential in the Middle Ages,
drew extensively on the thinking of Greek Neoplatonists such as
Porphyry and Iamblichus.
Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious. Then we'll need to decide what to do with those robots -- what kind of rights, if any, to give them. …
We propose that measures of information integration can be more straightforwardly interpreted as measures of agency rather than of consciousness. This may be useful to the goals of consciousness research, given how agency and consciousness are “duals” in many (though not all) respects.
The notion of growth is one of the most studied notions within economic theory and, traditionally, it is accounted for on the basis of a positivist thesis according to which assumptions are not relevant, as long as economic models have acceptable predictive power. On this view, it does not matter whether assumptions are realistic or not. Arguments against this principle may involve a defense of realistic assumptions over highly idealized or false ones. This article aims in a different direction. Instead of demanding more realism, we can accept the spirit of the positivist thesis but criticize the circularity that may arise from combining the different assumptions that are necessary for the explanation of economic growth in mainstream economics. Such circularity is a key aspect of the well-known problem of providing microfoundations for macroeconomic properties. It is here suggested that the notion of emergence could be appropriate to arrive at a better understanding of growth, clarifying the issues related to circularity, but without totally rejecting the usefulness of unrealistic assumptions.
Pereboom and Caruso propose the quarantine model as an alternative to existing models of criminal justice. They appeal to the established public health practice of quarantining people, which is believed to be effective and morally justified, to explain why, in criminal justice, it is also morally acceptable to detain wrongdoers without assuming the existence of retrospective moral responsibility. Wrongdoers in their model are treated as carriers of dangerous diseases and as such should be preventively detained (or rehabilitated) until they no longer pose a threat to society. Our main concern in this paper is that Pereboom and Caruso adopt an idiosyncratic interpretation of quarantine regulations. We highlight a set of important disanalogies between their quarantine model and the quarantine regulations currently adopted in public health policies. More specifically, we argue that the similarities that Pereboom and Caruso propose to substantiate their analogy are not consistent—despite what they claim—with the regulations underlying quarantine as an epidemiological process. We also notice that certain quarantine procedures adopted in public health systems are inadequate to deal with criminal behaviors. On these grounds, we conclude that Pereboom and Caruso should not appeal to the quarantine analogy to substantiate their view, unless they address the issues and criticism we raise in this paper.
Normativity is a fundamental feature of selfhood. In the Modern, Enlightenment tradition, being a self is having the capacity to be autonomous, that is, to be responsible for one's own actions and beliefs. To be a self, an organism must be capable of freely following cognitive, behavioural and linguistic norms. It must be able to justify its positions by giving reasons to others -- and to itself. These capacities are contrasted not only with mechanical causation in the physical world, but with the heteronomous status of slaves, of individuals controlled by hypnosis or evil neurosurgeons, or of those manipulated by propaganda or advertising. Thoughts and actions are only mine in so far as I, as an independent individual, take responsibility for them. Only by such deliberative and conscious activity can I take ownership of them. Descartes' rejection of received opinion and Kant's insistence that a Subject must be self-regulating exemplify this Modern tradition.
In this paper, I expand on Sarkar’s (2019) view that the term ‘biodiversity’ should be understood primarily as a normative concept with a descriptive component molded to the evaluation; hence, ‘biodiversity’ is a thick term. The idea of inseparability is defended by appeal to Bernard Williams’s treatment of thick terms as context-oriented, whilst taking issue with McDowell’s “anti-disentangling” argument and other contemporary arguments for separability. In contrast to other papers in the area of environmental pragmatism, this paper argues that conservation scientists will achieve greater success in conservation efforts by framing ‘biodiversity’ as a primarily normative concept tied to the value system of the local community.
The historically-influential perceptual analogy states that intuitions and perceptual experiences are alike in many important respects. Phenomenalists defend a particular reading of this analogy according to which intuitions and perceptual experiences share a common phenomenal character. Call this the 'phenomenalist thesis'. The phenomenalist thesis has proven highly influential in recent years. However, insufficient attention has been given to the challenges it raises for theories of intuition. In this paper, I first develop one such challenge. I argue that if we take the idea that intuitions and perceptual experiences have a common phenomenal character seriously, then a version of the familiar problem of perceptual presence arises for intuitions. I call this the 'problem of intuitive presence'. In the second part of the paper I sketch a novel enactivist solution to this problem.
Aesthetic values have featured in scientific practice for centuries, shaping what theories and experiments are pursued, what explanations are considered satisfactory and whether theories are trusted. How do such values enter at the different levels of scientific practice, and should they influence our epistemic attitudes? In this chapter I explore these questions and how, as science progresses, the questions we ask about the role of aesthetic values might change. I start this chapter with an overview of the traditional philosophical distinction between context of discovery and context of justification, showing how aesthetic values were taken to be relevant to scientific discovery but not to scientific evaluation, which was regarded as value-free. I then proceed with an exploration of different levels of scientific activity, from designing experiments and reconstructing fossils to evaluating data. In this discussion we will see that the traditional distinction between context of discovery and justification seems to break down, as aesthetic values shape all levels of scientific activity. I then turn to the epistemological question: can beauty play an epistemic role and is it to be trusted, or is it a suspect value that might bias scientific inquiry? I explore how we could justify the epistemic import of aesthetic values and present some concerns as well. In the last section I ask whether we should expect the questions surrounding aesthetic values in scientific practice to change with scientific progress, as we enter the era of post-empirical physics and big data science, and make more and more discoveries using AI.
I provide a critical commentary regarding the attitude of the logician and the philosopher towards the physicist and physics. The commentary is intended to showcase how a general change in attitude towards making scientific inquiries can be beneficial for science as a whole. However, such a change can come at the cost of looking beyond the categories of the disciplines of logic, philosophy and physics. It is through self-inquiry that such a change is possible, along with the realization of the essence of the middle that is otherwise excluded by choice. The logician, who generally holds a reverential attitude towards the physicist, can then actively contribute to the betterment of physics by improving the language through which the physicist expresses his experience. The philosopher, who otherwise chooses to follow the advancement of physics and gets stuck in the trap of sophistication of language, can then be of guidance to the physicist on intellectual grounds by having the physicist’s experience himself. In the course of this commentary, I provide a glimpse of how a truthful conversion of verbal statements to physico-mathematical expressions unravels the hitherto unrealized connection between the Heisenberg uncertainty relation and Cauchy’s definition of the derivative that is used in physics. The commentary can be an essential reading if the reader is willing to look beyond the categories of logic, philosophy and physics by being ‘nobody’.
As with most topics in philosophy, there is no consensus about what experimental philosophy is. Most broadly, experimental philosophy involves using scientific methods to collect empirical data for the purpose of casting light on philosophical issues. Such a definition threatens to be too broad, however: Taking the nature of matter to be a philosophical issue, research at the Large Hadron Collider would count as experimental philosophy. Others have suggested more narrow definitions, characterizing experimental philosophy in terms of the use of scientific methods to investigate intuitions. This threatens to be too narrow, however, excluding such work as Eric Schwitzgebel’s comparison of the rates of theft of ethics books to similar volumes from other areas of philosophy for the purpose of finding out whether philosophical training in ethics promotes moral behavior. While restricting experimental philosophy to the study of intuitions is too narrow, this nonetheless covers most of the research in this area. Focusing on this research, we begin by discussing some of the methods that have been used by experimental philosophers. We then distinguish between three types of goals that have guided experimental philosophers, illustrating these goals with some examples.
Eugen Fischer and John Collins have brought together an impressive, and important, series of essays concerning the methodological debates between rationalists and naturalists, and how these debates have been impacted by work in experimental philosophy. The work at issue concerns the evidential value of intuitions, and as such is only a small part of the experimental philosophy corpus as I understand it. In fact, Fischer and Collins define experimental philosophy in this narrow sense in their introduction. On their view, experimental philosophy “builds on the assumption that, for better or worse, intuitions are crucially involved in philosophical work” (3). The parenthetical serves to emphasize that such work could either be pursued from a positive perspective aiming to vindicate the use of intuitions in philosophy or from a negative perspective aiming to undermine that use. Noting these two perspectives, it might then seem that experimental philosophy is neutral with regard to methodological debate: “experimental philosophy is not a party to the dispute between methodological rationalism and naturalism, but offers a new framework for settling it” (23).
In this paper, we use the case of the COVID-19 pandemic in Europe to address the question of what kind of knowledge we should incorporate into public health policy. We show that policy-making in Europe during the COVID-19 pandemic has been biomedicine-centric in that its evidential basis marginalised input from non-biomedical disciplines. We then argue that in particular the social sciences could contribute essential expertise and evidence to public health policy in times of biomedical emergencies and that we should thus strive for a tighter integration of the social sciences in future evidence-based policy-making. This demand faces challenges on different levels, which we identify and discuss as potential inhibitors for a more pluralistic evidential basis.
Writing comments on a post about adversarial collaboration feels like a place where I should be adversarial (if in a collaborative spirit). But I agree with basically everything Eric says here. Frankly, this is all spot on. You probably don’t want to read 500 words from me just saying “yep, this” and agreeing with his excellent, sensible advice, though. So, let me attempt to be provocative: Eric doesn’t go far enough! (Not that he was trying to, of course.) All philosophers should be asking themselves what empirical evidence would actually test their views. Collaboration should be the rule, not the exception. And we should expect collaborations to have an adversarial element, treating this as a feature, not a bug.
The field that has come to be known as the Critical Philosophy of
Race is an amalgamation of philosophical work on race that largely
emerged in the late 20th century, though it draws from earlier work. It
departs from previous approaches to the question of race that dominated
the modern period up until the era of civil rights. Rather than
focusing on the legitimacy of the concept of race as a way to
characterize human differences, Critical Philosophy of Race approaches
the concept with a historical consciousness about its function in
legitimating domination and colonialism, engendering a critical
approach to race and hence the name of the sub-field.
One body of research in experimental philosophy indicates that non-philosophers by and large do not employ the concept of phenomenal consciousness. Another body of research, however, suggests that people treat phenomenal consciousness as essential for having free will. In this chapter, we explore the tension between these findings. We suggest that the dominant, ordinary usages of ‘consciousness’ concern notions of being awake, aware, and exercising control, all of which bear a clear connection to free will. Based on this, we argue that findings purporting to show that people take the capacity for phenomenal consciousness to be necessary for free will are better interpreted in terms of a non-phenomenal understanding of consciousness. We explore this suggestion by calling on extant work on the dimensions of mind perception, and we expand on it, presenting the results of a new study employing a global sample.
Most authors who discuss willpower assume that everyone knows what it is, but our assumptions differ to such an extent that we talk past each other. We agree that willpower is the psychological function that resists temptations – variously known as impulses, addictions, or bad habits; that it operates simultaneously with temptations, without prior commitment; and that use of it is limited by its cost, commonly called effort, as well as by the person’s skill at executive functioning. However, accounts are usually not clear about how motivation functions during the application of willpower, or how motivation is related to effort. Some accounts depict willpower as the perception or formation of motivational contingencies that outweigh the temptation, and some depict it as a continuous use of mechanisms that interfere with reweighing the temptation. Some others now suggest that impulse control can bypass motivation altogether, although they refer to this route as habit rather than willpower.
I argue against the claim that the fundamental form of trust is a 2-place relation of A trusting B and in favour of the fundamental form being a 4-place relation of A, by ψ-ing, trusting B to φ. I characterize trusting behaviour as behaviour that knowingly makes one reliant on someone doing what they are supposed to do in the collaborative enterprise that the trusting behaviour belongs to. I explain how trust is involved in the following collaborative enterprises: knowledge transfer – i.e. telling someone something; maintaining a relationship; and passing responsibility for an action on to someone else. And I finish by showing how our talk of trust in non-collaborative contexts – e.g. trusting a branch to support one’s weight – may be explained by reference to the central sort of collaborative trust. Key words: collaboration; reliance; communication; Faulkner; Simpson; Jones.
Scholars, journalists, and activists working on climate change often distinguish between “individual” and “structural” approaches to decarbonization. The former concern choices individuals can make to reduce their “personal carbon footprint” (e.g., eating less meat). The latter concern changes to institutions, laws, and other social structures. These two approaches are often framed as oppositional, representing a mutually exclusive forced choice between alternative routes to decarbonization. After presenting representative samples of this oppositional framing of individual and structural approaches in environmental communication, we identify four problems with oppositional thinking and propose five ways to conceive of individual and structural reform as symbiotic and interdependent.
As the presence of artificial intelligence (AI) becomes increasingly common in the workplace, Human Resources (HR) professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. In this paper, our aim is to examine this concern, which has received little attention in debates about the ethics of algorithms. We begin by discussing a recent survey of HR professionals, which reports current attitudes about the use of such AI-based tools in the workplace. In general, expectations are such that in the next few years, AI will play a prominent role in the HR toolkit, especially for hiring and onboarding purposes. Perhaps the most common objection to the use of hiring algorithms or algorithmic decision-making systems in general is that they have the potential to be biased or lead to objectionable patterns of discrimination. However, while HR professionals have registered concerns about bias and discrimination, interestingly the most cited worry in the recent survey that we consider is that hiring algorithms will “dehumanize” the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e., removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e., conceiving of other humans as subhuman), we argue that fears about dehumanizing the hiring process can be fruitfully investigated by considering its potentially negative impact on the employee-employer relationship. 
We argue that there are good independent reasons to accept a genuine, substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We go on to argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms. However, as the use of recruitment algorithms becomes more widespread, difficult trade-offs may need to be made, since the human element, as it features in the hiring process, can also be the source of its own problems.
In his early work on the problem of coordination, Hans Reichenbach introduced axioms of coordination to describe the relationship between theory and observation. His insistence that these axioms are determinable a priori, however, causes him to ignore the normative dimensions of scientific inquiry and, in turn, generates a misleading interpretation of the theory-observation relationship. In response, I propose an alternative approach that describes this relationship through the framework of scientific practices. My argument will draw on two examples that have not been explored by the philosophical literature in the context of coordination problems: the clinical definition of death and Stanley Prusiner’s prion hypothesis.
warming—originally organized by readily identifiable vested interests—has by now recruited a large popular constituency of declared “skeptics” increasingly disposed to “take a stand”: some of them opposed to government regulation in general, some resistant to any claims to intellectual authority (perhaps especially scientific), and some mobilized by a version of the right to individual freedom of opinion. As a result, confidence in the expertise of scientists has reached an all-time low: Internet sites, radio talk shows, and television channels preferentially transmit “contrarian” attacks on the credibility of climate scientists. Even our most responsible newspapers and journals, in their very commitment to the traditional ethic of “balance,” sometimes contribute to the widespread misimpression that climate scientists are deeply divided about both the extent of the dangers we face and the relevance of human activity to global warming. Not knowing who or what to believe, the natural response for most people is to do nothing, and the consequence, as Thomas Homer-Dixon wrote last year for the New York Times: “Climate policy is gridlocked, and there’s virtually no chance of a breakthrough” (2010). Meanwhile, as evidence both of the role of human contributions to global warming and the dangers of that warming continues to mount, consensus among climate scientists grows ever stronger, and those of us who attend to that evidence are increasingly alarmed.
Astronomer Charles Piazzi Smyth’s 1864–65 expedition to measure the Great Pyramid of Giza was planned around a system of linear measures designed to guarantee the validity of his measurements and settle ongoing uncertainties as to the Pyramid’s true size. When the intended system failed to come together, Piazzi Smyth was forced to improvise a replacement that presented a fundamental challenge to the metrological enterprise upon which his system had been based. The astronomer’s new system centered around a small lump of basalt, now held at Cambridge’s Whipple Museum of the History of Science, which nucleated a wide array of material and scientific considerations. Through a bipartite analysis of the physical and narrative dimensions of Piazzi Smyth’s basalt Standard, I develop the implications of its use and construction for understanding the material constitution of scientific instruments. In particular, I illustrate how instruments are locally constituted through co-accountable systems and how their material features become integrally implicated in both their uses and meanings.
In 1854 the biologist Thomas Henry Huxley pointed to a significant change in the way that reviewers were treating books that endorsed deeply flawed scientific theories. In the past, “when a book had been shown to be a mass of pretentious nonsense,” it “quietly sunk into its proper limbo. But these days appear, unhappily, to have gone by.” Due to the “utter ignorance of the public mind as to the methods of science and the criterion of truth,” scientists were now forced to review such books in order to expose their deficiencies (Huxley 1903, 1). Huxley’s observation indicates how the development of a mass reading audience in mid-nineteenth century Britain transformed the very nature of scientific controversy. Scientists were compelled to debate the validity of theories in new public sites, not just in exclusive scientific societies or in specialized scientific journals with limited circulation. It was during the nineteenth century that public controversy—not limited to science alone—became possible for the first time. In this short piece I will discuss how the “communications revolution” produced a public space for the debate over evolutionary theory in mid-nineteenth century Britain. I will focus on periodicals as one of those public spaces in which the debate took place.
In this essay, I explore Justus Buchler’s ordinal naturalism with the goal of establishing how his phenomenological approach extends the range of human inquiry to include the many and varied traits of natural phenomena that are not “simply” the result of sensate experience or material functions. To achieve this goal I critically assess Buchler’s notion of “ontological parity”: the idea that abstract phenomena such as values, relations, ideals, and other mental contents are just as relevant as sense-data when one attempts to provide an adequate description of the world in naturalistic terms. I argue that certain phenomena, subsisting within what Buchler calls the “proceptive domain,” are legitimate objects of knowledge as they are part of a larger domain of phenomenological analysis: nature more broadly and justly understood. It is my view that in the attempt to describe the natural world Buchler’s ordinal naturalism succeeds where other forms of naturalism fail, because his form of naturalism offers a more capacious view of nature that attempts to describe whatever is, in any way, rather than focusing only on what is readily apparent to specific forms of observation that may privilege one domain of analysis over another. I draw the conclusion that because Buchler’s ordinal naturalism contains within it a working principle of ontological parity, his approach to nature fulfills the criteria of the phenomenological method, and so I title his ordinal naturalism an ordinal phenomenology (Corrington 1992, 1-6, 9-14). Ultimately it is my aim to bring Buchler’s thought into closer connection with continental phenomenology, as well as to illustrate a more just and open understanding of nature through an analysis of his unique variety of philosophical naturalism.
I believe that tenured historians, philosophers, and sociologists of science— when presented with the opportunity—have a professional obligation to get involved in public controversies over what should count as science. I stress ‘tenured’ because the involved academics need to be materially protected from the consequences of their involvement, given the amount of misrepresentation and abuse that is likely to follow, whatever position they take. Indeed, the institution of academic tenure justifies itself most clearly in such heat‐seeking situations, where one may appear to offer a reasoned defense for views that many consider indefensible. To be sure, the opportunities for involvement will vary in kind and number, but I believe that we are obliged to embrace them. In the specific case of ‘demarcation’ questions of what counts as science, the people who possess the sort of general and comparative knowledge most relevant for adjudicating this matter are historians, philosophers, and sociologists of science—not professional scientists unschooled in these areas.
The last decade of the Qing dynasty (1644-1911) and Republican period (1912-1949) saw intensive efforts to revise the Qing Code, promulgate modern legal codes based on Japanese and German law, establish a modern system of courts, and develop a professional corps of lawyers and jurists (Huang 2001; Xu 2001; Yeung 2003; Young 2004; Neighbors 2004). These institutional reforms were implemented as part of the drive to have extraterritoriality rescinded and safeguard the sovereignty of the Qing dynasty and then Republic of China. The reforms were accompanied by new categories within civil and criminal law (including a new conceptual distinction between the two), new conceptions of legal knowledge and expertise, and rich discussions over sources of law which took place within the legal realm as well as the readership of Republican newspapers and journals (Young 2004; Lean 2007). If, as Roger Berkowitz (2005, 1) writes in his study of scientific codification in continental Europe, “in a legal system, there must be some way that the law comes to be known,” how did ways of knowing law change during this period of legal reform and broader intellectual change? Through a survey of jurisprudence textbooks and other legal publications, this paper argues that writers in early 20th-century China came to define jurisprudence (faxue 法學, falixue 法理學) in positivistic terms, ultimately using new conceptions of science (kexue 科學) and social science (shehui kexue 社會科學) to identify its place within a new ordering of modern knowledge.