1.
    There are two main strands of argument regarding the value-free ideal (VFI): desirability and achievability (Reiss and Sprenger 2020). In this essay, I will argue for what I will call a compatibilist account of upholding the VFI, focusing on its desirability even if the VFI is unachievable. First, I will explain what the VFI is. Second, I will show that striving to uphold the VFI (desirability) is compatible with the rejection of its achievability. Third, I will demonstrate that the main arguments against the VFI do not refute its desirability. Finally, I will provide arguments for why it is desirable to strive to uphold the VFI even if it is unachievable, and show what role it can play in scientific inquiry. There is no single definition of the VFI, but the most common way to interpret it is that non-epistemic values ought not to influence scientific reasoning (Brown 2024, 2). Non-epistemic values are understood as certain ethical, social, cultural or political considerations. It is therefore the role of epistemic values, such as accuracy, consistency, empirical adequacy and simplicity, to be part of and to ensure proper scientific reasoning.
    Found 1 week ago on PhilSci Archive
  2.
    There is an overwhelming abundance of work in AI ethics. This growth is chaotic because of how sudden it is, its volume, and its multidisciplinary nature. This makes it difficult to keep track of debates and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between ‘AI’ and ‘ethics’ can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; ethics of AI. I elucidate the features of these three kinds of AI ethics, characterize their research questions, and identify the kind of expertise that each kind needs. I also show how certain criticisms of AI ethics are misplaced, being made from the point of view of one kind of AI ethics against another kind with different goals. All in all, this work sheds light on the nature of AI ethics and sets the grounds for more informed discussions about the scope, methods, and training of AI ethicists.
    Found 1 week ago on PhilSci Archive
  3.
    According to the principle of existential inertia:

        - If x exists at t1, t2 > t1, and there is no cause of x’s not existing at t2, then x exists at t2.

    This sounds weird, and one way to get at the weirdness for me is to put it in terms of relativity theory. …
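    Formally, the principle is a universally quantified conditional; a minimal first-order sketch, using hypothetical predicates E(x, t) for "x exists at t" and Cause(c, p) for "c causes p":

        \forall x\, \forall t_1\, \forall t_2\; [\, (E(x, t_1) \wedge t_2 > t_1 \wedge \neg \exists c\, \mathrm{Cause}(c, \neg E(x, t_2))) \rightarrow E(x, t_2) \,]

    One way the relativistic weirdness surfaces in this rendering: the clause t_2 > t_1 presupposes a frame-independent ordering of times, which relativity denies for spacelike-separated events.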
    Found 1 week ago on Alexander Pruss's Blog
  4.
    A simplified version of Goedel’s first incompleteness theorem (it’s really just a special case of Tarski’s indefinability of truth) goes like this:

        - Given a sound semidecidable system of proof that is sufficiently rich for arithmetic, there is a true sentence g that is not provable.

    …
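    The construction behind such a g can be compressed into two steps (a standard sketch, not the post's own wording), writing Prov for the system's arithmetized provability predicate:

        g \;\leftrightarrow\; \neg \mathrm{Prov}(\ulcorner g \urcorner) \qquad \text{(diagonal lemma)}

    If Prov(⌜g⌝) held, g would be provable and hence, by soundness, true; the biconditional would then yield ¬Prov(⌜g⌝), a contradiction. So g is unprovable, and by the biconditional it is true. Semidecidability is what makes Prov expressible within arithmetic in the first place.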
    Found 1 week ago on Alexander Pruss's Blog
  5.
    Critical theory arose as a response to perceived inadequacies in Marxist theory, and perceived changes in modern capitalism. Critical theorists emphasized the ability of capitalism to shape the thought and experience of individuals: it distorts how modern society and its products appear to us, and how we think about them. So, aesthetic experience – like all other experience – is moulded to and compromised by capitalism. For critical theory, if we seek to understand aesthetics we need to acknowledge this distorting effect. Critical theorists ask us to pay attention to how art, and aesthetic experience, suffer under capitalism, and become part of the way in which capitalism prevents the formation of a better life.
  6.
    In this paper we provide an ontological analysis of so-called “artifactual functions” by deploying a realizable-centered approach to artifacts which we have recently developed within the framework of the upper ontology Basic Formal Ontology (BFO). We argue that, insofar as material artifacts are concerned, the term “artifactual function” can refer to at least two kinds of realizable entities: novel intentional dispositions and usefactual realized entities. They inhere, respectively, in what we previously called “canonical artifacts” and “usefacts”. We show how this approach can help to clarify functions in BFO, whose current elucidation includes reference to the term “artifact”. In our framework, having an artifactual function implies being an artifact, but not vice versa; in other words, there are artifacts that lack an artifactual function.
    Found 1 week ago on Kathrin Koslicki's site
  7.
    Prioritarianism is generally understood as a kind of moral axiology. An axiology provides an account of what makes items, in this case outcomes, good or bad, better or worse. A moral axiology focuses on moral value: on what makes outcomes morally good or bad, morally better or worse. Prioritarianism, specifically, posits that the moral-betterness ranking of outcomes gives extra weight (“priority”) to well-being gains and losses affecting those at lower levels of well-being. It differs from utilitarianism, which is indifferent to the well-being levels of those affected by gains and losses.[1] Although it is possible to construe prioritarianism as a non-axiological moral view, this entry follows the prevailing approach and trains its attention on axiological prioritarianism.
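    The contrast with utilitarianism admits a compact formal gloss (the particular weighting function below is an illustration, not the entry's):

        \text{Utilitarian: } V(O) = \sum_i w_i \qquad \text{Prioritarian: } V(O) = \sum_i g(w_i), \quad g' > 0,\; g'' < 0,\; \text{e.g. } g(w) = \sqrt{w}

    With g(w) = \sqrt{w}, a 3-unit gain from w = 1 to w = 4 adds \sqrt{4} - \sqrt{1} = 1 to the outcome's value, while the same 3-unit gain from w = 97 to w = 100 adds only about 0.15; utilitarianism scores both at 3. The strict concavity of g just is the extra weight given to the worse off.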
    Found 1 week, 1 day ago on Stanford Encyclopedia of Philosophy
  8.
    Dehumanization is widely thought to occur when someone is treated or regarded as less than human. However, there is an ongoing debate about how to develop this basic characterization. Proponents of the harms-based approach focus on the idea that to dehumanize someone is to treat them in a way that harms their humanity; whereas proponents of the psychological approach focus on the idea that to dehumanize someone is to think of them as less than human. Other theorists adopt a pluralistic view that combines elements of both approaches. In addition to explaining different views on what it means to dehumanize someone, this article focuses on related issues, such as how to resolve the so-called “paradox of dehumanization”; the causes and consequences of dehumanization; the sorts of contexts in which dehumanization typically occurs; and the relation between dehumanization and objectification.
    Found 1 week, 1 day ago on Stanford Encyclopedia of Philosophy
  9.
    This paper defends the view that logic gives norms for reasoning. This view is often thought to be problematic given that logic is not itself a theory of reasoning and that valid inferences can lead to silly or pointless beliefs. To defend it, I highlight an overlooked distinction between norms for reasoning and norms for belief. With this distinction in hand, I motivate and defend a straightforward account of how logic gives norms for reasoning, showing that it avoids standard objections. I also show that, given some substantive assumptions, we can offer an attractive account of why logic gives norms for reasoning in the way I propose, and of how it is (also) relevant to norms for belief.
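    One familiar way to make such norms precise is through bridge principles in the style of MacFarlane; the schemas below are a generic illustration of the belief/reasoning distinction, not the paper's own account:

        (B) If A_1, \ldots, A_n \models C, then one ought to ensure that, if one believes every A_i, one believes C.
        (R) If A_1, \ldots, A_n \models C, then inferring C from A_1, \ldots, A_n is a correct transition in reasoning, whatever attitude one then settles on.

    The silly-beliefs objection bites against (B)-style norms, since valid inference can lead from believed premises to pointless conclusions; an (R)-style norm evaluates the transition itself and leaves open whether to draw the conclusion or instead revise a premise.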
    Found 1 week, 1 day ago on Conor McHugh's site
  10.
    Statistics play an essential role in an extremely wide range of human reasoning. From theorizing in the physical and social sciences to determining evidential standards in legal contexts, statistical methods are ubiquitous, and thus various questions about their application inevitably arise. As tools for making inferences that go beyond a given set of data, they are inherently a means of reasoning ampliatively, and so it is unsurprising that philosophers interested in the notions of evidence and inductive inference have been concerned to utilize statistical frameworks to further our understanding of these topics. However, the field of statistics has long been the subject of heated philosophical controversy. Given that a central goal for philosophers of science is to help resolve problems about evidence and inference in scientific practice, it is important that they be involved in current debates in statistics and data science. The purpose of this topical collection is to promote such philosophical interaction. We present a cross-section of these subjects, written by scholars from a variety of fields in order to explore issues in philosophy of statistics from different perspectives.
    Found 1 week, 1 day ago on Elay Shech's site
  11.
    Robert W. Batterman’s A Middle Way: A Non-Fundamental Approach to Many-Body Physics is an extraordinarily insightful book, far-reaching in its scope and significance, interdisciplinary in character due to connections made between physics, materials science and engineering, and biology, and groundbreaking in the sense that it reflects on important scientific domains that are mostly absent from current literature. The book presents a hydrodynamic methodology, which Batterman explains is pervasive in science, for studying many-body systems as diverse as gases, fluids, and composite materials like wood, steel, and bone. Following Batterman, I will call said methodology the middle-out strategy. Batterman’s main thesis is that the middle-out strategy is superior to alternatives, solves an important autonomy problem, and, consequently, implies that certain mesoscale structures (explained below) ought to be considered natural kinds. In what follows, I unpack and flesh out these claims, starting with a discussion of the levels of reality and its representation. Afterward, I briefly outline the contents of the book’s chapters and then identify issues that seem to me to merit further clarification.
    Found 1 week, 1 day ago on Elay Shech's site
  12.
    I will give an argument for causal finitism from a premise I don’t accept:

        - Necessary Arithmetical Alethic Incompleteness (NAAI): Necessarily, there is an arithmetical sentence that is neither true nor false.

    …
    Found 1 week, 1 day ago on Alexander Pruss's Blog
  13.
    I take a pragmatist perspective on quantum theory. This is not a view of the world described by quantum theory. In this view quantum theory itself does not describe the physical world (nor our observations, experiences or opinions of it). Instead, the theory offers reliable advice—on when to expect an event of one kind or another, and on how strongly to expect each possible outcome of that event. The event’s actual outcome is a perspectival fact—a fact relative to a physical context of assessment. Measurement outcomes and quantum states are both perspectival. By noticing that each must be relativized to an appropriate physical context one can resolve the measurement problem and the problem of nonlocal action. But if the outcome of a quantum measurement is not an absolute fact, then why should the statistics of such outcomes give us any objective reason to accept quantum theory? One can describe extensions of the scenario of Wigner’s friend in which a statement expressing the outcome of a quantum measurement would be true relative to one such context but not relative to another. However, physical conditions in our world prevent us from realizing such scenarios. Since the outcome of every actual quantum measurement is certified at what is essentially a single context of assessment, the outcome relative to that context is an objective fact in the only sense that matters for science. We should accept quantum theory because the statistics these outcomes display are just those it leads us to expect.
    Found 1 week, 1 day ago on PhilSci Archive
  14.
    Extrapolating causal effects is becoming an increasingly important kind of inference in Evidence-Based Policy, development economics, and microeconometrics more generally. While several strategies have been proposed to aid with extrapolation, the existing methodological literature has left our understanding of what extrapolation consists of, and of what constitutes successful extrapolation, underdeveloped. This paper addresses this gap by offering a novel account of successful extrapolation. Building on existing contributions pertaining to the challenges involved in extrapolation, this more nuanced and comprehensive account seeks to provide tools that facilitate the scrutiny of specific extrapolative inferences and of general strategies for extrapolation. Offering such resources is important especially in view of the increasing amount of real-world decision-making in policy, development, and beyond that involves extrapolation.
    Found 1 week, 1 day ago on PhilSci Archive
  15.
    In a recent publication, Kukla (2014) has argued that we should abandon naturalistic and social constructivist considerations in attempts to define health, due to their alleged failure to account for the normativity of health and disease, and should instead define these concepts purely in terms of ‘social justice’. Here, I shall argue that such a purely normativist project is self-defeating, and hence that health and disease cannot be defined through recourse to social justice alone.
    Found 1 week, 1 day ago on PhilSci Archive
  16.
    I dispute the conventional claim that the second law of thermodynamics is saved from a "Maxwell's Demon" by the entropy cost of information erasure, and show that instead it is measurement that incurs the entropy cost. Thus Brillouin, who identified measurement as savior of the second law, was essentially correct, and putative refutations of his view, such as Bennett's claim to measure without entropy cost, are seen to fail when the applicable physics is taken into account. I argue that the tradition of attributing the defeat of Maxwell's Demon to erasure rather than to measurement arose from unphysical classical idealizations that do not hold for real gas molecules, as well as a physically ungrounded recasting of physical thermodynamical processes into computational and information-theoretic conceptualizations. I argue that the fundamental principle that saves the second law is the quantum uncertainty principle applying to the need to localize physical states to precise values of observables in order to effect the desired disequilibria aimed at violating the second law. I obtain the specific entropy cost for localizing a molecule in the Szilard engine, which coincides with the quantity attributed to Landauer's principle. I also note that an experiment characterized as upholding an entropy cost of erasure in a "quantum Maxwell's Demon" actually demonstrates an entropy cost of measurement.
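    For orientation, the quantity recovered here is the familiar one-bit figure; a standard back-of-the-envelope rendering (the bookkeeping is mine; the paper's point concerns where in the cycle the cost falls):

        \Delta S_{\mathrm{localization}} = k_B \ln 2, \qquad W_{\mathrm{extracted}} \le k_B T \ln\big(V / (V/2)\big) = k_B T \ln 2

    Localizing the molecule to one half of the Szilard box halves the accessible volume, an entropy reduction of k_B ln 2 that must be paid for somewhere in the cycle; the subsequent isothermal expansion extracts at most k_B T ln 2 of work, the same quantity usually attributed to erasure via Landauer's principle.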
    Found 1 week, 1 day ago on PhilSci Archive
  17.
    Political disagreement tends to display a “radical” nature that is partly related to the fact that political beliefs and judgments are generally firmly held. This makes people unlikely to revise and compromise on them. …
    Found 1 week, 2 days ago on The Archimedean Point
  18.
    Common moral intuitions are an unprincipled mess. That’s “the trolley problem” in a nutshell. It’s also demonstrated by attempts to distinguish Singer’s drowning child case from our everyday failures to donate to life-saving charities. …
    Found 1 week, 3 days ago on Good Thoughts
  19.
    A coda to my post about Christian Wiman on despair: his book led me to the poetry of William Bronk, a metaphysician whose claim to fame is that he is not as famous as he ought to be. Henry Weinfield’s preface to Bronk’s selected poems is typical in its combination of disappointment and wishful thinking: This edition of the selected poetry of William Bronk spans the more than fifty-year career of a writer who, though still too little known, is destined to be regarded as one of the great American poets of our century. …
    Found 1 week, 4 days ago on Under the Net
  20.
    We draw a distinction between the traditional reference class problem which describes an obstruction to estimating a single individual probability—which we re-term the individual reference class problem—and what we call the reference class problem at scale, which can result when using tools from statistics and machine learning to systematically make predictions about many individual probabilities simultaneously. We argue that scale actually helps to mitigate the reference class problem, and purely statistical tools can be used to efficiently minimize the reference class problem at scale, even though they cannot be used to solve the individual reference class problem.
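    A toy simulation makes the contrast vivid (everything below is a hypothetical illustration, not the authors' setup): each individual yields a single Bernoulli outcome, so no individual's probability is directly observable, yet a model fit across many individuals can approximate all of those probabilities at once.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # One feature per individual; the unknown true probability varies smoothly with it.
        x = rng.uniform(-3, 3, n)
        p_true = 1 / (1 + np.exp(-x))
        y = rng.random(n) < p_true  # a single observed outcome per individual

        # Fit a one-parameter logistic model p(x) = sigmoid(a * x) by grid search,
        # standing in for whatever statistical learner is used "at scale".
        def nll(a):
            p = 1 / (1 + np.exp(-a * x))
            return -np.mean(np.where(y, np.log(p), np.log(1 - p)))

        grid = np.linspace(0.1, 3.0, 60)
        a_hat = grid[np.argmin([nll(a) for a in grid])]

        # No single y[i] reveals p_true[i] (the individual problem), but across
        # 100,000 individuals the fitted probabilities track the true ones closely.
        p_hat = 1 / (1 + np.exp(-a_hat * x))
        print(f"a_hat = {a_hat:.2f}")
        print(f"mean |p_hat - p_true| = {np.mean(np.abs(p_hat - p_true)):.4f}")

    The fitted parameter says nothing certain about any one case, but it lets a purely statistical tool drive the aggregate error down, which is the sense in which scale mitigates the problem.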
    Found 1 week, 4 days ago on PhilSci Archive
  21.
    Modal Empiricism in philosophy of science proposes to understand the possibility of modal knowledge from experience by replacing talk of possible worlds with talk of possible situations, which are coarse-grained, bounded and relative to background conditions. This allows for an induction towards objective necessity, assuming that actual situations are representative of possible ones. The main limitation of this epistemology is that it does not account for probabilistic knowledge. In this paper, we propose to extend Modal Empiricism to the probabilistic case, thus providing an inductivist epistemology for probabilistic knowledge. The key idea is that extreme probabilities, close to 1 and 0, serve as proxies for testing mild probabilities, using a principle of model combination.
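    The role of extreme probabilities can be illustrated with a standard concentration bound (an example of mine; the paper's principle of model combination is more general). Combining n independent instances of a setup whose single-case probability is a mild p turns a claim about p into a near-extreme claim about the ensemble frequency:

        \Pr\big( |\bar{X}_n - p| \ge \varepsilon \big) \;\le\; 2 e^{-2 n \varepsilon^2} \qquad \text{(Hoeffding)}

    For p = 0.7, \varepsilon = 0.05 and n = 2000 the bound is 2e^{-10}, roughly 9 \times 10^{-5}: the statement "the frequency lies within 0.05 of 0.7" holds with probability close to 1, which is the kind of near-extreme claim that induction over actual situations can directly test.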
    Found 1 week, 4 days ago on PhilSci Archive
  22.
    In operational quantum mechanics two measurements are called operationally equivalent if they yield the same distribution of outcomes in every quantum state and hence are represented by the same operator. In this paper, I will show that the ontological models for quantum mechanics and, more generally, for any operational theory sensitively depend on which measurement we choose from the class of operationally equivalent measurements, or more precisely, which of the chosen measurements can be performed simultaneously. To this end, I will first take three examples—a classical theory, the EPR-Bell scenario and the Popescu-Rohrlich box; then realize each example by two operationally equivalent but different operational theories—one with a trivial and another with a non-trivial compatibility structure; and finally show that the ontological models for the different theories will be different with respect to their causal structure, contextuality, and fine-tuning.
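    For readers new to the terminology, operational equivalence has a compact textbook statement (standard definition, not specific to this paper): measurements M and M' are operationally equivalent when

        \forall \rho\, \forall k: \quad p(k \mid M, \rho) \;=\; \mathrm{Tr}(\rho E_k) \;=\; p(k \mid M', \rho),

    i.e. both are represented by the same POVM \{E_k\}. The paper's observation is that this identification erases a difference the ontological model is sensitive to, namely which other measurements each one can be performed together with.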
    Found 1 week, 4 days ago on PhilSci Archive
  23.
    QBism explicitly takes the subjective view: probabilities of events are defined solely by past experiences, i.e. the record of observations. As shown by the authors (Fuchs et al. 2013), this: “... removes the paradoxes, conundra, and pseudo-problems that have plagued quantum foundations for the past nine decades”. It is criticised for its lack of ontology and anthropocentric nature. However, if Everett's (1957) formulation is taken at face value, exactly the features of QBism are the result, and the ontology is inherent. The anthropocentric nature of the solution is simply an indication that the quantum state is relative, as is central to Everett. Problems of measurement and locality do not arise.
    Found 1 week, 5 days ago on PhilSci Archive
  24.
    In Part 1 the properties of QBism are shown to be natural consequences of taking quantum mechanics at face value, as does Everett in his Relative State Formulation (1957). In Part 2 supporting evidence is presented. Parmenides' (Palmer, 2012) notion that the physical world is static and unchanging is vividly confirmed in the new physics. This means the time evolution of the physical world perceived by observers only occurs at the level of appearances as noted by Davies (2002). In order to generate this appearance of time evolution, a moving frame of reference is required: this is the only possible explanation of the enactment of the dynamics of physics in a static universe.
    Found 1 week, 5 days ago on PhilSci Archive
  25.
    Despite the simplicity of Weyl's solution to the paradox of the passage of time in the static block universe, virtually no interest is shown in this approach, although, as shown in Part 2, the problem of the Now could be taken as evidence for his solution being correct. A moving frame of reference is required to explain the experience of the enactment of any of the dynamics of physics, and the experiencing consciousness supervenes on this phenomenon. Given that the logic involved is straightforward, it seems that the reasons all this has been ignored may be less so. Here it is suggested, based on Davies' (2006) research, that this might well involve a horror of even the possibility of deity and mysticism being dignified by discussion, let alone endorsement. The objective here is to demonstrate that this approach does validate certain archetypal myths of the great spiritual traditions, but at the same time fully supports and reinforces the objective basis of the science of physics. The myths are exploded to reveal simply scientific principles, and a complete absence of gods or mystical phenomena; indeed, such things are categorically ruled out. The scientific principles illustrated by the third logical type, which have languished unexamined, turn out to be powerful knowledge which serves only to reinforce and emphasise how deeply flawed were the key principles of the religious preoccupations which our culture had to relinquish in order to move forward.
    Found 1 week, 5 days ago on PhilSci Archive
  26.
    The localization problem in relativistic quantum theory has persisted for more than seven decades, yet it is largely unknown and continues to perplex even those well-versed in the subject. At the heart of this problem lies a fundamental conflict between localizability and relativistic causality, which can also be construed as part of the broader dichotomy between measurement and unitary dynamics. This article provides a historical review of the localization problem in one-particle relativistic quantum mechanics, clarifying some persistent misconceptions in the literature, and underscoring the antinomy between causal dynamics and localized observables.
    Found 1 week, 5 days ago on PhilSci Archive
  27.
    While emergentism enjoys some good fortune in contemporary philosophy, attempts at elucidating the history of this view are rare. Among such attempts, by far the most influential certainly is McLaughlin’s landmark paper “The Rise and Fall of British Emergentism” (1992). While McLaughlin’s analysis of the recent history of emergentism is insightful and instructive in its own ways, in the present paper we offer reasons to be suspicious of some of its central claims. In particular, we advance evidence that rebuts McLaughlin’s contention that British Emergentism did not fall in the 1920–1930s because of philosophical criticism but rather because of an alleged empirical inconsistency with fledgling quantum mechanics.
    Found 1 week, 5 days ago on PhilSci Archive
  28.
    Comparative philosophy of religion is a subfield of both philosophy of religion and comparative philosophy. Philosophy of religion engages with philosophical questions related to religious belief and practice, including questions concerning the concept of religion itself. Comparative philosophy compares concepts, theories, and arguments from diverse philosophical traditions. The term “comparative philosophy of religion” can refer to the comparative philosophical study of different religions or of different philosophies of religion. It can thus be either a first-order philosophical discipline—investigating matters to do with religion—or a second-order philosophical discipline, investigating matters to do with philosophical inquiry into religion.
    Found 1 week, 6 days ago on Stanford Encyclopedia of Philosophy
  29.
    High speed store required: 947 words. No. of bits in a word: 64. Is the program overlaid? No. No. of magnetic tapes required: None. What other peripherals are used? Card Reader; Line Printer. No. of cards in combined program and test deck: 112. Card punching code: EBCDIC. Keywords: Atomic, Molecular, Nuclear, Rotation Matrix, Rotation Group, Representation, Euler Angle, Symmetry, Helicity, Correlation.
    Found 1 week, 6 days ago on John Cramer's site
  30.
    The most common argument that mathematical truth is not provability uses Tarski’s indefinability of truth theorem or Goedel’s first incompleteness theorem. But while this is a powerful argument, it won’t convince an intuitionist who rejects the law of excluded middle. …
    Found 1 week, 6 days ago on Alexander Pruss's Blog