1.
    Theories of consciousness are abundant, yet few directly address the structural conditions necessary for subjectivity itself. This paper defends and develops the QBist constraint: the proposal that any conscious system must implement a first-person, self-updating inferential architecture. Inspired by Quantum Bayesianism (QBism), this constraint specifies that subjectivity arises only in systems capable of self-referential probabilistic updating from an internal perspective. The QBist constraint is not offered as a process theory, but as a metatheoretical adequacy condition: a structural requirement which candidate theories of consciousness must satisfy if they are to explain not merely behaviour or information processing, but genuine subjectivity. I assess five influential frameworks — the Free Energy Principle (FEP), Predictive Processing (PP), Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Higher-Order Thought (HOT) theory — and consider how each fares when interpreted through the lens of this constraint. I argue that the QBist constraint functions as a litmus test for process theories, forcing a shift in focus: from explaining cognitive capacities to specifying how an architecture might realize first-personal belief updating as a structural feature.
    Found 1 day, 2 hours ago on PhilSci Archive
  2.
    Knowledge brokers, usually conceptualized as passive intermediaries between scientists and policymakers in evidence-based policymaking, are understudied in philosophy of science. Here, we challenge that usual conceptualization. As agents in their own right, knowledge brokers have their own goals and incentives, which complicate the effects of their presence at the science-policy interface. We illustrate this in an agent-based model and suggest several avenues for further exploration of the role of knowledge brokers in evidence-based policy.
    Found 4 days, 10 hours ago on PhilSci Archive
  3.
    We develop a theory of policy advice that focuses on the relationship between the competence of the advisor (e.g., an expert bureaucracy) and the quality of advice that the leader may expect. We describe tensions between these features that arise in a wide class of substantively important circumstances. These tensions point to the presence of a trade-off between receiving advice more often and receiving more informative advice. The optimal realization of this trade-off for the leader sometimes induces her to prefer advisors of limited competence – a preference that, we show, is robust under different informational assumptions. We consider how institutional tools available to leaders affect preferences for advisor competence and the quality of advice they may expect to receive in equilibrium.
    Found 1 week ago on Dimitri Landa's site
  4.
    There are two main strands of arguments regarding the value-free ideal (VFI): desirability and achievability (Reiss and Sprenger 2020). In this essay, I will argue for what I will call a compatibilist account of upholding the VFI, one that focuses on its desirability even if the VFI is unachievable. First, I will explain what the VFI is. Second, I will show that striving to uphold the VFI (desirability) is compatible with the rejection of its achievability. Third, I will demonstrate that the main arguments against the VFI do not refute its desirability. Finally, I will provide arguments on why it is desirable to strive to uphold the VFI even if the VFI is unachievable and show what role it can play in scientific inquiry. There is no single definition of the VFI, yet the most common way to interpret it is that non-epistemic values ought not to influence scientific reasoning (Brown 2024, 2). Non-epistemic values are understood as certain ethical, social, cultural or political considerations. Therefore, it is the role of epistemic values, such as accuracy, consistency, empirical adequacy and simplicity, to be part of and to ensure proper scientific reasoning.
    Found 1 week ago on PhilSci Archive
  5.
    Prioritarianism is generally understood as a kind of moral axiology. An axiology provides an account of what makes items, in this case outcomes, good or bad, better or worse. A moral axiology focuses on moral value: on what makes outcomes morally good or bad, morally better or worse. Prioritarianism, specifically, posits that the moral-betterness ranking of outcomes gives extra weight (“priority”) to well-being gains and losses affecting those at lower levels of well-being. It differs from utilitarianism, which is indifferent to the well-being levels of those affected by gains and losses.[1] Although it is possible to construe prioritarianism as a non-axiological moral view, this entry follows the prevailing approach and trains its attention on axiological prioritarianism.
    Found 1 week, 1 day ago on Stanford Encyclopedia of Philosophy
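    A standard way to make the weighting claim above precise (a textbook gloss, not language from the entry itself): utilitarianism ranks outcomes by the plain sum \(\sum_i w_i\) of individual well-being levels, whereas axiological prioritarianism ranks them by \(\sum_i g(w_i)\) for some strictly increasing, strictly concave transform \(g\), so that a fixed well-being gain contributes more to moral value when it accrues to someone at a lower level.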
  6.
    This paper defends the view that logic gives norms for reasoning. This view is often thought to be problematic given that logic is not itself a theory of reasoning and that valid inferences can lead to silly or pointless beliefs. To defend it, I highlight an overlooked distinction between norms for reasoning and norms for belief. With this distinction in hand, I motivate and defend a straightforward account of how logic gives norms for reasoning, showing that it avoids standard objections. I also show that, given some substantive assumptions, we can offer an attractive account of why logic gives norms for reasoning in the way I propose, and of how it is (also) relevant to norms for belief.
    Found 1 week, 1 day ago on Conor McHugh's site
  7.
    Statistics play an essential role in an extremely wide range of human reasoning. From theorizing in the physical and social sciences to determining evidential standards in legal contexts, statistical methods are ubiquitous, and thus various questions about their application inevitably arise. As tools for making inferences that go beyond a given set of data, they are inherently a means of reasoning ampliatively, and so it is unsurprising that philosophers interested in the notions of evidence and inductive inference have been concerned to utilize statistical frameworks to further our understanding of these topics. However, the field of statistics has long been the subject of heated philosophical controversy. Given that a central goal for philosophers of science is to help resolve problems about evidence and inference in scientific practice, it is important that they be involved in current debates in statistics and data science. The purpose of this topical collection is to promote such philosophical interaction. We present a cross-section of these subjects, written by scholars from a variety of fields in order to explore issues in philosophy of statistics from different perspectives.
    Found 1 week, 1 day ago on Elay Shech's site
  8.
    Extrapolating causal effects is becoming an increasingly important kind of inference in Evidence-Based Policy, development economics, and microeconometrics more generally. While several strategies have been proposed to aid with extrapolation, the existing methodological literature has left our understanding of what extrapolation consists of and what constitutes successful extrapolation underdeveloped. This paper addresses this gap by offering a novel account of successful extrapolation. Building on existing contributions pertaining to the challenges involved in extrapolation, this more nuanced and comprehensive account seeks to provide tools that facilitate the scrutiny of specific extrapolative inferences and general strategies for extrapolation. Offering such resources is especially important in view of the increasing amounts of real-world decision-making in policy, development, and beyond that involve extrapolation.
    Found 1 week, 1 day ago on PhilSci Archive
  9.
    Political disagreement tends to display a “radical” nature that is partly related to the fact that political beliefs and judgments are generally firmly held. This makes people unlikely to revise and compromise on them. …
    Found 1 week, 2 days ago on The Archimedean Point
  10.
    We draw a distinction between the traditional reference class problem which describes an obstruction to estimating a single individual probability—which we re-term the individual reference class problem—and what we call the reference class problem at scale, which can result when using tools from statistics and machine learning to systematically make predictions about many individual probabilities simultaneously. We argue that scale actually helps to mitigate the reference class problem, and purely statistical tools can be used to efficiently minimize the reference class problem at scale, even though they cannot be used to solve the individual reference class problem.
    Found 1 week, 4 days ago on PhilSci Archive
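    A minimal simulation of the contrast drawn above (the population, traits, and probabilities are invented for illustration; this is not the authors' formal apparatus): for a single individual, different reference classes yield different probability estimates, while at scale each reference-class choice becomes a predictor over the whole population whose aggregate accuracy can be scored on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: the outcome probability depends on two binary traits.
n = 200_000
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)
y = (rng.random(n) < 0.2 + 0.3 * A + 0.3 * B).astype(float)

# Individual reference class problem: one person with A=1 and B=1 gets a
# different probability estimate depending on which class we consult.
print("class A=1:     ", y[A == 1].mean())
print("class B=1:     ", y[B == 1].mean())
print("class A=1, B=1:", y[(A == 1) & (B == 1)].mean())

# At scale, competing reference-class choices become competing predictors,
# and a proper scoring rule compares them without picking a single
# "correct" class for any one individual.
train = np.arange(n) < n // 2
test = ~train

def class_predictor(cells):
    """Estimate outcome frequencies per cell on the training half, then
    predict those frequencies for the test half."""
    rates = {c: y[train & (cells == c)].mean() for c in np.unique(cells)}
    return np.array([rates[c] for c in cells[test]])

def brier(pred):
    return np.mean((pred - y[test]) ** 2)

print("Brier, classes by A only: ", brier(class_predictor(A)))
print("Brier, classes by A and B:", brier(class_predictor(2 * A + B)))
```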
  11.
    Modal Empiricism in philosophy of science proposes to understand the possibility of modal knowledge from experience by replacing talk of possible worlds with talk of possible situations, which are coarse-grained, bounded and relative to background conditions. This allows for an induction towards objective necessity, assuming that actual situations are representative of possible ones. The main limitation of this epistemology is that it does not account for probabilistic knowledge. In this paper, we propose to extend Modal Empiricism to the probabilistic case, thus providing an inductivist epistemology for probabilistic knowledge. The key idea is that extreme probabilities, close to 1 and 0, serve as proxies for testing mild probabilities, using a principle of model combination.
    Found 1 week, 4 days ago on PhilSci Archive
  12.
    I gave a talk on March 8 at an AI, Systems, and Society Conference at the Emory Center for Ethics. The organizer, Alex Tolbert (who had been a student at Virginia Tech), suggested I speak about controversies in statistics, especially P-hacking in statistical significance testing. …
    Found 1 week, 6 days ago on D. G. Mayo's blog
  13.
    Where does the Born Rule come from? We ask: “What is the simplest extension of probability theory where the Born rule appears?” This is answered by introducing “superposition events” in addition to the usual discrete events. Two-dimensional matrices (e.g., incidence matrices and density matrices) are needed to mathematically represent the differences between the two types of events. Then it is shown that those incidence and density matrices for superposition events are the (outer) products of a vector and its transpose whose components foreshadow the “amplitudes” of quantum mechanics. The squares of the components of those “amplitude” vectors yield the probabilities of the outcomes. That is how probability amplitudes and the Born Rule arise in the minimal extension of probability theory to include superposition events. This naturally extends to the full Born Rule in the Hilbert spaces over the complex numbers of quantum mechanics. It would perhaps be satisfying if probability amplitudes and the Born Rule only arose as the result of deep results in quantum mechanics (e.g., Gleason’s Theorem). But both arise in a simple extension of probability theory to include “superposition events” – which should not be too surprising since superposition is the key non-classical concept in quantum mechanics.
    Found 2 weeks ago on David Ellerman's site
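    In symbols, the construction described above runs roughly as follows (notation chosen here, not the paper's): take a real “amplitude” vector \(a = (a_1, \dots, a_n)^{\mathsf T}\) with \(\sum_i a_i^2 = 1\) for a superposition over outcomes \(1, \dots, n\). The density matrix of the superposition event is the outer product \(\rho = a a^{\mathsf T}\), with entries \(\rho_{ij} = a_i a_j\), and its diagonal recovers the outcome probabilities \(\Pr(i) = \rho_{ii} = a_i^2\), which is the Born rule in this minimal real-valued setting. The full quantum-mechanical rule replaces \(a\) by a complex unit vector \(|\psi\rangle\) and \(a a^{\mathsf T}\) by \(|\psi\rangle\langle\psi|\), giving \(\Pr(i) = |\langle i|\psi\rangle|^2\).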
  14.
    What is it for an argument to be successful? Some take success to be mind-independent, holding that successful arguments are those that meet some objective criterion such as soundness. Others take success to be dialectical, holding that successful arguments are those that would convince anyone meeting certain (perhaps idealized) conditions, or perhaps some targeted audience meeting those conditions. I defend a set of desiderata for theories of success, and argue that no objective or dialectical account meets those desiderata. Instead, I argue, success is individualistic: arguments can only (plausibly) be evaluated as successes (qua argument) relative to individuals. In particular, I defend The Knowledge Account, according to which an argument A is successful for individual i iff i knows A is sound and non-fallacious. This conception of success is a significant departure from orthodoxy and has interesting and unexplored philosophical and methodological implications for the evaluation of arguments.
    Found 2 weeks, 5 days ago on John A. Keller's site
  15.
    Scientific fields frequently need to exchange data to advance their own inquiries. Data unification is the process of stabilizing these forms of interfield data exchange. I present an account of the epistemic structure of data unification, drawing on case studies from model-based cognitive neuroscience (MBCN). MBCN is distinctive because it shows that modeling practices play an essential role in mediating these data exchanges. Models often serve as interfield evidential integrators, and models built for this purpose have their own representational and inferential functions. This form of data unification should be seen as autonomous from other forms, particularly explanatory unification.
    Found 2 weeks, 5 days ago on PhilSci Archive
  16.
    According to classical utilitarianism, well-being consists in pleasure or happiness, the good consists in the sum of well-being, and moral rightness consists in maximizing the good. Leibniz was perhaps the first to formulate this doctrine. Bentham made it widely known. For a long time, however, the second, summing part lacked any clear foundation. John Stuart Mill, Henry Sidgwick, and Richard Hare all gave arguments for utilitarianism, but they took this summing part for granted. It was John Harsanyi who finally presented compelling arguments for this controversial part of the utilitarian doctrine.
    Found 2 weeks, 5 days ago on Johan E. Gustafsson's site
  17.
    Scientists do not merely choose to accept fully formed theories; they also have to decide which models to work on before they are fully developed and tested. Since decisive empirical evidence in favour of a model will not yet have been gathered, other criteria must play determining roles. I examine the case of modern high-energy physics where the experimental context that once favoured the pursuit of beautiful, simple, and general theories now favours the pursuit of models that are ad hoc, narrow in scope, and complex; in short, ugly models. The lack of new discoveries since the Higgs boson, together with the unlikelihood of a new higher-energy collider, has left searches for new physics conceptually and empirically wide open. Physicists must make use of the experiment at hand while also creatively exploring alternatives that have not yet been considered. This encourages the pursuit of models that have at least one of two key features: (i) they take radically novel approaches, or (ii) they are easily testable. I present three models, neutralino dark matter, the relaxion, and repulsive gravity, and show that even if they do not exhibit traditional epistemic virtues, they are nonetheless pursuitworthy. I argue that experimental context strongly determines pursuitworthiness and I lay out the conditions under which experiment encourages the pursuit of ugly models.
    Found 2 weeks, 5 days ago on Martin King's site
  18.
    [Editor’s Note: The following new entry by Juliana Bidadanure and David Axelsen replaces the former entry on this topic by the previous author.] Egalitarianism is a school of thought in contemporary political philosophy that treats equality as the chief value of a just political system. Simply put, egalitarians argue for equality. They have a presumption in favor of social arrangements that advance equality, and they treat deviations from equality as prima facie suspect. They recommend a far greater degree of equality than we currently have, and they do so for distinctly egalitarian reasons.
    Found 2 weeks, 6 days ago on Stanford Encyclopedia of Philosophy
  19.
    Suppose we observe many emeralds which are all green. This observation usually provides good evidence that all emeralds are green. However, the emeralds we have observed are also all grue, which means that they are either green and already observed or blue and not yet observed. We usually do not think that our observation provides good evidence that all emeralds are grue. Why? I argue that if we are in the best case for inductive reasoning, we have reason to assign low probability to the hypothesis that all emeralds are grue before seeing any evidence. My argument appeals to random sampling and the observation-independence of green, understood as probabilistic independence of whether emeralds are green and when they are observed.
    Found 3 weeks ago on PhilSci Archive
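    The observation-independence condition invoked above can be written, for an emerald \(e\) drawn by random sampling, as \(\Pr(\mathrm{Green}(e) \wedge \mathrm{Obs}(e)) = \Pr(\mathrm{Green}(e)) \cdot \Pr(\mathrm{Obs}(e))\), where \(\mathrm{Obs}(e)\) says that \(e\) has already been observed: an emerald's colour carries no information about when it is sampled. “Grue”, by contrast, is defined so that colour and observation status are tied together by construction.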
  20.
    The desirable gambles framework provides a foundational approach to imprecise probability theory but relies heavily on linear utility assumptions. This paper introduces function-coherent gambles, a generalization that accommodates non-linear utility while preserving essential rationality properties. We establish core axioms for function-coherence and prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. The framework is then applied to analyze various forms of discounting in intertemporal choice, including hyperbolic, quasi-hyperbolic, scale-dependent, and state-dependent discounting. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework. This unified treatment provides theoretical foundations for modeling sophisticated patterns of time preference within the desirability paradigm, bridging a gap between normative theory and observed behavior in intertemporal decision-making under genuine uncertainty.
    Found 3 weeks, 6 days ago on Gregory Wheeler's site
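    For reference, the discounting families named above have familiar textbook forms; the sketch below merely tabulates them (parameter values are arbitrary, the scale- and state-dependent variants are omitted, and none of this is the paper's function-coherence machinery).

```python
import numpy as np

def exponential(t, delta=0.97):
    """Constant-rate exponential discounting: D(t) = delta**t."""
    return delta ** np.asarray(t, dtype=float)

def hyperbolic(t, k=0.1):
    """Hyperbolic discounting: D(t) = 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * np.asarray(t, dtype=float))

def quasi_hyperbolic(t, beta=0.7, delta=0.97):
    """Beta-delta (quasi-hyperbolic) discounting: D(0) = 1, D(t) = beta * delta**t."""
    t = np.asarray(t, dtype=float)
    return np.where(t == 0, 1.0, beta * delta ** t)

# Discount weight applied to a unit payoff at each delay.
delays = np.arange(0, 11)
for name, fn in [("exponential", exponential),
                 ("hyperbolic", hyperbolic),
                 ("quasi-hyperbolic", quasi_hyperbolic)]:
    print(f"{name:16s}", np.round(fn(delays), 3))
```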
  21.
    This paper examines cases in which an individual’s misunderstanding improves the scientific community’s understanding via “corrective” processes that produce understanding from poor epistemic inputs. To highlight the unique features of valuable misunderstandings and corrective processes, we contrast them with other social-epistemological phenomena including testimonial understanding, collective understanding, Longino’s critical contextual empiricism, and knowledge from falsehoods.
    Found 4 weeks ago on PhilSci Archive
  22.
    Years ago, in ‘Expected Value without Expecting Value’, I noted that “The vast majority of students would prefer to save 1000 lives for sure, than to have a 10% chance of saving a million lives. This, even though the latter choice has 100 times the expected value.” Joe Carlsmith’s essay on Expected Utility Maximization nicely explains “Why it’s OK to predictably lose” in this sort of situation. …
    Found 1 month ago on Good Thoughts
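    The arithmetic behind the quoted comparison: the sure option saves 1,000 lives, while the gamble saves \(0.10 \times 1{,}000{,}000 = 100{,}000\) lives in expectation, which is indeed 100 times as many.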
  23.
    The received view of scientific experimentation holds that science is characterized by experiment and experiment is characterized by active intervention on the system of interest. Although versions of this view are widely held, they have seldom been explicitly defended. The present essay reconstructs and defuses two arguments in defense of the received view: first, that intervention is necessary for uncovering causal structures, and second, that intervention conduces to better evidence. By examining a range of non-interventionist studies from across the sciences, I conclude that interventionist experiments are not, ceteris paribus, epistemically superior to non-interventionist studies and that the latter may thus be classified as experiment proper. My analysis explains why intervention remains valuable while at the same time elevating the status of some non-interventionist studies to that of experiment proper.
    Found 1 month ago on PhilSci Archive
  24.
    Researchers worried about catastrophic risks from advanced AI have argued that we should expect sufficiently capable AI agents to pursue power over humanity because power is a convergent instrumental goal, something that is useful for a wide range of final goals. Others have recently expressed skepticism of these claims. This paper aims to formalize the concepts of instrumental convergence and power-seeking in an abstract, decision-theoretic framework, and to assess the claim that power is a convergent instrumental goal. I conclude that this claim contains at least an element of truth, but might turn out to have limited predictive utility, since an agent’s options cannot always be ranked in terms of power in the absence of substantive information about the agent’s final goals. However, the fact of instrumental convergence is more predictive for agents who have a good shot at attaining absolute or near-absolute power.
    Found 1 month ago on Christian Tarsney's site
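    A toy illustration of the point that options cannot always be ranked by power without information about the agent's final goals (the options, outcomes, and utilities below are invented for illustration and are not the paper's formalism).

```python
# Each option makes a different set of outcomes attainable.
reachable = {"option A": {"x", "y"}, "option B": {"z"}}

def best_attainable(option, utility):
    """Value an agent with the given final goal can secure via this option."""
    return max(utility[outcome] for outcome in reachable[option])

# Two candidate final goals (utility functions over outcomes).
goals = {
    "goal 1": {"x": 1.0, "y": 0.0, "z": 0.5},  # option A looks more "powerful"
    "goal 2": {"x": 0.0, "y": 0.2, "z": 1.0},  # option B looks more "powerful"
}

# Neither option dominates across both goals, so a goal-free "power"
# ranking of these options is not available.
for name, utility in goals.items():
    print(name, {opt: best_attainable(opt, utility) for opt in reachable})
```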
  25.
    There is a near consensus among philosophers of science whose research focuses on science and values that the ideal of value-free science is untenable, and that science not only is, but normatively must be, value-laden in some respect. The consensus is far from complete; with some regularity, defenses of the value-free ideal (VFI) as well as critiques of major arguments against the VFI surface in the literature. I review and respond to many of the recent defenses of the VFI and show that they generally miss the mark. In the process, I articulate what the current burden of argument for a defense of the VFI ought to be, given the state of the literature.
    Found 1 month ago on Matthew J. Brown's site
  26.
    There is an "under-representation problem” in philosophy departments and journals. Empirical data suggest that while we have seen some improvements since the 1990s, the rate of change has slowed down. Some posit that philosophy has disciplinary norms making it uniquely resistant to change (Antony and Cudd 2012; Dotson 2012; Hassoun et al. 2022). In this paper, we present results from an empirical case study of a philosophy department that achieved and maintained male-female gender parity among its faculty as early as 2014. Our analysis extends beyond matters of gender parity because that is only one, albeit important, dimension of inclusion. We build from the case study to reflect on strategies that may catalyze change.
    Found 1 month ago on PhilSci Archive
  27.
    According to the traditional understanding, ethical normativity is about what you should do and epistemic normativity is about what you should believe. Singer’s topic in Right Belief and True Belief is the latter. However, though he later rejects this traditional understanding of the distinction (pp. 205–7), he thinks we can learn a great deal from looking at the parallels between these two species of normativity, and his book provides a masterclass in how to do that: this is epistemology as practised by someone very much at home in ethics and well versed in its contemporary literature, its arguments, distinctions, and central positions. In the first chapter, Singer distinguishes a number of different normative notions to which we appeal when we evaluate beliefs: Is the belief correct? Is it right? Should we believe it? Ought we to? Must we? These he calls ‘deontic notions’, and we use them to evaluate the belief with respect to the believer. But there are also these: Is it praiseworthy or blameworthy to have the belief? Is the believer at fault if they do? Are they rational? Is the belief justified for them? These he calls ‘responsibility notions’, and we use them to evaluate the believer with respect to the belief (pp. 73–74). This distinction he calls bipartite (p. 189).
    Found 1 month ago on PhilSci Archive
  28.
    At the meta-level, two positions emerge as through lines of the book. The first is a view of the central concepts of epistemology (belief, knowledge, confidence) as emergent properties. As such they are non-fundamental, but feature in highly useful and tractable models. Epistemology is thus intrinsically idealized; in a basic way, attributing propositional attitudes to agents always involves abstraction and distortion. The second through line is that this undermines hope for a unified account of the epistemic domain. Instead, the best we can do is to build models that succeed at limited purposes within parts of that domain.
    Found 1 month ago on PhilSci Archive
  29.
    I’m back from a short trip to Oslo, Norway. Compared to other Scandinavian capitals like Stockholm and Helsinki (I’ve never been to Copenhagen, yet), I find Oslo more modern and “cold.” There is beauty in modernity of course, but it lacks the charm of Stockholm’s downtown. …
    Found 1 month, 1 week ago on The Archimedean Point
  30.
    Critical-Level Utilitarianism entails either the Repugnant Conclusion or the Sadistic Conclusion (both of which are counter-intuitive), depending on the critical level. Indeterminate Critical-Level Utilitarianism is a version of Critical-Level Utilitarianism on which it is indeterminate which well-being level is the critical level. Undistinguished Critical-Range Utilitarianism is a variant of Critical-Level Utilitarianism on which adding lives with well-being in a range between the good and the bad lives makes the resulting outcome incomparable to the original outcome. Both views avoid the Repugnant Conclusion and the Sadistic Conclusion. And they agree about all comparisons of outcomes that do not involve indeterminacy or incomparability. So it is unclear whether we have any reason to favour one of these theories over the other. I argue that Indeterminate Critical-Level Utilitarianism still entails the disjunction of the Repugnant Conclusion and the Sadistic Conclusion, which is also repugnant, whereas Undistinguished Critical-Range Utilitarianism does not.
    Found 1 month, 1 week ago on Johan E. Gustafsson's site
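    For orientation, the fixed-critical-level view that the argument above starts from is standardly written as ranking outcomes by \(V(o) = \sum_i (w_i - c)\), summing over the people who exist in \(o\), where \(w_i\) is individual \(i\)'s lifetime well-being and \(c\) is the critical level; a critical level at (or near) neutrality yields the Repugnant Conclusion, while a higher one yields the Sadistic Conclusion. The indeterminate and critical-range variants discussed above differ over how, or whether, a single \(c\) is fixed.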