1.
    The desirable gambles framework provides a foundational approach to imprecise probability theory but relies heavily on linear utility assumptions. This paper introduces function-coherent gambles, a generalization that accommodates non-linear utility while preserving essential rationality properties. We establish core axioms for function-coherence and prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. The framework is then applied to analyze various forms of discounting in intertemporal choice, including hyperbolic, quasi-hyperbolic, scale-dependent, and state-dependent discounting. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework. This unified treatment provides theoretical foundations for modeling sophisticated patterns of time preference within the desirability paradigm, bridging a gap between normative theory and observed behavior in intertemporal decision-making under genuine uncertainty.
    Found 1 day, 11 hours ago on Gregory Wheeler's site
  2.
    This paper examines cases in which an individual’s misunderstanding improves the scientific community’s understanding via “corrective” processes that produce understanding from poor epistemic inputs. To highlight the unique features of valuable misunderstandings and corrective processes, we contrast them with other social-epistemological phenomena including testimonial understanding, collective understanding, Longino’s critical contextual empiricism, and knowledge from falsehoods.
    Found 2 days, 10 hours ago on PhilSci Archive
  3.
    Years ago, in ‘Expected Value without Expecting Value’, I noted that “The vast majority of students would prefer to save 1000 lives for sure, than to have a 10% chance of saving a million lives. This, even though the latter choice has 100 times the expected value.” Joe Carlsmith’s essay on Expected Utility Maximization nicely explains “Why it’s OK to predictably lose” in this sort of situation. …
    Found 6 days, 9 hours ago on Good Thoughts
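The arithmetic behind the quoted preference can be made explicit with a minimal sketch (numbers taken from the excerpt; the function name is illustrative):

```python
# Expected-value comparison from the excerpt: a sure save of 1000 lives
# versus a 10% chance of saving a million.
def expected_lives_saved(outcomes):
    """Expected value of a gamble given (probability, lives_saved) pairs."""
    return sum(p * v for p, v in outcomes)

sure_thing = [(1.0, 1_000)]                   # save 1000 lives for sure
risky_bet = [(0.10, 1_000_000), (0.90, 0)]    # 10% chance of saving a million

ev_sure = expected_lives_saved(sure_thing)    # 1000.0
ev_risky = expected_lives_saved(risky_bet)    # 100000.0
print(ev_risky / ev_sure)                     # 100.0 — the "100 times" in the text
```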
  4.
    The received view of scientific experimentation holds that science is characterized by experiment and experiment is characterized by active intervention on the system of interest. Although versions of this view are widely held, they have seldom been explicitly defended. The present essay reconstructs and defuses two arguments in defense of the received view: first, that intervention is necessary for uncovering causal structures, and second, that intervention conduces to better evidence. By examining a range of non-interventionist studies from across the sciences, I conclude that interventionist experiments are not, ceteris paribus, epistemically superior to non-interventionist studies and that the latter may thus be classified as experiment proper. My analysis explains why intervention remains valuable while at the same time elevating the status of some non-interventionist studies to that of experiment proper.
    Found 1 week ago on PhilSci Archive
  5.
    Researchers worried about catastrophic risks from advanced AI have argued that we should expect sufficiently capable AI agents to pursue power over humanity because power is a convergent instrumental goal, something that is useful for a wide range of final goals. Others have recently expressed skepticism of these claims. This paper aims to formalize the concepts of instrumental convergence and power-seeking in an abstract, decision-theoretic framework, and to assess the claim that power is a convergent instrumental goal. I conclude that this claim contains at least an element of truth, but might turn out to have limited predictive utility, since an agent’s options cannot always be ranked in terms of power in the absence of substantive information about the agent’s final goals. However, the fact of instrumental convergence is more predictive for agents who have a good shot at attaining absolute or near-absolute power.
    Found 1 week ago on Christian Tarsney's site
  6.
    There is a near consensus among philosophers of science whose research focuses on science and values that the ideal of value-free science is untenable, and that science not only is, but normatively must be, value-laden in some respect. The consensus is far from complete; with some regularity, defenses of the value-free ideal (VFI) as well as critiques of major arguments against the VFI surface in the literature. I review and respond to many of the recent defenses of the VFI and show that they generally fail to meet the mark. In the process, I articulate what the current burden of argument for a defense of the VFI ought to be, given the state of the literature.
    Found 1 week, 1 day ago on Matthew J. Brown's site
  7.
    There is an “under-representation problem” in philosophy departments and journals. Empirical data suggest that while we have seen some improvements since the 1990s, the rate of change has slowed down. Some posit that philosophy has disciplinary norms making it uniquely resistant to change (Antony and Cudd 2012; Dotson 2012; Hassoun et al. 2022). In this paper, we present results from an empirical case study of a philosophy department that achieved and maintained male-female gender parity among its faculty as early as 2014. Our analysis extends beyond matters of gender parity because that is only one, albeit important, dimension of inclusion. We build from the case study to reflect on strategies that may catalyze change.
    Found 1 week, 1 day ago on PhilSci Archive
  8.
    According to the traditional understanding, ethical normativity is about what you should do and epistemic normativity is about what you should believe. Singer’s topic in Right Belief and True Belief is the latter. However, though he later rejects this traditional understanding of the distinction (pp. 205–7), he thinks we can learn a great deal from looking at the parallels between these two species of normativity, and his book provides a masterclass in how to do that: this is epistemology as practised by someone very much at home in ethics and well versed in its contemporary literature, its arguments, distinctions, and central positions. In the first chapter, Singer distinguishes a number of different normative notions to which we appeal when we evaluate beliefs: Is the belief correct? Is it right? Should we believe it? Ought we to? Must we? These he calls ‘deontic notions’, and we use them to evaluate the belief with respect to the believer. But there are also these: Is it praiseworthy or blameworthy to have the belief? Is the believer at fault if they do? Are they rational? Is the belief justified for them? These he calls ‘responsibility notions’, and we use them to evaluate the believer with respect to the belief (pp. 73–74). This distinction he calls bipartite (p. 189).
    Found 1 week, 1 day ago on PhilSci Archive
  9.
    At the meta-level, two positions emerge as through lines of the book. The first is a view of the central concepts of epistemology (belief, knowledge, confidence) as emergent properties. As such they are non-fundamental, but feature in highly useful and tractable models. Epistemology is thus intrinsically idealized; in a basic way, attributing propositional attitudes to agents always involves abstraction and distortion. The second through line is that this undermines hope for a unified account of the epistemic domain. Instead, the best we can do is to build models that succeed at limited purposes within parts of that domain.
    Found 1 week, 1 day ago on PhilSci Archive
  10.
    I’m back from a short trip to Oslo, Norway. Compared to other Scandinavian capitals like Stockholm and Helsinki (I’ve never been to Copenhagen, yet), I find Oslo more modern and “cold.” There is beauty in modernity of course, but it lacks the charm of Stockholm’s downtown. …
    Found 1 week, 2 days ago on The Archimedean Point
  11.
    Critical-Level Utilitarianism entails either the Repugnant Conclusion or the Sadistic Conclusion (both of which are counter-intuitive), depending on the critical level. Indeterminate Critical-Level Utilitarianism is a version of Critical-Level Utilitarianism where it is indeterminate which well-being level is the critical level. Undistinguished Critical-Range Utilitarianism is a variant of Critical-Level Utilitarianism where additions of lives in a range of well-being between the good and the bad lives make the resulting outcome incomparable to the original outcome. These views both avoid the Repugnant Conclusion and the Sadistic Conclusion. And they agree about all comparisons of outcomes that do not involve indeterminacy or incomparability. So it is unclear whether we have any reason to favour one of these theories over the other. I argue that Indeterminate Critical-Level Utilitarianism still entails the disjunction of the Repugnant Conclusion and the Sadistic Conclusion, which is also repugnant, whereas Undistinguished Critical-Range Utilitarianism does not entail this conclusion.
    Found 1 week, 2 days ago on Johan E. Gustafsson's site
  12.
    Hume [Hume 1739: bk.I pt.III sec.XI] held, incredibly, that objective chance is a projection of our beliefs. Bruno de Finetti [1970] gave mathematical substance to this idea. Scientific reasoning about chance, he argued, should be understood as arising from symmetries in degrees of belief. De Finetti’s gambit is popular in some quarters of statistics and philosophy – see, for example, [Bernardo and Smith 2009], [Spiegelhalter 2024], [Skyrms 1984: ch.3], [Diaconis and Skyrms 2017: ch.7], [Jeffrey 2004]. It is safe to say, however, that it has not been widely accepted. Science textbooks generally ignore it. So does the excellent Stanford Encyclopedia entry on “Interpretations of Probability” [Hájek 2023].
    Found 1 week, 6 days ago on Wolfgang Schwarz's site
  13.
    In epidemiology, an effect of a dichotomous exposure on a dichotomous outcome is a comparison of risks between the exposed and the unexposed. Causally interpreted, this comparison is assumed to equal a comparison in counterfactual risks if, hypothetically, both exposure states were to occur at once for each subject (Hernán and Robins, 2020). These comparisons are summarized by effect measures like risk difference or risk ratio. Risk difference describes the additive influence of an exposure on an outcome, and is often called an absolute effect measure. Trials occasionally report the inverse of a risk difference, which can also be classified as an absolute measure, as inverting it again returns the risk difference. Measures like risk ratio, which describe a multiplier of risk, are called relative, or ratio measures.
    Found 1 week, 6 days ago on PhilSci Archive
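The effect measures described above can be sketched in a few lines (the example risks are illustrative, not from the paper):

```python
# Absolute vs relative effect measures for a dichotomous exposure/outcome.
def risk_difference(risk_exposed, risk_unexposed):
    """Additive (absolute) measure: difference in risks."""
    return risk_exposed - risk_unexposed

def risk_ratio(risk_exposed, risk_unexposed):
    """Multiplicative (relative) measure: ratio of risks."""
    return risk_exposed / risk_unexposed

r1, r0 = 0.25, 0.125            # illustrative risks in exposed / unexposed
rd = risk_difference(r1, r0)    # 0.125 — absolute measure
rr = risk_ratio(r1, r0)         # 2.0   — relative measure
inverse_rd = 1 / rd             # 8.0 — also absolute: inverting again returns rd
```

The inverse of the risk difference counts as absolute in the text's sense because the operation is its own inverse: `1 / inverse_rd` recovers the risk difference.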
  14.
    Suppose for simplicity that everyone is a good Bayesian and has the same priors for a hypothesis H, and also the same epistemic interests with respect to H. I now observe some evidence E relevant to H. My credence now diverges from everyone else’s, because I have new evidence. …
    Found 1 week, 6 days ago on Alexander Pruss's Blog
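The setup in the excerpt can be sketched with a single Bayes update (the prior and likelihoods are illustrative stand-ins, not from the post):

```python
# Two agents share a prior on H; only one observes evidence E, so their
# credences diverge after conditionalization.
def bayes_update(prior, lik_E_given_H, lik_E_given_not_H):
    """Posterior P(H | E) by Bayes' theorem."""
    numerator = prior * lik_E_given_H
    denominator = numerator + (1 - prior) * lik_E_given_not_H
    return numerator / denominator

shared_prior = 0.5                                     # everyone starts here
my_credence = bayes_update(shared_prior, 0.8, 0.2)     # I observed E
your_credence = shared_prior                           # you did not
```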
  15.
    The inductive risk argument challenges the value-free ideal of science by asserting that scientists should manage the inductive risks involved in scientific inference through social values, which consists in weighing the social implications of errors when setting evidential thresholds. Most of the previous analyses of the argument fall short of engaging directly with its core assumptions, and thereby offer limited criticisms. This paper critically examines the two key premises of the inductive risk argument: the thesis of epistemic insufficiency, which asserts that the internal standards of science do not suffice to determine evidential thresholds in a non-arbitrary fashion, and the thesis of legitimate value-encroachment, which asserts that non-scientific value judgments can justifiably influence these thresholds. A critical examination of the first premise shows that the inductive risk argument does not pose a unique epistemic challenge beyond what is already implied by fallibilism about scientific knowledge, and fails because the mere assumption of fallibilism does not imply the untenability of value-freedom. This is demonstrated by showing that the way in which evidential thresholds are set in science is not arbitrary in any sense that would lend support to the inductive risk argument. A critical examination of the second premise shows that incorporating social values into scientific inference as an inductive risk-management strategy faces a meta-criterion problem, and consequently leads to several serious issues such as wishful thinking, category mistakes in decision making, or Mannheim-style paradoxes of justification. Consequently, value-laden strategies for inductive risk management in scientific inference would likely weaken the justification of scientific conclusions in most cases.
    Found 2 weeks ago on PhilSci Archive
  16.
    Regular readers may know that I’ve been interested in epistocracy for quite some time now. Epistocracy is a political regime in which political power is allocated according to criteria of competence and knowledge. …
    Found 2 weeks, 1 day ago on The Archimedean Point
  17.
    Last week I reblogged a post from 2023 where I began a discussion of a topic in a paper by Gardiner and Zaharatos (2022) (G & Z). G & Z fruitfully trace out connections between the severity requirement and the notion of sensitivity in epistemology. …
    Found 2 weeks, 1 day ago on D. G. Mayo's blog
  18.
    Performativity refers to the phenomenon that scientific conceptualisations can sometimes change their target systems or referents. A widely held view in the literature is that scientists ought not to deliberately deploy performative models or theories with the aim of eliciting desirable changes in their target systems. This paper has three aims. First, I cast and defend this received view as a worry about autonomy-infringing paternalism and, to that end, develop a taxonomy of the harms it can impose. Second, I consider various approaches to this worry within the extant literature and argue that these offer only unsatisfactory responses. Third, I propose two positive claims. Manipulation of target systems is (a) not inherently paternalist and can be unproblematic, and is (b) sometimes paternalist, but whenever such paternalism is inescapable, it must be justifiable. I generalise an example of modelling international climate change coordination to develop this point.
    Found 2 weeks, 4 days ago on PhilSci Archive
  19.
    We evaluate the roles general relativistic assumptions play in simulations used in recent observations of black holes including LIGO-Virgo and the Event Horizon Telescope. In both experiments simulations play an ampliative role, enabling the extraction of more information from the data than would be possible otherwise. This comes at a cost of theory-ladenness. We discuss the issue of inferential circularity, which arises in some applications; classify some of the epistemic strategies used to reduce the extent of theory-ladenness; and discuss ways in which these strategies are model independent.
    Found 2 weeks, 5 days ago on PhilSci Archive
  20.
    Did you ever submit a grant proposal to a funding agency? Then, you have likely encountered the request to specify your research method. Anecdotal evidence suggests that philosophers often address this unpopular request by mentioning reflective equilibrium (RE), the method proposed by Goodman (1983 [1954]) and baptized by John Rawls in his “A Theory of Justice” (1971). Appeal to RE has indeed become a standard move in ethics (see, e.g., Daniels, 1996; Swanton, 1992; van der Burg & van Willigenburg, 1998; DePaul, 2011; Mikhail, 2011; Beauchamp & Childress, ). The method has also been referred to in many other branches of philosophy, e.g., in methodological discussions about logic (e.g., Goodman, 1983; Resnik, 1985, , 1997; Brun, 2014; Peregrin & Svoboda, 2017) and theories of rationality (e.g., Cohen, 1981; Stein, 1996). Some philosophers have gone as far as to argue that RE is unavoidable in ethics (Scanlon, 2003) or simply the philosophical method (Lewis, , p. x; Keefe, 2000, ch. 2). The popularity of RE indicates that its key idea resonates well with the inclinations of many philosophers: You start with your initial views or commitments on a theme and try to systematize them in terms of a theory or a few principles. Discrepancies between theory and commitments trigger a characteristic back and forth between the commitments and the theories, in which commitments and theories are adjusted to each other until an equilibrium state is reached.
    Found 2 weeks, 5 days ago on Georg Brun's site
  21.
    Recently, Dardashti et al. (Stud Hist Philos Sci Part B Stud Hist Philos Mod Phys 67:1–11, 2019) proposed a Bayesian model for establishing Hawking radiation by analogical inference. In this paper we investigate whether their model would work as a general model for analogical inference. We study how it performs when varying the believed degree of similarity between the source and the target system. We show that there are circumstances in which the degree of confirmation for the hypothesis about the target system obtained by collecting evidence from the source system goes down when increasing the believed degree of similarity between the two systems. We then develop an alternative model in which the direction of the variation of the degree of confirmation always coincides with the direction of the believed degree of similarity. Finally, we argue that the two models capture different types of analogical inference.
    Found 2 weeks, 6 days ago on Alexander Geharter's site
  22.
    Hadfield-Menell et al. (2017) propose the Off-Switch Game, a model of Human-AI cooperation in which AI agents always defer to humans because they are uncertain about our preferences. I explain two reasons why AI agents might not defer. First, AI agents might not value learning. Second, even if AI agents value learning, they might not be certain to learn our actual preferences.
    Found 2 weeks, 6 days ago on PhilSci Archive
  23.
    In theory, replication experiments purport to independently validate claims from previous research or provide some diagnostic evidence about their truth value. In practice, this value of replication experiments is often taken for granted. Our research shows that in replication experiments, practice often does not live up to theory. Most replication experiments involve confounding factors and their results are not uniquely determined by the treatment of interest, hence are uninterpretable. These results can be driven by the true data generating mechanism, limitations of the original experimental design, discrepancies between the original and the replication experiment, distinct limitations of the replication experiment, or combinations of any of these factors. Here we introduce the notion of a minimum viable experiment to replicate, which defines experimental conditions that always yield interpretable replication results and is thus replication-ready. We believe that most reported experiments are not replication-ready, and before striving to replicate a given result, we need theoretical precision in, or systematic exploration of, the experimental space to discover empirical regularities.
    Found 3 weeks, 1 day ago on PhilSci Archive
  24.
    The optimism bias is a cognitive bias where individuals overestimate the likelihood of good outcomes and underestimate the likelihood of bad outcomes. Associated with improved quality of life, optimism bias is considered to be adaptive and is a promising avenue of research for mental health interventions in conditions where individuals lack optimism such as major depressive disorder. Here we lay the groundwork for future research on optimism as an intervention by introducing a domain general formal model of optimism bias, which can be applied in different task settings. Employing the active inference framework, we propose a model of the optimism bias as high precision likelihood biased towards positive outcomes. First, we simulate how optimism may be lost during development by exposure to negative events. We then ground our model in the empirical literature by showing how the developmentally acquired differences in optimism are expressed in a belief updating task typically used to assess optimism bias. Finally, we show how optimism affects action in a modified two-armed bandit task. Our model and the simulations it affords provide a computational basis for understanding how optimism bias may emerge, how it may be expressed in standard tasks used to assess optimism, and how it affects agents’ decision-making and actions; in combination, this provides a basis for future research on optimism as a mental health intervention.
    Found 3 weeks, 1 day ago on Jakob Hohwy's site
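The asymmetry probed by standard belief updating tasks can be illustrated with a toy sketch. This is not the authors' active-inference model, just a minimal stand-in in which good news moves beliefs more than bad news (rates and values are assumptions for illustration):

```python
# Toy optimism bias: the agent updates more strongly toward positive
# outcomes than toward negative ones.
def asymmetric_update(belief, outcome_good, rate_good=0.5, rate_bad=0.1):
    """Move belief toward 1 on good outcomes faster than toward 0 on bad ones."""
    target = 1.0 if outcome_good else 0.0
    rate = rate_good if outcome_good else rate_bad
    return belief + rate * (target - belief)

b = 0.5                              # neutral starting belief
b = asymmetric_update(b, True)       # good news: large update (0.75)
b = asymmetric_update(b, False)      # bad news: small update (0.675)
# After one good and one bad outcome, belief remains above the 0.5 start:
# the optimistic asymmetry persists despite balanced evidence.
```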
  25.
    Picking up where I left off in a 2023 post, I will (finally!) return to Gardiner and Zaharos’s discussion of sensitivity in epistemology and its connection to my notion of severity. But before turning to Parts II (and III), I’d better reblog Part I. …
    Found 3 weeks, 2 days ago on D. G. Mayo's blog
  26.
    In our paper, “The reference of proper names” (2018), we raised and rebutted the “New-Meaning” objection to our methodology. Our rebuttal rested on theoretical considerations and experimental results. In “Do the Gödel vignettes involve a new descriptivist meaning?”, Nicolò D’Agruma provides an interesting argument against our theoretical considerations (but does not address the experimental evidence). Our present paper argues against D’Agruma. So, our original rebuttal of the objection still stands. We offer further evidence against the objection.
    Found 3 weeks, 2 days ago on Michael Devitt's site
  27.
    Philosophers like to tell stories about knowledge that crackle with drama. We have wizards (who deceive), adventure (kidnapping neuroscientists), and surprise endings (‘and it turns out that Brown was in Barcelona!’). A cynic might wonder whether all the whiz-bang is cover for weak material. Hilary Kornblith puts these cynical doubts to rest in this slim, elegant book about knowledge. Kornblith spins a yarn that is accessible enough for a general reader and theoretically compelling enough for epistemologists and philosophers of science. Kornblith’s story is distinctive because he takes knowledge to be a scientific category—it’s a concept that manages to do a lot of explanatory work without all the whiz-bang.
    Found 3 weeks, 4 days ago on PhilSci Archive
  28.
    1. Bias defined. A bad thermometer is one that often gets the temperature wrong. In one case, the errors are random: sometimes too high (say, it reads 74 when the temp is 72), sometimes too low. In another, the errors are systematic—it tends, say, to read too high. …
    Found 3 weeks, 4 days ago on Mostly Aesthetics
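The thermometer distinction can be simulated in a few lines: random error averages out over repeated readings, while systematic error (bias) does not. The magnitudes are illustrative:

```python
# Random vs systematic error in a thermometer, per the excerpt's example.
import random

random.seed(0)
true_temp = 72.0

# Random errors: sometimes high, sometimes low, mean error ~0.
noisy = [true_temp + random.gauss(0, 2) for _ in range(10_000)]
# Systematic error: the instrument tends to read too high (+2 bias).
biased = [true_temp + 2.0 + random.gauss(0, 2) for _ in range(10_000)]

mean_noisy = sum(noisy) / len(noisy)      # close to 72: noise cancels
mean_biased = sum(biased) / len(biased)   # close to 74: bias survives averaging
```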
  29.
    Why can’t the world do the right and obvious thing about this huge and urgent problem? The science is clear and not really in question. The core mechanism of how certain molecules create a greenhouse warming effect on the earth is well-understood (and has been known for over a century). …
    Found 3 weeks, 5 days ago on The Philosopher's Beard
  30.
    The first person who, having enclosed a plot of land, took it into his head to say “this is mine,” and found people simple enough to believe him, was the true founder of civil society. —Rousseau How does one come to acquire property, that is, rightful ownership in something that was previously unowned and, by so doing, exclude all others from its rightful use? The distinction between mine and thine also creates the distinction between use and theft and, as Rousseau noted, is the true source of human inequality (1755, 69). One prominent answer to this question is that one can rightfully acquire ownership of something that was previously unowned by improving it through one’s labor. One can come to own an unowned plot of land, for instance, by farming or building on the land. The classic philosophical source for this view is Locke’s 2nd Treatise on Government. There, Locke argues that since we own ourselves and our labor, once we “mix” our labor with a thing, we make it our own.
    Found 3 weeks, 5 days ago on John Thrasher's site