1.
    Recent research has identified a tension between the Safety principle that knowledge is belief without risk of error, and the Closure principle that knowledge is preserved by competent deduction. Timothy Williamson reconciles Safety and Closure by proposing that when an agent deduces a conclusion from some premises, the agent’s method for believing the conclusion includes their method for believing each premise. We argue that this theory is untenable because it implies problematically easy epistemic access to one’s methods. Several possible solutions are explored and rejected.
    Found 19 hours, 8 minutes ago on PhilPapers
  2.
    Self-locating beliefs are beliefs about one’s position or situation in the world, as opposed to beliefs about how the world is in itself. Section 1 of this entry introduces self-locating beliefs. Section 2 presents several distinct arguments that self-locating beliefs constitute a theoretically distinctive category. These arguments are driven by central examples from the literature; we categorize the examples by the arguments to which they contribute. (Some examples serve multiple strands of argument at once.) Section 3 examines positive proposals for modeling self-locating belief, focusing on the two most prominent proposals, due to Lewis and Perry.
    Found 22 hours, 33 minutes ago on Stanford Encyclopedia of Philosophy
  3.
    “The subjective Bayesian answer to the problem of induction” (posted Wednesday, 28 Sep 2022). Some people – important people, like Richard Jeffrey or Brian Skyrms – seem to believe that Laplace and de Finetti have solved the problem of induction, assuming nothing more than probabilism. …
    Found 2 days, 21 hours ago on wo's weblog
  4.
    How can we make moral progress on factory farming? Part of the answer lies in human moral psychology. Meat consumption remains high, despite increased awareness of its negative impact on animal welfare. Weakness of will is part of the explanation: acceptance of the ethical arguments does not always motivate changes in dietary habits. However, we draw on scientific evidence to argue that many consumers are not fully convinced that they morally ought to reduce their meat consumption. We then identify two key psychological mechanisms—motivated reasoning and social proof—that lead people to resist the ethical reasons. Finally, we show how to harness these psychological mechanisms to encourage reductions in meat consumption. A central lesson for moral progress generally is that durable social change requires socially embedded reasoning.
    Found 2 days, 21 hours ago on Victor Kumar's site
  5.
    According to ‘Excluders’, descriptive uncertainty—but not normative uncertainty—matters to what we ought to do. Recently, several authors have argued that those wishing to treat normative uncertainty differently from descriptive uncertainty face a dependence problem because one’s descriptive uncertainty can depend on one’s normative uncertainty. The aim of this paper is to determine whether the phenomenon of dependence poses a decisive problem for Excluders. I argue that existing arguments fail to show this, and that, while stronger ones can be found, Excluders can escape them.
    Found 2 days, 21 hours ago on PhilPapers
  6.
    Social environments often impose tradeoffs between pursuing personal goals and maintaining a favorable reputation. We studied how individuals navigate these tradeoffs using Reinforcement Learning (RL), paying particular attention to the role of social value orientation (SVO). We had human participants play an iterated Trust Game against various software opponents and analyzed their behavior. We then incorporated RL into two cognitive models, trained these RL agents against the same software opponents, and performed similar analyses. Our results show that the RL agents reproduce many interesting features in the human data, such as the dynamics of convergence during learning and the tendency to defect once reciprocation becomes impossible. We also endowed some of our agents with SVO by incorporating terms for altruism and inequality aversion into their reward functions. These prosocial agents differed from proself agents in ways that resembled the differences between prosocial and proself participants. This suggests that RL is a useful framework for understanding how people use feedback to make social decisions.
    Found 2 days, 21 hours ago on Chris Eliasmith's site
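    A minimal sketch of how altruism and inequality-aversion terms might enter an agent's reward function, in the spirit of the prosocial agents described above; the payoff split and weights below are illustrative assumptions, not the paper's actual model:

      # Illustrative SVO-augmented reward for a single Trust Game outcome.
      # The weights and payoffs are assumptions chosen for illustration.
      def svo_reward(own_payoff, other_payoff,
                     altruism=0.3, inequality_aversion=0.2):
          """Own payoff, plus weighted concern for the other's payoff,
          minus a weighted penalty for unequal outcomes."""
          return (own_payoff
                  + altruism * other_payoff
                  - inequality_aversion * abs(own_payoff - other_payoff))

      # A purely "proself" agent sets both social weights to zero.
      print(svo_reward(10, 2))   # 9.0: large own payoff, discounted for inequality
      print(svo_reward(6, 6))    # 7.8: equal split incurs no penalty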
  7.
    An action is unratifiable when, on the assumption that one performs it, another option has higher expected utility. Unratifiable actions are often claimed to be somehow rationally defective. But in some cases where multiple options are unratifiable, one unratifiable option can still seem preferable to another. We should respond, I argue, by invoking a graded notion of ratifiability.
    Found 3 days, 9 hours ago on David James Barnett's site
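    A toy illustration of the definition above, with invented numbers in the style of Death-in-Damascus cases, where each of two options is unratifiable because assuming it is performed makes the other look better:

      # An act A is unratifiable iff, on the assumption that A is performed,
      # some alternative B has higher expected utility. Numbers are invented.
      utilities = {"stay": {"predicted_stay": 0, "predicted_go": 10},
                   "go":   {"predicted_stay": 10, "predicted_go": 0}}
      # P(state | act performed): the predictor is assumed highly reliable.
      prob_given_act = {"stay": {"predicted_stay": 0.9, "predicted_go": 0.1},
                        "go":   {"predicted_stay": 0.1, "predicted_go": 0.9}}

      def eu(act, assuming):
          """Expected utility of `act`, with state probabilities conditioned
          on the assumption that `assuming` is performed."""
          return sum(p * utilities[act][state]
                     for state, p in prob_given_act[assuming].items())

      for act in utilities:
          rivals = [b for b in utilities if eu(b, act) > eu(act, act)]
          print(act, "unratifiable" if rivals else "ratifiable")
      # Both options print "unratifiable", which is the kind of case where a
      # graded notion of ratifiability could still rank them.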
  8.
    Self-verifying judgments like I exist seem rational, and self-defeating ones like It will rain, but I don’t believe it will rain seem irrational. But one’s evidence might support a self-defeating judgment, and fail to support a self-verifying one. This paper explains how it can be rational to defy one’s evidence if judgment is construed as a mental performance or act, akin to inner assertion. The explanation comes at significant cost, however. Instead of causing or constituting beliefs, judgments turn out to be mere epiphenomena, and self-verification and self-defeat lack the broader philosophical import often claimed for them.
    Found 3 days, 9 hours ago on David James Barnett's site
  9.
    How could the initial, drastic decisions to implement “lockdowns” to control the spread of Covid-19 infections be justifiable, when they were made on the basis of such uncertain evidence? We defend the imposition of lockdowns in some countries by, first, looking at the evidence that undergirded the decision (focusing particularly on the decision-making process in the United Kingdom); second, arguing that this provided sufficient grounds to restrict liberty, given the circumstances; and third, defending the use of poorly empirically constrained epidemiological models as tools that can legitimately guide public policy.
    Found 4 days, 20 hours ago on PhilSci Archive
  10.
    Supererogatory acts are good deeds beyond the call of duty, ranging from friendly favors to saintly sacrifices to risky rescues. As any reader of this handbook will have noticed, philosophers disagree deeply about what supererogation is, and whether it is even possible. To some extent, this is a verbal dispute. “Supererogation” is not ordinary language, like “good” or “wrong.” It is a “quasi-technical term” (Heyd 1982), whose meaning is somewhat up for grabs. If you say that supererogation must spring from a noble motive, whereas we say that it only needs to be a good thing to do, there’s no point in shouting at each other about the true essence of supererogation. We are better off admitting that we just prefer different definitions.
    Found 5 days, 10 hours ago on Daniel Muñoz's site
  11.
    According to Mercier and Sperber (2009, 2011, 2017), people have an immediate and intuitive feeling about the strength of an argument. These intuitive evaluations are not captured by current evaluation methods of argument strength, yet they could be important to predict the extent to which people accept the claim supported by the argument. In an exploratory study, therefore, a newly developed intuitive evaluation method to assess argument strength was compared to an explicit argument strength evaluation method (the PAS scale; Zhao et al., 2011), on their ability to predict claim acceptance (predictive validity) and on their sensitivity to differences in the manipulated quality of arguments (construct validity). An experimental study showed that the explicit argument strength evaluation performed well on the two validity measures. The intuitive evaluation measure, on the other hand, was not found to be valid. Suggestions for other ways of constructing and testing intuitive evaluation measures are presented.
    Found 6 days, 10 hours ago on Jos Hornikx's site
  12.
    The effects of repression on dissent are debated widely. We contribute to the debate by developing an agent-based model grounded in ethnographic interviews with dissidents. Building on new psychology research, the model integrates emotions as a dynamic context of dissent. The model moreover differentiates between four repression types: violence, street blockages, curfews and Facebook cuts. The simulations identify short-term dampening effects of each repression type, with a maximum effect related to non-violent forms of repression. The simulations also show long-term spurring effects, which are most strongly associated with state violence. In addition, the simulations identify nonlinear short-term spurring effects of state violence on early stage dissent. Such effects are not observed for the remaining repressive measures. Contrasting with arguments that violence deters dissent, this suggests that violence may fuel dissent, while non-violent repression might suppress it.
    Found 6 days, 10 hours ago on Bruce Edmonds's site
  13.
    When it comes to determining whether a firm has violated antitrust law, economists are often called upon as expert witnesses by the parties involved in litigation. This paper focuses on a challenge that economists may face when appearing as expert witnesses in US federal courts, namely to comply with the so-called Daubert standard of admissibility of expert testimony. I propose a new framework for analysing the interplay between model applicability and admissibility standard in courtrooms. The framework distinguishes between weak applicability claims, stating that a model’s critical assumptions are shared by the target, and strong applicability claims, connecting empirical models and specific market features. I use this distinction to examine a recent antitrust case in which expert testimony based on economic models was assessed under the Daubert standard.
    Found 1 week, 1 day ago on PhilSci Archive
  14.
    A de minimis risk is defined as a risk that is so small that it may be legitimately ignored when making a decision. While ignoring small risks is common in our day-to-day decision making, attempts to introduce the notion of a de minimis risk into the framework of decision theory have run up against a series of well-known difficulties. In this paper, I will develop an enriched decision theoretic framework that is capable of overcoming two major obstacles to the modelling of de minimis risk. The key move is to introduce, into decision theory, a non-probabilistic conception of risk known as normic risk.
    Found 1 week, 2 days ago on PhilPapers
  15.
    We argue that the epistemic functions of replication in science are best understood by their role in assessing kinds of experimental error. Direct replications serve to assess the reliability of an experiment through its precision: the presence and degree of random error. Conceptual replications serve to assess the validity of an experiment through its accuracy: the presence and degree of systematic errors. To illustrate the aptness of this view, we examine the Hubble constant controversy in astronomy, showing how astronomers have responded to the concordances and discordances in their results by carrying out the different kinds of replication that we identify, with the aim of establishing a precise, accurate value for the Hubble constant. We contrast our view with Machery’s “re-sampling” account of replicability, which maintains that replications only assess reliability.
    Found 1 week, 3 days ago on PhilSci Archive
  16.
    Many virtue epistemologists conceive of epistemic competence on the model of skill—such as archery, playing baseball or chess. In this paper, I argue that this is a mistake: epistemic competences and skills are crucially and relevantly different kinds of capacities. This, I suggest, undermines the popular attempt to understand epistemic normativity as a mere special case of the sort of normativity familiar from skilful action. In fact, as I argue further, epistemic competences resemble virtues, rather than skills—a claim that is based on an important, but often overlooked, difference between virtue and skill. The upshot is that virtue epistemology should indeed be based on virtue, not on skill.
    Found 1 week, 3 days ago on PhilPapers
  17.
    This paper identifies two distinct dimensions of what might be called testimonial strength: first, in the case of testimony from more than one speaker, testimony can be said to be stronger to the extent that a greater proportion of the speakers give identical testimony; second, in both single-speaker and multi-speaker testimony, testimony can be said to be stronger to the extent that each speaker expresses greater conviction in the relevant proposition. These two notions of testimonial strength have received scant attention in the philosophical literature so far, presumably because it has been thought that whatever lessons we learn from thinking about testimony as a binary phenomenon will apply mutatis mutandis to varying strengths of testimony. This paper shows that this will not work for either of the two aforementioned dimensions of testimonial strength, roughly because less testimony can provide more justification in a way that can only be explained by appealing to the (non-binary) strength of the testimony itself. The paper also argues that this result undermines some influential versions of non-reductionism about testimonial justification.
    Found 1 week, 4 days ago on Finnur Dellsén's site
  18.
    Concerns about a crisis of mass irreplicability across scientific fields (“the replication crisis”) have stimulated a movement for open science, encouraging or even requiring researchers to publish their raw data and analysis code. Recently, a proposed rule at the US Environmental Protection Agency (US EPA) would have imposed a strong open data requirement. The rule prompted significant public discussion about whether open science practices are appropriate for fields of environmental public health. The aims of this paper are to assess (1) whether the replication crisis extends to fields of environmental public health; and (2) in general whether open science requirements can address the replication crisis. There is little empirical evidence for or against mass irreplicability in environmental public health specifically.
    Found 1 week, 5 days ago on PhilSci Archive
  19.
    Since at least the mid-2000s, political commentators, environmental advocates, and scientists have raised concerns about an “anti-science” approach to environmental policymaking in conservative governments in the US and Canada. This paper explores and resolves a paradox surrounding at least some uses of the “anti-science” epithet. I examine two cases of such “anti-science” environmental policy, both of which involve appeals to epistemic values that are widely endorsed by both scientists and philosophers of science. It seems paradoxical to call an appeal to epistemic values “anti-science.” I develop an analysis that, I argue, can resolve this paradox. This analysis is a version of the “aims approach” to science and values, drawing on ideas from axiology and virtue ethics. I characterize the paradox in terms of conflicts or tensions between epistemic and pragmatic aims, and argue that there is a key asymmetry between them: epistemic aims are valuable, in part, because they are useful for pursuing pragmatic aims. Thus, when epistemic and pragmatic aims conflict, epistemic aims need to be reconceptualized in order to reconcile them to pragmatic aims. When this is done, in the “anti-science” cases, the epistemic values are scientific vices rather than virtues. Thus the “anti-science” epithet is apt.
    Found 1 week, 5 days ago on PhilSci Archive
  20.
    A central dispute in discussions of self-locating attitudes is whether attitude relations like believing and knowing are relations between an agent and properties (things that vary in truth value across individuals) or between an agent and propositions (things that do not so vary). Proponents of the proposition view have argued that the property view is unable to give an adequate account of relations like communication and agreement. We agree and in this paper we show that the problems facing the property view are much more serious than has been appreciated. We then develop and explore two versions of the proposition view. In each case, we show how facts about the self-ascription of properties may be reduced to facts about propositional belief in conjunction with certain other facts.
    Found 1 week, 5 days ago on Dilip Ninan's site
  21.
    Perceptual Confidence is the view that our conscious perceptual experiences assign degrees of confidence. In previous papers, I motivated it using first-personal evidence (Morrison 2016) and Jessie Munton motivated it using normative evidence (Munton 2016). In this paper, I will consider the extent to which it is motivated by third-personal evidence. I will argue that the current evidence is supportive but not decisive. I will also describe experiments that might provide more decisive evidence.
    Found 2 weeks ago on John Morrison's site
  22.
    John Broome and Duncan Foley’s paper discusses several important and interesting questions regarding how we can handle the climate crisis. It is also innovative on the institutional level with its proposal of a World Climate Bank. This is indeed valuable; we need much more creative institutional thinking about the challenge of the climate crisis. All too much thinking has been focused on individual behaviour instead of collective solutions and institutional change.
    Found 2 weeks ago on Gustaf Arrhenius's site
  23.
    Marton (2019) argues that it follows from the standard antirealist theory of truth, which states that truth and possible knowledge are equivalent, that knowing possibilities is equivalent to the possibility of knowing, whereas these notions should be distinct. Moreover, he argues that the usual strategies of dealing with the Church-Fitch paradox of knowability are either not able to deal with his modal-epistemic collapse result or they only do so at a high price. Against this, I argue that Marton’s paper does not present any seriously novel challenge to anti-realism not already found in the Church-Fitch result. Furthermore, Edgington’s (1985) reformulated antirealist theory of truth can deal with his modal-epistemic collapse argument at no cost.
    Found 2 weeks ago on PhilPapers
  24.
    Scoring rules measure the accuracy or epistemic utility of a credence assignment. A significant literature uses plausible conditions on scoring rules on finite sample spaces to argue both for probabilism—the doctrine that credences ought to satisfy the axioms of probability—and for the optimality of Bayesian update as a response to evidence. I prove a number of formal results regarding scoring rules on infinite sample spaces that impact the extension of these arguments to infinite sample spaces. A common condition in the arguments for probabilism and Bayesian update is strict propriety: that according to each probabilistic credence, the expected accuracy of any other credence is worse. Much of the discussion needs to divide depending on whether we require finite or countable additivity of our probabilities. I show that in a number of natural infinite finitely additive cases, there simply do not exist strictly proper scoring rules, and the prospects for arguments for probabilism and Bayesian update are limited. In many natural infinite countably additive cases, on the other hand, there do exist strictly proper scoring rules that are continuous on the probabilities, and which support arguments for Bayesian update, but which do not support arguments for probabilism. There may be more hope for accuracy-based arguments if we drop the assumption that scores are extended-real-valued. I sketch a framework for scoring rules whose values are nets of extended reals, and show the existence of strictly proper net-valued scoring rules in all infinite cases, both for finitely and countably additive probabilities. These can be used in an argument for Bayesian update, but it is not at present known what is to be said about probabilism in this case.
    Found 2 weeks ago on PhilSci Archive
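    For reference, the strict propriety condition at issue can be stated as follows (a standard formulation, given here for a finite sample space and accuracy-style scores on which higher is better):

      \[
        \sum_{\omega \in \Omega} p(\omega)\, s(p, \omega)
        \;>\;
        \sum_{\omega \in \Omega} p(\omega)\, s(q, \omega)
        \qquad \text{for all credences } q \neq p,
      \]
      where $s(c, \omega)$ is the accuracy score of credence $c$ at world $\omega$.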
  25.
    Antirealists who hold the knowability thesis, namely that all truths are knowable, have been put on the defensive by the Church-Fitch paradox of knowability. Rejecting the non-factivity of the concept of knowability used in that paradox, Edgington has adopted a factive notion of knowability, according to which only actual truths are knowable. She has used this new notion to reformulate the knowability thesis. The result has been argued to be immune against the Church-Fitch paradox, but it has encountered several other triviality objections. Schlöder in a forthcoming paper defends the general approach taken by Edgington, but amends it to save it in turn from the triviality objections. In this paper I will argue, first, that Schlöder’s justification for the factivity of his version of the concept of knowability is vulnerable to criticism, but I will also offer an improved justification that is in the same spirit as his. To the extent that some philosophers are right about our intuitive concept of knowability being a factive one, it is important to explore factive concepts of knowability that are made formally precise. I will subsequently argue that Schlöder’s version of the knowability thesis overgenerates knowledge or, in other words, it leads to attributions of knowledge where there is ignorance. This fits a general pattern for the research programme initiated by Edgington. This paper also contains preliminary investigations into the internal and logical structure of lines of inquiry, which raise interesting research questions.
    Found 2 weeks ago on PhilPapers
  26.
    Absolute and relative outcome measures measure a treatment’s effect size, purporting to inform treatment choices. I argue that absolute measures are at least as good as, if not better than, relative ones for informing rational decisions across choice scenarios. Specifically, this dominance of absolute measures holds for choices between a treatment and a control group treatment from a trial and for ones between treatments tested in different trials. This distinction has hitherto been neglected, as has the role of absolute and baseline risks in decision-making, which my analysis reveals. Recognizing both aspects advances the discussion on reporting outcome measures.
    Found 2 weeks, 2 days ago on PhilSci Archive
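    A worked toy example (all numbers invented) of how the absolute and relative measures discussed above summarize the same trial differently:

      # One trial, two summaries; the numbers are invented for illustration.
      control_risk = 0.02     # baseline risk in the control group
      treatment_risk = 0.01   # risk in the treatment group

      relative_risk = treatment_risk / control_risk            # 0.5: "halves the risk"
      absolute_risk_reduction = control_risk - treatment_risk  # 0.01: one percentage point
      number_needed_to_treat = 1 / absolute_risk_reduction     # 100 patients per event avoided

      # The relative figure is independent of the baseline; the absolute
      # figures reflect the baseline risk a decision-maker actually faces.
      print(relative_risk, absolute_risk_reduction, number_needed_to_treat)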
  27.
    According to Positive Egalitarianism, not only do relations of inequality have negative value, as Negative Egalitarians claim, but relations of equality also have positive value. The egalitarian value of a population is a function of both pairwise relations of inequality (negative) and pairwise relations of equality (positive). Positive and Negative Egalitarianism diverge, especially in different number cases. Hence, an investigation of Positive Egalitarianism might shed new light on the vexed topic of population ethics and our duties to future generations. We shall here, in light of some recent criticism, further develop the idea of giving positive value to equal relations.
    Found 2 weeks, 2 days ago on Gustaf Arrhenius's site
  28.
    Schlenker 2009, 2010a,b provides an algorithm for deriving the presupposition projection properties of an expression from that expression’s classical semantics. In this paper, we consider the predictions of Schlenker’s algorithm as applied to attitude verbs. More specifically, we compare Schlenker’s theory with a prominent view which maintains that attitudes exhibit belief projection, so that presupposition triggers in their scope imply that the attitude holder believes the presupposition (Karttunen, 1974; Heim, 1992; Sudo, 2014): for instance, “Ann believes that Bob stopped smoking” implies that Ann believes Bob used to smoke. We show that Schlenker’s theory does not predict belief projection, and discuss several consequences of this result.
    Found 2 weeks, 2 days ago on Simon D. Goldstein's site
  29.
    Can we be manipulated by technology? Science fiction suggests that the answer is yes. In the 2014 movie Ex Machina, software engineer Caleb falls prey to the empathic android Ava’s sly charm. She has a subtle grasp of Caleb’s needs and desires and feigns romantic feelings for the engineer. However, as it turns out, she merely uses him as a means to flee from her creator’s enclosure. Caleb falls in love with her and helps her escape, and Ava leaves him to die once she is set free.
    Found 2 weeks, 3 days ago on Michael Klenk's site
  30.
    Consider the task of selecting a medical test to determine whether a patient has a particular disease. Normatively, this requires taking into account (a) the prior probability of the disease, (b) the likelihood—for each available test—of obtaining a positive result if the medical condition is present or absent, respectively, and (c) the utilities for both correct and incorrect treatment decisions based upon each possible test result. But these quantities may not be precisely known. Are there strategies that could help identify the test with the highest utility given incomplete information? Here, we consider the Likelihood Difference Heuristic (LDH), a simple heuristic that selects the test with the highest difference between the likelihoods of obtaining a true-positive and a false-positive test result, ignoring all other information. We prove that the LDH is optimal when the probability of the disease equals the therapeutic threshold, the probability for which treating the patient and not treating the patient have the same expected utility. By contrast, prominent models of the value of information from the literature, such as information gain, probability gain, and Bayesian diagnosticity, are not optimal under these circumstances. Further results show how, depending on the relationship of the therapeutic threshold and prior probability of the disease, it is possible to determine which likelihoods are more important for assessing tests’ expected utilities. Finally, to illustrate the potential relevance for real-life contexts, we show how the LDH might be applied to choosing tests for screening of latent tuberculosis infection.
    Found 2 weeks, 4 days ago on Vincenzo Crupi's site
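    A minimal sketch of the Likelihood Difference Heuristic as described above; the two tests and their characteristics are invented for illustration:

      # LDH: choose the test maximizing P(positive | disease present)
      # minus P(positive | disease absent), ignoring priors and utilities.
      tests = {
          "test_A": {"p_pos_given_disease": 0.90, "p_pos_given_healthy": 0.20},
          "test_B": {"p_pos_given_disease": 0.75, "p_pos_given_healthy": 0.02},
      }

      def likelihood_difference(t):
          return t["p_pos_given_disease"] - t["p_pos_given_healthy"]

      best = max(tests, key=lambda name: likelihood_difference(tests[name]))
      print(best)  # test_B: 0.73 beats test_A's 0.70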