1.
    This paper articulates in formal terms a crucial distinction concerning future contingents, the distinction between what is true about the future and what is reasonable to believe about the future. Its key idea is that the branching structures that have been used so far to model truth can be employed to define an epistemic property, credibility, which we take to be closely related to knowledge and assertibility, and which is ultimately reducible to probability. As a result, two kinds of claims about future contingents — one concerning truth, the other concerning credibility — can be smoothly handled within a single semantic framework.
    Found 2 days, 15 hours ago on PhilPapers
  2.
    Philosophers who take rationality to consist in the satisfaction of rational requirements typically favour rational requirements that govern mental attitudes at a time rather than across times. One such account has been developed by Broome in Rationality Through Reasoning. He claims that diachronic functional properties of intentions, such as settling on courses of action and resolving conflicts, are emergent properties that can be explained with reference to synchronic rational pressures. This is why he defends only a minimal diachronic requirement which characterises forgetting as irrational. In this paper, I show that Broome’s diachronically minimalist account lacks the resources to explain how a rational agent may resolve incommensurable choices by an act of will. I argue that one can solve this problem either by specifying a mode of diachronic deliberation or by introducing a genuinely diachronic requirement that governs the rational stability of an intention via a diachronic counterfactual condition concerning rational reconsideration. My proposal is similar in spirit to Gauthier’s account in his seminal paper ‘Assure and threaten’. It improves on his work by being both more general and explanatorily richer in its application to diachronic phenomena such as transformative choices and acts of will.
    Found 3 days, 7 hours ago on PhilPapers
  3.
    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
    Found 4 days, 15 hours ago on PhilPapers
  4.
    This paper explores the principle that knowledge is fragile, in that whenever S knows that S doesn’t know that S knows that p, S thereby fails to know p. Fragility is motivated by the infelicity of dubious assertions, utterances which assert p while acknowledging higher-order ignorance of p. Fragility is interestingly weaker than KK, the principle that if S knows p, then S knows that S knows p. Existing theories of knowledge which deny KK by accepting a Margin for Error principle can be conservatively extended with Fragility.
    Found 5 days, 7 hours ago on PhilPapers
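In epistemic-logic notation, the two principles in the abstract above can be glossed as follows (my formalization, with Kφ read as “S knows that φ”):

```latex
% KK: knowledge iterates
Kp \rightarrow KKp
% Fragility: known higher-order ignorance destroys first-order knowledge
K\lnot KKp \rightarrow \lnot Kp
```

Given factivity, KK entails Fragility (from K¬KKp, factivity gives ¬KKp, and contraposing KK gives ¬Kp), but not conversely, which is the sense in which Fragility is the weaker principle.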
  5.
    We argue that there is a tension between two monistic claims that are the core of recent work in epistemic consequentialism. The first is a form of monism about epistemic value, commonly known as veritism: accuracy is the sole final objective to be promoted in the epistemic domain. The other is a form of monism about a class of epistemic scoring rules: that is, strictly proper scoring rules are the only legitimate measures of inaccuracy. These two monisms, we argue, are in tension with each other. If only accuracy has final epistemic value, then there are legitimate alternatives to strictly proper scoring rules. Our argument relies on the way scoring rules are used in contexts where accuracy is rewarded, such as education.
    Found 5 days, 23 hours ago on PhilPapers
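As background to the second monism: a scoring rule is strictly proper when an agent uniquely minimizes her expected inaccuracy, by her own lights, by reporting her actual credence. A minimal runnable sketch using the Brier score, the textbook strictly proper rule (an illustration of the concept, not an example drawn from the paper):

```python
# Expected Brier inaccuracy of reporting r when your credence in p is c:
# E[score] = c*(1 - r)^2 + (1 - c)*r^2, uniquely minimized at r = c.

def brier(report: float, truth: int) -> float:
    """Inaccuracy of credence `report` when the proposition's truth value is 0/1."""
    return (truth - report) ** 2

def expected_score(report: float, credence: float) -> float:
    """Expected inaccuracy by the agent's own lights."""
    return credence * brier(report, 1) + (1 - credence) * brier(report, 0)

credence = 0.7
reports = [i / 100 for i in range(101)]
best = min(reports, key=lambda r: expected_score(r, credence))
print(best)  # 0.7 -- honest reporting uniquely minimizes expected inaccuracy
```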
  6.
    Angell’s logic of analytic containment AC has been shown to be characterized by a 9-valued matrix NC by Ferguson, and by a 16-valued matrix by Fine. We show that the former is the image of a surjective homomorphism from the latter, i.e., an epimorphic image. The epimorphism was found with the help of MUltlog, which also provides a tableau calculus for NC extended by quantifiers that generalize conjunction and disjunction.
    Found 6 days, 10 hours ago on PhilPapers
  7.
    According to lexical views in population axiology, there are good lives x and y such that some number of lives equally good as x is not worse than any number of lives equally good as y. Such views can avoid the Repugnant Conclusion without violating Transitivity or Separability, but they imply a dilemma: either some good life is better than any number of slightly worse lives, or else the ‘at least as good as’ relation on populations is radically incomplete, in a sense to be explained. One might judge that the Repugnant Conclusion is preferable to each of these horns and hence embrace an Archimedean view. This is, roughly, the claim that quantity can always substitute for quality: each population is worse than a population of enough good lives. However, Archimedean views face an analogous dilemma: either some good life is better than any number of slightly worse lives, or else the ‘at least as good as’ relation on populations is radically and symmetrically incomplete, in a sense to be explained. Therefore, the lexical dilemma gives us little reason to prefer Archimedean views. Even if we give up on lexicality, problems of the same kind remain.
    Found 6 days, 10 hours ago on PhilPapers
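Writing m·x for a population of m lives each at welfare level x, the two views in the abstract can be stated compactly (my notation, not the paper's; ≺ is “is worse than”):

```latex
% Lexicality: for some good levels x and y, some number of x-lives
% is not worse than any number of y-lives
\exists x, y \;\exists m \;\forall n:\; \lnot\,(m \cdot x \prec n \cdot y)
% Archimedean view: for any population X, enough lives at a good level g
% make a better population
\forall X \;\exists n:\; X \prec n \cdot g
```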
  8.
    As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing theoretical grounding for a concept that has been termed ‘Ethics as a Service’.
    Found 6 days, 10 hours ago on PhilPapers
  9.
    Nicod Criterion (NC): A claim of form “All Fs are Gs” is confirmed by any sentence of the form “i is F and i is G”, where “i” is a name of some particular object. Equivalence Condition (EC): Whatever confirms (disconfirms) one of two equivalent sentences also confirms (disconfirms) the other.
    Found 6 days, 10 hours ago on PhilSci Archive
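In first-order notation (my gloss), together with the familiar observation that NC and EC jointly generate Hempel’s raven paradox:

```latex
% NC: a positive instance confirms the generalization
Fi \land Gi \ \text{ confirms } \ \forall x\,(Fx \to Gx)
% EC: confirmation transfers across logical equivalents
\text{if } H \equiv H', \text{ then } E \text{ confirms } H \text{ iff } E \text{ confirms } H'
% Combined: since \forall x\,(Fx \to Gx) \equiv \forall x\,(\lnot Gx \to \lnot Fx),
% NC applied to the contrapositive makes "i is non-G and non-F" confirm
% "All Fs are Gs" -- so a white shoe confirms "All ravens are black".
```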
  10.
    In On the Plurality of Worlds (Lewis 1986), David Lewis imposes a condition on realist theories of modality which he calls ‘plenitude’. Lewis apparently assigns this condition considerable importance, and uses it to motivate his Humean principle of recombination, but he never says exactly what plenitude amounts to. This chapter first sets aside some obvious ways of reconstructing the plenitude criterion which do not fit with the textual evidence. An objection to modal realism due to John Divers and Joseph Melia (Divers and Melia 2002) is diagnosed as equivocating between an overly-demanding plenitude constraint and a weaker constraint which fails to establish their conclusion. An alternative deflationary interpretation of the plenitude condition has it following from an application of standard theoretical virtues to a modal realist’s total theory; Lewis’ correspondence provides new evidence in support of this interpretation. The deflationary plenitude criterion also has broader application, beyond Lewisian modal realism.
    Found 6 days, 16 hours ago on Alastair Wilson's site
  11.
    What are the necessary and sufficient conditions under which a set of material objects S composes something? In other words: what is the criterion ψ, i.e. a condition that is both sufficient and necessary, such that ψ(S) holds iff the objects in set S compose (Comp) an object x: ψ(S) ↔ ∃x Comp(S, x)?
    Found 1 week ago on PhilSci Archive
  12.
    Much of the literature on the relationship between belief and credence has focused on the reduction question: that is, whether either belief or credence reduces to the other. This debate, while important, only scratches the surface of the belief-credence connection. Even on the anti-reductive dualist view, belief and credence could still be very tightly connected. Here, I explore questions about the belief-credence connection that go beyond reduction. This paper is dedicated to what I call the independence question: just how independent are belief and credence? I look at this question from two angles: a descriptive one (as a psychological matter, how much can belief and credence come apart?) and a normative one (for a rational agent, how closely connected are belief and credence?). Ultimately, I suggest that the two attitudes are more independent than one might think.
    Found 1 week ago on PhilPapers
  13.
    We call attention to certain cases of epistemic akrasia, arguing that they support belief-credence dualism. Belief-credence dualism is the view that belief and credence are irreducible, equally fundamental attitudes. Consider the case of an agent who believes p, has low credence in p, and thus believes that they shouldn’t believe p. We argue that dualists, as opposed to belief-firsters (who say credence reduces to belief) and credence-firsters (who say belief reduces to credence) can best explain features of akratic cases, including the observation that akratic beliefs seem to be held despite possessing a defeater for those beliefs, and that, in akratic cases, one can simultaneously believe and have low confidence in the very same proposition.
    Found 1 week, 1 day ago on PhilPapers
  14.
    Have we entered a “post-truth” era? The present paper attempts to answer this question by (a) offering an explication of the notion of “post-truth” from recent discussions; (b) deriving a testable implication from that explication, to the effect that we should expect to see decreasing information effects—i.e., differences between actual preferences and estimated, fully informed preferences—on central political issues over time; and then (c) putting the relevant narrative to the test by way of counterfactual modelling, using election year data for the period 2004–2016 from the American National Election Studies’ (ANES) Time Series Study. The implication in question turns out to be consistent with the data: at least in a US context, we do see evidence of a decrease in information effects on key political issues—immigration, same-sex adoption, and gun laws, in particular—in the period 2004 to 2016. This offers some novel empirical evidence for the “post-truth” narrative.
    Found 1 week, 1 day ago on Kristoffer Ahlstrom-Vij's site
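A heavily hedged sketch of the counterfactual-modelling idea: estimate how preferences depend on political information, then simulate everyone at full information and compare. Everything below is synthetic and illustrative; the variable names and model are stand-ins, not the paper's ANES specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
info = rng.uniform(0, 1, n)              # political information score
educ = rng.normal(0, 1, n)               # one background covariate
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * info + 0.3 * educ)))
support = rng.binomial(1, p)             # stated preference on some issue

model = LogisticRegression().fit(np.column_stack([info, educ]), support)

actual = support.mean()
fully_informed = model.predict_proba(
    np.column_stack([np.ones(n), educ]))[:, 1].mean()   # set info to its max
print(round(abs(fully_informed - actual), 3))  # the "information effect"
```

On the paper's explication, a post-truth era should show this gap shrinking across election years; the abstract reports exactly that pattern for the issues studied.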
  15.
    Legal probabilism is a research program that relies on probability theory to analyze, model and improve the evaluation of evidence and the process of decision-making in trial proceedings. While the expression “legal probabilism” seems to have been coined by Haack (2014b), the underlying idea can be traced back to the early days of probability theory (see, for example, Bernoulli 1713). Another term that is sometimes encountered in the literature is “trial by mathematics” coined by Tribe (1971). Legal probabilism remains a minority view among legal scholars, but attained greater popularity in the second half of the twentieth century in conjunction with the law and economics movement (Becker 1968; Calabresi 1961; Posner 1973).
    Found 1 week, 1 day ago on Wes Morriston's site
  16.
    Some find it plausible that a sufficiently long duration of torture is worse than any duration of mild headaches. Similarly, it has been claimed that a million humans living great lives is better than any number of worm-like creatures feeling a few seconds of pleasure each. Some have related bad things to good things along the same lines. For example, one may hold that a future in which a sufficient number of beings experience a lifetime of torture is bad, regardless of what else that future contains, while minor bad things, such as slight unpleasantness, can always be counterbalanced by enough good things. Among the most common objections to such ideas are sequence arguments. But sequence arguments are usually formulated in classical logic. One might therefore wonder if they work if we instead adopt many-valued logic. I show that, in a common many-valued logical framework, the answer depends on which versions of transitivity are used as premises. We get valid sequence arguments if we grant any of several strong forms of transitivity of ‘is at least as bad as’ and a notion of completeness. Other, weaker forms of transitivity lead to invalid sequence arguments. The plausibility of the premises is largely set aside here, but I tentatively note that almost all of the forms of transitivity that result in valid sequence arguments seem intuitively problematic. Still, a few moderately strong forms of transitivity that might be acceptable result in valid sequence arguments, although weaker statements of the initial value claims avoid these arguments at least to some extent.
    Found 1 week, 2 days ago on PhilPapers
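For orientation, the standard classical shape of such a sequence argument against the first value claim (my sketch; o₁ is a long torture and each oᵢ₊₁ is slightly milder but much longer, ending in mild headaches):

```latex
% each step: the milder-but-longer outcome is judged at least as bad
o_{i+1} \succsim o_i \quad (i = 1, \dots, n-1)
% transitivity of "is at least as bad as" chains the steps together:
o_n \succsim o_1
% so some duration of mild headaches is at least as bad as the torture,
% contradicting the lexical claim
```

The paper's question is which many-valued analogues of the transitivity premise keep this inference valid.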
  17.
    On the basis of a wide range of historical examples, various features of axioms are discussed in relation to their use in mathematical practice. A very general framework for this discussion is provided, and it is argued that axioms can play many roles in mathematics and that viewing them as self-evident truths does not do justice to the ways in which mathematicians employ axioms. Possible origins of axioms and criteria for choosing axioms are also examined. The distinctions introduced aim at clarifying discussions in philosophy of mathematics and contributing towards a more refined view of mathematical practice.
    Found 1 week, 2 days ago on Dirk Schlimm's site
  18.
    The design of good notation is a cause that was dear to Charles Babbage’s heart throughout his career. He was convinced of the “immense power of signs” (1864, 364), both to rigorously express complex ideas and to facilitate the discovery of new ones. As a young man, he promoted the Leibnizian notation for the calculus in England, and later he developed a Mechanical Notation for designing his computational engines. In addition, he reflected on the principles that underlie the design of good mathematical notations. In this paper, we discuss these reflections, which can be found somewhat scattered in Babbage’s writings, for the first time in a systematic way. Babbage’s desiderata for mathematical notations are presented as ten guidelines pertinent to notational design and its application to both individual symbols and complex expressions. To illustrate the applicability of these guidelines in non-mathematical domains, some aspects of his Mechanical Notation are also discussed.
    Found 1 week, 2 days ago on Dirk Schlimm's site
  19.
    Mathematical pluralism can take one of three forms: (1) every consistent mathematical theory is about its own domain of individuals and relations; (2) every mathematical theory, consistent or inconsistent, is about its own (possibly uninteresting) domain of individuals and relations; and (3) many of the principal philosophies of mathematics are based upon some insight or truth about the nature of mathematics that can be preserved. (1) includes the multiverse approach to set theory. (2) helps us to understand the significance of the distinguished non-logical individual and relation terms of even inconsistent theories. (3) is a metaphilosophical form of mathematical pluralism and hasn’t been discussed in the literature. In what follows, I show how the analysis of theoretical mathematics in object theory exhibits all three forms of mathematical pluralism.
    Found 1 week, 4 days ago on Ed Zalta's site
  20.
    Many classic moral paradoxes involve conditional obligations, such as the obligation to be gentle if one is to murder. Many others involve supererogatory acts, or “good deeds beyond the call of duty.” Less attention, however, has been paid to the intersection of these topics. We develop the first general account of conditional supererogation. It has the power to solve both some familiar puzzles as well as several that we introduce. Moreover, our account builds on two familiar insights: the idea that conditionals restrict quantification and the idea that supererogation emerges from a clash between justifying and requiring reasons.
    Found 1 week, 5 days ago on PhilPapers
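One standard way to cash out “conditionals restrict quantification” in this setting (my sketch, not necessarily the authors' final semantics):

```latex
% unconditional obligation: all the best accessible worlds are q-worlds
O(q) \iff \mathrm{Best}(W) \subseteq \llbracket q \rrbracket
% conditional obligation: restrict quantification to the p-worlds first
O(q \mid p) \iff \mathrm{Best}(\llbracket p \rrbracket) \subseteq \llbracket q \rrbracket
% gentle murder: O(\lnot m) and O(g \mid m) are jointly satisfiable, since
% the best m-worlds can all be g-worlds even though no best world is an m-world
```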
  21.
    We argue that inductive analysis (based on formal learning theory and the use of suitable machine learning reconstructions) and operational (citation metrics-based) assessment of the scientific process can be justifiably and fruitfully brought together, whereby the citation metrics used in the operational analysis can effectively track the inductive dynamics and measure the research efficiency. We specify the conditions for the use of such inductive streamlining, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method’s applicability.
    Found 1 week, 6 days ago on Slobodan Perović's site
  22.
    An argument is presented that if a theory of quantum gravity is physically discrete at the Planck scale and the theory recovers General Relativity as an approximation, then, at the current stage of our knowledge, causal sets must arise within the theory, even if they are not its basis. We show in particular that an apparent alternative to causal sets, viz. a certain sort of discrete Lorentzian simplicial complex, cannot recover General Relativistic spacetimes in the appropriately unique way. For it cannot discriminate between Minkowski spacetime and a spacetime with a certain sort of gravitational wave burst.
    Found 1 week, 6 days ago on PhilSci Archive
  23.
    In a recent article, P. Roger Turner and Justin Capes argue that no one is, or ever was, even partly morally responsible for certain world-indexed truths. Here we present our reasons for thinking that their argument is unsound: It depends on the premise that possible worlds are maximally consistent states of affairs, which is, under plausible assumptions concerning states of affairs, demonstrably false. Our argument to show this is based on Bertrand Russell’s original ‘paradox of propositions’. We should then opt for a different approach to explain world-indexed truths whose upshot is that we may be (at least partly) morally responsible for some of them. The result to the effect that there are no maximally consistent states of affairs is independently interesting though, since this notion motivates an account of the nature of possible worlds in the metaphysics of modality. We also register in this article, independently of our response to Turner and Capes, and in the spirit of Russell’s aforementioned paradox and many other versions thereof, a proof of the claim that there is no set of all true propositions one can render false.
    Found 2 weeks ago on PhilPapers
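The Cantorian core of Russell’s paradox of propositions, on which the article's final claim builds (a sketch of the standard argument, not necessarily the authors’ exact proof):

```latex
% For any set S of propositions, map each subset to a proposition:
M \subseteq S \;\longmapsto\; p_M := \text{the proposition that every member of } M \text{ is true}
% Distinct subsets plausibly yield distinct propositions, so the map is
% injective. If every p_M belonged to S, then |\mathcal{P}(S)| \le |S|,
% contradicting Cantor's theorem |S| < |\mathcal{P}(S)|.
```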
  24.
    In this paper, I distinguish between two possible versions of Amie Thomasson’s easy ontology project that differ in virtue of positing atomic or holistic application conditions, and evaluate the strengths of a holistic version over a non-holistic version. In particular, I argue that neither of the recently identified regress or circularity problems are troublesome for the supporter of easy ontology if they adopt a holistic account of application conditions. This is not intended to be a defence of easy ontology from all possible objections, but rather to compare holistic and non-holistic versions of the view. This discussion is also significant in that it serves to highlight two distinct forms of easy ontology, which, I argue, need to be distinguished when assessing the merits of the easy approach in future work.
    Found 2 weeks ago on PhilPapers
  25.
    This paper describes a method for learning from a teacher’s potentially unreliable corrective feedback in an interactive task learning setting. The graphical model uses discourse coherence to jointly learn symbol grounding, domain concepts and valid plans. Our experiments show that the agent learns its domain-level task in spite of the teacher’s mistakes.
    Found 2 weeks ago on Alex Lascarides's site
  26.
    The paper explores Hermann Weyl’s turn to intuitionism through a philosophical prism of normative framework transitions. It focuses on three central themes that occupied Weyl’s thought: the notion of the continuum, logical existence, and the necessity of intuitionism, constructivism, and formalism to adequately address the foundational crisis of mathematics. The analysis of these themes reveals Weyl’s continuous endeavor to deal with such fundamental problems and suggests a view that provides a different perspective concerning Weyl’s wavering foundational positions. Building on a philosophical model of scientific framework transitions and the special role that normative indecision or ambivalence plays in the process, the paper examines Weyl’s motives for considering such a radical shift in the first place. It concludes by showing that Weyl’s shifting stances should be regarded as symptoms of a deep, convoluted intrapersonal process of self-deliberation induced by exposure to external criticism.
    Found 2 weeks, 1 day ago on PhilSci Archive
  27.
    The Ideal Worlds Account of Desire says that S wants p just in case all of S’s most highly preferred doxastic possibilities make p true. The account predicts that a desire report ⌜S wants p⌝ should be true so long as there is some doxastic p-possibility that is most preferred (by S). But we present a novel argument showing that this prediction is incorrect. More positively, we take our examples to support alternative analyses of desire, and close by briefly considering what our cases suggest about the logic of desire.
    Found 2 weeks, 1 day ago on PhilPapers
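The target account, stated compactly (my notation: Dox_S is the set of S’s doxastic possibilities, ⪯_S is S’s preference ordering over them, and Best picks out the ⪯_S-maximal possibilities):

```latex
S \text{ wants } p \;\iff\; \mathrm{Best}_{\preceq_S}(\mathrm{Dox}_S) \subseteq \llbracket p \rrbracket
```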
  28.
    Psychological studies show that the beliefs of two agents in a hypothesis can diverge even if both agents receive the same evidence. This phenomenon of belief polarisation is often explained by invoking biased assimilation of evidence, where the agents’ prior views about the hypothesis affect the way they process the evidence. We suggest, using a Bayesian model, that even if such influence is excluded, belief polarisation can still arise by another mechanism. This alternative mechanism involves differential weighting of the evidence arising when agents have different initial views about the reliability of their sources of evidence. We provide a systematic exploration of the conditions for belief polarisation in Bayesian models which incorporate opinions about source reliability, and we discuss some implications of our findings for the psychological literature.
    Found 2 weeks, 3 days ago on PhilPapers
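A minimal runnable sketch of the alternative mechanism (my toy model, not the paper's exact one): two agents share the same prior in the hypothesis and see the same two conflicting reports, but assign different reliabilities to the sources:

```python
# Source with reliability r: with probability r it reports the truth about H,
# with probability 1 - r it reports at random (50/50).

def likelihood(supports_h: bool, h_true: bool, r: float) -> float:
    """P(report | truth value of H, assumed reliability r)."""
    truthful = (supports_h == h_true)
    return r * (1.0 if truthful else 0.0) + (1 - r) * 0.5

def posterior(prior_h: float, reports, reliabilities) -> float:
    """Bayesian update of P(H) on all reports at once."""
    num, den = prior_h, 1 - prior_h
    for supports_h, r in zip(reports, reliabilities):
        num *= likelihood(supports_h, True, r)
        den *= likelihood(supports_h, False, r)
    return num / (num + den)

reports = [True, False]                  # source 1 says H, source 2 says not-H
agent_a = posterior(0.5, reports, [0.9, 0.2])   # trusts source 1
agent_b = posterior(0.5, reports, [0.2, 0.9])   # trusts source 2
print(round(agent_a, 2), round(agent_b, 2))     # ~0.93 vs ~0.07
```

No biased assimilation is involved: both agents start from P(H) = 0.5 and apply Bayes' rule to the same evidence; only their reliability priors differ, yet their posteriors diverge sharply.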
  29.
    This paper aims to clarify some conceptual aspects of decoherence that seem largely overlooked in the recent literature. In particular, I want to stress that decoherence theory, in the standard framework, is rather silent with respect to the description of (sub)systems and associated dynamics. Also, the selection of position basis for classical objects is more problematic than usually thought: while, on the one hand, decoherence offers a pragmatic-oriented solution to this problem, on the other hand, this can hardly be seen as a genuine ontological explanation of why the classical world is position-based. This is not to say that decoherence is not useful to the foundations of quantum mechanics; on the contrary, it is a formidable weapon, as it accounts for a realistic description of quantum systems. That powerful description, however, becomes manifest when decoherence theory itself is interpreted in a realist framework of quantum mechanics.
    Found 2 weeks, 3 days ago on PhilSci Archive
  30.
    While I would agree that there are differences between Bayesian statisticians and Bayesian philosophers, those differences don’t line up with the ones drawn by Jon Williamson in his presentation to our Phil Stat Wars Forum (May 20 slides). …
    Found 2 weeks, 3 days ago on D. G. Mayo's blog