1.
    A Boltzmann Brain is a hypothesized observer that comes into existence by way of an extremely low-probability quantum or thermodynamic “fluctuation” and that is capable of conscious experience (including sensory experience and apparent memories) and at least some degree of reflection about itself and its environment. Boltzmann Brains do not have histories that are anything like the ones that we seriously consider as candidates for our own history; they did not come into existence on a large, stable planet, and their existence is not the result of any sort of evolutionary process or intelligent design. Rather, they are staggeringly improbable cosmic “accidents” that are (at least typically) massively deluded about their own predicament and history. It is uncontroversial that Boltzmann Brains are both metaphysically and physically possible, and yet that they are staggeringly unlikely to fluctuate into existence at any particular moment. Throughout the following, I will use the term “ordinary observer” to refer to an observer who is not a Boltzmann Brain. We naturally take ourselves to be ordinary observers, and I will not be arguing that we are in any way wrong to do so.
    Found 9 hours, 46 minutes ago on Matthew Kotzen's site
  2.
    Accuracy and the Laws of Credence is required reading for anyone interested in the foundations of epistemology. It is that rare philosophical work which serves both as a stunningly clear overview of a topic and as a cutting-edge contribution to that topic. I can’t possibly address all of the interesting and philosophically rich components of Accuracy and the Laws of Credence here, so I will largely restrict my attention to pieces of Parts I, II, and III of the book, though I’ll have some more general things to say about Pettigrew’s accuracy-only approach to epistemology toward the end.
    Found 9 hours, 47 minutes ago on Matthew Kotzen's site
  3.
    Brian Jabarian, U. Paris 1 & Paris School of Economics. How should we evaluate options when we are uncertain about the correct standard of evaluation, for instance due to conflicting normative intuitions? Such ‘normative’ uncertainty differs from ordinary ‘empirical’ uncertainty about an unknown state, and raises new challenges for decision theory and ethics. The most widely discussed proposal is to form the expected value of options, relative to correctness probabilities of competing valuations. But this meta-theory overrules our beliefs about the correct risk-attitude: it for instance fails to be risk-averse when we are certain that the correct (first-order) valuation is risk-averse. We propose an ‘impartial’ meta-theory, which respects risk-attitudinal beliefs. We show how one can address empirical and normative uncertainty within a unified formal framework, and rigorously define risk attitudes of theories. Against a common impression, the classical expected-value theory is not risk-neutral, but of hybrid risk attitude: it is neutral to normative risk, not to empirical risk. We show how to define a fully risk-neutral meta-theory, and a meta-theory that is neutral to empirical risk, not to normative risk. We compare the various meta-theories based on their formal properties, and conditionally defend the impartial meta-theory.
    Found 22 hours, 55 minutes ago on Franz Dietrich's site
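    The expected-value meta-theory that this abstract criticizes can be made concrete in a few lines. The valuations and correctness probabilities below are hypothetical illustrations, not the paper's own example; this is only a minimal sketch of the proposal as the abstract states it:

    ```python
    # Sketch of the expected-value meta-theory: an option's meta-value is its
    # value under each candidate (first-order) valuation, weighted by the
    # correctness probability assigned to that valuation.

    def meta_expected_value(option, valuations, correctness_probs):
        """Weighted average of the option's value under each candidate valuation."""
        return sum(p * v(option) for v, p in zip(valuations, correctness_probs))

    # Two toy valuations of a gamble (a pair of equiprobable outcome utilities):
    risk_neutral = lambda g: 0.5 * g[0] + 0.5 * g[1]  # plain expectation
    risk_averse = lambda g: min(g)                    # worst-case (maximin)

    gamble = (0, 100)
    # Even at 90% confidence that the risk-averse valuation is correct, the
    # meta-theory still assigns the gamble positive value -- illustrating the
    # abstract's point that it overrules risk-attitudinal beliefs.
    value = meta_expected_value(gamble, [risk_neutral, risk_averse], [0.1, 0.9])
    ```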
  4.
    In this paper we address the question of how it can be possible for a non-expert to acquire justified true belief from expert testimony. We discuss reductionism and epistemic trust as theoretical approaches to answer this question and present a novel solution that avoids major problems of both theoretical options: Performative Expert Testimony (PET). PET draws on a functional account of expertise insofar as it takes the expert’s visibility as a good informant capable of satisfying informational needs as equally important as her specific skills and knowledge. We explain how PET generates justification for testimonial belief, which is at once assessable for non-experts and maintains the division of epistemic labor between them and the experts. Thereafter we defend PET against two objections. First, we point out that the non-expert’s interest in acquiring widely assertable true beliefs and the expert’s interest in maintaining her status as a good informant counterbalance the relativist account of justification at work in PET. Second, we show that with regard to the interests at work in testimonial exchanges between experts and non-experts, PET yields a better explanation of knowledge-acquisition from expert testimony than externalist accounts of justification such as reliabilism. As our arguments are grounded in a conception of knowledge that conceives of belief-justification as a declarative speech act, in the final sections of this paper we also indicate how such a conception is operationalized in PET.
    Found 1 day, 6 hours ago on PhilSci Archive
  5.
    We present a formal semantics for epistemic logic, capturing the notion of knowability relative to information (KRI). Like Dretske, we move from the platitude that what an agent can know depends on her (empirical) information. We treat operators of the form KAB (‘B is knowable on the basis of information A’) as variably strict quantifiers over worlds with a topic- or aboutness-preservation constraint. Variable strictness models the non-monotonicity of knowledge acquisition while allowing knowledge to be intrinsically stable. Aboutness-preservation models the topic-sensitivity of information, allowing us to invalidate controversial forms of epistemic closure while validating less controversial ones. Thus, unlike the standard modal framework for epistemic logic, KRI accommodates plausible approaches to the Kripke-Harman dogmatism paradox, which bear on non-monotonicity, or on topic-sensitivity. KRI also strikes a better balance between agent idealization and a non-trivial logic of knowledge ascriptions.
    Found 1 day, 10 hours ago on PhilPapers
  6.
    In Reasons and Persons, Derek Parfit (1984) observed that most people are biased towards the future, at least when it comes to pain and pleasure. That is, they regard a given amount of pain as less bad when it is in the past than when it is in the future, and a given amount of pleasure as less good. While Parfit (implicitly) held that this bias is rational, it has recently come under effective attack by temporal neutralists, who have offered cases that, with plausible auxiliary assumptions, appear to be counterexamples to the rationality claim. I’m going to argue that these cases and the rationale behind them only suffice to motivate a more limited rejection of future bias, and that constrained future bias is indeed rationally permissible. My argument turns on the distinct rational implications of action-guiding and pure temporal preferences. I’ll argue that future bias is rational when it comes to the latter, even if not the former. As I’ll say, Only Action Fixes Utility: it is only when you act on the basis of assigning a utility to an outcome that you rationally commit to giving it the same value when it is past as when it is in the future.
    Found 4 days, 1 hour ago on Antti Kauppinen's site
  7.
    Justification depends on context: even if E on its own justifies H, still it might fail to justify in the context of D. This sort of effect, epistemologists think, is due to the possibility of defeaters, which undermine or rebut a would-be justifier. I argue that there is another fundamental sort of contextual effect, disqualification, which doesn’t involve rebuttal or undercutting, and which cannot be reduced to any notion of screening-off. A disqualifier makes some would-be justifier otiose, as direct testimony sometimes does to distal testimony, and as manifestly decisive evidence might do to weak but gratuitous evidence on the same team. Basing a belief on disqualified evidence, moreover, is irrational in a distinctive way. One is not necessarily irresponsible. Instead one is turning down, for no reason, an upgrade to a sleeker, stabler basis for one's beliefs. Such an upgrade would prevent wastes of epistemic effort, since someone who bases her belief on a disqualified proposition E will need to remember E and rethink her belief should she come across a defeater for E. The upgrade might also reduce reliance on unwieldy evidence, if E is relevant only thanks to some labyrinthine argument; and to the extent that even ideal agents should doubt their ability to follow such an argument, even they should care about disqualifiers.
    Found 4 days, 18 hours ago on PhilPapers
  8.
    A prominent type of scientific realism holds that some important parts of our best current scientific theories are at least approximately true. According to such realists, radically distinct alternatives to these theories or theory-parts are unlikely to be approximately true. Thus one might be tempted to argue, as the prominent anti-realist Kyle Stanford recently did, that realists of this kind have little or no reason to encourage scientists to attempt to identify and develop theoretical alternatives that are radically distinct from currently accepted theories in the relevant respects. In other words, it may seem that realists should recommend that scientists be relatively conservative in their theoretical endeavors. This paper aims to show that this argument is mistaken. While realists should indeed be less optimistic about finding radically distinct alternatives to replace current theories, realists also have greater reasons to value the outcomes of such searches. Interestingly, this holds both for successful and failed attempts to identify and develop such alternatives.
    Found 6 days, 9 hours ago on Finnur Dellsén's site
  9.
    There is a powerful three-step argument that philosophy has made no progress. The first step maintains that a field makes genuine progress to the extent that, over time, it provides true answers to its central questions. The second step observes that the central questions of philosophy are among life’s “big questions”—concerning, inter alia, free will, personal identity, skepticism, universals, the mind-body relation, God, and morality. Step three delivers the bad news: we lack the answers to any of these questions.
    Found 6 days, 11 hours ago on John Bengson's site
  10.
    Continuing with the discussion of E.S. Pearson in honor of his birthday: Egon Pearson’s Neglected Contributions to Statistics by Aris Spanos. Egon Pearson (11 August 1895 – 12 June 1980) is widely known today for his contribution in recasting Fisher’s significance testing into the Neyman-Pearson (1933) theory of hypothesis testing. …
    Found 1 week ago on D. G. Mayo's blog
  11.
    Those are not at all to be tolerated who deny the being of a God. Promises, covenants, and oaths, which are the bonds of human society, can have no hold upon an atheist. The taking away of God, though but even in thought, dissolves all. John Locke, Letter Concerning Toleration ([1689] 1983). Over the past few decades, much ink has been spilled in attempts to understand the relationships between religion, intolerance and conflict. And although some progress has been made, religion’s precise role in intolerance and intergroup conflict remains a poorly researched scientific topic. This oversight is remarkable given that the vast majority of the world is religious (Norris & Inglehart, 2004), and hardly a day goes by without religious conflict shaping events and making international headlines (The Washington Post, May 11, 2011).
    Found 1 week ago on Peter Richerson's site
  12.
    Cognitive scientists have increasingly turned to cultural transmission to explain the widespread nature of religion. One key hypothesis focuses on memory, proposing that minimally counterintuitive (MCI) content facilitates the transmission of supernatural beliefs. We propose two caveats to this hypothesis. (1) Memory effects decrease as MCI concepts become commonly used, and (2) people do not believe counterintuitive content readily; therefore additional mechanisms are required to get from memory to belief. In experiments 1–3 (n = 283), we examined the relationship between MCI, belief, and memory. We found that increased tendencies to anthropomorphize predicted poorer memory for anthropomorphic-MCI content. MCI content was found less believable than intuitive content, suggesting different mechanisms are required to explain belief. In experiment 4 (n = 70), we examined the non-content-based cultural learning mechanism of credibility-enhancing displays (CREDs) and found that it increased participants’ belief in MCI content, suggesting this type of learning can better explain the transmission of belief.
    Found 1 week ago on Peter Richerson's site
  14.
    Cognitive theories of religion have postulated several cognitive biases that predispose human minds towards religious belief. However, to date, these hypotheses have not been tested simultaneously and in relation to each other, using an individual difference approach. We used a path model to assess the extent to which several interacting cognitive tendencies, namely mentalizing, mind body dualism, teleological thinking, and anthropomorphism, as well as cultural exposure to religion, predict belief in God, paranormal beliefs and belief in life’s purpose. Our model, based on two independent samples (N = 492 and N = 920) found that the previously known relationship between mentalizing and belief is mediated by individual differences in dualism, and to a lesser extent by teleological thinking. Anthropomorphism was unrelated to religious belief, but was related to paranormal belief. Cultural exposure to religion (mostly Christianity) was negatively related to anthropomorphism, and was unrelated to any of the other cognitive tendencies. These patterns were robust for both men and women, and across at least two ethnic identifications. The data were most consistent with a path model suggesting that mentalizing comes first, which leads to dualism and teleology, which in turn lead to religious, paranormal, and life’s-purpose beliefs. Alternative theoretical models were tested but did not find empirical support.
    Found 1 week ago on Peter Richerson's site
  15.
    Coincidence Analysis (CNA) is a configurational comparative method of causal data analysis that is related to Qualitative Comparative Analysis (QCA) but, contrary to the latter, is custom-built for analyzing causal structures with multiple outcomes. So far, however, CNA has only been capable of processing dichotomous variables, which greatly limited its scope of applicability. This paper generalizes CNA for multi-value variables as well as continuous variables whose values are interpreted as membership scores in fuzzy sets. This generalization comes with a major adaptation of CNA’s algorithmic protocol, which, in an extended series of benchmark tests, is shown to give CNA an edge over QCA not only with respect to multi-outcome structures but also with respect to the analysis of non-ideal data stemming from single-outcome structures. The inferential power of multi-value and fuzzy-set CNA is made available to end users in the newest version of the R package cna.
    Found 1 week ago on Michael Baumgartner's site
  16.
    Karl Popper developed a theory of deductive logic in the late 1940s. In his approach, logic is a metalinguistic theory of deducibility relations that are based on certain purely structural rules. Logical constants are then characterized in terms of deducibility relations. Characterizations of this kind are also called inferential definitions by Popper. In this paper, we expound his theory and elaborate some of his ideas and results that in some cases were only sketched by him. Our focus is on Popper’s notion of duality, his theory of modalities, and his treatment of different kinds of negation. This allows us to show how his works on logic anticipate some later developments and discussions in philosophical logic, pertaining to trivializing (tonk-like) connectives, the duality of logical constants, dual-intuitionistic logic, the (non-)conservativeness of language extensions, the existence of a bi-intuitionistic logic, the non-logicality of minimal negation, and to the problem of logicality in general.
    Found 1 week ago on Wagner de Campos Sanz's site
  17.
    This paper contributes to the underdeveloped field of experimental philosophy of science. We examine variability in the philosophical views of scientists. Using data from Toolbox Dialogue Initiative, we analyze scientists’ responses to prompts on philosophical issues (methodology, confirmation, values, reality, reductionism, and motivation for scientific research) to assess variance in the philosophical views of physical scientists, life scientists, and social and behavioral scientists. We find six prompts about which differences arose, with several more that look promising for future research. We then evaluate the difference between the natural and social sciences and the challenge of interdisciplinary integration across scientific branches.
    Found 1 week ago on PhilSci Archive
  18.
    In this paper we show how to formalise false-belief tasks like the Sally-Anne task and the second-order chocolate task in Dynamic Epistemic Logic (DEL). False-belief tasks are used to test the strength of the Theory of Mind (ToM) of humans, that is, a human’s ability to attribute mental states to other agents. Having a ToM is known to be essential to human social intelligence, and hence likely to be essential to social intelligence of artificial agents as well. It is therefore important to find ways of implementing a ToM in artificial agents, and to show that such agents can then solve false-belief tasks. In this paper, the approach is to use DEL as a formal framework for representing ToM, and use reasoning in DEL to solve false-belief tasks. In addition to formalising several false-belief tasks in DEL, the paper introduces some extensions of DEL itself: edge-conditioned event models and observability propositions. These extensions are introduced to provide better formalisations of the false-belief tasks, but are expected to have independent future interest.
    Found 1 week, 1 day ago on Thomas Bolander's site
  19.
    The title well represents this paper’s goals. I shall discuss certain basic issues pertaining to subjective probability and, in particular, the point at which the concept of natural predicates is necessary within the probabilistic framework. Hempel’s well-known puzzle of ravens serves as a starting point and as a concrete example. I begin by describing in §2 four solutions that have been proposed. Two of these represent fundamental approaches that concern me most: the probabilistic standard solution and what I refer to as the natural-predicates solution. The first is essentially due to various investigators, among them Hempel himself. The second has been proposed by Quine in his ‘Natural kinds’; it represents a general line rather than a single precise solution. Underlying it is some classification of properties (or, to remain safely on the linguistic level, of predicates) which derives from epistemic or pragmatic factors and is, at least prima facie, irreducible to distinctions in terms of logical structure. Goodman’s concept of entrenchment belongs here as well (his paradox is taken up in §3 and §5). Of the other two, the one referred to as a “nearly-all”-solution is based on interpreting ‘all’ (in ‘all ravens are black’) as nearly all. An analysis shows that the valid part of this argument is reducible to the standard probabilistic solution. The remaining solution is based on a modal interpretation; it is shown to belong to the natural-predicates brand. Another modality argument turns out, upon analysis, to be false.
    Found 1 week, 2 days ago on Haim Gaifman's site
  20.
    There are many domains about which we think we are reliable. That is, we think that our beliefs about the domains are by and large true (or at least are true much more often than chance alone would predict). For some of these domains, our reliability is not very puzzling. For instance, we understand how it is that we are reliable about facts about medium-sized objects in our environment. We possess psycho-physical explanations (or explanation sketches) of how our perceptual faculties work that explain how these faculties yield true beliefs about our environment. We also possess evolutionary explanations (or explanation sketches) of how we came to possess reliable perceptual faculties. For other domains, our reliability is more puzzling.
    Found 1 week, 2 days ago on PhilPapers
  21.
    Regard for Reason in the Moral Mind argues that a careful examination of the scientific literature reveals a foundational role for reasoning in moral thought and action. Grounding moral psychology in reason then paves the way for a defense of moral knowledge and virtue against a variety of empirical challenges, such as debunking arguments and situationist critiques. The book attempts to provide a corrective to current trends in moral psychology, which celebrate emotion over reason and generate pessimism about the psychological mechanisms underlying commonsense morality. Ultimately, there is rationality in ethics not just despite but in virtue of the neurobiological and evolutionary materials that shape moral cognition and motivation.
    Found 1 week, 2 days ago on Josh May's site
  22.
    Unless presently in a coma, you cannot avoid witnessing injustice. You will find yourself judging that a citizen or a police officer has acted wrongly by killing someone, that a politician is corrupt, that a social institution is discriminatory. In all these cases, you are making a moral judgment. But what is it that drives your judgment? Have you reasoned your way to the conclusion that something is morally wrong? Or have you reached a verdict because you feel indignation or outrage? Rationalists in moral philosophy hold that moral judgment can be based on reasoning alone. Kant argued that one can arrive at a moral belief by reasoning from principles articulating one’s duties. Sentimentalists hold instead that emotion is essential to distinctively moral judgment. Hume, Smith, and their British contemporaries argued that one cannot arrive at a moral belief without experiencing appropriate feelings at some point—e.g. by feeling compassion toward victims or anger toward perpetrators. While many theorists agree that both reason and emotion play a role in ordinary moral cognition, the dispute is ultimately about which process is most central.
    Found 1 week, 2 days ago on Josh May's site
  23.
    E.S. Pearson: 11 Aug 1895-12 June 1980. Today is Egon Pearson’s birthday. In honor of his birthday, I am posting “Statistical Concepts in Their Relation to Reality” (Pearson 1955). I’ve posted it several times over the years, but always find a new gem or two, despite its being so short. …
    Found 1 week, 2 days ago on D. G. Mayo's blog
  24.
    Experimentation is traditionally considered a privileged means of confirmation. However, how experiments are a better confirmatory source than other strategies is unclear, and recent discussions have identified experiments with various modeling strategies on the one hand, and with ‘natural’ experiments on the other hand. We argue that experiments aiming to test theories are best understood as controlled investigations of specimens. ‘Control’ involves repeated, fine-grained causal manipulation of focal properties. This capacity generates rich knowledge of the object investigated. ‘Specimenhood’ involves possessing relevant properties given the investigative target and the hypothesis in question. Specimens are thus representative members of a class of systems, to which a hypothesis refers. It is in virtue of both control and specimenhood that experiments provide powerful confirmatory evidence. This explains the distinctive power of experiments: although modellers exert extensive control, they do not exert this control over specimens; although natural experiments utilize specimens, control is diminished.
    Found 1 week, 3 days ago on Adrian Currie's site
  25.
    This paper tackles the problem of defining what a cognitive expert is. Starting from a shared intuition that the definition of an expert depends upon the conceptual function of expertise, I shed light on two main approaches to the notion of an expert: according to novice-oriented accounts of expertise, experts need to provide laypeople with information they lack in some domain; whereas, according to research-oriented accounts, experts need to contribute to the epistemic progress of their discipline. In this paper, I defend the thesis that cognitive experts should be identified by their ability to perform the latter function rather than the former, as novice-oriented accounts, unlike research-oriented ones, fail to comply with the rules of a functionalist approach to expertise.
    Found 1 week, 3 days ago on PhilPapers
  26.
    According to what I will call ‘the disanalogy thesis,’ beliefs differ from actions in at least the following important way: while cognitively healthy people often exhibit direct control over their actions, there is no possible scenario where a cognitively healthy person exhibits direct control over her beliefs. Recent arguments against the disanalogy thesis maintain that, if you find yourself in what I will call a ‘permissive situation’ with respect to p, then you can have direct control over whether you believe p, and do so without manifesting any cognitive defect. These arguments focus primarily on the idea that we can have direct doxastic control in permissive situations, but they provide insufficient reason for thinking that permissive situations are actually possible, since they pay inadequate attention to the following worries: permissive situations seem inconsistent with the uniqueness thesis, permissive situations seem inconsistent with natural thoughts about epistemic akrasia, and vagueness threatens even if we push these worries aside. In this paper I argue that, on the understanding of permissive situations that is most useful for evaluating the disanalogy thesis, permissive situations clearly are possible.
    Found 1 week, 4 days ago on Blake Roeber's site
  27.
    An influential proposal is that knowledge involves safe belief. A belief is safe, in the relevant sense, just in case it is true in nearby metaphysically possible worlds. In this paper, I introduce a distinct but complementary notion of safety, understood in terms of epistemically possible worlds. The main aim, in doing so, is to add to the epistemologist’s tool-kit. To demonstrate the usefulness of the tool, I use it to advance and assess substantive proposals concerning knowledge and justification.
    Found 1 week, 4 days ago on PhilPapers
  28.
    According to an increasingly popular epistemological view, people need outright beliefs in addition to credences to simplify their reasoning. Outright beliefs simplify reasoning by allowing thinkers to ignore small error probabilities. What is outright believed can change between contexts. It has been claimed that thinkers manage shifts in their outright beliefs and credences across contexts by an updating procedure resembling conditionalization, which I call pseudo-conditionalization (PC). But conditionalization is notoriously complicated. The claim that thinkers manage their beliefs via PC is thus in tension with the view that the function of beliefs is to simplify our reasoning. I propose to resolve this puzzle by rejecting the view that thinkers employ PC. Based on this solution, I furthermore argue for a descriptive and a normative claim. The descriptive claim is that the available strategies for managing beliefs and credences across contexts that are compatible with the simplifying function of outright beliefs can generate synchronic and diachronic incoherence in a thinker’s attitudes. Moreover, I argue that the view of outright belief as a simplifying heuristic is incompatible with the view that there are ideal norms of coherence or consistency governing outright beliefs that are too complicated for human thinkers to comply with.
    Found 1 week, 4 days ago on PhilPapers
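    Conditionalization, the updating rule this abstract says pseudo-conditionalization (PC) resembles, can be stated in a few lines: on learning E, the new credence in each world is the old credence renormalized over the E-worlds. The worlds and credences below are hypothetical illustrations, not the paper's own example:

    ```python
    # Minimal sketch of Bayesian conditionalization over a finite set of worlds.

    def conditionalize(credence, evidence):
        """Renormalize a credence function (dict: world -> probability) on the
        worlds compatible with the evidence; all other worlds drop to zero."""
        total = sum(p for w, p in credence.items() if w in evidence)
        return {w: (p / total if w in evidence else 0.0)
                for w, p in credence.items()}

    # Four equiprobable worlds (two coin flips); learn the first flip was heads:
    prior = {'HH': 0.25, 'HT': 0.25, 'TH': 0.25, 'TT': 0.25}
    posterior = conditionalize(prior, {'HH', 'HT'})
    ```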
  29.
    The quantum query complexity of approximate counting was one of the first topics studied in quantum algorithms. Given a nonempty finite set S ⊆ [N] (here and throughout, [N] = {1, . . . , N}), suppose we want to estimate its cardinality, |S|, to within some multiplicative accuracy ε. This is a fundamental task in theoretical computer science, used as a subroutine for countless other tasks. As is standard in quantum algorithms, we work in the so-called black-box model (see [10]), where we assume only that we’re given a membership oracle for S: an oracle that, for any i ∈ [N], tells us whether i ∈ S. We can, however, query the oracle in quantum superposition. How many queries must a quantum computer make, as a function of both N and |S|, to solve this problem with high probability?
    Found 1 week, 6 days ago on Scott Aaronson's site
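    The task this abstract sets up has a simple classical baseline: estimate |S| by querying the membership oracle at uniformly random points and scaling the hit rate by N. The set S and the parameters below are hypothetical, and the quantum algorithms at issue need far fewer queries; this sketch is only meant to make the black-box problem concrete:

    ```python
    # Classical sampling baseline for approximate counting with a membership oracle.
    import random

    def estimate_cardinality(oracle, N, queries):
        """Estimate |S| as N times the fraction of uniform random queries that hit S."""
        hits = sum(oracle(random.randrange(1, N + 1)) for _ in range(queries))
        return N * hits / queries

    # Hypothetical example: S = multiples of 3 in [1000], so |S| = 333.
    S = {i for i in range(1, 1001) if i % 3 == 0}
    random.seed(0)  # fixed seed so the run is reproducible
    estimate = estimate_cardinality(lambda i: i in S, 1000, 5000)
    ```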
  30.
    According to an influential Enlightenment ideal, one shouldn’t rely epistemically on other people’s say-so, at least not if one is in a position to evaluate the relevant evidence for oneself. However, in much recent work in social epistemology, we are urged to dispense with this ideal, which is seen as stemming from a misguided focus on isolated individuals to the exclusion of groups and communities. In this paper, I argue that an emphasis on the social nature of inquiry should not lead us to entirely abandon the Enlightenment ideal of epistemically autonomous agents. Specifically, I suggest that it is an appropriate ideal for those who serve as experts in a given epistemic community, and develop a notion of expert acceptance to make sense of this. I go on to show that, all other things being equal, this kind of epistemic autonomy among experts makes their joint testimony more reliable, which in turn brings epistemic benefits both to laypeople and to experts in other fields.
    Found 1 week, 6 days ago on Finnur Dellsén's site