1.
    Process reliabilism is a theory about ex post justification, the justification of a doxastic attitude one has, such as belief. It says roughly that a justified belief is a belief formed by a reliable process. It is not a theory about ex ante justification, one’s justification for having a particular attitude toward a proposition, an attitude one might lack. But many reliabilists supplement their theory such that it explains ex ante justification in terms of reliable processes. In this paper I argue that the main way reliabilists supplement their theory fails. In the absence of an alternative, reliabilism does not account for ex ante justification.
    Found 4 days, 10 hours ago on PhilPapers
  2.
    Strawson (1963) vs. explication in Carnap (1950): the objection that by replacing one definition of a term with a more precise and fruitful explication, you are changing the topic.
    Found 4 days, 11 hours ago on Erich Rast's site
  3.
    It is intuitively plausible to assume that if it is asserted that ‘a is overall better than b (all things considered)’, such a verdict is often based on multiple evaluations of the items a and b under consideration, evaluations which are sometimes also called ‘criteria’, ‘features’, or ‘attributes’. Usually, an item a is better than an item b in some aspects, but not in others, and there is a weighing or outranking of these aspects to determine which item is better.
    Found 4 days, 11 hours ago on Erich Rast's site
  4.
    On the standard view, when we forgive, we overcome or renounce future blaming responses to an agent in virtue of what the forgiver understands to be, and is in fact, an immoral action he has performed. Crucially, on the standard view the blaming response is understood as essentially involving a reactive attitude and its expression. In the central case in which the forgiver has been wronged by the party being forgiven, this reactive attitude is moral resentment, that is, anger with an agent due to a wrong he has done to oneself. When someone other than the forgiver has been wronged by the one being forgiven, the attitude is indignation, anger with an agent because of a wrong he has done to a third party. Such a position was developed by Joseph Butler (1749/1900), and in more recent times endorsed by P. F. Strawson (1962), Jeffrie Murphy (1982), and Jay Wallace (1994). Wallace (1994: 72), for example, claims that “in forgiving people we express our acknowledgment that they have done something that would warrant resentment and blame, but we renounce the responses that we thus acknowledge to be appropriate.”
    Found 5 days, 1 hour ago on Derk Pereboom's site
  5.
    Despite initial appearances, paradoxes in classical logic with unrestricted comprehension do not go away even if the law of excluded middle is dropped, unless the law of noncontradiction is eliminated as well, which makes the logic much less powerful. Is there an alternative way to preserve unrestricted comprehension for common language while retaining the power of classical logic? The answer is yes, when provability modal logic is utilized. The modal logic NL is constructed for this purpose. Unless a paradox is provable, the usual rules of classical logic follow. The main point of modal logic NL is to tune the law of excluded middle so that we allow φ and its negation ¬φ to both be false in case a paradox provably arises. Curry's paradox is resolved differently from the other paradoxes, but it too is resolved in modal logic NL. These changes allow for unrestricted comprehension and naïve set theory, and allow us to justify the use of common language in a formal sense.
    Found 5 days, 2 hours ago on PhilSci Archive
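A schematic way to render the "tuned" excluded middle described in item 5 (this is my gloss on the idea, not the paper's actual axiom; □ is the provability modality):

\[ \neg\Box(\varphi \leftrightarrow \neg\varphi) \;\rightarrow\; (\varphi \lor \neg\varphi) \]

For a Liar-like φ the biconditional is provable, so the antecedent fails, excluded middle is not asserted, and both φ and ¬φ may be false, just as the abstract describes.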
  6.
    On occasion, I’ve heard undergraduates suggest that naturalism faces a problem with emotions. They feel that a mere computational system would not have emotional states. One might take this to be a special case of the problem of qualia, and I think it has some plausibility there. …
    Found 5 days, 9 hours ago on Alexander Pruss's Blog
  7.
    This paper analyzes important elements in the present-day reception of Hegel's philosophy. To this end, we discuss how analytic philosophy has received Hegel, reconstructing that reception through the authors who were central to the movement's engagement with, and distancing from, his philosophy: Bertrand Russell, Frege, and Wittgenstein. Another central aim of this paper is to review Paul Redding's book, Analytic Philosophy and the Return of Hegelian Thought, against the reception of Hegel by analytic philosophy developed here. Finally, we show how a dialogue between these apparently opposing currents can be productive.
    Found 5 days, 18 hours ago on PhilPapers
  8.
    We are familiar with the idea that belief sometimes amounts to knowledge – i.e. that there are instances of belief that are also instances of knowledge. Here I defend an unfamiliar idea: that desire sometimes amounts to knowledge – i.e. that there are instances of desire that are also instances of knowledge. My argument rests on two premises. First, I assume that goodness is the correctness condition for desire. Second, I assume a virtue-theoretic account of knowledge, on which knowledge is apt mental representation. With those assumptions made, I’ll argue that desires can amount to instances of apt representation, and thus to knowledge.
    Found 6 days, 1 hour ago on Allan Hazlett's site
  9.
    On two blog posts of Jerry Coyne: A few months ago, I got to know Jerry Coyne, the recently-retired biologist at the University of Chicago who writes the blog “Why Evolution Is True.” The interaction started when Jerry put up a bemused post about my thoughts on predictability and free will, and I pointed out that if he wanted to engage me on those topics, there was more to go on than an 8-minute YouTube video. …
    Found 6 days, 7 hours ago on Scott Aaronson's blog
  10.
    Multiple realisation prompts the question: how is it that multiple systems all exhibit the same phenomena despite their different underlying properties? In this paper I develop a framework for addressing that question and argue that multiple realisation can be reductively explained. I defend this position by applying the framework to a simple example – the multiple realisation of electrical conductors. I go on to compare my position to those advocated in Polger & Shapiro (2016), Batterman (2018), and Sober (1999). Contra these respective authors I claim that multiple realisation is commonplace, that it can be explained, but that it requires a sui generis reductive explanatory strategy. As such, multiple realisation poses a non-trivial challenge to reduction which can, nonetheless, be met.
    Found 6 days, 10 hours ago on PhilSci Archive
  11.
    In this chapter I urge a fresh look at the problem of explaining equilibration. The process of equilibration, I argue, is best seen, not as part of the subject matter of thermodynamics, but as a presupposition of thermodynamics. Further, the relevant tension between the macroscopic phenomena of equilibration and the underlying microdynamics lies not in a tension between time-reversal invariance of the microdynamics and the temporal asymmetry of equilibration, but in a tension between preservation of distinguishability of states at the level of microphysics and the continual effacing of the past at the macroscopic level. This suggests an open systems approach, where the puzzling question is not the erasure of the past, but the question of how reliable prediction, given only macroscopic data, is ever possible at all. I suggest that the answer lies in an approach that has not been afforded sufficient attention in the philosophical literature, namely, one based on the temporal asymmetry of causal explanation.
    Found 6 days, 10 hours ago on PhilSci Archive
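The tension described in item 11 can be made concrete with two standard properties of the microdynamics (textbook formulations, assumed here rather than quoted from the paper):

\[ \text{time-reversal invariance:}\quad t \mapsto x(t) \text{ is a solution} \;\Rightarrow\; t \mapsto R\,x(-t) \text{ is a solution}, \]
\[ \text{distinguishability:}\quad x_0 \neq y_0 \;\Rightarrow\; \Phi_t(x_0) \neq \Phi_t(y_0) \text{ for all } t, \]

where \(R\) reverses momenta and \(\Phi_t\) is the (invertible) microdynamical flow. On the view sketched above, it is the second property, not the first, that clashes with the macroscopic effacing of the past.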
  12.
    Lyons’s (2003, 2018) axiological realism holds that science pursues true theories. I object that despite its name, it is a variant of scientific antirealism, and is susceptible to all the problems with scientific antirealism. Lyons (2003, 2018) also advances a variant of surrealism as an alternative to the realist explanation for success. I object that it does not give rise to understanding because it is an ad hoc explanans and because it gives a conditional explanation. Lyons might use axiological realism to account for the success of a theory. I object that some alternative axiological explanations are better than the axiological realist explanation, and that the axiological realist explanation is teleological. Finally, I argue that Putnam’s realist position is more elegant than Lyons’s.
    Found 6 days, 10 hours ago on PhilSci Archive
  13.
    In the Preface to his collection of essays, Saving the Differences, Crispin Wright introduces the Wittgensteinian concern with …differences in the role and function of superficially similar language games … which those very similarities encourage us to overlook, thereby constituting a prime cause of philosophical misunderstandings and confusions.
    Found 6 days, 18 hours ago on Dorit Bar-On's site
  14.
    Our present experiences are strikingly different from past and future ones. Every philosophy of time must explain this difference. It has long been argued that A-theorists can do it better than B-theorists because their explanation is most natural and straightforward: present experiences appear to be special because they are special. I do not wish to dispute one aspect of this advantage. But I contend that the general perception of this debate is seriously incomplete as it tends to conflate two rather different aspects of the phenomenon behind it, the individual and the common dimensions of the present. When they are carefully distinguished and the emerging costs of the A-theories are balanced against their benefits, the advantage disappears.
    Found 6 days, 21 hours ago on Yuri Balashov's site
  15.
    For many years, national and international science organizations have recommended the inclusion of philosophy, history, and ethics courses in science curricula at universities. Chemists may rightly ask: What is that good for? Don't primary and secondary school provide enough general education, such that universities can […] back to an antiquated form of higher education? Or do they want us to learn some “soft skills” that can at best improve our eloquence at the dinner table but are entirely useless in our chemical work? […] been taught to you to be the edifice of science, and take it only as a provisional state in the course of the ongoing research process of which your work is meant to become a part. Next let's see what kind of philosophy, history, and ethics is needed for chemical research, and what not.
    Found 6 days, 21 hours ago on Joachim Schummer's site
  16.
    In her paper “Why Suspend Judging?” Jane Friedman has argued that being agnostic about some question entails that one has an inquiring attitude towards that question. Call this the agnostic-as-inquirer thesis. I argue that the agnostic-as-inquirer thesis is implausible. Specifically, I maintain that the agnostic-as-inquirer thesis requires that we deny the existence of a kind of agent that plausibly exists; namely, one who is both agnostic about Q because they regard their available evidence as insufficient for answering Q and who decides not to inquire into Q because they believe Q to be unanswerable. I claim that it is not only possible for such an agent to exist, but that such an agent is also epistemically permissible.
    Found 6 days, 23 hours ago on Avery Archer's site
  17.
    Work on chance has, for some time, focused on the normative nature of chance: the way in which objective chances constrain what partial beliefs, or credences, we ought to have. According to me, an agent is an expert if and only if their credences are maximally accurate; they are an analyst expert with respect to a body of evidence if and only if their credences are maximally accurate conditional on that body of evidence. I argue that the chances are maximally accurate conditional on local, intrinsic information. This matches nicely with a requirement that Schaffer (2003, 2007) places on chances, called at different times (and in different forms) the Stable Chance Principle and the Intrinsicness Requirement. I call my account the Accuracy-Stability account. I then show how the Accuracy-Stability account underlies some arguments for the New Principle, and show how it revives a version of Van Fraassen’s calibrationist approach. But two new problems arise: first, the Accuracy-Stability account risks collapsing into simple frequentism. But simple frequentism is a bad view. I argue that the same reasoning which motivates the Stability requirement motivates a continuity requirement, which avoids at least some of the problems of frequentism. I conclude by considering an argument from Briggs (2009) that Humean chances aren’t fit to be analyst experts; I argue that the Accuracy-Stability account overcomes Briggs’ difficulties.
    Found 6 days, 23 hours ago on Michael Townsen Hicks's site
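The expert conditions in item 17 can be stated schematically (notation mine, not the paper's; \(\mathfrak{I}\) is a legitimate inaccuracy measure such as the Brier score, \(w\) the actual world, and \(c, c'\) credence functions):

\[ \text{Expert}(c) \iff \mathfrak{I}(c, w) \le \mathfrak{I}(c', w) \ \text{for all } c', \]
\[ \text{AnalystExpert}_E(c) \iff \mathfrak{I}(c(\cdot \mid E), w) \le \mathfrak{I}(c'(\cdot \mid E), w) \ \text{for all } c'. \]

On the Accuracy-Stability account, the chances satisfy the second condition when \(E\) is local, intrinsic information.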
  18.
    Humeans are often accused of positing laws which fail to explain or which are involved in explanatory circularity. Here, I will argue that these arguments are confused, but not because of anything to do with Humeanism: rather, they rest on false assumptions about causal explanation. I'll show how these arguments can be neatly sidestepped if one takes on two plausible commitments which are motivated independently of Humeanism: first, that laws don't directly feature in scientific explanation (a view defended recently by Ruben (1990) and Skow (2016)), and second, that explanation is contrastive. After outlining and motivating these views, I show how they bear on explanation-based arguments against Humeanism.
    Found 6 days, 23 hours ago on Michael Townsen Hicks's site
  19.
    Logical pluralism is the view that there is more than one correct logic. Most logical pluralists think that logic is normative in the sense that you make a mistake if you accept the premisses of a valid argument but reject its conclusion. Some authors have argued that this combination is self-undermining: Suppose that L1 and L2 are correct logics that coincide except for the argument from Γ to φ, which is valid in L1 but invalid in L2. If you accept all sentences in Γ, then, by normativity, you make a mistake if you reject φ. In order to avoid mistakes, you should accept φ or suspend judgment about φ. Both options are problematic for pluralism. Can pluralists avoid this worry by rejecting the normativity of logic? I argue that they cannot. All else being equal, the argument goes through even if logic is not normative.
    Found 1 week ago on PhilPapers
  20.
    We propose a new account of calibration according to which calibrating a technique shows that the technique does what it is supposed to do. To motivate our account, we examine an early 20th century debate about chlorophyll chemistry and Mikhail Tswett’s use of chromatographic adsorption analysis to study it. We argue that Tswett’s experiments established that his technique was reliable in the special case of chlorophyll without relying on either a theory or a standard calibration experiment. We suggest that Tswett broke the Experimenters’ Regress by appealing to material facts in the common ground for chemists at the time.
    Found 1 week ago on PhilSci Archive
  21.
    Lyons (2016, 2017, 2018) formulates Laudan’s (1981) historical objection to scientific realism as a modus tollens. I present a better formulation of Laudan’s objection, and then argue that Lyons’s formulation is supererogatory. Lyons rejects scientific realism (Putnam, 1975) on the grounds that some successful past theories were (completely) false. I reply that scientific realism is not the categorical hypothesis that all successful scientific theories are (approximately) true, but rather the statistical hypothesis that most successful scientific theories are (approximately) true. Lyons rejects selectivism (Kitcher, 1993; Psillos, 1999) on the grounds that some working assumptions were (completely) false in the history of science. I reply that selectivists would say not that all working assumptions are (approximately) true, but rather that most working assumptions are (approximately) true.
    Found 1 week ago on PhilSci Archive
  22.
    In 2012, CERN scientists announced the discovery of the Higgs boson, claiming their experimental results finally achieved the 5σ criterion for statistical significance. Although particle physicists apply especially stringent standards for statistical significance, their use of “classical” (rather than Bayesian) statistics is not unusual at all. Classical hypothesis testing—a hybrid of techniques developed by Fisher, Neyman and Pearson—remains the dominant form of statistical analysis, and p-values and statistical power are often used to quantify evidential strength.
    Found 1 week ago on PhilSci Archive
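To get a feel for how stringent the 5σ criterion in item 22 is, here is a minimal sketch (assuming Python with SciPy available; not from the paper) converting between sigma levels and one-sided p-values:

```python
from scipy.stats import norm

# One-sided tail probability of a 5-sigma excess under a
# standard-normal null distribution: about 2.9e-7.
p_value = norm.sf(5.0)  # survival function, 1 - CDF(5)
print(f"p-value at 5 sigma: {p_value:.3e}")

# For comparison, the conventional p = 0.05 threshold corresponds
# to only about 1.64 sigma (one-sided).
print(f"sigma at p = 0.05: {norm.isf(0.05):.2f}")
```

Running this prints roughly 2.867e-07 and 1.64, which is why a 5σ discovery claim is so much stronger than an ordinary "statistically significant" result.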
  23.
    Traditionally, epistemologists have distinguished between epistemic and pragmatic goals. In so doing, they presume that much of game theory is irrelevant to epistemic enterprises. I will show that this is a mistake. Even if we restrict attention to purely epistemic motivations, members of epistemic groups will face a multitude of strategic choices. I illustrate several contexts where individuals who are concerned solely with the discovery of truth will nonetheless face difficult game-theoretic problems. Examples of purely epistemic coordination problems and social dilemmas will be presented. These show that there is a far deeper connection between economics and epistemology than previously appreciated.
    Found 1 week ago on PhilSci Archive
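As an illustration of the purely epistemic coordination problems mentioned in item 23, here is a toy sketch (the scenario and payoffs are hypothetical, not drawn from the paper): two inquirers each choose a data-gathering convention, and each learns the truth only if their results are commensurable, i.e., only if they coordinate.

```python
import itertools

# Payoffs in "truth discovered" units: (row player, column player).
# Coordinating on either convention yields knowledge for both.
payoff = {
    ("A", "A"): (1, 1),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}

def pure_nash(payoff, moves=("A", "B")):
    """Return all pure-strategy Nash equilibria of a two-player game."""
    equilibria = []
    for r, c in itertools.product(moves, repeat=2):
        row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in moves)
        col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in moves)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

print(pure_nash(payoff))  # [('A', 'A'), ('B', 'B')]
```

Two equilibria with nothing favoring one over the other: a genuine strategic problem even for agents whose only goal is truth.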
  24.
    The study of psychological and cognitive mechanisms is an interdisciplinary endeavor, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this paper, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means adopting, when studying mechanisms, deliberate metaphysical positions that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, to determine which variables to manipulate under experimental conditions, and to decide which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since such commitments can easily lead one astray if invoked carelessly. On the other hand, if we are cautious in the application of our metaphysical commitments, and careful with the inferences we draw from them, then they can provide new insights into how we might find connections between models and theories of mechanisms that appear incompatible.
    Found 1 week ago on PhilSci Archive
  25.
    It is well known that there is a freedom-of-choice loophole or superdeterminism loophole in Bell's theorem. Since no experiment can completely rule out the possibility of superdeterminism, it seems that a local hidden variable theory consistent with relativity can never be excluded. In this paper, we present a new analysis of local hidden variable theories. The key is to notice that a local hidden variable theory assumes the universality of the Schrödinger equation, and it permits that a measurement can in principle be undone, in the sense that the wave function of the composite system after the measurement can be restored to the initial state. We propose a variant of the EPR-Bohm experiment with reset operations that can undo measurements. We find that according to quantum mechanics, when Alice's measurement is undone after she obtained her result, the correlation between the results of Alice's and Bob's measurements depends on the time order of these measurements, which may be spacelike separated. Since a local hidden variable theory consistent with relativity requires that relativistically non-invariant relations, such as the time order of spacelike separated events, have no physical significance, this result means that a local hidden variable theory cannot explain the correlation and reproduce all predictions of quantum mechanics even when assuming superdeterminism. This closes the major superdeterminism loophole in Bell's theorem.
    Found 1 week ago on PhilSci Archive
  26.
    We compare and contrast two distinct approaches to understanding the Born rule in de Broglie-Bohm pilot-wave theory, one based on dynamical relaxation over time (advocated by this author and collaborators) and the other based on typicality of initial conditions (advocated by the ‘Bohmian mechanics’ school). It is argued that the latter approach is inherently circular and physically misguided. The typicality approach has engendered a deep-seated confusion between contingent and law-like features, leading to misleading claims not only about the Born rule but also about the nature of the wave function. By artificially restricting the theory to equilibrium, the typicality approach has led to further misunderstandings concerning the status of the uncertainty principle, the role of quantum measurement theory, and the kinematics of the theory (including the status of Galilean and Lorentz invariance). The restriction to equilibrium has also made an erroneously constructed stochastic model of particle creation appear more plausible than it actually is. To avoid needless controversy, we advocate a modest ‘empirical approach’ to the foundations of statistical mechanics. We argue that the existence or otherwise of quantum nonequilibrium in our world is an empirical question to be settled by experiment.
    Found 1 week ago on PhilSci Archive
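For orientation on item 26: the "quantum equilibrium" condition at issue, and the coarse-grained H-function standardly used in the relaxation approach (notation assumed, not quoted from this paper), are

\[ \rho(x,t) = |\psi(x,t)|^2 \qquad \text{(Born-rule equilibrium)}, \]
\[ \bar{H}(t) = \int \bar{\rho}\,\ln\!\left(\bar{\rho}/\overline{|\psi|^2}\right)\,\mathrm{d}x, \qquad \bar{H}(t) \le \bar{H}(0), \]

where bars denote coarse-graining; the decrease of \(\bar{H}\) models dynamical relaxation toward \(\rho = |\psi|^2\), whereas the typicality approach instead takes the \(|\psi|^2\)-distribution to hold for typical initial configurations of the universe.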
  27.
    Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond if you were to discover one of these robots attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?
    Found 1 week ago on Eddy Nahmias's site
  28.
    On the formulation discussed here, epistemological disjunctivism is the view that in paradigmatic cases of perceptual knowledge, a thinker’s perceptual beliefs constitute knowledge when they are based on reasons that provide them with factive support (i.e., the complete description of the thinker’s reason for believing, say, that it is Agnes curled up on the sofa entails that Agnes is curled up on the sofa). A thinker is in a position to know that p perceptually if the thinker sees that p. It is the seeing that p that constitutes the thinker’s reason for believing p and provides the requisite support for that belief. This perceptual relation between a thinker and a fact guarantees that the thinker is in a position to know things about things in her surroundings. Without this kind of support, perceptual knowledge isn’t possible.
    Found 1 week ago on Clayton Littlejohn's site
  29.
    If I were to say, “Agnes does not know that it is raining, but it is,” this seems like a perfectly coherent way of describing Agnes’s epistemic position. If I were to add, “And I don’t know if it is, either,” this seems quite strange. In this chapter, we shall look at some statements that seem, in some sense, contradictory, even though it seems that these statements can express propositions that are contingently true or false. Moore thought it was paradoxical that statements that can express true propositions or contingently false propositions should nevertheless seem absurd like this. If we can account for the absurdity, we shall solve Moore’s Paradox. In this chapter, we shall look at Moore’s proposals and more recent discussions of Moorean absurd thought and speech.
    Found 1 week ago on Clayton Littlejohn's site
  30.
    Extended cognition occurs when cognitive processes extend beyond the subject's brain and nervous system to properly include such ‘external’ devices as technology. This paper explores what relevance extended cognitive processes might have for humility, and especially for the specifically cognitive aspect of humility—viz., intellectual humility. As regards humility in general, it is argued that there are no in-principle barriers to extended cognitive processes helping to enable the development and manifestation of this character trait, but that there may be limitations to the extent to which one's manifestation of humility can be dependent upon these processes, at least insofar as we follow orthodoxy and treat humility as a virtue. As regards the cognitive trait of intellectual humility in particular, the question becomes whether this can itself be an extended cognitive process. It is argued that this wouldn't be a plausible conception of intellectual humility, at least insofar as we treat intellectual humility (like humility in general) as a virtue.
    Found 1 week ago on Duncan Pritchard's site