-
In his recent book The Value Gap (2021), Toni Rønnow-Rasmussen defends a pluralist view of final goodness and goodness-for, according to which neither concept is analysable in terms of the other. In this paper I defend a specific version of monism, namely so-called ‘Mooreanism’, according to which goodness-for is analysable partly in terms of final goodness. Rønnow-Rasmussen offers three purported counterexamples to Mooreanism. I argue that Mooreanism can accommodate two of them. The third is more problematic, but I argue that, in the end, it is not a decisive objection.
-
The Border Between Seeing and Thinking is an extraordinary achievement, the result of careful attention (and contribution) to both the science and philosophy of perception. The book offers some bold hypotheses. While the hypotheses themselves are worth the price of entry, Block’s sustained defense of them grants the reader insight into countless fascinating experimental results and philosophical concepts. His unpretentious and accommodating exposition of the science—explaining rather than asserting, digging into specific results in detail rather than making summary judgments and demanding that readers take him at his word—is a model of how philosophers ought to engage with empirical evidence. It is simply not possible to read this book without learning something. It will surely play a foundational role in theoretical work on perception for many years to come.
-
This paper explores the relationship between the questioning attitude of wondering and a class of attitudes I call epistemic desires. Broadly, these are desires to improve one’s epistemic position on some question. A common example is the attitude of wanting to know the answer to some question. I argue that one can have any kind of epistemic desire towards any question, Q, without necessarily wondering Q, but not conversely. That is, one cannot wonder Q without having at least some epistemic desire directed towards Q. I defend this latter claim from apparent counterexamples due to Friedman (2013) and Drucker (2022), and finish with a proposal on which epistemic desires, particularly the desire for understanding, play an explanatory role in distinguishing wondering from other forms of question-directed thought.
-
An influential objection to the epistemic power of the imagination holds that it is uninformative. You cannot get more out of the imagination than you put into it, and therefore learning from the imagination is impossible. This paper argues, against this view, that the imagination is robustly informative. Moreover, it defends a novel account of how the imagination informs, according to which the imagination is informative in virtue of its analog representational format. The core idea is that analog representations represent relations ‘for free,’ and this explains how the imagination can contain more information than is put into it. This account makes important contributions to both philosophy of mind, by showing how the imagination can generate new content that is not represented by a subject’s antecedent mental states, and epistemology, by showing how the imagination can generate new justification that is not conferred by a subject’s antecedent evidence.
-
In recent decades, Bayesian modeling has achieved extraordinary success within perceptual psychology (Knill and Richards, 1996; Rescorla, 2015; Rescorla, 2020a; Rescorla, 2021). Bayesian models posit that the perceptual system assigns subjective probabilities (or credences) to hypotheses regarding distal conditions (e.g. hypotheses regarding possible shapes, sizes, colors, or speeds of perceived objects). The perceptual system deploys its subjective probabilities to estimate distal conditions based upon proximal sensory input (e.g. retinal stimulations). It does so through computations that are fast, automatic, subpersonal, and inaccessible to conscious introspection.
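As a toy illustration of the kind of estimation such models posit (not any specific model from the Bayesian perceptual psychology literature; the numbers and variable names are illustrative assumptions), a conjugate Gaussian update shows how a prior over a distal variable combines with a noisy proximal measurement:

```python
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian Bayesian update: combine a prior over a distal
    variable (e.g. an object's size) with a noisy proximal measurement.
    Precisions (inverse variances) add; the posterior mean is the
    precision-weighted average of prior mean and observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Prior: sizes cluster around 10 cm (variance 4); the sensory estimate
# says 14 cm with the same reliability. With equal precisions, the
# posterior estimate lands midway, at 12 cm.
mean, var = gaussian_posterior(10.0, 4.0, 14.0, 4.0)
```

The precision-weighted averaging here mirrors the qualitative point in the abstract: the perceptual estimate is a compromise between prior expectations about distal conditions and the proximal input.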
-
Kolmogorov conditionalization is a strategy for updating credences based on propositions that have initial probability 0. I explore the connection between Kolmogorov conditionalization and Dutch books. Previous discussions of the connection rely crucially upon a factivity assumption: they assume that the agent updates credences based on true propositions. The factivity assumption discounts cases of misplaced certainty, i.e. cases where the agent invests credence 1 in a falsehood. Yet misplaced certainty arises routinely in scientific and philosophical applications of Bayesian decision theory. I prove a non-factive Dutch book theorem and converse Dutch book theorem for Kolmogorov conditionalization. The theorems do not rely upon the factivity assumption, so they establish that Kolmogorov conditionalization has unique pragmatic virtues that persist even in cases of misplaced certainty.
-
When P(E) > 0, conditional probabilities P(H | E) are given by the ratio formula. An agent engages in ratio conditionalization when she updates her credences using conditional probabilities dictated by the ratio formula. Ratio conditionalization cannot eradicate certainties, including certainties gained through prior exercises of ratio conditionalization. An agent who updates her credences only through ratio conditionalization risks permanent certainty in propositions against which she has overwhelming evidence. To avoid this undesirable consequence, I argue that we should supplement ratio conditionalization with Kolmogorov conditionalization, a strategy for updating credences based on propositions E such that P(E) = 0. Kolmogorov conditionalization can eradicate certainties, including certainties gained through prior exercises of conditionalization. Adducing general theorems and detailed examples, I show that Kolmogorov conditionalization helps us model epistemic defeat across a wide range of circumstances.
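The ratio formula, and the reason ratio conditionalization cannot eradicate certainties, can be made explicit (this is the standard textbook point, spelled out for reference):

```latex
P(H \mid E) \;=\; \frac{P(H \cap E)}{P(E)} \qquad \text{(defined only when } P(E) > 0\text{)}.
```

In particular, if $P(H) = 1$, then $P(H \cap E) = P(E)$ for every $E$, so $P(H \mid E) = P(E)/P(E) = 1$: no evidence with positive probability can dislodge the certainty. Updating on probability-0 propositions therefore requires a different tool, such as Kolmogorov conditionalization.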
-
I favor a "superficialist" approach to belief (see here and here). "Belief" is best conceptualized not in terms of deep cognitive structure (e.g., stored sentences in the language of thought) but rather in terms of how a person would tend to act and react under various hypothetical conditions -- their overall "dispositional profile". …
-
According to one prominent critique of mainstream epistemology, discoveries about what it takes to know or justifiedly believe that p can’t provide the right kind of intellectual guidance. As Mark Webb puts it, “the kinds of principles that are developed in this tradition are of no use in helping people in their ordinary epistemic practices.” In this paper I defend a certain form of traditional epistemology against this “regulative” critique. Traditional epistemology can provide—and, indeed, can be essential for—intellectual guidance. The reason is that, in many cases, how you should proceed intellectually depends on what you already know or justifiedly believe: how you should treat counterevidence to your beliefs, for example, can depend on whether those beliefs count as knowledge. Therefore, to get guidance on how to proceed intellectually, it will often be essential to be able to figure out what you know or justifiedly believe. And to do that it will often be helpful to try to figure out what it takes to count as knowledge or justified belief in the first place. To do this is precisely to engage in mainstream epistemology.
-
We argue that knowledge doesn’t require any of truth, justification, or belief. This is so for four primary reasons. First, each of the three conditions has been subject to convincing counterexamples. In addition, the resultant account explains the value of knowledge, manifests important theoretical virtues (in particular, simplicity), and avoids commitment to skepticism.
-
According to the Desiderative Lockean Thesis, there are necessary and sufficient conditions, stated in the terms of decision theory, for when one is truly said to want. I advance a new Desiderative Lockean view. My view is distinctive in being doubly context-sensitive. What a person is truly said to want varies by context, a fact that others attempt to capture by positing a single context-sensitive parameter to evaluate want ascriptions; I posit two. Only with a doubly context-sensitive view can we explain a range of facts that go unexplained by all other Desiderative Lockean views.
-
It has been argued that moral assertions involve the possession, on the part of the speaker, of appropriate non-cognitive attitudes. Thus, uttering ‘murder is wrong’ invites an inference that the speaker disapproves of murder. In this paper, we present the results of four empirical studies concerning this phenomenon. We assess the acceptability of constructions in which that inference is explicitly canceled, such as ‘murder is wrong but I don’t disapprove of it’; and we compare them to similar constructions involving ‘think’ instead of ‘disapprove’—that is, Moore paradoxes (‘murder is wrong but I don’t think that it is wrong’). Our results indicate that the former constructions are largely infelicitous, although not as infelicitous as their Moorean counterparts.
-
In this discussion, we look at three potential problems that arise for Whiting’s account of normative reasons. The first has to do with the idea that objective reasons might have a modal dimension. The second and third concern the idea that there is some sort of direct connection between sets of reasons and the deliberative ought or the ought of rationality. We suggest that we might be better served by using credences about reasons (i.e., creasons), rather than possessed or apparent reasons, to characterise any ought that is distinct from the objective ought.
-
This paper introduces the axiom of Negative Dominance, stating that, if a lottery f is strictly preferred to a lottery g, then some outcome in the support of f is strictly preferred to some outcome in the support of g. It is shown that, if preferences are incomplete on a sufficiently rich domain, then this plausible axiom, which holds for complete preferences, is incompatible with an array of otherwise plausible axioms for choice under uncertainty. In particular, in this setting, Negative Dominance conflicts with the standard Independence axiom. A novel theory, which includes Negative Dominance, and rejects Independence, is developed and shown to be consistent.
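In symbols, writing $\succ$ for strict preference and $\mathrm{supp}(\cdot)$ for the set of outcomes to which a lottery assigns positive probability, the axiom as stated above reads:

```latex
f \succ g \;\Longrightarrow\; \exists\, x \in \mathrm{supp}(f)\;\, \exists\, y \in \mathrm{supp}(g):\; x \succ y.
```

Contrapositively: if no outcome possible under $f$ is strictly preferred to any outcome possible under $g$, then $f$ is not strictly preferred to $g$.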
-
I respond to Tim Smartt’s (2023) skepticism about epistemic blame. Smartt’s skepticism is based on the claims that i) mere negative epistemic evaluation can better explain everything proponents of epistemic blame say we need epistemic blame to explain; and ii) no existing account of epistemic blame provides a plausible account of the putative force that any response deserving the label “blame” ought to have. He focuses primarily on the prominent “relationship-based” account of epistemic blame to defend these claims, arguing that the account is explanatorily idle, and cannot distinguish between epistemically excused and epistemically blameworthy agents. I argue that Smartt mischaracterizes the account’s role for judgments of epistemic relationship impairment, leading to mistaken claims about the account’s predictions. I also argue that the very feature of the account that Smartt mischaracterizes is key to understanding what epistemic blame does for our epistemic responsibility practices that mere negative epistemic evaluation cannot.
-
In the first volume of Law, Legislation and Liberty (1973, Chaps. 1 and 2), F.A. Hayek sets out his famous criticism of the constructivist (or rationalist) approach to human history. As Hayek puts it, this approach assumes that humans are fully rational and thus can construct perfect social institutions, because reason can advise them on how to do so impeccably. In this regard, Hayek (1973, Chap. 1, p. 12) writes: “Complete rationality of action in the Cartesian sense demands complete knowledge of all the relevant facts. A designer or engineer needs all the data and full power to control or manipulate them if he is to organize the material objects to produce the intended result. But the success of action in society depends on more particular facts than anyone can possibly know. And our whole civilization in consequence rests, and must rest, on our believing much that we cannot know [Hayek’s italics] to be true in the Cartesian sense.”
-
If there is any consensus about knowledge in contemporary epistemology, it is that there is one primary kind: knowledge-that. I put forth a view, one I find in the works of Aristotle, on which knowledge-of – construed in a fairly demanding sense, as being well-acquainted with things – is the primary, fundamental kind of knowledge. As to knowledge-that, it is not distinct from knowledge-of, let alone more fundamental, but instead a species of it. To know that such-and-such, just like to know a person or place, is to be well-acquainted with a portion of reality – in this case a fact. In part by comparing classic Gettier cases to cases in which one has true impressions of but fails to know a person, I argue that this account not only respects our intuitions about knowledge-that – in particular that it is or entails non-accidentally true justified belief – but also explains them, providing a compelling analysis.
-
Supersymmetry (SUSY) has long been considered an exceptionally promising theory, and naturalness arguments have played a central role in underwriting that promise. Yet, given the absence of experimental findings, it is questionable whether the promise will ever be fulfilled. Here, I provide an analysis of the promises associated with SUSY, employing a concept of pursuitworthiness. A research program like SUSY is pursuitworthy if (1) it has the plausible potential to provide high epistemic gain and (2) that gain can be achieved with manageable research efforts. Naturalness arguments have been employed to support both conditions (1) and (2). First, SUSY has been motivated by way of analogy: the proposed symmetry between fermions and bosons is supposed to ‘protect’ the small Higgs mass from large quantum corrections, just as the electron mass is protected by chiral symmetry. Thus, SUSY held the promise of solving a major problem of the Standard Model of particle physics. Second, naturalness arguments have been employed to indicate that this gain is achievable at relatively low cost: SUSY discoveries seemed to be well within reach of upcoming high-energy experiments. While the first part of the naturalness argument may have the right form to facilitate considerations of pursuitworthiness, the second part of the argument has been problematically overstated.
-
A number of authors, including me, have argued that the output of our most complex climate models, that is, of global climate models and Earth system models, should be assessed possibilistically. Worries about the viability of doing so have also been expressed. I examine the assessment of the output of relatively simple climate models in the context of discovery and point out that this assessment is of epistemic possibilities. At the same time, I show that the concept of epistemic possibility used in the relevant studies does not fit available analyses of this concept. I then provide an alternative analysis that does fit the studies and broad climate modelling practices, and that meshes with my existing view that climate model assessment should typically be of real possibilities. On my analysis, to assert that a proposition is epistemically possible is to assert that it is not known to be false and is consistent with at least approximate knowledge of the basic way things are. Finally, I consider some of the implications of my discussion for available possibilistic views of climate model assessment and for worries about such views. I conclude that my view helps to address worries about such assessment and permits using the full range of climate models in it.
-
Two types of formal models—landscape search tasks and two-armed bandit models—are often used to study the effects that various social factors have on epistemic performance. I argue that they can be understood within a single framework. In this unified framework, I develop a model that may be used to understand the effects of functional and demographic diversity and their interaction. Using the unified model, I find that the benefit of demographic diversity is most pronounced in a functionally homogeneous group and decreases as functional diversity increases.
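For readers unfamiliar with the second model type, a minimal two-armed Bernoulli bandit can be sketched as follows; the single epsilon-greedy agent and the payoff probabilities are illustrative assumptions, not the paper's unified model:

```python
import random

def simulate_bandit(p_arms, n_rounds, eps=0.1, seed=0):
    """Two-armed Bernoulli bandit with one epsilon-greedy agent: each
    round, explore a random arm with probability eps, otherwise pull
    the arm with the best empirical success rate so far."""
    rng = random.Random(seed)
    pulls, wins = [0, 0], [0, 0]
    for _ in range(n_rounds):
        if rng.random() < eps or 0 in pulls:   # force initial exploration
            arm = rng.randrange(2)
        else:
            arm = max((0, 1), key=lambda a: wins[a] / pulls[a])
        pulls[arm] += 1
        wins[arm] += rng.random() < p_arms[arm]
    return pulls

# Arm 1 pays off with probability 0.7 vs. 0.3 for arm 0; over 1000
# rounds the agent tends to concentrate its pulls on the better arm.
pulls = simulate_bandit([0.3, 0.7], 1000)
```

Social-epistemology versions of such models typically replace the single agent with a network of agents who share their pull outcomes; this sketch shows only the underlying payoff structure.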
-
In this short paper, I critically examine Veli Mitova’s proposal that social-identity groups can have collective epistemic reasons. My primary focus is the role of privileged access in her account of how collective reasons become epistemic reasons for social-identity groups. I argue that there is a potentially worrying structural asymmetry in her account of two different types of cases. More specifically, the mechanisms at play in cases of “doxastic reasons” seem fundamentally different from those at play in cases of “epistemic-conduct reasons.” The upshot is a need for further explanation of what unifies these dimensions of the account.
-
Explicating the concept of coherence and establishing a measure for assessing the coherence of an information set are two of the most important tasks of coherentist epistemology. To this end, several principles have been proposed to guide the specification of a measure of coherence. We depart from this prevailing path by challenging two well-established and prima facie plausible principles: Agreement and Dependence. Instead, we propose a new probabilistic measure of coherence that combines basic intuitions of both principles, but without strictly satisfying either of them. It is then shown that the new measure outperforms alternative measures in terms of its truth-tracking properties. We consider this feature to be central and argue that coherence matters because it is likely to be our best available guide to truth, at least when more direct evidence is unavailable.
-
The neuroscience of consciousness is undergoing a significant empirical acceleration thanks to several adversarial collaborations that intend to test different predictions of rival theories of consciousness. In this context, it is important to pair consciousness science with confirmation theory, the philosophical discipline that explores the interaction between evidence and hypotheses, in order to understand how exactly, and to what extent, specific experiments are challenging or validating theories of consciousness.
-
The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not usually inflate relevant Type I error rates. I begin by introducing the concept of Type I error rates and distinguishing between statistical errors and theoretical errors. I then illustrate my argument with respect to model misspecification, multiple testing, selective inference, forking paths, exploratory analyses, p-hacking, optional stopping, double dipping, and HARKing. In each case, I demonstrate that relevant Type I error rates are not usually inflated above their nominal level, and in the rare cases that they are, the inflation is easily identified and resolved. I conclude that the replication crisis may be explained, at least in part, by researchers’ underestimation of theoretical errors and misinterpretation of statistical errors.
-
It is widely held that forgiveness is sometimes elective, such that prospective forgivers sometimes have discretion over whether (or at least, how soon) to forgive wrongdoers. It is also widely held that, in granting forgiveness, the forgiver changes the normative landscape connecting the forgiver and the forgiven; forgiveness is not only psychologically and socially significant, but normatively significant. Attention to the electivity and normativity of forgiveness, I argue, reveals our practices of forgiveness to be subject to a philosophically significant and unexplored form of moral luck. Rather than taking this kind of luck to count against the view that forgiveness is elective and normatively significant, I maintain that it contributes to an understanding of the ways in which we are morally vulnerable to those we culpably wrong. Furthermore, it provides a novel inroad for thinking about moral luck, moderately reconceived, more generally. For, assuming that there is an important way in which one’s being forgiven, upon apologizing, is determined by factors beyond one’s control—factors like the readiness or willingness of the victim to grant forgiveness—one might, given the normative significance of forgiveness, be morally lucky in being forgiven. But if the forgiven agent is morally lucky, assuming that this will not be a matter of her being less blameworthy than her unforgiven counterpart, we have available a view of moral luck that is non-trivial yet, in principle, adoptable by standard opponents of moral luck, those who reject that one’s blameworthiness can be determined (/intensified) by factors beyond one’s control that are causally downstream of one’s action.
-
It is possible that you are living in a simulation—that your world is computer-generated rather than physical. But how likely is this scenario? Bostrom and Chalmers each argue that it is moderately likely—neither very likely nor very unlikely. However, they adopt an unorthodox form of reasoning about self-location uncertainty. Our main contention here is that Bostrom’s and Chalmers’ premises, when combined with orthodoxy about self-location, yield instead the conclusion that you are almost certainly living in a simulation. We consider how this (surprising) conclusion might be resisted, and show that the analogy between Sleeping Beauty cases and simulation cases provides a new way of evaluating approaches to self-location uncertainty. In particular, we argue that some conditionalization-based approaches to self-location are problematically limited in their applicability.
-
In the social epistemology of scientific knowledge, it is largely accepted that relationships of trust, not just reliance, are necessary in contemporary collaborative science characterised by relationships of opaque epistemic dependence. Such relationships of trust are taken to be possible only between agents who can be held accountable for their actions. But today, knowledge production in many fields makes use of AI applications that are epistemically opaque in an essential manner. This creates a problem for the social epistemology of scientific knowledge, as scientists are now epistemically dependent on AI applications that are not agents, and therefore not appropriate candidates for trust.
-
To mitigate the Look Elsewhere Effect in multiple hypothesis testing using p-values, the paper suggests an “entropic correction” of the significance level at which the null hypothesis is rejected. The proposed correction uses the entropic uncertainty associated with the probability measure expressing the prior-to-test probabilities of the confirming evidence occurring at the various values of the parameter. When the prior-to-test probability is uniform (embodying maximal uncertainty), the entropic correction coincides with the Bonferroni correction. When the prior-to-test probability embodies maximal certainty (is concentrated on a single value of the parameter at which the evidence is obtained), the entropic correction overrides the Look Elsewhere Effect completely by not requiring any correction of significance. The intermediate situation is illustrated by a simple hypothetical example. Interpreting the prior-to-test probability subjectively allows a Bayesian spirit to enter frequentist multiple hypothesis testing in a disciplined manner. If the prior-to-test probability is determined objectively, the entropic correction makes it possible to take into account, in a technically explicit way, the background theoretical knowledge relevant for the test.
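One natural reconstruction of such a correction—consistent with the two limiting cases the abstract describes, though not necessarily the paper's own formula—divides the significance level by the exponential of the Shannon entropy of the prior-to-test distribution (the function name and details below are hypothetical):

```python
import math

def entropic_alpha(alpha, prior):
    """Scale the significance level alpha by exp(H), where H is the
    Shannon entropy (in nats) of the prior-to-test probabilities.
    Uniform prior over n values: exp(H) = n, recovering Bonferroni.
    Point-mass prior: exp(H) = 1, i.e. no correction at all."""
    h = -sum(p * math.log(p) for p in prior if p > 0)
    return alpha / math.exp(h)

a_uniform = entropic_alpha(0.05, [0.2] * 5)   # ~ 0.05 / 5 = 0.01 (Bonferroni)
a_point = entropic_alpha(0.05, [1.0])         # 0.05: no correction
```

Intermediate priors yield corrections strictly between the Bonferroni level and no correction, matching the "intermediate situation" mentioned above.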
-
Between common knowledge and common belief, is one in any sense the more important notion in the explanation of coordinated action? Among those not already skeptical of one or both of these notions in the first place, a not uncommon view is that the distinction between the two is, at least very often, not of great consequence. Readers are often warned that the ‘knowledge’ in ‘common knowledge’ isn’t performing important work, and shouldn’t be credited with much significance. As Lederman observes in a survey article on the topic,
-
Experiences of urges, impulses or inclinations are among the most basic elements in the practical life of conscious agents. This paper develops a theory of urges and their epistemology. I motivate a framework that distinguishes urges, conscious experiences of urges and exercises of capacities we have to control our urges. I argue that experiences of urges and exercises of control over urges play coordinate roles in providing one with knowledge of one’s urges.