[Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada on the 24th May 2019. …
How to serve two epistemic masters
Posted on Thursday, 23 May 2019
In a 2018 paper, J. Dmitri Gallow shows that it is difficult to combine
multiple deference principles. The argument is a little complicated,
but the basic idea is surprisingly simple. …
Suppose that you have been invited to attend an ex-partner’s
wedding and that the best thing you can do is accept the invitation
and be pleasant at the wedding. But, suppose furthermore that if you
do accept the invitation, you’ll freely decide to get inebriated
at the wedding and ruin it for everyone, which would be the worst
outcome. The second best thing to do would be to simply decline the
invitation. In light of these facts, should you accept or decline the
invitation? (Zimmerman 2006: 153). The answer to this question hinges
on the actualism/possibilism debate in ethics, which concerns the
relationship between an agent’s free actions and her moral obligations.
The uneducated person blames others for their failures; those who have just begun to be instructed blame themselves; those whose learning is complete blame neither others nor themselves.1 So says Epictetus, spelling out one tenet of Stoic thought: that blame, whether of oneself or another, has no place in a life wisely lived. To blame is unhealthy and dispensable. This tenet long endeared me to Stoicism. For I was, for many years, what Peter Graham calls a ‘blame sceptic’. That is not to say that I resiled from blaming. Rather, I blamed and then reproached myself for doing so. Since reproaching entails blaming, I thereby compounded my felony. And then, reproaching myself for compounding my felony, I compounded it some more.
There has been an ongoing debate about whether desires are beliefs. Call the claim that they are the desire-as-belief thesis (DAB). This paper sets out to impugn the two versions of DAB that have enjoyed the most support in the philosophical literature: the guise of the good and the guise of reasons accounts. According to the guise of the good version of DAB, the desire to φ is identical to the belief that φ is good. According to the guise of reasons version of DAB, the desire to φ is identical to the belief that one has a normative reason to φ. My paper presents a pair of objections to DAB: the first specifically targets the guise of reasons account defended by Alex Gregory, while the second aims to undermine DAB more generally.
The three central tenets of traditional Bayesian epistemology are these:
Precision: Your doxastic state at a given time is represented by a credence function, $c$, which takes each proposition $X$ about which you have an opinion and returns a single numerical value, $c(X)$, that measures the strength of your belief in $X$. …
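The Precision tenet can be made concrete with a small sketch: a credence function is just an assignment of a single number in $[0,1]$ to each proposition the agent has an opinion about. All names and the dict-based encoding below are my own illustration, not from the text.

```python
# A minimal sketch of the Precision tenet: a credence function c assigns
# exactly one numerical value in [0, 1] to each proposition X the agent
# has an opinion about. The encoding (propositions as strings) is an
# assumption for illustration.

def make_credence(assignments):
    """Return a credence function c from a dict of proposition -> strength."""
    for prop, value in assignments.items():
        if not (0.0 <= value <= 1.0):
            raise ValueError(f"credence for {prop!r} must lie in [0, 1]")
    return lambda X: assignments[X]

c = make_credence({"rain": 0.7, "no-rain": 0.3})
print(c("rain"))  # a single precise value, as Precision requires
```

The point of the sketch is only that Precision demands one sharp number per proposition, which is exactly what imprecise-credence alternatives deny.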
It is almost unanimously accepted in the moral luck literature that Kant denies resultant moral luck—that is, he denies that the lucky consequence of a person’s action can affect how much praise or blame she deserves. Philosophers often point to the famous good will passage at the beginning of the Groundwork to justify this claim. I argue, however, that this passage does not support Kant’s denial of resultant moral luck. Subsequently, I argue that Kant allows agents to be morally responsible for certain kinds of lucky consequences. Even so, I argue that it is unclear whether Kant ultimately endorses resultant moral luck. The reason is that Kant does not write enough on moral responsibility for consequences to determine definitively whether he thinks that the lucky consequence for which an agent is morally responsible can add to her degree of praiseworthiness or blameworthiness. The clear upshot, however, is that Kant does not deny resultant moral luck.
Curiously, people assign less punishment to a person who attempts and fails to harm somebody if their intended victim happens to suffer the harm for coincidental reasons. This “blame blocking” effect provides important evidence in support of the two-process model of moral judgment (Cushman, 2008). Yet, recent proposals suggest that it might be due to an unintended interpretation of the dependent measure in cases of coincidental harm (Prochownik, 2017; also Malle, Guglielmo, & Monroe, 2014). If so, this would deprive the two-process model of an important source of empirical support. We report and discuss results that speak against this alternative account.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
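Deterministic Updating, in its simplest form, amounts to the familiar ratio rule: on learning evidence $E$ with certainty, the new credence in $X$ is $c(X \mid E) = c(X \wedge E)/c(E)$. Here is a hedged sketch over a finite space of worlds; the function names and encoding are mine, not the paper's.

```python
# A sketch of deterministic Conditionalization over a finite set of worlds.
# A credence function is encoded as a dict world -> probability, and the
# evidence as the set of worlds compatible with what was learned.
# This encoding is an illustrative assumption, not the paper's formalism.

def conditionalize(credence, evidence):
    """Return the posterior credence after learning `evidence` with certainty."""
    total = sum(p for w, p in credence.items() if w in evidence)
    if total == 0:
        raise ValueError("cannot conditionalize on a zero-credence proposition")
    return {w: (p / total if w in evidence else 0.0)
            for w, p in credence.items()}

prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
posterior = conditionalize(prior, {"w1", "w2"})  # learn: not-w3
print(posterior["w1"])  # ~ 0.5 / 0.8, i.e. about 0.625
```

Competing formulations of the norm differ over, among other things, whether the update must be this deterministic function of the evidence; the sketch shows only the standard deterministic case.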
Art can be addressed, not just to individuals, but to groups. Art can even be part of how groups think to themselves – how they keep a grip on their values over time. I focus on monuments as a case study. Monuments, I claim, can function as a commitment to a group value, for the sake of long-term action guidance. Art can function here where charters and mission statements cannot, precisely because of art’s powers to capture subtlety and emotion. In particular, art can serve as the vessel for group emotions, by making emotional content sufficiently public so as to be the object of a group commitment. Art enables groups to guide themselves with values too subtle to be codified.
‘Knowledge-how’ is the knowledge you have when you know how to do something. For example, when you know how to dance the tango, or solve a certain equation, or ride a bike, etc. Influenced by Ryle (1949), the traditional view of knowledge-how had two components: (1) a negative claim (anti-intellectualism) that knowledge-how is not any kind of knowledge-that (or any other propositional attitude state); and (2) a positive claim (abilitism or dispositionalism) that knowledge-how is some kind of ability or complex dispositional state. This traditional Rylean view was, for a long time, a largely unquestioned feature of philosophical orthodoxy. There were occasional challenges to the traditional view but these challenges generated little sustained debate, and did not seriously threaten the orthodox status of Ryleanism.
We argue that comparative psychologists have been too quick to jump to metacognitive interpretations of their data. We examine two such cases in some detail. One concerns so-called “uncertainty monitoring” behavior, which we show to be better explained in terms of first-order estimates of risk. The other concerns informational search, which we argue is better explained in terms of a first-order curiosity-like motivation that directs questions at the environment.
Sometimes theists wonder how God’s beliefs track particular portions of reality, e.g. contingent states of affairs, or facts regarding future free actions. In this article I sketch a general model for how God’s beliefs track reality. God’s beliefs track reality in much the same way that propositions track reality, namely via grounding. Just as the truth values of true propositions are generally or always grounded in their truthmakers, so too God’s true beliefs are grounded in the subject matters of those beliefs (i.e. God believes that p in virtue of the fact that p). This is not idle speculation, since my proposal allows the theist to account for God’s true beliefs regarding causally inert portions of reality.
What are the epistemic benefits of democracy? According to the ‘epistemic democrats’, democratic procedures such as deliberation and voting are valuable in part because they produce epistemically valuable outcomes. Indeed, epistemic democrats claim the legitimacy of democracy depends, at least in part, on the epistemic quality of the outcomes of political decision-making processes. In this paper, I want to consider two epistemic factors that might figure into the value of democracy, namely, veritistic and non-veritistic epistemic goals.
We present an inferentialist account of the epistemic modal operator ‘might’. Our starting point is the bilateralist programme. A bilateralist explains the operator ‘not’ in terms of the speech act of rejection; we explain the operator ‘might’ in terms of weak assertion, a speech act whose existence we argue for on the basis of linguistic evidence. We show that our account of ‘might’ provides a solution to certain well-known puzzles about the semantics of modal vocabulary whilst retaining classical logic. This demonstrates that an inferentialist approach to meaning can be successfully extended beyond the core logical constants.
According to rationalists, synthetic a priori propositions convey new knowledge, whereas analytic propositions are non-informative or vacuous conceptual truths. However, as we argue in this article, each a priori proposition is necessarily true because of its semantic constituents and the way they are combined, and hence can be transformed into its equivalent analytic form. So each synthetic a priori proposition conveys only non-informative conceptual truths like analytic propositions.
In recent years there has been an explosion of philosophical work on blame. Much of this work has focused on explicating the nature of blame or on examining the norms that govern it, and the primary motivation for theorizing about blame seems to derive from blame’s tight connection to responsibility. However, very little philosophical attention has been given to praise and its attendant practices. In this paper, I identify three possible explanations for this lack of attention. My goal is to show that each of these lines of thought is mistaken and to argue that praise is deserving of careful, independent analysis by philosophers interested in theorizing about responsibility.
Character judgments play an important role in our everyday lives. However, decades of empirical research on trait attribution suggest that the cognitive processes that generate these judgments are prone to a number of biases and cognitive distortions. This gives rise to a skeptical worry about the epistemic foundations of everyday characterological beliefs that has deeply disturbing and alienating consequences. In this paper, I argue that these skeptical worries are misplaced: under the appropriate informational conditions, our everyday character-trait judgments are in fact quite trustworthy. I then propose a mindreading-based model of the socio-cognitive processes underlying trait attribution that explains both why these judgments are initially unreliable, and how they eventually become more accurate.
Decision theory and philosophy of action both attempt to explain what it is for an ideally rational agent to answer the question “What to do?” From the agent’s point of view, the answer to that question is settled in practical deliberation and motivates her to act. The mental states that determine her answer are the sources of rationalizing explanations of the agent’s behavior. They explain why she performed a given action in terms of why it made sense, from her point of view, to so act. Rationalizing explanations should be contrastive, of the form “Agent S performed action A, rather than actions B, C, or D, because P, Q, and R” where B, C, and D are whatever S takes to be the possible alternatives to A, and P, Q, and R are whichever of S’s deliberative considerations and other factors yield a good explanation.
Agents make predictions based on similar past cases, while also learning the relative importance of various attributes in judging similarity. We ask whether the resulting "empirically optimal similarity function" (EOSF) is unique, and how easy it is to find it. We show that with many observations and few relevant variables, uniqueness holds. By contrast, when there are many variables relative to observations, non-uniqueness is the rule, and finding the EOSF is computationally hard. The results are interpreted as providing conditions under which rational agents who have access to the same observations are likely to converge on the same predictions, and conditions under which they may entertain different probabilistic beliefs.
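The setup can be sketched in code: predictions are similarity-weighted averages of past outcomes, and the attribute weights are exactly what the agent is supposed to learn. The negative-exponential functional form below is my assumption for illustration, not the paper's definition of the EOSF.

```python
# A toy sketch of case-based prediction under a weighted similarity
# function. The exp(-weighted squared distance) form is an illustrative
# assumption; the abstract's point is that the learned weights may not
# be unique when variables outnumber observations.

import math

def similarity(x, y, weights):
    """Similarity of two attribute vectors under per-attribute weights."""
    dist = sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y))
    return math.exp(-dist)

def predict(query, past_cases, weights):
    """Similarity-weighted average of past outcomes [(x_i, y_i), ...]."""
    sims = [similarity(query, x, weights) for x, _ in past_cases]
    return sum(s * y for s, (_, y) in zip(sims, past_cases)) / sum(sims)

cases = [((0.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
# With all weight on the first attribute, the query (0, 1) resembles the
# first past case, so the prediction lies close to its outcome 1.0.
print(predict((0.0, 1.0), cases, weights=(5.0, 0.0)))
```

Two different weight vectors can fit the same small set of observations equally well while disagreeing on new queries, which is the non-uniqueness the abstract flags.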
There is conflicting experimental evidence about whether the “stakes” or importance of being wrong affect judgments about whether a subject knows a proposition. To date, judgments about stakes effects on knowledge have been investigated using binary paradigms: responses to “low” stakes cases are compared with responses to “high” stakes cases. However, stakes or importance are not binary properties—they are scalar: whether a situation is “high” or “low” stakes is a matter of degree. So far, no experimental work has investigated the scalar nature of stakes effects on knowledge: do stakes effects increase as the stakes get higher? Do stakes effects only appear once a certain threshold of stakes has been crossed? Does the effect plateau at a certain point? To address these questions, we conducted experiments that probe for the scalarity of stakes effects using several experimental approaches. We found evidence of scalar stakes effects using an “evidence-seeking” experimental design, but no evidence of scalar effects using a traditional “evidence-fixed” experimental design. In addition, using the evidence-seeking design, we uncovered a large, but previously unnoticed framing effect on whether participants are skeptical about whether someone can know something, no matter how much evidence they have. The rate of skeptical responses and the rate at which participants were willing to attribute “lazy knowledge”—that someone can know something without having to check—were themselves subject to a stakes effect: participants were more skeptical when the stakes were higher, and more prone to attribute lazy knowledge when the stakes were lower. We argue that the novel skeptical stakes effect provides resources to respond to criticisms of the evidence-seeking approach that argue that it does not target knowledge.
We identify several ongoing debates related to implicit measures, surveying prominent views and considerations in each. First, we summarize the debate regarding whether performance on implicit measures is explained by conscious or unconscious representations. Second, we discuss the cognitive structure of the operative constructs: are they associatively or propositionally structured? Third, we review debates about whether performance on implicit measures reflects traits or states. Fourth, we discuss the question of whether a person's performance on an implicit measure reflects characteristics of the person who is taking the test or characteristics of the situation in which the person is taking the test. Finally, we survey the debate about the relationship between implicit measures and (other kinds of) behavior.
I give a new argument for the moral difference between lying and misleading. First, following David Lewis (1983, 2002), I hold that conventions of Truthfulness and Trust fix the meanings of our language. These conventions generate fair play obligations. Thus, to fail to conform to the conventions of Truthfulness and Trust is unfair. Second, I argue that the liar, but not the misleader, fails to conform to Truthfulness. So the liar, but not the misleader, does something unfair. This account entails that bald-faced lies are wrong, that we can lie non-linguistically, and that linguistic innovation is morally significant.
. We’ve reached our last Tour (of SIST)*: Pragmatic and Error Statistical Bayesians (Excursion 6), marking the end of our reading with Souvenir Z, the final Souvenir, as well as the Farewell Keepsake in 6.7. …
We address problems (that have since been addressed) in a proofs-version of a paper by Eva, Hartmann and Rad, who were attempting to justify the Kullback-Leibler divergence minimization solution to van Fraassen’s Judy Benjamin problem.
Several treatments of the Shooting Room Paradox have failed to recognize the crucial role played by its involving a number of players unbounded in expectation. We indicate Reflection violations and other vulnerabilities in extant proposals, then show that the paradox does not arise when the expected number of participants is finite; the Shooting Room thus takes its place in the growing list of puzzles that have been shown to require infinite expectation. Recognizing this fact, we conclude that prospects for a “straight solution” are dim.
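The "unbounded in expectation" point is a piece of arithmetic, and a small computation makes it vivid. The stylization below is my own (one common version of the setup), not the authors': the nth round summons $10^n$ players, and each round independently ends the game with probability $1/36$ (double sixes). Since $10 \cdot 35/36 > 1$, the terms of the expectation grow geometrically and the expected number of players diverges.

```python
# Illustrative arithmetic for the claim that the Shooting Room involves a
# number of players unbounded in expectation. Assumed stylization (mine,
# not the paper's): round n summons 10**n players, and each round ends
# the game with probability 1/36, independently.

def truncated_expected_players(max_rounds):
    """Expected total players, counting only games that end within max_rounds."""
    p_stop = 1 / 36
    expectation = 0.0
    total_so_far = 0
    for n in range(1, max_rounds + 1):
        total_so_far += 10 ** n                      # players summoned so far
        p_end_here = (1 - p_stop) ** (n - 1) * p_stop
        expectation += p_end_here * total_so_far
    return expectation

# Each term has ratio roughly 10 * (35/36) > 1, so the truncated
# expectation grows without bound as max_rounds increases.
for r in (5, 10, 20):
    print(r, truncated_expected_players(r))
```

Truncating at any finite number of rounds (equivalently, bounding the expected number of participants) is what dissolves the paradox on the paper's diagnosis, and the computation shows why the untruncated expectation cannot be bounded.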
Internalists in epistemology think that whether one possesses epistemic statuses such as knowledge or justification depends on factors that are internal to one; externalists think that whether one possesses these statuses can depend on factors that are external to one. In this chapter we focus on the relationship between externalism and epistemic relativism. Externalism isn’t straightforwardly incompatible with epistemic relativism but, as we’ll see, it is very common to hold that key externalist insights block or undermine some standard arguments for epistemic relativism. Our aim in this chapter is to give a broad overview of why externalism poses a problem for standard arguments for relativism. But we also want to discuss some—admittedly less developed—ways in which some externalist ideas might actually provide support for certain forms of epistemic relativism.
One argument for Duality is that it makes sense of the ‘inescapable clash’ involved in asserting q if p and might not q if p: …

Throughout the paper I will restrict attention to a propositional language L, defined as follows:

Definition 1. Let L be a language consisting of a set A of atomic formulae α, α′, ..., closed under the connectives ¬, ∨, ∧, the indicative and subjunctive conditionals → and >, and the epistemic and subjunctive possibility modals ♦e and ♦. Say that a claim is boolean if it does not contain →, >, ♦e, ♦, or ∨.
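Definition 1 is, in effect, a grammar, and the boolean restriction is a syntactic check on parse trees. Here is a small sketch of L as an abstract syntax tree with that check; the encoding and names are my own, and (following the definition's letter) boolean claims are built from atoms using only ¬ and ∧.

```python
# A sketch of the language L from Definition 1 as an abstract syntax tree,
# plus the paper's syntactic notion of a boolean claim (no conditionals,
# no possibility modals, and no disjunction). Encoding choices are mine.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str                # an atomic formula from A

@dataclass(frozen=True)
class Not:
    sub: object              # negation ¬

@dataclass(frozen=True)
class BinOp:
    op: str                  # 'and', 'or', 'indicative' (→), 'subjunctive' (>)
    left: object
    right: object

@dataclass(frozen=True)
class Modal:
    kind: str                # 'epistemic' (♦e) or 'subjunctive' (♦)
    sub: object

def is_boolean(phi):
    """True iff phi contains none of →, >, ♦e, ♦, or ∨."""
    if isinstance(phi, Atom):
        return True
    if isinstance(phi, Not):
        return is_boolean(phi.sub)
    if isinstance(phi, BinOp):
        return (phi.op == 'and'
                and is_boolean(phi.left) and is_boolean(phi.right))
    return False             # any Modal is non-boolean
```

So ¬(α ∧ α′) counts as boolean, while α ∨ α′ and ♦e α do not, matching the exclusion list in Definition 1.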
Paying strict attention to Brandon Carter's several published renditions of anthropic reasoning, we present a “nutshell” version of the Doomsday argument that is truer to Carter's principles than the standard balls-and-urns or otherwise “naive Bayesian” versions that proliferate in the literature. At modest cost in terms of complication, the argument avoids commitment to many of the half-truths that have inspired so many to rise up against other toy versions, never adopting a posterior outside of the convex hull of one's prior distribution over the “true chance” of Doom. The hyper-pessimistic position of the standard balls-and-urns presentation and the hyper-optimistic position of naive self-indicators are seen to arise from dubiously extreme prior distributions, leaving room for a more satisfying and plausible intermediate solution.
It’s a balmy day today on Ship StatInfasST: An invigorating wind has a salutary effect on our journey. So, for the first time I’m excerpting all of Excursion 5 Tour I (proofs) of Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP).
A salutary effect of power analysis is that it draws one forcibly to consider the magnitude of effects. …