1.
    Successful communication depends not only on our knowledge of language, but also on our knowledge of context. If a speaker utters the sentence “he is going to get burnt,” we will have to rely on our knowledge of the context in order to grasp what proposition they are trying to express. If there is a mutually salient individual in front of us whose trousers have just caught fire, then we will know that this salient individual is the intended referent. If we were talking about our mutual friend Frank, and somebody has just asked how Frank’s latest business deal is going, it will be clear that Frank is the intended referent. Two completely different propositions are expressed in these situations, and without contextual knowledge, we would not be able to tell which proposition was expressed.
    Found 6 hours, 48 minutes ago on Andrew Peet's site
  2.
    Trust is important, but it is also dangerous. It is important because it allows us to depend on others—for love, for advice, for help with our plumbing, or what have you—especially when we know that no outside force compels them to give us these things. But trust also involves the risk that people we trust will not pull through for us, since if there were some guarantee they would pull through, then we would have no need to trust them.[1] Trust is therefore dangerous. What we risk while trusting is the loss of valuable things that we entrust to others, including our self-respect perhaps, which can be shattered by the betrayal of our trust.
    Found 17 hours, 52 minutes ago on Stanford Encyclopedia of Philosophy
  3.
    In this situation, to utter (1a) is to lie, while to utter (1b) is not. Crucially, (1a) is something the speaker believes (indeed knows) to be false, whereas (1b) is something she believes to be true. Yet both utterances are aimed at the same thing: deceiving the hearer into believing that the speaker has not been opening the mail.
    Found 23 hours, 47 minutes ago on Andreas Stokke's site
  4.
    Last week, I explained how you can give an accuracy dominance argument for Probabilism without assuming that your inaccuracy measures are additive -- that is, without assuming that the inaccuracy of a whole credence function is obtained by adding up the inaccuracy of all the individual credences that it assigns. …
    Found 1 day, 3 hours ago on M-Phi
  5.
    Psycholinguistic studies have repeatedly demonstrated that downward entailing (DE) quantifiers are more difficult to process than upward entailing (UE) ones. We contribute to the current debate on the cognitive processes causing the monotonicity effect by testing predictions about the underlying processes derived from two competing theoretical proposals: two-step and pragmatic processing models. We model reaction times and accuracy from two verification experiments (a sentence-picture and a purely linguistic verification task), using the diffusion decision model (DDM). In both experiments, verification of the UE quantifier ‘more than half’ was compared to verification of the DE quantifier ‘fewer than half’. Our analyses revealed the same pattern of results across tasks: both non-decision times and drift rates, two of the free model parameters of the DDM, were affected by the monotonicity manipulation. Thus, our modeling results support both two-step models (prediction: non-decision time is affected) and pragmatic processing models (prediction: drift rate is affected).
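    As background, a minimal simulation sketch of the DDM may help (this is ours, not the authors' model fits, and all parameter values are hypothetical). It shows how the two parameters the abstract reports on act: drift rate controls how fast noisy evidence accumulates toward a decision boundary, while non-decision time shifts the whole reaction-time distribution.

```
import random

def simulate_ddm_trial(drift, boundary=1.0, start=0.5, noise=1.0,
                       non_decision=0.3, dt=0.001):
    """One DDM trial: accumulate noisy evidence until a boundary is hit."""
    x, t = start, 0.0
    while 0.0 < x < boundary:
        # Euler-Maruyama step: deterministic drift plus Gaussian noise
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    choice = 1 if x >= boundary else 0
    return t + non_decision, choice  # non-decision time shifts every RT

# Hypothetical contrast: a lower drift rate and a longer non-decision time,
# the two effects reported for the DE quantifier, both slow responses.
random.seed(0)
ue_rts = [simulate_ddm_trial(drift=1.5, non_decision=0.30)[0] for _ in range(2000)]
de_rts = [simulate_ddm_trial(drift=0.8, non_decision=0.38)[0] for _ in range(2000)]
print(sum(ue_rts) / len(ue_rts), sum(de_rts) / len(de_rts))
```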
    Found 3 days, 9 hours ago on Jakub Szymanik's site
  6.
    For a PDF of this post, see here. One of the central arguments in accuracy-first epistemology -- the one that gets the project off the ground, I think -- is the accuracy-dominance argument for Probabilism. …
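    The core dominance phenomenon is easy to illustrate with a toy example (ours, not the post's), using the additive Brier score: a credence function over a two-cell partition that violates Probabilism is strictly less accurate, at every world, than its Euclidean projection onto the probability simplex. The dominance argument generalizes this observation to arbitrary non-probabilistic credence functions.

```
# Toy accuracy-dominance check under the Brier score (illustrative numbers).

def brier(credences, world):
    # Brier inaccuracy at a world: squared distance from the ideal
    # credence function, which assigns 1 to the true cell and 0 elsewhere.
    return sum((float(i == world) - cr) ** 2 for i, cr in enumerate(credences))

c = [0.6, 0.7]                    # non-probabilistic: credences sum to 1.3
excess = (sum(c) - 1.0) / len(c)
proj = [cr - excess for cr in c]  # Euclidean projection onto the simplex

for world in range(len(c)):
    assert brier(proj, world) < brier(c, world)
    print(world, brier(c, world), brier(proj, world))
```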
    Found 4 days, 12 hours ago on M-Phi
  7.
    Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky integrate-and-fire neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task. These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.
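    For readers unfamiliar with the model's basic building block: a leaky integrate-and-fire neuron simply integrates input toward a threshold, spikes, and resets. Here is a minimal single-neuron sketch (ours, not the authors' code; all parameter values are illustrative, and the paper's 8000-neuron model is built from populations of such units with structured connectivity).

```
def lif_spike_times(input_current, tau=0.02, v_rest=0.0, v_thresh=1.0,
                    v_reset=0.0, dt=0.001, duration=0.5):
    """Leaky integrate-and-fire neuron: integrate input, fire at threshold, reset."""
    v, spikes = v_rest, []
    for step in range(int(duration / dt)):
        # membrane potential decays toward rest and is driven by the input
        v += dt / tau * (v_rest - v + input_current)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(len(lif_spike_times(1.5)))  # stronger input -> higher firing rate
print(len(lif_spike_times(2.5)))
```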
    Found 5 days, 15 hours ago on Chris Eliasmith's site
  8.
    In a recent series of papers, Jane Friedman argues that suspended judgment is a sui generis first-order attitude, with a question (rather than a proposition) as its content. In this paper, I offer a critique of Friedman’s project. I begin by responding to her arguments against reductive higher-order propositional accounts of suspended judgment, and thus undercut the negative case for her own view. Further, I raise worries about the details of her positive account, and in particular about her claim that one suspends judgment about some matter if and only if one inquires into this matter. Subsequently, I use conclusions drawn from the preceding discussion to offer a tentative account: S suspends judgment about p iff (i) S believes that she neither believes nor disbelieves that p, (ii) S neither believes nor disbelieves that p, and (iii) S intends to judge that p or not-p.
    Found 6 days, 6 hours ago on Michal Masny's site
  9.
    This paper is primarily an advertisement for a research program, and for some particular, so far under-explored research questions within that research program. It’s an advertisement for the program of constructing fragmented models of subjects’ propositional attitudes, and theorizing about and by means of such models. I’ll aim to do two things: First, motivate a fragmentationist research program by identifying a cluster of problems that such a research program is well-positioned to address or resolve. Second, identify what I take to be some of the challenges and research questions that the fragmentationist program will need to address, and where the space of possible answers is not yet well-charted.
    Found 6 days, 13 hours ago on Andy Egan's site
  10.
    Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, your future better-informed self, or the objective chances. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn’t defer to your doctor by aligning your credence in the de se thought ‘I am sick’ with the doctor’s credence in that same de se thought. Nor should you defer to the objective chances by setting your credence in the thought ‘The actual winner wins’ equal to the objective chance that the actual winner wins. Here, I generalize principles of expert deference to handle these kinds of problem cases.
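    For reference, a standard (ungeneralized) statement of such a principle, in our notation, with cr your credence function and cr_E the expert's:

```
\mathrm{cr}\big(p \mid \mathrm{cr}_E(p) = x\big) = x,
\qquad\text{hence}\qquad
\mathrm{cr}(p) = \sum_x x \cdot \mathrm{cr}\big(\mathrm{cr}_E(p) = x\big)
```

    The counterexamples above trade on the fact that a de se thought such as ‘I am sick’ has different truth-conditions for you and for the expert, so the two occurrences of p in the schema need not pick out one and the same proposition.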
    Found 6 days, 15 hours ago on PhilPapers
  11.
    Consumption decisions are partly influenced by values and ideologies. Consumers care about global warming as well as about child labor and fair trade. Incorporating values into the consumer’s utility function will often violate monotonicity, in case consumption hurts cherished values in a way that isn’t offset by the hedonic benefits of material consumption. We distinguish between intrinsic and instrumental values, and argue that the former tend to introduce discontinuities near zero. For example, a vegetarian’s preferences would be discontinuous near zero amount of animal meat. We axiomatize a utility representation that captures such preferences and discuss the measurability of the degree to which consumers care about such values.
    Found 1 week ago on Itzhak Gilboa's site
  12.
    According to an increasingly popular view in epistemology and philosophy of mind, beliefs are sensitive to contextual factors such as practical factors and salient error possibilities. A prominent version of this view, called credal sensitivism, holds that the context-sensitivity of belief is due to the context-sensitivity of degrees of belief or credence. Credal sensitivism comes in two variants: while credence-one sensitivism (COS) holds that maximal confidence (credence one) is necessary for belief, threshold credal sensitivism (TCS) holds that belief consists in having credence above some threshold, where this threshold doesn’t require maximal confidence. In this paper, I argue that COS has difficulties in accounting for three important features about belief: i) the compatibility between believing p and assigning non-zero credence to certain error possibilities that one takes to entail not-p, ii) the fact that outright beliefs can occur in different strengths, and iii) beliefs held by unconscious subjects. I also argue that TCS can easily avoid these problems. Finally, I consider an alleged advantage of COS over TCS in terms of explaining beliefs about lotteries. I argue that lottery cases are rather more problematic for COS than TCS. In conclusion, TCS is the most plausible version of credal sensitivism.
    Found 1 week ago on PhilPapers
  13.
    The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or concepts of which one was previously unaware? The economists Karni and Vierø (2013, 2015) make a proposal in this spirit. Philosophers have adopted effectively the same rule: revision in response to growing awareness should not affect the relative probabilities of propositions in one’s ‘old’ epistemic state. The rule is compelling, but only under the assumptions that its advocates introduce. It is not a general requirement of rationality, or so we argue. We provide informal counterexamples. And we show that, when awareness grows, the boundary between one’s ‘old’ and ‘new’ epistemic commitments is blurred. Accordingly, there is no general notion of conservative change in this setting.
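    In our notation, the two revision rules the abstract contrasts can be stated compactly; the second is the usual formulation of the Karni-Vierø-style proposal (often called ‘reverse Bayesianism’), supplied here as background:

```
P_{\mathrm{new}}(A) \;=\; P_{\mathrm{old}}(A \mid E)
\qquad \text{(conditionalisation, on learning } E\text{)}

\frac{P_{\mathrm{new}}(A)}{P_{\mathrm{new}}(B)} \;=\; \frac{P_{\mathrm{old}}(A)}{P_{\mathrm{old}}(B)}
\qquad \text{(growing awareness: for all `old' propositions } A, B\text{)}
```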
    Found 1 week ago on PhilPapers
  14.
    In this paper, I critically evaluate several related, provocative claims made by proponents of data-intensive science and “Big Data” which bear on scientific methodology, especially the claim that scientists will soon no longer have any use for familiar concepts like causation and explanation. After introducing the issue, in section 2, I elaborate on the alleged changes to scientific method that feature prominently in discussions of Big Data. In section 3, I argue that these methodological claims are in tension with a prominent account of scientific method, often called “Inference to the Best Explanation” (IBE). Later on, in section 3, I consider an argument against IBE that will be congenial to proponents of Big Data, namely the argument due to Roche and Sober (2013) that “explanatoriness is evidentially irrelevant”. This argument is based on Bayesianism, one of the most prominent general accounts of theory-confirmation. In section 4, I consider some extant responses to this argument, especially that of Climenhaga (2017). In section 5, I argue that Roche and Sober’s argument does not show that explanatory reasoning is dispensable. In section 6, I argue that there is good reason to think explanatory reasoning will continue to prove indispensable in scientific practice. Drawing on Cicero’s oft-neglected De Divinatione, I formulate what I call the “Ciceronian Causal-nomological Requirement”, (CCR), which states roughly that causal-nomological knowledge is essential for relying on correlations in predictive inference. I defend a version of the CCR by appealing to the challenge of “spurious correlations”, chance correlations which we should not rely upon for predictive inference. In section 7, I offer some concluding remarks.
    Found 1 week ago on PhilSci Archive
  15.
    I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: the latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.
    Found 1 week, 1 day ago on PhilPapers
  16.
    This article sheds light on a response to experimental philosophy that has not yet received enough attention: the reflection defense. According to proponents of this defense, judgments about philosophical cases are relevant only when they are the product of careful, nuanced, and conceptually rigorous reflection. We argue that the reflection defense is misguided: We present five studies (N>1800) showing that people make the same judgments when they are primed to engage in careful reflection as they do in the conditions standardly used by experimental philosophers.
    Found 1 week, 1 day ago on Markus Kneer's site
  17.
    Comparative psychology came into its own as a science of animal minds, so a standard story goes, when it abandoned anecdotes in favor of experimental methods. However, pragmatic constraints significantly limit the number of individual animals included in laboratory experiments. Studies are often published with sample sizes in the single digits, and sometimes with samples of just one animal. With such small samples, comparative psychology has arguably not actually moved on from its anecdotal roots. Replication failures in other branches of psychology have received substantial attention, but have only recently been addressed in comparative psychology, and have not received serious attention in the attending philosophical literature. I focus on the question of how to interpret findings from experiments with small samples, and whether they can be generalized to other members of the tested species. As a first step, I argue that we should view studies with extreme small sample sizes as anecdotal experiments, lying somewhere between traditional experiments and traditional anecdotes in evidential weight and generalizability.
    Found 1 week, 2 days ago on PhilSci Archive
  18.
    The vast majority of philosophers accept Assertion Incompatibilism: according to this view, given intuitive variability of proper assertion with practical stakes, non-shifty invariantism (NSI) is incompatible with a biconditional knowledge norm of assertion (KNA). There are also a few dissenting voices, however: some invariantists venture to explain the sensitivity data for proper assertion in a fashion that preserves both NSI and KNA (Assertion Compatibilism). In this paper, I argue that my preferred incarnation of Compatibilism fares better than the competition. According to the competition, shiftiness in proper assertability is to be explained via appealing to the pragmatics of language. According to the view I defend, what varies with practical considerations is the all-things-considered propriety of assertion: epistemic propriety and the epistemic standard at stake are invariant.
    Found 1 week, 2 days ago on Mona Simion's site
  19.
    Can groups have beliefs? On the one hand, there is a growing number of researchers who argue that the answer to this question is ‘no’. On the other hand, extant attempts to counter this rejectionism about group belief in the literature remain unsatisfactory. Of course, if there is no such thing as group belief, the worry is that there can be no group knowledge or justified belief either. In this way, collective epistemology threatens to fall into disarray. This paper argues that a distinctively knowledge first approach to collective epistemology carries great promise, in that it can remain neutral on the issue of whether groups can host beliefs proper, while at the same time allowing us to develop workable accounts of knowledge and justification.
    Found 1 week, 2 days ago on Mona Simion's site
  20.
    In this paper, we will revisit a recent solution to the lottery paradox by Igor Douven [2008], which, we believe, has been underappreciated. More specifically, we aim to show the following: First, Douven’s solution is best seen as epistemic rule consequentialist at heart and, once it is thus seen, it is more attractive not only than it may appear at first glance but also than Douven would have us think. Second, Douven’s specific way of implementing epistemic rule consequentialism does not offer a fully satisfactory solution to the lottery paradox. Fortunately, however, a better alternative is available. Finally, third, we will work towards an epistemic rule consequentialist solution to the related preface paradox. Interestingly enough, while the lottery paradox does support the alternative form of rule consequentialism over Douven’s, in the case of the preface paradox, it does not matter which version of the view one adopts. Both lead to the same result.
    Found 1 week, 3 days ago on Christoph Kelp's site
  21.
    Catherine Herfeld: Professor List, what comes to your mind when someone refers to rational choice theory? What do you take rational choice theory to be? Christian List: When students ask me to define rational choice theory, I usually tell them that it is a cluster of theories, which subsumes individual decision theory, game theory, and social choice theory. I take rational choice theory to be not a single theory but a label for a whole field. In the same way, if you refer to economic theory, that is not a single theory either, but a whole discipline, which subsumes a number of different, specific theories. I am actually very ecumenical in my use of the label ‘rational choice theory’. I am also happy to say that rational choice theory in this broad sense subsumes various psychologically informed theories, including theories of boundedly rational choice. We should not define rational choice theory too narrowly, and we definitely shouldn’t tie it too closely to the traditional idea of homo economicus.
    Found 1 week, 3 days ago on Christian List's site
  22.
    In reasoning, we consider our reasons. When reasoning terminates in an action or a belief, we act or believe for the reasons that our reasoning took into account. These claims seem near platitudinous. But does reasoning involve a sensitivity to reasons that exist quite independently of the deliberation of rational agents? Or is it rather that the facts we take into consideration in reasoning are reasons because they are the premises of good reasoning? Proponents of the ‘reasoning view’ endorse the platitudes and answer the second question in the affirmative. That is to say, they both analyze reasons as premises of good reasoning and explain the normativity of reasons by appeal to their role in good reasoning. The aim of this paper is to cast doubt on the reasoning view, not by addressing the latter, explanatory claim directly, but by providing counterexamples to the alleged platitudes and the corresponding analysis of reasons, counterexamples in which premises of good reasoning towards φ-ing are not reasons to φ.
    Found 1 week, 3 days ago on PhilPapers
  23.
    Psychologists frequently use response time to study cognitive processes, but response time may also be a part of the commonsense psychology that allows us to make inferences about other agents’ mental processes. We present evidence that by age six, children expect that solutions to a complex problem can be produced quickly if already memorized, but not if they need to be solved for the first time. We suggest that children could use response times to evaluate agents’ competence and expertise, as well as to assess the value and relevance of information.
    Found 1 week, 3 days ago on Frank Keil's site
  24.
    The idea that logic is in some sense normative for thought and reasoning is a familiar one. Some of the most prominent figures in the history of philosophy including Kant and Frege have been among its defenders. The most natural way of spelling out this idea is to formulate wide-scope deductive requirements on belief which rule out certain states as irrational. But what can account for the truth of such deductive requirements of rationality? By far, the most prominent responses draw in one way or another on the idea that belief aims at the truth. In this paper, I consider two ways of making this line of thought more precise and I argue that they both fail. In particular, I examine a recent attempt by Epistemic Utility Theory to give a veritist account of deductive coherence requirements. I argue that despite its proponents’ best efforts, Epistemic Utility Theory cannot vindicate such requirements.
    Found 1 week, 3 days ago on PhilPapers
  25.
    While controversy about the nature of grounding abounds, our focus is on a question for which a particular answer has attracted something like a consensus. The question concerns the relation between partial grounding and full grounding. The apparent consensus is that the former is to be defined in terms of the latter. We argue that the standard way of doing this faces a significant problem and that we ought to pursue the reverse project of defining full grounding in terms of partial grounding. The guiding idea behind the definition we propose is that full grounding is what happens when partial grounding works in a way that ensures that the grounded is nothing over and above the grounds. We ultimately understand this idea in terms of iterated nothing-over-and-above claims.
    Found 1 week, 3 days ago on PhilPapers
  26.
    Have you ever disagreed with your government’s stance about some significant social, political, economic, or even philosophical issue? For example: Healthcare policy? Response to a pandemic? Gender inequality? Structural racism? Drilling in the Arctic? Fracking? Approving or vetoing a military intervention in a foreign country? Transgender rights? Exiting some multi-national political alliance (for instance, the European Union)? The building of a 20 billion dollar wall? We’re guessing the answer is most likely ’yes’.
    Found 1 week, 3 days ago on J. Adam Carter's site
  27.
    The apparent consistency of Sobel sequences (example below) famously motivated David Lewis to defend a variably strict conditional semantics for counterfactuals. (a) If Sophie had gone to the parade she would have seen Pedro. (b) If Sophie had gone to the parade and been stuck behind someone tall she would not have seen Pedro. But if the order of the counterfactuals in a Sobel sequence is reversed – in the example, if (b) is asserted prior to (a) – the second counterfactual asserted no longer rings true. This is the Heim sequence problem. That the order of assertion makes this difference is surprising on the variably strict account. Some argue that this is reason to reject the Lewis-Stalnaker semantics outright. Others argue that the problem motivates a contextualist rendering of counterfactuals. Still others maintain that the explanation for the phenomenon is merely pragmatic. I argue that none of these are right, and defend a novel way to understand the phenomenon. My proposal avoids the problems faced by the alternative analyses and enjoys independent support. There is, however, a difficulty for my view: it entails that many ordinarily-accepted counterfactuals are not true. I argue that this (apparent) cost is acceptable.
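    For orientation, Lewis's variably strict truth conditions can be stated as follows (our notation):

```
A \mathbin{\Box\!\!\to} C \text{ is true at } w \;\iff\;
\text{either no } A\text{-world is accessible from } w\text{, or some } (A \land C)\text{-world}
\text{ is closer to } w \text{ than any } (A \land \lnot C)\text{-world}
```

    Because the relevant sphere of worlds varies with the antecedent, (a) and (b) can be true together: the closest parade-worlds need not include the more remote worlds where Sophie is also stuck behind someone tall. The puzzle is why merely reversing the order of assertion should then make (a) ring false.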
    Found 1 week, 5 days ago on PhilPapers
  28.
    What are the truth conditions of want ascriptions? According to a highly influential and fruitful approach, championed by Heim (1992) and von Fintel (1999), the answer is intimately connected to the agent’s beliefs: ⌜S wants p⌝ is true iff within S’s belief set, S prefers the p worlds to the ¬p worlds. This approach faces a well known and as-yet unsolved problem, however: it makes the entirely wrong predictions with what we call (counter)factual want ascriptions, wherein the agent either believes p or believes ¬p—e.g., ‘I want it to rain tomorrow and that is exactly what is going to happen’ or ‘I want this weekend to last forever but of course it will end in a few hours’. We solve this problem. The truth conditions for want ascriptions are, we propose, connected to the agent’s conditional beliefs. We bring out this connection by pursuing a striking parallel between (counter)factual and non-(counter)factual want ascriptions on the one hand and counterfactual and indicative conditionals on the other.
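    To see why the (counter)factual cases are a problem, it helps to state the Heim/von Fintel truth conditions schematically (our notation, simplified):

```
\llbracket S \text{ wants } p \rrbracket = 1 \;\iff\;
S \text{ prefers the } p\text{-worlds in } \mathrm{Bel}_S \text{ to the } \lnot p\text{-worlds in } \mathrm{Bel}_S
```

    If S believes p, then Bel_S contains no ¬p-worlds, so one side of the comparison is empty and the clause comes out trivially satisfied or undefined, depending on implementation; symmetrically when S believes ¬p. Connecting the truth conditions to conditional beliefs instead, as the authors propose, is meant to restore a non-trivial comparison in exactly these cases.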
    Found 1 week, 5 days ago on PhilPapers
  29.
    There are three leading theories of normativity: teleology, deontology, and virtue theory. All three types of normative theory countenance values, norms and virtues. What they disagree on is the order of explanation. Teleology takes values to be the fundamental normative kind and explains norms and virtues in terms of them. Deontology takes norms to be the fundamental normative kind and explains value and virtues in terms of them. And, finally, virtue theory takes virtues to be the fundamental normative kind and explains norms and values in terms of them.
    Found 1 week, 5 days ago on Christoph Kelp's site