Here is a widespread but controversial idea: those animals who represent correctly are likely to be selected over those who misrepresent. While various versions of this claim have traditionally been endorsed by the vast majority of philosophers of mind, it has recently been argued that it is just plainly wrong. My aim in this paper is to argue for an intermediate position: that the correctness of some but not all representations is indeed selectively advantageous. It is selectively advantageous to have correct representations that are directly involved in bringing about and guiding the organism’s action. I start with the standard objection to the claim that it is selectively advantageous to represent correctly, the ‘better safe than sorry’ argument, and then generalize it with the help of Peter Godfrey-Smith’s distinction between Cartesian and Jamesian reliability and the trade-off between them. This generalized argument rules out a positive answer to our question at least as far as the vast majority of our representational apparatus is concerned.
Gallow on causal counterfactuals without miracles and backtracking
Posted on Friday, 27 Jan 2023. Gallow (2023) spells out an interventionist theory of counterfactuals that promises to preserve two apparently incompatible intuitions. …
While Classical Logic (CL) used to be the gold standard for evaluating the rationality of human reasoning, certain non-theorems of CL—like Aristotle’s thesis (∼(p → ∼p)) and Boethius’ thesis ((p → q) → ∼(p → ∼q))—appear intuitively rational and plausible. Connexive logics have been developed to capture the underlying intuition that conditionals whose antecedents contradict their consequents should be false. We present results of two experiments (total N = 72), the first to investigate connexive principles and related formulae systematically. Our data suggest that connexive logics provide more plausible rationality frameworks for human reasoning compared to CL. Moreover, we experimentally investigate two approaches for validating connexive principles within the framework of coherence-based probability logic. Overall, we observed good agreement between our predictions and the data, but especially for Approach 2.
Starting from the premise that expected utility (EU) is the correct criterion of rational preference both in decision cases under certainty and decision cases under risk, I argue that EU theory is a false theory of instrumental rationality. In its place, I argue for a new theory of instrumental rationality, namely expected comparative utility (ECU) theory. I show that in some commonplace decisions under risk, ECU theory delivers different verdicts from those of EU theory.
In the sense that matters here, someone’s knowledge that p is or requires a particular kind of connection between their belief that p and the fact that p (cf. Armstrong 1973; Zagzebski 1996; Nagel 2014). Yet there are different views on the nature of this connection. Traditional internalism sees the relevant connection as a kind of reflective assurance of truth that is sufficient to put to rest any skeptical concerns about whether p. Knowledge is here the result of fully satisfying an uncompromising “philosophical curiosity” (Fumerton 2004, 75). Non-traditional internalism – more popular today – compromises on these anti-skeptical ambitions but remains committed to the idea that knowledge requires reflective assurance of some kind. Knowledge is here the result of getting things right by doing well enough with what is available from the first-person perspective (e.g., one’s mental states and/or seemings). Contemporary externalism, by contrast to both of these internalisms, sees the relevant connection as something broader and weaker than reflective assurance of any kind: it is something that can sometimes be instantiated by reflective assurance, but something that can also survive without it. Here knowledge and what is available from the first-person perspective – at any level of ambition – can come apart.
Suppose there is a distinctive and significant value to knowledge. What I mean by that is that if two epistemic states are very similar in terms
of truth, the level and type of justification, the subject matter and
its relevance to life, the degree of belief, etc., but one is knowledge
and the other is not, then the one that is knowledge has a significantly
higher value because it is knowledge. …
According to second-personal approaches to moral obligation, the distinctive normative features of moral obligation can only be explained in terms of second-personal relations, i.e. the distinctive way persons relate to each other as persons. But there are important disagreements between different groups of second-personal approaches. Most notably, they disagree about the nature of second-personal relations, which has consequences for the nature of the obligations that they purport to explain. This article aims to distinguish these groups from each other, highlight their respective advantages and disadvantages, and thereby indicate avenues for future research.
There are two things called contexts that play important but distinct roles in standard accounts of language and communication. The first—call these compositional contexts—feature in a semantic theory. Compositional contexts are sequences of parameters that play a role in characterizing compositional semantic values for a given language, and in characterizing how such compositional semantic values determine a proposition expressed by a given sentence. The second—call these context sets—feature in a pragmatic theory. Context sets are abstract representations of the conversational states that serve to determine the compositional contexts relevant for interpreting a speech-act and that such speech-acts act upon. In this paper, I’ll consider how, given mutual knowledge of the information codified in a compositional semantic theory, an assertion of a sentence serves to update the context set. There is a standard account of how such conversational updating occurs. However, while this account has much to recommend it, I’ll argue that it needs to be revised in light of certain natural discourses.
In a recent post, I noted that it is possible to cook up a Bayesian setup
where you don’t meet some threshold, say for belief or knowledge, with
respect to some proposition, but you do meet the same threshold with
respect to the claim that after you examine a piece of evidence, then
you will meet the threshold. …
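The numbers below are my own hedged illustration of the kind of setup the post describes, not the post’s actual example. With threshold t = 0.9, the prior on H falls short of t, yet the probability that the posterior will clear t after examining the evidence E itself clears t:

```python
# Threshold t = 0.9 for belief. Prior P(H) = 0.85 < t, so H is not
# believed now. But P(E) = 0.9 and P(H | E) = 0.93 >= t, so with
# probability 0.9 >= t the posterior on H will clear the threshold.
t = 0.9
p_E = 0.9
p_H_given_E = 0.93      # posterior if E is observed
p_H_given_notE = 0.13   # posterior if E is not observed

# Law of total probability: the prior is the expectation of the posterior.
p_H = p_E * p_H_given_E + (1 - p_E) * p_H_given_notE
assert abs(p_H - 0.85) < 1e-9

assert p_H < t                 # you don't meet the threshold for H...
p_will_meet = p_E              # ...but P(posterior >= t) = P(E)
assert p_will_meet >= t        # and that probability itself meets t
```

The construction works because the prior must equal the expected posterior, which still leaves room for the posterior to exceed the threshold with high probability while the prior does not.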
What differentiates scientific research from non-scientific inquiry? Philosophers addressing this question have typically been inspired by the exalted social place and intellectual achievements of science. They have hence tended to point to some epistemic virtue or methodological feature of science that sets it apart. Our discussion, on the other hand, is motivated by the case of commercial research, which we argue is distinct from (and often epistemically inferior to) academic research. We consider a deflationary view in which science refers to whatever is regarded as epistemically successful, but find that this does not leave room for the important notion of scientific error and fails to capture distinctive social elements of science. This leads us to the view that a demarcation criterion should be a widely upheld social norm without immediate epistemic connotations. Our tentative answer is the communist norm, which calls on scientists to share their work widely for public scrutiny and evaluation.
Should we use the same standard of proof to adjudicate guilt for murder and petty theft? Why not tailor the standard of proof to the crime? These relatively neglected questions cut to the heart of central issues in the philosophy of law. This paper scrutinises whether we ought to use the same standard for all criminal cases, in contrast with a flexible approach that uses different standards for different crimes. I reject consequentialist arguments for a radically flexible standard of proof, instead defending a modestly flexible approach on non-consequentialist grounds. The system I defend is one on which we should impose a higher standard of proof for crimes that attract more severe punishments. This proposal, although apparently revisionary, accords with a plausible theory concerning the epistemology of legal judgments and the role they play in society.
Humans can think about possible states of the world without believing in them, an important capacity for high-level cognition. Here we use fMRI and a novel “shell game” task to test two competing theories about the nature of belief and its neural basis. According to the Cartesian theory, information is first understood, then assessed for veracity, and ultimately encoded as either believed or not believed. According to the Spinozan theory, comprehension entails belief by default, such that understanding without believing requires an additional process of “unbelieving”. Participants (N=70) were experimentally induced to have beliefs, desires, or mere thoughts about hidden states of the shell game (e.g., believing that the dog is hidden in the upper right corner). That is, participants were induced to have specific “propositional attitudes” toward specific “propositions” in a controlled way. Consistent with the Spinozan theory, we found that thinking about a proposition without believing it is associated with increased activation of the right inferior frontal gyrus (IFG). This was true whether the hidden state was desired by the participant (due to reward) or merely thought about. These findings are consistent with a version of the Spinozan theory whereby unbelieving is an inhibitory control process. We consider potential implications of these results for the phenomena of delusional belief and wishful thinking.
The philosophy of science can usefully be divided into two broad areas. On the one hand is the epistemology of science, which deals with issues relating to the justification of claims to scientific knowledge. Philosophers working in this area investigate such questions as whether science ever uncovers permanent truths, whether objective decisions between competing theories are possible, and whether the results of experiment are clouded by prior theoretical expectations. On the other hand are topics in the metaphysics of science, topics relating to philosophically puzzling features of the natural world described by science. Here philosophers ask such questions as whether all events are determined by prior causes, whether everything can be reduced to physics, and whether there are purposes in nature. You can think of the difference between the epistemologists and the metaphysicians of science in this way. The epistemologists wonder whether we should believe what the scientists tell us. The metaphysicians worry about what the world is like, if the scientists are right. Readers will wish to consult chapters on Epistemology (Chapter 1), Metaphysics (Chapter 2), Philosophy of Mathematics (Chapter 11), Philosophy of Social Science (Chapter 12) and Pragmatism (Chapter 36).
Global challenges such as climate change, food security, or public health have become dominant concerns in research and innovation policy. This article examines how responses to these challenges are addressed by governance actors. We argue that appeals to global challenges can give rise to a ‘solution strategy’ that presents responses of dominant actors as solutions and a ‘negotiation strategy’ that highlights the availability of heterogeneous and often conflicting responses. On the basis of interviews and document analyses, the study identifies both strategies across local, national, and European levels. While our results demonstrate the co-existence of both strategies, we find that global challenges are most commonly highlighted together with the solutions offered by dominant actors. Global challenges are ‘wicked problems’ that often become misframed as ‘tame problems’ in governance practice and thereby legitimise dominant responses.
We distinguish two types of cases that have potential to generate quasi-cyclical preferences: self-involving choices where an agent oscillates between first- and third-person perspectives that conflict regarding their life-changing implications, and self-serving choices where frame-based reasoning can be “first-personally …
In this paper, we defend what we call the ‘Hybrid View’ of privacy. According to this view, an individual has privacy if, and only if, no one else forms an epistemically warranted belief about the individual’s personal matters, nor perceives them. We contrast the Hybrid View with what seems to be the most common view of what it means to access someone’s personal matters, namely the Belief-Based View. We offer a range of examples that demonstrate why the Hybrid View is more plausible than the Belief-Based View. Finally, we show how the Hybrid View generates a more plausible fit between the concept of privacy and the concept of a (morally objectionable) violation of privacy.
Kocurek on chance and would
Posted on Friday, 20 Jan 2023. A lot of rather technical papers on conditionals have come out in recent years. Let's have a look at one of them: Kocurek (2022). The paper investigates Al Hajek's argument (e.g. …
In a recent paper, Sprenger (2019) advances what he calls a “suppositional” answer to the question of why a Bayesian agent’s degrees of belief should align with the probabilities found in statistical models. We show that Sprenger’s account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.
Is epistocracy epistemically superior to democracy? In this paper, I scrutinize some of the arguments for and against the epistemic superiority of epistocracy. Using empirical results from the literature on the epistemic benefits of diversity as well as the epistemic contributions of citizen science, I strengthen the case against epistocracy and for democracy. Disenfranchising, or otherwise discouraging, anyone from participating in political life on the basis of their not possessing a certain body of (social scientific) knowledge is untenable also from an epistemic point of view. Rather than focussing on individual competence, we should pay attention to the social constellation through which we produce knowledge, to make sure we decrease epistemic loss (by ensuring diversity and inclusion) and increase epistemic productivity (by fostering a multiplicity of perspectives interacting fruitfully). Achieving those epistemic benefits requires a more democratic approach that differs significantly from epistocracy.
Cynthia rises from the couch to go get that beer. If we accept industrial-strength representationalism, in particular the Kinematics and Specificity theses, then there must be a fact of the matter exactly which representations caused this behavior. …
We should be dispositionalists rather than representationalists about belief. According to dispositionalism, a person believes when they have the relevant pattern of behavioral, phenomenal, and cognitive dispositions. According to representationalism, a person believes when the right kind of representational content plays the right kind of causal role in their cognition. Representationalism overcommits on cognitive architecture, reifying a cartoon sketch of the mind. In particular, representationalism faces three problems: the Problem of Causal Specification (concerning which specific representations play the relevant causal role in governing any particular inference or action), the Problem of Tacit Belief (concerning which specific representations any one person has stored, among the hugely many approximately redundant possible representations we might have for any particular state of affairs), and the Problem of Indiscrete Belief (concerning how to model gradual belief change and in-between cases of belief). Dispositionalism, in contrast, is flexibly minimalist about cognitive architecture, focusing appropriately on what we do and should care about in belief ascription.
Many philosophers characterize a particularly important sense of free will and responsibility by referring to basically deserved blame. But what is basically deserved blame? The aim of this paper is to identify the appraisal entailed by basic desert claims. It presents three desiderata for an account of desert appraisals and it argues that important recent theories fail to meet them. Then, the paper presents and defends a promising alternative. The basic idea is that claims about basically deserved blame entail that the targets have forfeited their claims that others not blame them and that there is positive reason to blame them. The paper shows how this view frames the discussion about skepticism about free will and responsibility.
In his well-known book Thought Experiments (1992), R.A. Sorensen provides two modal-logical schemata for two different types of ‘destructive’ thought experiments, baptised the Necessity Refuter and the Possibility Refuter. Regarding his schemata, Sorensen (1992, p. 132) advances the following caveat: Don’t worry about whether this is the uniquely correct scheme. The adequacy of a classification system is more a question of efficiency and suggestiveness. A good scheme consolidates knowledge in a way that minimizes the demand on your memory and expedites the acquisition of new knowledge by raising helpful leading questions. Both the Necessity Refuter and the Possibility Refuter consist of five premises, which are claimed to be inconsistent (ibid., pp. 135, 153). Besides clarity about the logical structure of thought experiments, another virtue of the modal-logical schemata is, Sorensen submits, the following (ibid., p. 136): Since the above five premises are jointly inconsistent, one cannot hold all five. This means that there are at most five consistent responses to the set. Sorensen then discusses the five possible responses, each of which rejects one premise. We concur with Sorensen that systematisations of the arguments accompanying thought experiments should be judged by their usefulness, such as classifying different responses. If the premises are inconsistent, then at least one premise must be given up; but if they are consistent, they can all be held. Indeed, we claim that stricto sensu both these modal-logical schemata are consistent, undermining the usefulness of the systematisation.
The theory of morality we can call full rule-consequentialism selects
rules solely in terms of the goodness of their consequences and then
claims that these rules determine which kinds of acts are morally
wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted
we must be entirely guided by the public good of mankind, but not in
the ordinary moral actions of our lives. … The rule is framed
with respect to the good of mankind; but our practice must be always
shaped immediately by the rule” (Berkeley 1712: section 31).
Communication can be risky. Like other kinds of actions, it comes with potential costs. For instance, an utterance can be embarrassing, offensive, or downright illegal. In the face of such risks, speakers tend to act strategically and seek ‘plausible deniability’. In this paper, we propose an account of the notion of deniability at issue. On our account, deniability is an epistemic phenomenon. A speaker has deniability if she can make it epistemically irrational for her audience to reason in certain ways. To avoid predictable confusion, we distinguish deniability from a practical correlate we call ‘untouchability’. Roughly, a speaker has untouchability if she can make it practically irrational for her audience to act in certain ways. These accounts shed light on the nature of strategic speech and suggest countermeasures against it.
Sher on the weight of reasons
Posted on Friday, 13 Jan 2023. A few thoughts on Sher (2019), which I found advertised in Nair (2021). This (long and rich) paper presents a formal model of reasons and their weight, with the aim of clarifying how different reasons for or against an act combine. …
[Editor’s Note: The following new entry by Timothy Perrine
replaces the former entry on this topic by the previous author.]
All of us—theist, atheist, and agnostic alike—experience
suffering and evil in the world. There’s the annoyance of a
stubbed toe, the disappointment of personal or professional setback,
the endless frustration of debilitating chronic pain, and the
soul-crushing experience of the suffering and death of those we care
the most about (to name a few). It doesn’t require extensive
education to worry that suffering and evil are evidence against the
existence of God—or, at least, God understood classically, as a
perfect being that is an all-powerful, all-knowing, all-good creator
of the universe.
Simply stated, this book bridges the gap between statistics and philosophy. It does this by delineating the conceptual cores of various statistical methodologies (Bayesian/frequentist statistics, model selection, machine learning, causal inference, etc.) and drawing out their philosophical implications. Portraying statistical inference as an epistemic endeavor to justify hypotheses about a probabilistic model of a given empirical problem, the book explains the role of ontological, semantic, and epistemological assumptions that make such inductive inference possible. From this perspective, various statistical methodologies are characterized by their epistemological nature: Bayesian statistics by internalist epistemology, classical statistics by externalist epistemology, model selection by pragmatist epistemology, and deep learning by virtue epistemology. Another highlight of the book is its analysis of the ontological assumptions that underpin statistical reasoning, such as the uniformity of nature, natural kinds, real patterns, possible worlds, causal structures, etc. Moreover, recent developments in deep learning indicate that machines are carving out their own “ontology” (representations) from data, and better understanding this—a key objective of the book—is crucial for improving these machines’ performance and intelligibility.
How should your opinion change in response to the opinion of an epistemic peer? We show that the pooling rule known as “upco” is the unique answer satisfying some natural desiderata. If your revised opinion will influence your opinions on other matters by Jeffrey conditionalization, then upco is the only standard pooling rule that ensures the order in which peers are consulted makes no difference. Popular proposals like linear pooling, geometric pooling, and harmonic pooling cannot boast the same. In fact, no alternative to upco can if it possesses four minimal properties which these proposals share.
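A minimal sketch of the contrast the abstract draws, under the assumption that “upco” is multiplicative (odds-product) pooling in the sense of Easwaran et al.; the abstract’s order-invariance claim concerns Jeffrey conditionalization, but the simpler underlying point illustrated here is that upco is commutative and associative, while iterated linear pooling is order-sensitive:

```python
def upco(p, q):
    """Odds-product ("upco") pooling of two credences in a proposition."""
    return p * q / (p * q + (1 - p) * (1 - q))

def linear(p, q):
    """Straight linear (averaging) pooling of two credences."""
    return (p + q) / 2

# Upco: consulting peers with credences 0.7 then 0.8 gives the same
# result as consulting them in the reverse order.
a = upco(upco(0.6, 0.7), 0.8)
b = upco(upco(0.6, 0.8), 0.7)
assert abs(a - b) < 1e-12

# Iterated linear pooling, by contrast, depends on the order:
c = linear(linear(0.6, 0.7), 0.8)   # later peers count for more
d = linear(linear(0.6, 0.8), 0.7)
assert abs(c - d) > 0.01
```

The order-independence of upco falls out of its form: pooling multiplies odds ratios, and multiplication is commutative and associative, whereas repeated averaging weights later inputs more heavily.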
Endre Begby’s Prejudice: A Study in Non-Ideal Epistemology engages a wide range of issues of enduring interest to epistemologists, applied ethicists, and anyone concerned with how knowledge and justice intersect. Topics include stereotypes and generics, evidence and epistemic justification, epistemic injustice, ethical-epistemic dilemmas, moral encroachment, and the relations between blame and accountability. Begby applies his views about these topics to an equally wide range of pressing social questions, such as conspiracy theories, misinformation, algorithmic bias, discrimination, and criminal justice.