-
165412.871532
Gallow on causal counterfactuals without miracles and backtracking
Posted on Friday, 27 Jan 2023. Gallow (2023) spells out an interventionist theory of counterfactuals that promises to preserve two apparently incompatible intuitions. …
-
219333.871645
While Classical Logic (CL) used to be the gold standard for evaluating the rationality of human reasoning, certain non-theorems of CL—like Aristotle’s thesis (∼(A → ∼A)) and Boethius’ thesis ((A → B) → ∼(A → ∼B))—appear intuitively rational and plausible. Connexive logics have been developed to capture the underlying intuition that conditionals whose antecedents contradict their consequents should be false. We present results of two experiments (total n = 72), the first to investigate connexive principles and related formulae systematically. Our data suggest that connexive logics provide more plausible rationality frameworks for human reasoning compared to CL. Moreover, we experimentally investigate two approaches for validating connexive principles within the framework of coherence-based probability logic [29]. Overall, we observed good agreement between our predictions and the data, especially for Approach 2.
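As a quick illustration of why these principles are not classical theorems (a self-contained check, not reproducing the paper's probabilistic machinery), a brute-force truth-table evaluation suffices:

    from itertools import product

    def impl(a, b):
        # material implication, as in CL
        return (not a) or b

    def aristotle(a):
        # Aristotle's thesis: ~(A -> ~A)
        return not impl(a, not a)

    def boethius(a, b):
        # Boethius' thesis: (A -> B) -> ~(A -> ~B)
        return impl(impl(a, b), not impl(a, not b))

    # Both fail when A is false: F -> ~F is classically true,
    # so its negation is false.
    print(all(aristotle(a) for a in (True, False)))                       # False
    print(all(boethius(a, b) for a, b in product((True, False), repeat=2)))  # False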
-
219397.871671
Starting from the premise that expected utility (EU) is the correct criterion of rational preference both in decision cases under certainty and decision cases under risk, I argue that EU theory is a false theory of instrumental rationality. In its place, I argue for a new theory of instrumental rationality, namely expected comparative utility (ECU) theory. I show that in some commonplace decisions under risk, ECU theory delivers different verdicts from those of EU theory.
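To see how such a divergence could look, here is a toy computation. The utilities are invented, and "comparative utility" is assumed here to mean an act's utility minus that of its best alternative in each state; the paper's own definition may differ.

    # Toy decision under risk: two equiprobable states, three acts.
    states = [0.5, 0.5]
    acts = {"A": [10, 0], "B": [0, 10], "C": [6, 6]}

    def eu(act):
        # standard expected utility
        return sum(p * u for p, u in zip(states, acts[act]))

    def ecu(act):
        # assumed reading: expected (utility minus best alternative's utility)
        total = 0.0
        for i, p in enumerate(states):
            best_alt = max(u[i] for name, u in acts.items() if name != act)
            total += p * (acts[act][i] - best_alt)
        return total

    print({a: eu(a) for a in acts})    # C maximizes EU (6 vs 5)
    print({a: ecu(a) for a in acts})   # A and B maximize ECU (-3 vs -4)

On these invented numbers the two criteria rank the acts differently, which is the shape of divergence the abstract describes.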
-
340092.871686
James Sterba (2019, chapter 2) has recently argued that the free will defense fails to explain the compossibility of a perfect God and the amount and degree of moral evil that we see. I think he is mistaken about this. I thus find myself in the awkward and unexpected position, as a non-theist myself, of defending the free will defense. In this paper, I will try to show that once we take care to focus on what the free will defense is trying to accomplish, and by what means it tries to do so, we will see that Sterba’s criticism of it misses the mark.
-
379671.871699
Suppose there is a distinctive and significant value to knowledge. What I mean by that is that if two epistemic states are very similar in terms of truth, the level and type of justification, the subject matter and its relevance to life, the degree of belief, etc., but one is knowledge and the other is not, then the one that is knowledge has a significantly higher value because it is knowledge. …
-
388716.871713
Denić (2021) observes that the availability of distributive inferences — for sentences with disjunction embedded in the scope of a universal quantifier — depends on the size of the domain quantified over as it relates to the number of disjuncts. Based on her observations, she argues that probabilistic considerations play a role in the computation of implicatures. In this paper we explore a different possibility. We argue for a modification of Denić’s generalization, and provide an explanation that is based on intricate logical computations but is blind to probabilities. The explanation is based on the observation that when the domain size is no larger than the number of disjuncts, universal and existential alternatives are equivalent if distributive inferences are obtained. We argue that under such conditions a general ban on ‘fatal competition’ (Magri 2009a,b; Spector 2014) is activated thereby predicting distributive inferences to be unavailable.
-
483230.871727
There are two things called contexts that play important but distinct roles in standard accounts of language and communication. The first—call these compositional contexts—feature in a semantic theory. Compositional contexts are sequences of parameters that play a role in characterizing compositional semantic values for a given language, and in characterizing how such compositional semantic values determine a proposition expressed by a given sentence. The second—call these context sets—feature in a pragmatic theory. Context sets are abstract representations of the conversational states that serve to determine the compositional contexts relevant for interpreting a speech-act and that such speech-acts act upon. In this paper, I’ll consider how, given mutual knowledge of the information codified in a compositional semantic theory, an assertion of a sentence serves to update the context set. There is a standard account of how such conversational updating occurs. However, while this account has much to recommend it, I’ll argue that it needs to be revised in light of certain natural discourses.
-
497726.871748
In a recent post, I noted that it is possible to cook up a Bayesian setup where you don’t meet some threshold, say for belief or knowledge, with respect to some proposition, but you do meet the same threshold with respect to the claim that after you examine a piece of evidence, you will meet the threshold. …
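A minimal numerical instance of the setup described (the numbers are mine, chosen only to make the point):

    # Threshold t = 0.6; H a proposition; E the evidence to be examined.
    t = 0.6
    p_E = 0.7             # probability the evidence comes out favorable
    p_H_given_E = 0.7     # credence in H if it does (meets t)
    p_H_given_notE = 0.1  # credence in H if it does not

    # Current credence in H, by total probability: below the threshold.
    p_H = p_E * p_H_given_E + (1 - p_E) * p_H_given_notE
    print(p_H, p_H >= t)   # ~0.52, False

    # But the credence that you WILL meet the threshold after looking
    # equals p_E, which itself exceeds t.
    print(p_E, p_E >= t)   # 0.7, True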
-
512123.871762
Should we use the same standard of proof to adjudicate guilt for murder and petty theft? Why not tailor the standard of proof to the crime? These relatively neglected questions cut to the heart of central issues in the philosophy of law. This paper scrutinises whether we ought to use the same standard for all criminal cases, in contrast with a flexible approach that uses different standards for different crimes. I reject consequentialist arguments for a radically flexible standard of proof, instead defending a modestly flexible approach on non-consequentialist grounds. The system I defend is one on which we should impose a higher standard of proof for crimes that attract more severe punishments. This proposal, although apparently revisionary, accords with a plausible theory concerning the epistemology of legal judgments and the role they play in society.
-
780428.871775
Kocurek on chance and would
Posted on Friday, 20 Jan 2023. A lot of rather technical papers on conditionals have come out in recent years. Let's have a look at one of them: Kocurek (2022). The paper investigates Al Hajek's argument (e.g. …
-
794900.871788
In a recent paper, Sprenger (2019) advances what he calls a “suppositional” answer to the question of why a Bayesian agent’s degrees of belief should align with the probabilities found in statistical models. We show that Sprenger’s account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.
-
818749.871801
Forty years ago, Niels Green-Pedersen listed five different accounts of valid consequence, variously promoted by logicians in the early fourteenth century and discussed by Niels Drukken of Denmark in his commentary on Aristotle’s Prior Analytics, written in Paris in the late 1330s. Two of these arguably fail to give defining conditions: truth preservation was shown by Buridan and others to be neither necessary nor sufficient; incompatibility of the opposite of the conclusion with the premises is merely circular if incompatibility is analysed in terms of consequence. Buridan was perhaps the first to define consequence in terms of preservation of what we might dub verification, that is, signifying as things are. John Mair pinpointed a sophism which threatens to undermine this proposal. Bradwardine turned it around: he suggested that a necessary condition on consequence was that the premises signify everything the conclusion signifies. Dumbleton gave counterexamples to Bradwardine’s postulates in which the conclusion arguably signifies more than, or even completely differently from the premises. Yet a long-standing tradition held that some species of validity depend on the conclusion being in some way contained in the premises. We explore the connection between signification and consequence and its role in solving the insolubles.
-
905765.871821
Human languages vary in terms of which meanings they lexicalize, but there are important constraints on this variation. It has been argued that languages are under pressure to be simple (e.g., to have a small lexicon) and to allow for informative (i.e., precise) communication with their lexical items, and that which meanings get lexicalized may be explained by languages finding a good way to trade off between these two pressures ([ ] and much subsequent work). However, in certain semantic domains, it is possible to reach very high levels of informativeness even if very few meanings from that domain are lexicalized. This is due to productive morphosyntax, which may allow for the construction of meanings which are not lexicalized. Consider the semantic domain of natural numbers: many languages lexicalize only a few natural number meanings as monomorphemic expressions, but can precisely convey any natural number meaning using morphosyntactically complex numerals. In such semantic domains, lexicon size is not in direct competition with informativeness. What explains which meanings are lexicalized in such semantic domains? We will propose that in such cases, languages are (near-)optimal solutions to a different kind of trade-off problem: the trade-off between the pressure to lexicalize as few meanings as possible (i.e., to minimize lexicon size) and the pressure to produce as morphosyntactically simple utterances as possible (i.e., to minimize the average morphosyntactic complexity of utterances).
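The following toy model (my own simplification, not the paper's formalism) illustrates the proposed trade-off: numbers missing from the lexicon are built from lexicalized parts by addition or multiplication, so a smaller lexicon forces morphosyntactically more complex numerals while informativeness stays constant.

    from functools import lru_cache

    def avg_complexity(lexicon, domain=range(1, 100)):
        """Average morphemes per numeral: 1 for lexicalized numbers,
        else the cheapest split into two parts joined by + or *."""
        lex = frozenset(lexicon)

        @lru_cache(maxsize=None)
        def cost(n):
            if n in lex:
                return 1
            best = float("inf")
            for a in range(1, n):
                if a > 1 and n % a == 0:
                    best = min(best, cost(a) + cost(n // a))
                best = min(best, cost(a) + cost(n - a))
            return best

        return sum(cost(n) for n in domain) / len(domain)

    # Both lexicons express every number in 1..99 (equal informativeness),
    # but trade lexicon size against average utterance complexity:
    print(avg_complexity(range(1, 11)))    # 10 items, higher average complexity
    print(avg_complexity(range(1, 100)))   # 99 items, average complexity 1.0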
-
938328.871848
I have now had a chance to read the first part of Greg Restall and Shawn Standefer’s Logical Methods, some 113 pages on propositional logic. I enjoyed this well enough but I am, to be frank, a bit puzzled about the intended readership. …
-
1025771.871864
Since mass is defined as the measure of the (experimentally established) resistance a particle offers to its acceleration and as it is also an experimental fact that a particle’s resistance to its acceleration increases when its velocity increases, it follows that, like mass, the concept of relativistic mass also reflects an experimental fact. This means that the rejection of the relativistic velocity dependence of mass amounts to both rejection of the experimental evidence and refusing to face and deal with one of the deepest open questions in fundamental physics – the origin and nature of the inertial resistance of a particle to its acceleration, i.e., the origin and nature of its inertial mass.
-
1141128.871883
In his well-known book Thought Experiments (1992), R.A. Sorensen provides two modal-logical schemata for two different types of ‘destructive’ thought experiments, baptised the Necessity Refuter and the Possibility Refuter. Regarding his schemata, Sorensen (1992, p. 132) advances the following caveat: Don’t worry about whether this is the uniquely correct scheme. The adequacy of a classification system is more a question of efficiency and suggestiveness. A good scheme consolidates knowledge in a way that minimizes the demand on your memory and expedites the acquisition of new knowledge by raising helpful leading questions. Both the Necessity Refuter and the Possibility Refuter consist of five premises, which are claimed to be inconsistent (ibid., pp. 135, 153). Besides clarity about the logical structure of thought experiments, another virtue of the modal-logical schemata is, Sorensen submits, the following (ibid., p. 136): Since the above five premises are jointly inconsistent, one cannot hold all five. This means that there are at most five consistent responses to the set. Sorensen then discusses the five possible responses, each of which rejects one premise. We concur with Sorensen that systematisations of the arguments accompanying thought experiments should be judged by their usefulness, such as classifying different responses. If the premises are inconsistent, then at least one premise must be given up; but if they are consistent, they can all be held. Indeed, we claim that stricto sensu both these modal-logical schemata are consistent, undermining the usefulness of the systematisation.
-
1141161.871898
It has become common in foundational discussions to say that we have a variety of possible interpretations of quantum mechanics available to us and therefore we are faced with a problem of underdetermination. In ref [1] Wallace argues that this is not so, because several popular approaches to the measurement problem can’t be fully extended to relativistic quantum mechanics and quantum field theory (QFT), and thus they can’t reproduce many of the empirical phenomena which are correctly predicted by QFT. Wallace thus contends that as things currently stand, only the unitary-only approaches can reproduce all the predictions of quantum mechanics, so at present only the unitary-only approaches are acceptable as solutions to the measurement problem.
-
1223964.871921
The theory of morality we can call full rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted we must be entirely guided by the public good of mankind, but not in the ordinary moral actions of our lives. … The rule is framed with respect to the good of mankind; but our practice must be always shaped immediately by the rule” (Berkeley 1712: section 31).
-
1252847.871936
In [1] it is claimed that, based on radiation emission measurements described in [2], a certain “variant” of the Orch OR theory has been refuted. I agree with this claim. However, the significance of this result for Orch OR per se is unclear. After all, the refuted “variant” was never advocated by anyone, and it contradicts the views of Hameroff and Penrose (hereafter: HP) who invented Orch OR [3].
-
1391061.87195
Sher on the weight of reasons
Posted on Friday, 13 Jan 2023. A few thoughts on Sher (2019), which I found advertised in Nair (2021). This (long and rich) paper presents a formal model of reasons and their weight, with the aim of clarifying how different reasons for or against an act combine. …
-
1434718.871963
Mereological harmony is the idea that the mereological structure of objects mirrors the mereological structure of locations. Grounding harmony is the idea that there is a similar mirroring between the grounding structure of objects and locations. Our goal in this paper is exploratory: we introduce and then explore two notions of grounding harmony: locative and structural. We outline potential locative and structural harmony principles for grounding, and show which of these principles may entail, or be entailed by, principles of mereological harmony. We then present a case study in grounding harmony, by applying it to Schaffer’s (in Philos Rev 119(1):31, 2010a) specific version of priority monism. We show that, given a strong form of grounding harmony, Schaffer-style monism is inconsistent, but that this inconsistency can be resolved by offering bespoke notions of grounding harmony. We use Schaffer’s priority monism to demonstrate a broader tension within certain packages of metaphysical views, including versions of priority pluralism. We close by briefly considering the case against structural grounding harmony.
-
1442139.871976
Simply stated, this book bridges the gap between statistics and philosophy. It does this by delineating the conceptual cores of various statistical methodologies (Bayesian/frequentist statistics, model selection, machine learning, causal inference, etc.) and drawing out their philosophical implications. Portraying statistical inference as an epistemic endeavor to justify hypotheses about a probabilistic model of a given empirical problem, the book explains the role of ontological, semantic, and epistemological assumptions that make such inductive inference possible. From this perspective, various statistical methodologies are characterized by their epistemological nature: Bayesian statistics by internalist epistemology, classical statistics by externalist epistemology, model selection by pragmatist epistemology, and deep learning by virtue epistemology. Another highlight of the book is its analysis of the ontological assumptions that underpin statistical reasoning, such as the uniformity of nature, natural kinds, real patterns, possible worlds, causal structures, etc. Moreover, recent developments in deep learning indicate that machines are carving out their own “ontology” (representations) from data, and better understanding this—a key objective of the book—is crucial for improving these machines’ performance and intelligibility.
-
1446276.87199
We report our first results regarding the automated verification of deontic correspondences (broadly conceived) and related matters in Isabelle/HOL, analogous to what has been achieved for the modal logic cube.
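The verification itself is carried out in Isabelle/HOL, which is not reproduced here; but the flavor of a correspondence result can be shown by brute force over small Kripke frames (a Python sketch of my own): the deontic axiom D, □φ → ◇φ, is frame-valid exactly when the accessibility relation is serial.

    from itertools import combinations, product

    W = [0, 1]                                   # a two-world universe
    pairs = [(w, v) for w in W for v in W]
    frames = [frozenset(c) for r in range(len(pairs) + 1)
              for c in combinations(pairs, r)]   # all 16 relations on W

    def box(R, phi, w):
        return all(phi[v] for v in W if (w, v) in R)

    def dia(R, phi, w):
        return any(phi[v] for v in W if (w, v) in R)

    def validates_D(R):
        # D valid on the frame: box -> dia at every world, under every
        # valuation of a single proposition letter.
        return all((not box(R, phi, w)) or dia(R, phi, w)
                   for bits in product([False, True], repeat=len(W))
                   for phi in [dict(zip(W, bits))]
                   for w in W)

    def serial(R):
        return all(any((w, v) in R for v in W) for w in W)

    print(all(validates_D(R) == serial(R) for R in frames))   # True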
-
1507054.872012
Assertions, so Stalnaker’s (1978) familiar narrative goes, express propositions and are made in context; in fact, context and what is said frequently affect each other. Since language has context-sensitive expressions, which proposition some given assertion expresses may depend on the context in which it is made. Assertions, in turn, affect the context, and they do so by adding the proposition expressed by that assertion to the context.
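In miniature (illustrative worlds and names, mine), the update mechanism looks like this: a context set is a set of live worlds, and an assertion intersects it with the proposition the asserted sentence expresses.

    # Three live worlds, each a bundle of facts.
    worlds = {
        0: {"raining": True,  "cold": True},
        1: {"raining": False, "cold": True},
        2: {"raining": True,  "cold": False},
    }
    context_set = {0, 1, 2}

    def assert_prop(context, proposition):
        # Assertion adds the proposition to the context:
        # only worlds where it holds survive the update.
        return {w for w in context if proposition(w)}

    it_is_raining = lambda w: worlds[w]["raining"]
    print(assert_prop(context_set, it_is_raining))   # {0, 2}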
-
1519160.872024
[This post draws on ideas developed in collaboration with psychologist Jessie Sun.] If we want to study morality scientifically, we should want to measure it. Imagine trying to study temperature without a thermometer or weight without scales. …
-
1547000.872038
How should your opinion change in response to the opinion of an epistemic peer? We show that the pooling rule known as “upco” is the unique answer satisfying some natural desiderata. If your revised opinion will influence your opinions on other matters by Jeffrey conditionalization, then upco is the only standard pooling rule that ensures the order in which peers are consulted makes no difference. Popular proposals like linear pooling, geometric pooling, and harmonic pooling cannot boast the same. In fact, no alternative to upco can if it possesses four minimal properties which these proposals share.
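Assuming upco is the multiplicative-odds rule, (p, q) -> p*q / (p*q + (1-p)*(1-q)) (an assumption on my part; the paper states the rule precisely), its order-insensitivity is easy to check numerically, and sequential linear averaging visibly lacks it:

    def upco(p, q):
        # multiply the odds of the two credences, then renormalize
        return p * q / (p * q + (1 - p) * (1 - q))

    def linear(p, q):
        return (p + q) / 2

    a, b, c = 0.9, 0.6, 0.3   # three peers' credences in one proposition

    print(upco(upco(a, b), c), upco(upco(a, c), b))          # ~0.853 both ways
    print(linear(linear(a, b), c), linear(linear(a, c), b))  # 0.525 vs 0.6

This only illustrates the rule's commutativity and associativity; the paper's uniqueness claim concerns the richer dynamics of Jeffrey conditionalization on a peer's opinion.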
-
1560913.87205
Bias infects the algorithms that wield increasing control over our lives. Predictive policing systems overestimate crime in communities of color; hiring algorithms dock qualified female candidates; and facial recognition software struggles to recognize dark-skinned faces. Algorithmic bias has received significant attention. Algorithmic neutrality, in contrast, has been largely neglected. Algorithmic neutrality is my topic. I take up three questions. What is algorithmic neutrality? Is algorithmic neutrality possible? When we have an eye to algorithmic neutrality, what can we learn about algorithmic bias? To answer these questions in concrete terms, I work with a case study: search engines. Drawing on work about neutrality in science, I say that a search engine is neutral only if certain values—like political ideologies or the financial interests of the search engine operator—play no role in how the search engine ranks pages. Search neutrality, I argue, is impossible. Its impossibility seems to threaten the significance of search bias: if no search engine is neutral, then every search engine is biased. To defuse this threat, I distinguish two forms of bias—failing-on-its-own-terms bias and other-values bias. This distinction allows us to make sense of search bias—and capture its normative complexion—despite the impossibility of neutrality.
-
1561050.872063
The proper translation of “unless” into intuitionistic formalisms is examined. After a brief examination of intuitionistic writings on “unless”, and on translation in general, and a close examination of Dummett’s use of “unless” in Elements of Intuitionism (1975b), I argue that the correct intuitionistic translation of “A unless B” is no stronger than “¬B → A”. In particular, “unless” is demonstrably weaker than disjunction. I conclude with some observations regarding how this shows that one’s choice of logic is methodologically prior to translation from informal natural language to formal systems.
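Taking the translation at issue to be ¬B → A, the "demonstrably weaker than disjunction" point can be exhibited with a two-node Kripke countermodel (a sketch of standard intuitionistic semantics, mine, not the paper's own argument):

    # Nodes 0 <= 1; B is forced only at node 1, A nowhere.
    above = {0: [0, 1], 1: [1]}          # reflexive 'later than' relation
    val = {"A": set(), "B": {1}}         # where each atom is forced

    def forces(w, f):
        kind = f[0]
        if kind == "atom":
            return w in val[f[1]]
        if kind == "or":
            return forces(w, f[1]) or forces(w, f[2])
        if kind == "imp":
            # w forces f1 -> f2 iff every later node forcing f1 forces f2
            return all(not forces(v, f[1]) or forces(v, f[2])
                       for v in above[w])
        if kind == "not":
            # ~f holds at w iff no later node forces f
            return all(not forces(v, f[1]) for v in above[w])

    A, B = ("atom", "A"), ("atom", "B")
    print(forces(0, ("imp", ("not", B), A)))   # True:  ~B -> A at the root
    print(forces(0, ("or", A, B)))             # False: A v B fails there

Since A ∨ B intuitionistically entails ¬B → A but, as the model shows, not conversely, the conditional translation is strictly weaker than the disjunction.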
-
1627101.872077
Almost periodic functions form a natural example of a non-separable normed space. As such, it has been a challenge for constructive mathematicians to find a natural treatment of them. Here we present a simple proof of Bohr’s fundamental theorem for almost periodic functions which we then generalize to almost periodic functions on general topological groups.
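For a concrete feel for the objects involved (a numerical sketch, mine, not part of the paper's constructive proof): f(t) = sin t + sin(√2 t) is almost periodic but not periodic, and Bohr's ε-almost periods can be spotted numerically.

    import math

    def f(t):
        return math.sin(t) + math.sin(math.sqrt(2) * t)

    def sup_shift(tau, T=200.0, steps=20000):
        # crude estimate of sup |f(t + tau) - f(t)| over [0, T]
        return max(abs(f(k * T / steps + tau) - f(k * T / steps))
                   for k in range(steps))

    # tau = 24*pi nearly realigns both frequencies (12*sqrt(2) ~ 16.97),
    # so it is an eps-almost period for eps = 0.2; tau = 10 is not.
    print(sup_shift(24 * math.pi))   # ~0.18 (small)
    print(sup_shift(10.0))           # large: 10 is no almost period here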
-
1661432.872089
Like Lewis, many philosophers hold reductionist accounts of chance (on which claims about chance are to be understood as claims that certain patterns of events are instantiated) and maintain that rationality requires that credence should defer to chance (in the sense that under certain circumstances one’s credence in an event must coincide with the chance of that event). It is a shortcoming of an account of chance if it implies that this norm of rationality is unsatisfiable by computable agents. This shortcoming is more common than one might have hoped.