-
It is a pleasure to read and respond to Professor Orr’s learned statement of a conservatism that is both rooted in tradition and updated for the contemporary world. Conservatism’s core values, we learn, are order, hierarchy, a sense of belonging to a particular community in a particular time and place, deference to tradition, and resistance to changes that are too sweeping or too quick. At the same time, conservatism is distrustful of abstract definitions and eschews commitments to universal principles and certainties, preferring the empirical, the particular, and the pragmatic. Professor Orr devotes a paragraph or two to explicating each of these core concepts further.
-
The most direct route to political fundamentals is to ask: What should governments do? The different ‘isms’—liberalism, socialism, fascism, and so on—answer that question based on their most cherished values, holding that the purpose of government is to achieve those values. Yet societies are complex and we create many kinds of social institutions—businesses, schools, friendships and families, sports teams, churches/synagogues/mosques/temples, associations dedicated to artistic and scientific pursuits, governments, and so on—to achieve our important values.
-
It is difficult to overstate the extent to which contemporary political debates fail to address the underlying philosophical arguments that inform the way we govern our societies and the leaders we elect to do so. It is therefore with tremendous pleasure that I hosted a set of both written and in-person discussions between two of the great minds of modern political and philosophical thought. As you will see, Dr. James Orr, a friend and regular guest on my show, sets out with tremendous clarity and skill the arguments for the conservative worldview. He is ably challenged by Professor Stephen R. C. Hicks, another friend and favourite interviewee of mine, who argues for liberalism as the correct orientation towards the world. The debate is hugely informative, productive, and, I hope, of use to the reader—it certainly has been to me.
-
Suppose we consider an agent with both numerical credences and all-or-nothing beliefs. This agent might also have a plan about how she is going to update her beliefs upon receiving new evidence. What rational requirements on such a plan can be justified from an epistemic value point of view? Plan Almost Lockean Revision is the claim that rationality requires one’s planned beliefs to be exactly one’s sufficiently high conditional credences. We start by reviewing the arguments for Plan Lockean Revision available in the current literature, ultimately concluding that they are suboptimal. We provide a better argument to the effect that the belief updating rule that is expected to be best according to one’s current credences is exactly Plan Almost Lockean Revision; that is, we prove a Qualitative Greaves-Wallace Theorem. Furthermore, building on the work of Rothschild (2021), we investigate the Dutch-bookability of Lockean betting behavior for all-or-nothing beliefs and planned revisions of them, ultimately proving a qualitative version of the Dutch strategy theorem, which leads to novel Dutch-strategy/accuracy-dominance arguments for Lockean norms on belief/belief-planning pairs.
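As a gloss for readers, the Lockean norms at issue admit a simple threshold statement; the notation below is mine, and the precise content of the “Almost” qualification is left to the paper itself.

```latex
% Lockean thesis (synchronic): for a fixed threshold t \in (1/2, 1],
% believe p just in case one's credence in p is at least t:
\mathrm{Bel}(p) \;\Longleftrightarrow\; \mathrm{cr}(p) \ge t
% Plan Lockean Revision (diachronic): one's planned beliefs upon
% receiving evidence E are the sufficiently high conditional credences:
\mathrm{Bel}_E(p) \;\Longleftrightarrow\; \mathrm{cr}(p \mid E) \ge t
```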
-
A number of authors (Morgan, 1999; Boumans, 2005; Morrison, 2009; Massimi and Bhimji, 2015; Parker, 2017) have argued that models can quite literally be thought of as measuring instruments. I here challenge this view by reconstructing three arguments from the literature and rebutting them. Further, I argue that models should be seen as cognitive rather than measuring instruments, and that the distinction is important for understanding scientific change: the two constitute distinct sources of insight that mutually depend on each other, and they should not be equated. In particular, we may perform the exact same actions in the laboratory but conceive of them entirely differently by virtue of the models we endorse at different points in time.
-
A seminal controversy in statistical inference is whether error probabilities associated with an inference method are evidentially relevant once the data are in hand. Frequentist error statisticians say yes; Bayesians say no. …
-
Quarrels and wisecracks are essential features of interpersonal life. Quarrels are conflicts that typically take place only between friends, family, and those with whom we are personally engaged and whose attitudes toward us matter. Wisecracks are bits of improvised wit—banter, teasing, mockery, and ball busting—that also typically take place only in interpersonal life (note the following odd but revealing comment: “I can’t tease her like that; I barely even know her!”). Quarrels and cracks are, though, mutually exclusive. People know their quarrel is basically over once they start being amused by each other’s wisecracks again, and if you’re enjoying wisecracks with each other, it’s very hard, if not impossible, to quarrel at the same time. Why is this, and what does it mean for interpersonal conflict? In this paper, I attempt to answer these questions via a deep dive into the nature of wisecracking humor, exploring the unrecognized—and valuable—role it plays in our interpersonal lives. In particular, there is a type of wisecracking humor with a distinctive sort of interpersonal power: the power to dissolve the anger in quarrels in a surprising and productive way.
-
Novel tools have allowed researchers to intervene on circuits at the mesoscale. The results of these interventions are often explained by appeal to functions. How are functions ascribed to circuit parts experimentally? I identify two kinds of function ascription practices in circuit interventions. Analysis of these practices shows that function ascription is challenging for two reasons: interventive control is lacking, and the constraints on the class of candidate functions are too weak to discriminate among them in practice. One kind of function ascription practice—subtractive analysis—fares better at addressing these challenges.
-
It has been argued that adult humans are absolutely time biased towards the future, at least as far as purely hedonic experiences (pain/pleasure) are concerned: they assign such experiences zero value once they are in the past. Recent empirical studies have cast doubt on this claim, suggesting that while adults hold asymmetrical hedonic preferences – preferring painful experiences to be in the past and pleasurable experiences to lie in the future – these preferences are not absolute and are often abandoned when the quantity of pain or pleasure under consideration is greater in the past than in the future. Research has also examined whether such preferences might be affected by the utility people assign to experiential memories, since the recollection of past events can itself be pleasurable or aversive. We extend this line of research, investigating the utility people assign to experiential memories regardless of tense, and provide – to our knowledge – the first quantitative attempt at directly comparing the relative subjective weightings given to ‘primary’ experiences (i.e., living through the event first-hand) and ‘secondary’ (i.e., recollective or anticipatory) experiences. We find that when painful events are located in the past, the importance of the memory of the pain appears to be enhanced relative to its importance when they are located in the future. We also find extensive individual differences in hedonic preferences, in the reasons people give for adopting them, and in willingness to trade them off. This research allows for a clearer picture of the utility people assign to the consumption of recollective experiences and of how this contributes to, or perhaps masks, time biases.
-
This dissertation defends Causal Decision Theory (CDT) against a recent (alleged) counterexample. In Dicing with Death (2014), Arif Ahmed devises a decision scenario in which the recommendation given by CDT apparently contradicts our intuitive course of action. Like many other alleged counterexamples to CDT, Ahmed’s story features an adversary with fantastic predictive power—Death himself, in this story. Unlike many other alleged counterexamples, however, Ahmed explicitly includes the use of a costly randomization device as a possible action for the agent. I critically assess these two features of Ahmed’s story. I argue that Death’s fantastic predictive power cannot be readily reconciled with the use of a randomization device. To sustain Dicing with Death as a coherent decision scenario, background explanations must be given of the nature of Death’s fantastic predictive power. After considering a few such explanations, however, it becomes unclear whether the initial intuition that CDT apparently contradicts still holds up. Finally, I consider two contrasting decision scenarios to illustrate why Ahmed’s intuition in this case is ultimately mistaken. I conclude that biting the bullet can be a legitimate response from CDT to many similar cases in which evidentially correlated but causally isolated acts seem to force CDT into counterintuitive recommendations.
-
This paper aims to resolve the incompatibility between two extant gauge-invariant accounts of the Abelian Higgs mechanism: the first account uses global gauge symmetry breaking, and the second eliminates spontaneous symmetry breaking entirely. We resolve this incompatibility by using the constrained Hamiltonian formalism in symplectic geometry. First we argue that, unlike their local counterparts, global gauge symmetries are physical. The symmetries that are spontaneously broken by the Higgs mechanism are then the global ones. Second, we explain how the dressing field method singles out the Coulomb gauge as a preferred gauge for a gauge-invariant account of the Abelian Higgs mechanism. Based on the existence of this group of global gauge symmetries that are physical, we resolve the incompatibility between the two accounts by arguing that the correct way to carry out the second method is to eliminate only the redundant gauge symmetries, i.e. those local gauge symmetries which are not global. We extend our analysis to quantum field theory, where we show that the Abelian Higgs mechanism can be understood as spontaneous global U(1) symmetry breaking in the C*-algebraic sense.
-
Very short summary: I discuss Cass Sunstein’s recent article on the “AI calculation debate.” I agree with Sunstein that an omniscient AI is impossible, but I nonetheless argue that a “society of AIs” with a division of cognitive labor would probably be better at tackling the knowledge problem than humans. …
-
Visual illusions provide a means of investigating the rules and principles through which approximate number representations are formed. Here, we investigated the developmental trajectory of an important numerical illusion – the connectedness illusion, wherein connecting pairs of items with thin lines reduces perceived number without altering continuous attributes of the collections. We found that children as young as 5 years of age were susceptible to the illusion and that the magnitude of the effect increased into adulthood. Moreover, individuals with greater numerical acuity exhibited stronger connectedness illusions after controlling for age. Overall, these results suggest that the approximate number system expects to enumerate over bounded wholes and that doing so is a signature of its optimal functioning.
-
The desirable gambles framework provides a rigorous foundation for imprecise probability theory but relies heavily on linear utility via its coherence axioms. In our related work, we introduced function-coherent gambles to accommodate nonlinear utility. However, when repeated gambles are played over time—especially in intertemporal choice, where rewards compound multiplicatively—the standard additive combination axiom fails to capture the appropriate long-run evaluation. In this paper we extend the framework by relaxing the additive combination axiom and introducing a nonlinear combination operator that effectively aggregates repeated gambles in the log-domain. This operator preserves the time-average (geometric) growth rate and addresses the ergodicity problem. We prove the key algebraic properties of the operator, discuss its impact on coherence, risk assessment, and representation, and provide a series of illustrative examples. Our approach bridges the gap between expectation values and time averages and unifies normative theory with empirically observed non-stationary reward dynamics.
Keywords: desirability, non-linear utility, ergodicity, intertemporal choice, non-additive dynamics, function-coherent gambles, risk measures.
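The ergodicity problem mentioned here can be made concrete with a standard multiplicative-gamble illustration (a minimal sketch of my own, not the paper’s operator or examples): a gamble whose expected per-round growth factor exceeds 1 can still shrink almost every individual trajectory, because the time-average growth rate lives in the log-domain.

```python
import math
import random

def simulate_wealth(rounds: int, up: float = 1.5, down: float = 0.6) -> float:
    """Play a 50/50 multiplicative gamble repeatedly and return final wealth."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= up if random.random() < 0.5 else down
    return wealth

# Ensemble perspective: the expected growth factor per round is
# 0.5 * 1.5 + 0.5 * 0.6 = 1.05 > 1, so expected wealth grows without bound.
expected_factor = 0.5 * 1.5 + 0.5 * 0.6

# Time-average (log-domain) perspective: the per-round log growth is
# 0.5 * ln(1.5) + 0.5 * ln(0.6) ~ -0.053 < 0, so almost every single
# trajectory decays towards zero despite the positive expectation.
time_avg_log_growth = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)

print(f"expected growth factor per round: {expected_factor:.3f}")
print(f"time-average log growth rate:     {time_avg_log_growth:.3f}")
print(f"sample wealth after 1000 rounds:  {simulate_wealth(1000):.2e}")
```

Aggregating repeated gambles in the log-domain, as the combination operator described above is said to do, tracks the second quantity rather than the first.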
-
A firm wishes to persuade a patient to take a drug by making either positive statements like “if you take our drug, you will be cured”, or negative statements like “anyone who was not cured did not take our drug”. Patients are neither Bayesian nor strategic: They use a decision procedure based on sampling past cases. We characterize the firm’s optimal statement, and analyze competition between firms making either positive statements about themselves or negative statements about their rivals. The model highlights that logically equivalent statements can differ in effectiveness and identifies circumstances favoring negative ads over positive ones.
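The abstract does not spell out the sampling procedure, but one hypothetical way logically equivalent statements can come apart under case-based sampling is if patients check each statement against sampled past cases that match its antecedent. The following sketch, with made-up numbers, is my illustration of that general idea, not the paper’s model.

```python
import random

# Each past case records (took_drug, cured).
Case = tuple[bool, bool]

def accepts(statement: str, cases: list[Case], k: int = 5) -> bool:
    """Sample k past cases matching the statement's antecedent (with
    replacement); accept the statement iff no sampled case refutes it."""
    if statement == "positive":  # "if you take our drug, you will be cured"
        pool = [c for c in cases if c[0]]       # drug-takers
        return all(cured for _, cured in random.choices(pool, k=k))
    else:                        # "anyone not cured did not take our drug"
        pool = [c for c in cases if not c[1]]   # the uncured
        return all(not took for took, _ in random.choices(pool, k=k))

# A population where the drug is widely taken and usually works:
# 850 takers cured, 50 takers not cured, 10 spontaneous cures, 90 still sick.
population = ([(True, True)] * 850 + [(True, False)] * 50 +
              [(False, True)] * 10 + [(False, False)] * 90)

trials = 10_000
pos = sum(accepts("positive", population) for _ in range(trials)) / trials
neg = sum(accepts("negative", population) for _ in range(trials)) / trials
print(f"positive statement accepted: {pos:.2%}")  # ~ (850/900)^5 ~ 75%
print(f"negative statement accepted: {neg:.2%}")  # ~ (90/140)^5  ~ 11%
```

Both statements have exactly the same counterexamples (cases of taking the drug without being cured), yet sampling from different reference classes yields very different acceptance rates, which is one way effectiveness can diverge from logical content.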
-
The nineteenth-century distinction between the nomothetic and the idiographic approach to scientific inquiry can provide valuable insight into the epistemic challenges faced in contemporary earth modelling. However, as it stands, the nomothetic-idiographic dichotomy does not fully encompass the range of modelling commitments and trade-offs that geoscientists need to navigate in their practice. Adopting a historical epistemology perspective, I propose to further spell out this dichotomy as a set of modelling decisions concerning historicity, model complexity, scale, and closure. Then, I suggest that, to address the challenges posed by these decisions, a pluralist stance towards the cognitive aims of earth modelling should be endorsed, especially beyond predictive aims.
-
Maribel Barroso suggests exploration of an interesting avenue for inductive inference. The material theory, as I have formulated it, takes as its elements propositions that assert scientific facts. Relations of inductive support among them assess their truth or falsity. She proposes that we should take models as the elements instead of propositions. In favor of this proposal is that models have a pervasive presence in science. We should be able to confront them with evidence in a systematic way. Reconfiguring inductive inference as relations over models faces some interesting questions. Just what is it for models to be supported inductively? Can the material theory be adapted to this new case? In works cited in her review, Barroso has already begun the study of inductive relations among models in science, using insights from Whewell’s work. She is, it seems to me, well placed to seek answers to these questions. I wish her well in her continuing efforts.
-
I’m on holiday this week, spending some time in Cracow (Poland) and Slovakia. Today’s post is a bit off-topic compared to what I usually publish here, but I hope you will enjoy it all the same! If you haven’t already, do not hesitate to subscribe to receive free essays on economics, philosophy, and liberal politics in your mailbox! …
-
Bell’s theorem states that no model that respects Local Causality and Statistical Independence can account for the correlations predicted by quantum mechanics via entangled states. This paper proposes a new approach, using backward-in-time conditional probabilities, which relaxes conventional assumptions of temporal ordering while preserving Statistical Independence as a “fine-tuning” condition. It is shown how such models can account for EPR/Bell correlations and, analogously, the GHZ predictions while nevertheless forbidding superluminal signalling.
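For reference, the two assumptions named in the abstract have standard formalizations in the Bell literature (stated here in generic notation; the paper’s backward-in-time conditional probabilities are not captured by these schemas):

```latex
% Local Causality: given hidden variables \lambda and settings a, b,
% the joint outcome probabilities factorize:
P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda)
% Statistical Independence: the hidden variables are uncorrelated
% with the measurement settings:
P(\lambda \mid a, b) \;=\; P(\lambda)
% Bell's theorem: no model satisfying both conditions reproduces the
% quantum correlations predicted for entangled states.
```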
-
My earlier volume, The Material Theory of Induction, asserts that inductive inferences are warranted materially by facts and not by conformity with universally applicable schemas. A few examples illustrate the assertion. Marie Curie inferred that all samples of radium chloride will be crystallographically like the one sample she had prepared. The inference was warranted, not by the rule of enumerative induction, but by factual discoveries in the 19th century on the properties of crystalline substances. Galileo inferred to the heights of mountains on the moon through an analogy with mountain shadows formed on the earth. The inference was not warranted by a similarity in the reasoning in the two cases conforming with some general rule, but by the warranting fact that the same processes of linear light propagation formed the patterns of light and dark in both cases. Probabilistic inductive inferences are not warranted by the tendentious supposition that all uncertainties can be represented probabilistically. They are warranted on a case-by-case basis by facts specific to the case at hand. That we can infer probabilistically from samples to the population as a whole depends on the fact that the samples were taken randomly, that is, with each individual having an equal probability of selection. If no such warranting facts prevail, we are at serious risk of spurious inferences whose results are an artifact of misapplied logic.
-
The frequency of major theory change in natural science is rapidly decreasing. Sprenger and Hartmann (2019) claim that this observation can improve the justificatory basis of scientific realism, by way of what can be called a stability argument. By enriching the conceptual basis of Sprenger and Hartmann’s argument, this paper shows that stability arguments pose a strong and novel challenge to scientific anti-realists. However, an anti-realist response to this challenge is also proposed. The resulting dialectic establishes a level of meaningful disagreement about the significance of stability arguments for scientific realism, and indicates how the disagreement can ultimately be resolved.
-
Bet On It reader Ian Fillmore recently sent me a very insightful email on natalism, which I encouraged him to expand upon. In fact, I’ll put it squarely in the obvious-once-you-think-about-it category. …
-
Suppose, as often happens, that you get some evidence that some belief of yours is irrational. For example, suppose you believe that you have above-average teaching ability. And suppose you then learn (as is true) that people are generally prone to irrationally overestimate their own teaching abilities. Here’s one thing that seems obvious: you should now at least somewhat increase your credence in the (higher-order) proposition that your belief that you have above-average teaching ability is irrational. So much is (mostly) uncontroversial in the contemporary epistemological literature on “higher-order evidence”—which includes, though is not exhausted by, evidence that your beliefs are irrational. More generally, evidence that some belief of yours is irrational should increase your credence in the (higher-order) proposition that your belief is irrational. This is just a special case of the general principle that evidence for some proposition p should raise your credence in p, with a higher-order proposition (that your belief is irrational) substituted for p in both instances.
-
In the philosophical debate about scientific progress, several authors appeal to a distinction between what constitutes scientific progress and what promotes it (e.g., Bird, 2008; Rowbottom, 2008; Dellsén, 2016). However, the extant literature is almost completely silent on what exactly it is for scientific progress to be promoted. Here I provide a precise account of progress promotion on which it consists, roughly, in increasing expected progress. This account may be combined with any of the major theories of what constitutes scientific progress, such as the truthlikeness, problem-solving, epistemic, and noetic accounts. However, I will also suggest that once we have this account of progress promotion up and running, some accounts of what constitutes progress become harder to motivate by the sorts of considerations often adduced in their favor, while others turn out to be easier to defend against common objections.
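The rough gloss “increasing expected progress” admits a simple schematic statement (my notation; the paper’s precise account may differ in detail):

```latex
% Let \mathrm{Prog} measure how much scientific progress a possible
% outcome contains, and let \mathbb{E} be expectation relative to the
% agent's evidence. Then, roughly, an action a promotes progress iff
% it raises expected progress over the status quo:
a \text{ promotes progress} \;\Longleftrightarrow\;
\mathbb{E}[\mathrm{Prog} \mid a] \;>\; \mathbb{E}[\mathrm{Prog}]
```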
-
This paper argues that lockdown was racist. The terms are broad, but the task of definition is not arbitrary, and in §2 we motivate certain definitions as appropriate. In brief: “lockdown” refers to regulatory responses to the Covid-19 (C-19) pandemic involving significant restrictions on leaving the home and on activities outside the home, historically situated in the pandemic and widely known as “lockdowns”; and “racist” indicates what we call negligent racism, a type of racism which we define. Negligent racism does not require intent, but beyond this constraint, we do not endorse any definition of racism in general. With definitions in hand, in §3 we argue that lockdown was harmful in Africa, causing great human suffering that was not offset by benefits and amounted to net harm, far greater than in the circumstances in which most White people live. Since 1.4 …
-
Agents are said to be “clueless” if they are unable to predict some ethically important consequences of their actions. Some philosophers have argued that such “cluelessness” is widespread and creates problems for certain approaches to ethics. According to Hilary Greaves, a particularly problematic type of cluelessness, namely “complex” cluelessness, affects attempts to do good as effectively as possible, as suggested by proponents of “Effective Altruism,” because we are typically clueless about the long-term consequences of such interventions. As a reaction, she suggests focusing on interventions that are long-term oriented from the start. This paper argues for three claims: first, that David Lewis’ distinction between sensitive and insensitive causation can help us better understand the differences between genuinely “complex” and more harmless “simple” cluelessness; second, that Greaves’ worry about complex cluelessness can be mitigated for attempts to do near-term good; and, third, that Greaves’ recommendation to focus on long-term-oriented interventions in response to complex cluelessness is not promising as a strategy specifically for avoiding complex cluelessness. There are systematic reasons why the actual effects of serious attempts to beneficially shape the long-term future are inherently difficult to predict and why, hence, such attempts are prone to backfiring.
-
Perception is said to have assertoric force: It inclines the perceiver to believe its content. In contrast, perceptual imagination is commonly taken to be non-assertoric: Imagining winning a piano contest does not incline the imaginer to believe they actually won. However, abundant evidence from clinical and experimental psychology shows that imagination influences attitudes and behavior in ways similar to perceptual experiences. To account for these phenomena, I propose that perceptual imaginings have implicit assertoric force and put forth a theory—the Prima Facie View—as a unified explanation for the empirical findings reviewed. According to this view, mental images are treated as percepts in operations involving associative memory. Finally, I address alternative explanations that could account for the reviewed empirical evidence—such as a Spinozian model of belief formation or Gendler’s notion of alief—as well as potential objections to the Prima Facie View.
-
Detecting introspective errors about consciousness presents challenges that are widely supposed to be difficult, if not impossible, to overcome. This is a problem for consciousness science because many central questions turn on when and to what extent we should trust subjects’ introspective reports. This has led some authors to suggest that we should abandon introspection as a source of evidence when constructing a science of consciousness. Others have concluded that central questions in consciousness science cannot be answered via empirical investigation. I argue that on closer inspection, the challenges associated with detecting introspective errors can be overcome. I demonstrate how natural kind reasoning—the iterative application of inference to the best explanation to home in on and leverage regularities in nature—can allow us to detect introspective errors even in difficult cases such as judgments about mental imagery, and I conclude that worries about intractable methodological challenges in consciousness science are misguided.
-
Philosophers have struggled to explain the mismatch of emotions and their objects across time, as when we stop grieving or feeling angry despite the persistence of the underlying cause. I argue for a sceptical approach that says that these emotional changes often lack rational fit. The key observation is that our emotions must periodically reset for purely functional reasons that have nothing to do with fit. I compare this account to David Hume’s sceptical approach in matters of belief, and conclude that resistance to it rests on a confusion similar to one that he identifies.
-
We characterize Martin-Löf randomness and Schnorr randomness in terms of the merging of opinions, along the lines of the Blackwell-Dubins Theorem [BD62]. After setting up a general framework for defining notions of merging randomness, we focus on finite horizon events, that is, on weak merging in the sense of Kalai-Lehrer [KL94]. In contrast to Blackwell-Dubins and Kalai-Lehrer, we consider not only the total variational distance but also the Hellinger distance and the Kullback-Leibler divergence. Our main result is a characterization of Martin-Löf randomness and Schnorr randomness in terms of weak merging and the summable Kullback-Leibler divergence. The main proof idea is that the Kullback-Leibler divergence between µ and ν, at a given stage of the learning process, is exactly the incremental growth, at that stage, of the predictable process of the Doob decomposition of the ν-submartingale L(σ) = −ln(µ(σ)/ν(σ)). These characterizations of algorithmic randomness notions in terms of the Kullback-Leibler divergence can be viewed as global analogues of Vovk’s theorem [Vov87] on what transpires locally with individual Martin-Löf µ- and ν-random points and the Hellinger distance between µ and ν.
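The identity behind the main proof idea can be unpacked in a few lines (a sketch in my notation, with σb ranging over the one-bit extensions of the string σ):

```latex
% The nu-submartingale from the abstract:
L(\sigma) \;=\; -\ln\frac{\mu(\sigma)}{\nu(\sigma)} \;=\; \ln\frac{\nu(\sigma)}{\mu(\sigma)}
% Its one-step increment along an extension by bit b:
L(\sigma b) - L(\sigma) \;=\; \ln\frac{\nu(b \mid \sigma)}{\mu(b \mid \sigma)}
% Taking nu-expectation over b yields the predictable (Doob) increment,
% which is exactly the conditional Kullback-Leibler divergence:
\sum_{b} \nu(b \mid \sigma)\,\ln\frac{\nu(b \mid \sigma)}{\mu(b \mid \sigma)}
\;=\; D_{\mathrm{KL}}\bigl(\nu(\cdot \mid \sigma)\,\|\,\mu(\cdot \mid \sigma)\bigr) \;\ge\; 0
% Non-negativity confirms that L is indeed a nu-submartingale.
```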