-
It has been argued that adult humans are absolutely time biased towards the future, at least as far as purely hedonic experiences (pain/pleasure) are concerned. That is, they assign zero value to such experiences once those experiences lie in the past. Recent empirical studies have cast doubt on this claim, suggesting that while adults hold asymmetrical hedonic preferences – preferring painful experiences to be in the past and pleasurable experiences to lie in the future – these preferences are not absolute and are often abandoned when the quantity of pain or pleasure under consideration is greater in the past than in the future. Research has also examined whether such preferences might be affected by the utility people assign to experiential memories, since the recollection of past events can itself be pleasurable or aversive. We extend this line of research, investigating the utility people assign to experiential memories regardless of tense, and provide – to our knowledge – the first quantitative attempt at directly comparing the relative subjective weightings given to ‘primary’ experiences (i.e., living through the event first-hand) and ‘secondary’ (i.e., recollective or anticipatory) experiences. We find that when painful events are located in the past, the importance of the memory of the pain appears to be enhanced relative to its importance when those events are located in the future. We also find extensive individual differences in hedonic preferences, reasons to adopt them, and willingness to trade them off. This research allows for a clearer picture of the utility people assign to the consumption of recollective experiences and of how this contributes to, or perhaps masks, time biases.
-
This dissertation defends Causal Decision Theory (CDT) against a recent (alleged) counterexample. In Dicing with Death (2014), Arif Ahmed devises a decision scenario where the recommendation given by CDT apparently contradicts our intuitive course of action. Like many other alleged counterexamples to CDT, Ahmed’s story features an adversary with fantastic predictive power—Death himself, in this story. Unlike many other alleged counterexamples, however, Ahmed explicitly includes the use of a costly randomization device as a possible action for the agent. I critically assess these two features of Ahmed’s story. I argue that Death’s fantastic predictive power cannot be readily reconciled with the use of a randomization device. In order to sustain Dicing with Death as a coherent decision scenario, background explanations must be given about the nature of Death’s fantastic predictive power. After considering a few such explanations, however, it becomes unclear whether the initial intuition that CDT apparently contradicts still holds up. Finally, I consider two contrasting decision scenarios to illustrate why Ahmed’s intuition in this case is ultimately false. I conclude that biting the bullet can perhaps be a legitimate response from CDT to many similar cases where evidentially correlated but causally isolated acts seem to force CDT to give counterintuitive recommendations.
-
This paper aims to resolve the incompatibility between two extant gauge-invariant accounts of the Abelian Higgs mechanism: the first account uses global gauge symmetry breaking, and the second eliminates spontaneous symmetry breaking entirely. We resolve this incompatibility by using the constrained Hamiltonian formalism in symplectic geometry. First we argue that, unlike their local counterparts, global gauge symmetries are physical. The symmetries that are spontaneously broken by the Higgs mechanism are then the global ones. Second, we explain how the dressing field method singles out the Coulomb gauge as a preferred gauge for a gauge-invariant account of the Abelian Higgs mechanism. Based on the existence of this group of global gauge symmetries that are physical, we resolve the incompatibility between the two accounts by arguing that the correct way to carry out the second method is to eliminate only the redundant gauge symmetries, i.e. those local gauge symmetries which are not global. We extend our analysis to quantum field theory, where we show that the Abelian Higgs mechanism can be understood as spontaneous global U(1) symmetry breaking in the C*-algebraic sense.
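For concreteness, here is the textbook Abelian Higgs setup that the abstract presupposes, in standard conventions; this is a reference sketch of the model and of the local/global distinction, not notation taken from the paper.

```latex
% Abelian Higgs Lagrangian (textbook form; metric and sign conventions assumed)
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
              + (D_\mu \phi)^{*} (D^\mu \phi)
              - \lambda \left( |\phi|^{2} - \tfrac{v^{2}}{2} \right)^{2},
\qquad D_\mu = \partial_\mu - i e A_\mu .

% Local gauge transformations (the redundant ones, with \alpha depending on x):
\phi(x) \mapsto e^{i\alpha(x)} \phi(x), \qquad
A_\mu(x) \mapsto A_\mu(x) + \tfrac{1}{e}\, \partial_\mu \alpha(x).

% Global U(1) subgroup (constant \alpha), the symmetry argued to be physical:
\phi \mapsto e^{i\alpha} \phi, \qquad A_\mu \mapsto A_\mu .

% Coulomb gauge condition singled out by the dressing field method:
\nabla \cdot \mathbf{A} = 0 .
```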
-
Very short summary: I discuss Cass Sunstein’s recent article on the “AI calculation debate.” I agree with Sunstein that an omniscient AI is impossible, but I nonetheless argue that a “society of AIs” with a division of cognitive labor would probably be better at tackling the knowledge problem than humans. …
-
Visual illusions provide a means of investigating the rules and principles through which approximate number representations are formed. Here, we investigated the developmental trajectory of an important numerical illusion – the connectedness illusion, wherein connecting pairs of items with thin lines reduces perceived number without altering continuous attributes of the collections. We found that children as young as 5 years of age showed susceptibility to the illusion and that the magnitude of the effect increased into adulthood. Moreover, individuals with greater numerical acuity exhibited stronger connectedness illusions after controlling for age. Overall, these results suggest that the approximate number system expects to enumerate over bounded wholes and that doing so is a signature of its optimal functioning.
-
The desirable gambles framework provides a rigorous foundation for imprecise probability theory but relies heavily on linear utility via its coherence axioms. In our related work, we introduced function-coherent gambles to accommodate nonlinear utility. However, when repeated gambles are played over time—especially in intertemporal choice where rewards compound multiplicatively—the standard additive combination axiom fails to capture the appropriate long-run evaluation. In this paper we extend the framework by relaxing the additive combination axiom and introducing a nonlinear combination operator that effectively aggregates repeated gambles in the log-domain. This operator preserves the time-average (geometric) growth rate and addresses the ergodicity problem. We prove the key algebraic properties of the operator, discuss its impact on coherence, risk assessment, and representation, and provide a series of illustrative examples. Our approach bridges the gap between expectation values and time averages and unifies normative theory with empirically observed non-stationary reward dynamics.
Keywords: Desirability, non-linear utility, ergodicity, intertemporal choice, non-additive dynamics, function-coherent gambles, risk measures.
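To make the ergodicity point concrete, here is a minimal Python sketch, assuming a simple multiplicative coin-flip gamble with hypothetical numbers; it is not the paper's formal operator. The ensemble expectation grows per round while the time-average (geometric) growth rate, obtained by aggregating in the log-domain, shrinks.

```python
import numpy as np

# Hypothetical multiplicative gamble: gain 50% on heads, lose 40% on tails.
rng = np.random.default_rng(0)
up, down, p = 1.5, 0.6, 0.5

# Ensemble (arithmetic expectation) growth factor per round.
ensemble_growth = p * up + (1 - p) * down            # = 1.05 > 1

# Time-average (geometric) growth factor: aggregate repeated gambles in the
# log domain, then exponentiate.
log_growth = p * np.log(up) + (1 - p) * np.log(down)  # < 0
time_average_growth = np.exp(log_growth)               # ~ 0.95 < 1

# A long simulated trajectory confirms the geometric rate, not the ensemble one.
T = 100_000
factors = rng.choice([up, down], size=T, p=[p, 1 - p])
empirical_rate = np.exp(np.mean(np.log(factors)))

print(ensemble_growth, time_average_growth, empirical_rate)
```

The gap between the first and second numbers is the ergodicity problem the abstract refers to: an agent who evaluates repeated play additively (by expectation) endorses a gamble that almost surely ruins them in the long run.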
-
A firm wishes to persuade a patient to take a drug by making either positive statements like “if you take our drug, you will be cured”, or negative statements like “anyone who was not cured did not take our drug”. Patients are neither Bayesian nor strategic: They use a decision procedure based on sampling past cases. We characterize the firm’s optimal statement, and analyze competition between firms making either positive statements about themselves or negative statements about their rivals. The model highlights that logically equivalent statements can differ in effectiveness and identifies circumstances favoring negative ads over positive ones.
-
The nineteenth-century distinction between the nomothetic and the idiographic approach to scientific inquiry can provide valuable insight into the epistemic challenges faced in contemporary earth modelling. However, as it stands, the nomothetic-idiographic dichotomy does not fully encompass the range of modelling commitments and trade-offs that geoscientists need to navigate in their practice. Adopting a historical epistemology perspective, I propose to further spell out this dichotomy as a set of modelling decisions concerning historicity, model complexity, scale, and closure. Then, I suggest that, to address the challenges posed by these decisions, a pluralist stance towards the cognitive aims of earth modelling should be endorsed, especially beyond predictive aims.
-
Maribel Barroso suggests exploration of an interesting avenue for inductive inference. The material theory, as I have formulated it, takes as its elements propositions that assert scientific facts. Relations of inductive support among them assess their truth or falsity. She proposes that we should take models as the elements instead of propositions. In favor of this proposal is that models have a pervasive presence in science. We should be able to confront them with evidence in a systematic way. Reconfiguring inductive inference as relations over models faces some interesting questions. Just what is it for models to be supported inductively? Can the material theory be adapted to this new case? In works cited in her review, Barroso has already begun the study of inductive relations among models in science, using insights from Whewell’s work. She is, it seems to me, well placed to seek answers to these questions. I wish her well in her continuing efforts.
-
I’m on holiday this week, spending some time in Cracow (Poland) and Slovakia. Today’s post is a bit off-topic compared to what I usually publish here, but I hope you will enjoy it nonetheless! If you haven’t already, do not hesitate to subscribe to receive free essays on economics, philosophy, and liberal politics in your mailbox! …
-
Bell’s theorem states that no model that respects Local Causality and Statistical Independence can account for the correlations predicted by quantum mechanics via entangled states. This paper proposes a new approach, using backward-in-time conditional probabilities, which relaxes conventional assumptions of temporal ordering while preserving Statistical Independence as a “fine-tuning” condition. It is shown how such models can account for EPR/Bell correlations and, analogously, the GHZ predictions while nevertheless forbidding superluminal signalling.
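As a point of reference for the correlations at issue, here is a short Python sketch of the standard textbook computation, not of the paper's backward-in-time model: the singlet-state correlation violates the CHSH bound that any model respecting Local Causality and Statistical Independence must satisfy.

```python
import numpy as np

# Singlet-state correlation for spin measurements at angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Standard angle choices that maximize the CHSH violation.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, -np.pi / 4

# Locally causal, statistically independent models require |S| <= 2.
S = E(a, b) + E(a, b_prime) + E(a_prime, b) - E(a_prime, b_prime)
print(abs(S))   # ~ 2.828 = 2*sqrt(2) > 2
```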
-
My earlier volume, The Material Theory of Induction, asserts that inductive inferences are warranted materially by facts and not by conformity with universally applicable schemas. A few examples illustrate the assertion. Marie Curie inferred that all samples of radium chloride will be crystallographically like the one sample she had prepared. The inference was warranted, not by the rule of enumerative induction, but by factual discoveries in the 19th century on the properties of crystalline substances. Galileo inferred to the heights of mountains on the moon through an analogy with mountain shadows formed on the earth. The inference was not warranted by a similarity in the reasoning in the two cases conforming with some general rule, but by the warranting fact that the same processes of linear light propagation formed the patterns of light and dark in both cases. Probabilistic inductive inferences are not warranted by the tendentious supposition that all uncertainties can be represented probabilistically. They are warranted on a case-by-case basis by facts specific to the case at hand. That we can infer probabilistically from samples to the population as a whole depends on the fact that the samples were taken randomly, that is, with each individual having an equal probability of selection. If no such warranting facts prevail, we are at serious risk of spurious inferences whose results are an artifact of misapplied logic.
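A minimal simulation, with hypothetical numbers of my own choosing rather than anything from the book, illustrates the closing point: random sampling is the warranting fact behind inference from sample to population, and a convenience sample yields an estimate that is an artifact of the sampling procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
accessible = rng.random(N) < 0.3                   # 30% of individuals easy to reach
# Hypothetical: the trait is more common among accessible individuals.
trait = np.where(accessible, rng.random(N) < 0.8, rng.random(N) < 0.2)

true_rate = trait.mean()                           # ~ 0.38 in the whole population

# Random sample: every individual equally likely to be selected.
random_idx = rng.choice(N, size=2_000, replace=False)
random_estimate = trait[random_idx].mean()         # close to the true rate

# Convenience sample: drawn only from the accessible subpopulation.
convenience_idx = rng.choice(np.flatnonzero(accessible), size=2_000, replace=False)
convenience_estimate = trait[convenience_idx].mean()  # ~ 0.8, a spurious result

print(true_rate, random_estimate, convenience_estimate)
```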
-
The frequency of major theory change in natural science is rapidly decreasing. Sprenger and Hartmann (2019) claim that this observation can improve the justificatory basis of scientific realism, by way of what can be called a stability argument. By enriching the conceptual basis of Sprenger and Hartmann’s argument, this paper shows that stability arguments pose a strong and novel challenge to scientific anti-realists. However, an anti-realist response to this challenge is also proposed. The resulting dialectic establishes a level of meaningful disagreement about the significance of stability arguments for scientific realism, and indicates how the disagreement can ultimately be resolved.
-
Bet On It reader Ian Fillmore recently sent me a very insightful email on natalism, which I encouraged him to expand upon. In fact, I’ll put it squarely in the obvious-once-you-think-about-it category. …
-
Suppose, as often happens, that you get some evidence that some belief of yours is irrational. For example, suppose you believe that you have above-average teaching ability. And suppose you then learn (as is true) that people are generally prone to irrationally overestimate their own teaching abilities. Here’s one thing that seems obvious: you should now at least somewhat increase your credence in the (higher-order) proposition that your belief that you have above-average teaching ability is irrational. So much is (mostly) uncontroversial in the contemporary epistemological literature on “higher-order evidence”— which includes, though is not exhausted by, evidence that your beliefs are irrational. More generally, evidence that some belief of yours is irrational should increase your credence in the (higher-order) proposition that your belief is irrational. This is just a special case of the general principle that evidence for some proposition p should raise your credence for p, with a higher-order proposition (that your belief is irrational) substituted for p in both instances.
-
In the philosophical debate about scientific progress, several authors appeal to a distinction between what constitutes scientific progress and what promotes it (e.g., Bird, 2008; Rowbottom, 2008; Dellsén, 2016). However, the extant literature is almost completely silent on what exactly it is for scientific progress to be promoted. Here I provide a precise account of progress promotion on which it consists, roughly, in increasing expected progress. This account may be combined with any of the major theories of what constitutes scientific progress, such as the truthlikeness, problem-solving, epistemic, and noetic accounts. However, I will also suggest that once we have this account of progress promotion up and running, some accounts of what constitutes progress become harder to motivate by the sorts of considerations often adduced in their favor, while others turn out to be easier to defend against common objections.
-
This paper argues that lockdown was racist. The terms are broad, but the task of definition is not random, and in §2 we motivate certain definitions as appropriate. In brief: “lockdown” refers to regulatory responses to the Covid-19 (C-19) pandemic involving significant restrictions on leaving the home and on activities outside the home, historically situated in the pandemic and widely known as “lockdowns”; and “racist” indicates what we call negligent racism, a type of racism which we define. Negligent racism does not require intent, but beyond this constraint, we do not endorse any definition of racism in general. With definitions in hand, in §3 we argue that lockdown was harmful in Africa, causing great human suffering that was not offset by benefits and amounted to net harm, far greater than in the circumstances in which most White people live. Since 1.4 …
-
Agents are said to be “clueless” if they are unable to predict some ethically important consequences of their actions. Some philosophers have argued that such “cluelessness” is widespread and creates problems for certain approaches to ethics. According to Hilary Greaves, a particularly problematic type of cluelessness, namely, “complex” cluelessness, affects attempts to do good as effectively as possible, as suggested by proponents of “Effective Altruism,” because we are typically clueless about the long-term consequences of such interventions. As a reaction, she suggests focusing on interventions that are long-term oriented from the start. This paper argues for three claims: first, that David Lewis’ distinction between sensitive and insensitive causation can help us better understand the differences between genuinely “complex” and more harmless “simple” cluelessness; second, that Greaves’ worry about complex cluelessness can be mitigated for attempts to do near-term good; and, third, that Greaves’ recommendation to focus on long-term-oriented interventions in response to complex cluelessness is not promising as a strategy specifically for avoiding complex cluelessness. There are systematic reasons why the actual effects of serious attempts to beneficially shape the long-term future are inherently difficult to predict and why, hence, such attempts are prone to backfiring.
-
LOGOS Research Group in Analytic Philosophy, Universitat Autònoma de Barcelona
Perception is said to have assertoric force: It inclines the perceiver to believe its content. In contrast, perceptual imagination is commonly taken to be non-assertoric: Imagining winning a piano contest does not incline the imaginer to believe they actually won. However, abundant evidence from clinical and experimental psychology shows that imagination influences attitudes and behavior in ways similar to perceptual experiences. To account for these phenomena, I propose that perceptual imaginings have implicit assertoric force and put forth a theory—the Prima Facie View—as a unified explanation for the empirical findings reviewed. According to this view, mental images are treated as percepts in operations involving associative memory. Finally, I address alternative explanations that could account for the reviewed empirical evidence—such as a Spinozian model of belief formation or Gendler’s notion of alief—as well as potential objections to the Prima Facie View.
-
Detecting introspective errors about consciousness presents challenges that are widely supposed to be difficult, if not impossible, to overcome. This is a problem for consciousness science because many central questions turn on when and to what extent we should trust subjects’ introspective reports. This has led some authors to suggest that we should abandon introspection as a source of evidence when constructing a science of consciousness. Others have concluded that central questions in consciousness science cannot be answered via empirical investigation. I argue that on closer inspection, the challenges associated with detecting introspective errors can be overcome. I demonstrate how natural kind reasoning—the iterative application of inference to the best explanation to home in on and leverage regularities in nature—can allow us to detect introspective errors even in difficult cases such as judgments about mental imagery, and I conclude that worries about intractable methodological challenges in consciousness science are misguided.
-
Philosophers have struggled to explain the mismatch of emotions and their objects across time, as when we stop grieving or feeling angry despite the persistence of the underlying cause. I argue for a sceptical approach that says that these emotional changes often lack rational fit. The key observation is that our emotions must periodically reset for purely functional reasons that have nothing to do with fit. I compare this account to David Hume’s sceptical approach in matters of belief, and conclude that resistance to it rests on a confusion similar to one that he identifies.
-
We characterize Martin-Löf randomness and Schnorr randomness in terms of the merging of opinions, along the lines of the Blackwell-Dubins Theorem [BD62]. After setting up a general framework for defining notions of merging randomness, we focus on finite horizon events, that is, on weak merging in the sense of Kalai-Lehrer [KL94]. In contrast to Blackwell-Dubins and Kalai-Lehrer, we consider not only the total variational distance but also the Hellinger distance and the Kullback-Leibler divergence. Our main result is a characterization of Martin-Löf randomness and Schnorr randomness in terms of weak merging and the summable Kullback-Leibler divergence. The main proof idea is that the Kullback-Leibler divergence between µ and ν, at a given stage of the learning process, is exactly the incremental growth, at that stage, of the predictable process of the Doob decomposition of the ν-submartingale L(σ) = −ln(µ(σ)/ν(σ)). These characterizations of algorithmic randomness notions in terms of the Kullback-Leibler divergence can be viewed as global analogues of Vovk’s theorem [Vov87] on what transpires locally with individual Martin-Löf µ- and ν-random points and the Hellinger distance between µ and ν.
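The proof idea can be checked numerically in the simplest case. The following Python sketch assumes i.i.d. Bernoulli measures with hypothetical parameters, far simpler than the paper's setting, and verifies that the compensator increment of L(σ) equals the per-step Kullback-Leibler divergence.

```python
import numpy as np

# Bernoulli next-bit probabilities under mu and nu (hypothetical values).
p_mu, p_nu = 0.5, 0.7

def log_prob(bit, p):
    """Log-probability of a single bit under a Bernoulli(p) measure."""
    return np.log(p if bit == 1 else 1 - p)

def L_increment(bit):
    """Growth of L(sigma) = -ln(mu(sigma)/nu(sigma)) when `bit` is appended."""
    return -(log_prob(bit, p_mu) - log_prob(bit, p_nu))

# Predictable part: expected increment under nu, i.e. the compensator's growth.
compensator_step = sum((p_nu if b else 1 - p_nu) * L_increment(b) for b in (0, 1))

# Kullback-Leibler divergence KL(nu || mu) for the next bit.
kl_step = p_nu * np.log(p_nu / p_mu) + (1 - p_nu) * np.log((1 - p_nu) / (1 - p_mu))

print(compensator_step, kl_step)   # the two values coincide (~0.0823)
```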
-
Titelbaum (2012) introduced a variant of the Sleeping Beauty problem in which a coin is tossed on both Monday and Tuesday, with the Tuesday toss not affecting Beauty’s condition. Titelbaum argues that double halfers are committed to the embarrassing position that Beauty’s credence that today’s coin toss lands heads is greater than 1/2. Pust agrees with the result, but argues that it is not a distinctive embarrassment for halfers. I argue that thirders need not be embarrassed. Double halfers, on the other hand, must hold that Beauty’s evidence is admissible for direct inference with respect to Monday’s coin toss, but not with respect to today’s coin toss. This is embarrassing because (1) a plausible argument exists for the opposite position, and (2) the position conflicts with the central motivation guiding double halfism.
-
It has been argued that, in scientific observations, the theory of the observed source should not be involved in the observation process to avoid circular reasoning and ensure reliable inferences. However, the issue of underdetermination of the source has been largely overlooked. I argue that concerns about circularity in inferring the source stem from the hypothetico-deductive (H-D) method. The epistemic threat, if any, arises not from the theory-laden nature of observation but from the underdetermination of the source by the data, since the data could be explained by proposing incompatible sources for it. Overcoming this underdetermination is key to reliably inferring the source. I propose a bidirectional version of inference to the only explanation as a methodological framework that addresses this challenge while circumventing concerns about theory-ladenness. Nevertheless, fully justifying the viability of the background theoretical framework and its accurate description of the source requires a broader conception of evidence. To this end, I argue that integrating meta-empirical assessment into inference to the only explanation offers a promising strategy, extending the concept of evidence in a justifiable manner.
-
Theories of consciousness are abundant, yet few directly address the structural conditions necessary for subjectivity itself. This paper defends and develops the QBist constraint: the proposal that any conscious system must implement a first-person, self-updating inferential architecture. Inspired by Quantum Bayesianism (QBism), this constraint specifies that subjectivity arises only in systems capable of self-referential probabilistic updating from an internal perspective. The QBist constraint is not offered as a process theory, but as a metatheoretical adequacy condition: a structural requirement which candidate theories of consciousness must satisfy if they are to explain not merely behaviour or information processing, but genuine subjectivity. I assess five influential frameworks — the Free Energy Principle (FEP), Predictive Processing (PP), Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Higher-Order Thought (HOT) theory — and consider how each fares when interpreted through the lens of this constraint. I argue that the QBist constraint functions as a litmus test for process theories, forcing a shift in focus: from explaining cognitive capacities to specifying how an architecture might realize first-personal belief updating as a structural feature.
-
Hannah Rubin, Mike D. Schneider, Remco Heesen, Alejandro Bortolus, Emelda E. Chukwu, Chad L. Hewitt, Ricardo Kaufer, Veli Mitova, Anne Schwenkenbecher, Evangelina Schwindt, Temitope O. Sogbanmu, Helena Slanickova, Katie Woolaston
Knowledge brokers, usually conceptualized as passive intermediaries between scientists and policymakers in evidence-based policymaking, are understudied in philosophy of science. Here, we challenge that usual conceptualization. As agents in their own right, knowledge brokers have their own goals and incentives, which complicate the effects of their presence at the science-policy interface. We illustrate this in an agent-based model and suggest several avenues for further exploration of the role of knowledge brokers in evidence-based policy.
-
We develop a theory of policy advice that focuses on the relationship between the competence of the advisor (e.g., an expert bureaucracy) and the quality of advice that the leader may expect. We describe important tensions between these features present in a wide class of substantively important circumstances. These tensions point to the presence of a trade-off between receiving advice more often and receiving more informative advice. The optimal realization of this trade-off for the leader sometimes induces her to prefer advisors of limited competence – a preference that, we show, is robust under different informational assumptions. We consider how institutional tools available to leaders affect preferences for advisor competence and the quality of advice they may expect to receive in equilibrium.
-
There are two main strands of arguments regarding the value-free ideal (VFI): desirability and achievability (Reiss and Sprenger 2020). In this essay, I will argue for what I will call a compatibilist account of upholding the VFI, focusing on its desirability even if the VFI is unachievable. First, I will explain what the VFI is. Second, I will show that striving to uphold the VFI (desirability) is compatible with the rejection of its achievability. Third, I will demonstrate that the main arguments against the VFI do not refute its desirability. Finally, I will provide arguments on why it is desirable to strive to uphold the VFI even if the VFI is unachievable and show what role it can play in scientific inquiry. There is no single definition of the VFI, yet the most common way to interpret it is that non-epistemic values ought not to influence scientific reasoning (Brown 2024, 2). Non-epistemic values are understood as certain ethical, social, cultural or political considerations. Therefore, it is the role of epistemic values, such as accuracy, consistency, empirical adequacy and simplicity, to be part of and to ensure proper scientific reasoning.
-
Prioritarianism is generally understood as a kind of moral axiology. An axiology provides an account of what makes items, in this case outcomes, good or bad, better or worse. A moral axiology focuses on moral value: on what makes outcomes morally good or bad, morally better or worse. Prioritarianism, specifically, posits that the moral-betterness ranking of outcomes gives extra weight (“priority”) to well-being gains and losses affecting those at lower levels of well-being. It differs from utilitarianism, which is indifferent to the well-being levels of those affected by gains and losses.[1] Although it is possible to construe prioritarianism as a non-axiological moral view, this entry follows the prevailing approach and trains its attention on axiological prioritarianism.
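As a minimal illustration of the contrast, here is a Python sketch using the standard concave-transform formalization of prioritarianism; the formalization and the numbers are assumptions of this example, not text from the entry. Utilitarianism is indifferent between two outcomes with the same total well-being, while the prioritarian value favors the one that is better for the worse off.

```python
import math

def utilitarian_value(outcome):
    # Sum of individual well-being levels; indifferent to how they are distributed.
    return sum(outcome)

def prioritarian_value(outcome, g=math.sqrt):
    # Sum of a strictly concave transform: a unit gain counts for more
    # the worse off its recipient is.
    return sum(g(w) for w in outcome)

# Two outcomes with the same total well-being (100): one equal, one unequal.
equal   = [50, 50]
unequal = [90, 10]

print(utilitarian_value(equal), utilitarian_value(unequal))     # 100 vs 100: a tie
print(prioritarian_value(equal), prioritarian_value(unequal))   # ~14.14 vs ~12.65
```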
-
This paper defends the view that logic gives norms for reasoning. This view is often thought to be problematic given that logic is not itself a theory of reasoning and that valid inferences can lead to silly or pointless beliefs. To defend it, I highlight an overlooked distinction between norms for reasoning and norms for belief. With this distinction in hand, I motivate and defend a straightforward account of how logic gives norms for reasoning, showing that it avoids standard objections. I also show that, given some substantive assumptions, we can offer an attractive account of why logic gives norms for reasoning in the way I propose, and of how it is (also) relevant to norms for belief.