-
moment or not at all. Nonetheless, Lessing thought that there is at the disposal of the poet an indirect means to capture the beauty of material objects. Homer would have put it to good use in the Iliad, where the beauty of Helen of Troy was conveyed not by a description of her beauty-making features, but by a description of the effect of her beauty: “What Homer could not describe in detail he makes us understand by the effect: oh! poets paint for us the pleasure, inclination, love, rapture, which beauty causes, and you will have painted beauty itself” (Lessing 1836 [1766]). At the very least, what this passage makes clear is that
-
at “trolling.” Trolls often post deliberately inflammatory content with the goal of provoking emotional responses. They aim to trick their targets into mistaking them for good faith interlocutors, thereby “baiting” them into responding in an emotional manner. This is typically done for the troll’s own entertainment, as well as the entertainment of anyone who happens to witness the exchange and recognize it as trolling. Some instances of trolling seem mostly harmless, such as when their contents aren’t ethically problematic and no one takes the bait. However, trolling can also be dangerous. For one thing, empirical studies show that racist and misogynistic trolling can be part of a gradual radicalization into extremist or hateful ideologies (Munn 2019; Hoffman et al. 2020; Rauf 2021; Thorleifsson 2022). Furthermore, when problematic trolls are allowed to run amok, online platforms can gradually become cesspools of hateful speech. So, trolling can contribute to the degradation of both individual trolls’ belief systems and broader online environments.
-
Attitude relations such as belief and knowledge are two-place relations between a subject and a property, an abstract object that may vary in truth value across individuals. Lewis famously argued that self-locating attitudes should lead us to reject propositionalism in favour of proprietism, while Stalnaker argued, to the contrary, that the phenomenon of self-locating attitudes does not motivate rejecting propositionalism. In what follows, we’ll argue that there are good reasons to prefer propositionalism to proprietism, and we’ll show that there are natural accounts of self-locating attitudes that one can provide by appeal to the propositional relations of belief and knowledge.
-
This week, 50 category theorists and software engineers working on “safeguarded AI” are meeting in Bristol. They’re being funded by £59 million from ARIA, the UK’s Advanced Research and Invention Agency. …
-
(1) My 8-year-old son asked me last week, “daddy, did you hear that GPT-5 is now out?” So yes, I’m indeed aware that GPT-5 is now out! I’ve just started playing around with it. For detailed reports on what’s changed and how impressive it is compared to previous models, see for example Zvi #1, #2, #3. …
-
Battisti argues that it is morally problematic to use AI tools for improving the quality of a message sent to a romantic partner, as the message may no longer authentically reflect one’s personality. If AI is used in this manner, there is a risk that what Battisti refers to as an “authenticity-based obligation” is violated. According to Battisti, authenticity-based obligations are nontransferable because they are inherently tied to specific people: “[…] the value of the result lies in the person performing the task, that is, in who undertakes the cognitive and emotional process required to bring it about.” While we find the discussion of authenticity-based obligations interesting, we doubt that this is the right criterion to apply in this context, for at least four reasons.
-
I present a heretofore untheorised form of lay science, called extitutional science, whereby lay scientists, by virtue of their collective experience, are able to detect errors committed by institutional scientists and attempt to have them corrected. I argue that the epistemic success of institutional science is enhanced to the extent that it takes up this extitutional criticism. Since this uptake does not occur spontaneously, extitutional interference in the conduct of institutional science is required. I make a proposal for how to secure this epistemically beneficial form of lay interference.
-
We re-examine the old question of the extent to which mathematics may be compared with a game. Mainly inspired by Hilbert and Wittgenstein, our answer is that mathematics is something like a “rhododendron of language games”, where the rules are inferential. The pure side of mathematics is essentially formalist, where we propose that truth is not carried by theorems corresponding to whatever independent reality and arrived at through proof, but is defined by correctness of rule-following (and as such is objective given these rules). Gödel’s theorems, which are often seen as a threat to formalist philosophies of mathematics, actually strengthen our concept of truth. The applied side of mathematics arises from two practices: first, the dual nature of axiomatization as taking from heuristic practices like physics and informal mathematics whilst giving proofs and logical analysis; and second, the ability to use the inferential role of theorems to make “surrogative” inferences about natural phenomena. Our framework is pluralist, combining various (non-referential) philosophies of mathematics.
-
This paper proposes an alternative to standard first-order logic that seeks greater naturalness, generality, and semantic self-containment. The system removes the first-order restriction, avoids type hierarchies, and dispenses with external structures, making the meaning of expressions depend solely on their constituent symbols. Terms and formulas are unified into a single notion of expression, with set-builder notation integrated as a primitive construct. Connectives and quantifiers are treated as operators among others rather than as privileged primitives. The deductive framework is minimal and intuitive, with soundness and consistency established and completeness examined. While computability requirements may limit universality, the system offers a unified and potentially more faithful model of human mathematical deduction, providing an alternative foundation for formal reasoning.
-
I often find myself thinking that the conventional wisdom in moral philosophy gets a lot of things backwards. For example, I’ve previously discussed how deontology is much more deeply self-effacing (making objectively right actions, and not just bungled attempts to act rightly, lamentable) than consequentialism. …
-
Christopher Devlin Brown’s The Hope and Horror of Physicalism works through different ways of understanding the content of physicalism, evaluates the “existential consequences” of physicalism so understood, and attempts to defend one form of physicalism – “Russellian physicalism” – from consciousness-based objections. I first raise some minor-but-not-too-minor concerns about Brown’s historical account of physicalism. Second, I discuss one version of physicalism (the “theory-based version”) that Brown works with in assessing physicalism’s existential consequences. Third, I raise some questions about Brown’s preferred way of understanding physicalism, which he labels “Russellian physicalism”, and which is a version of “via negativa physicalism”. My discussions are offered in a constructive spirit.
-
Some important policies will change future mortality rates (like climate mitigation), change future fertility rates (like public education), or respond to the emerging challenges of global depopulation. Any such policy will change each of the quality of lives, the quantity of lives, and who will live in the future. Hence, to evaluate economic policies, we need to assess both social risk and variable population. A standard principle for economic policy evaluation is Expected Total Utilitarianism, which maximizes the expected value of the sum of individuals’ transformed lifetime well-being. Despite the prominent use in public economics of both additive utilitarianism and expectation-taking under risk, these methods remain questionable in welfare economics, in part because existing axiomatic justifications make strong assumptions (Fleurbaey, 2010; Golosov et al., 2007).
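To make the evaluation rule concrete, here is a minimal sketch of Expected Total Utilitarianism under social risk and variable population. The transform `g` and all numbers are hypothetical illustrations, not taken from the paper: a policy is valued by the expected value of the sum of transformed lifetime well-being across its possible population outcomes.

```python
# Minimal sketch of Expected Total Utilitarianism (ETU).
# g and the numbers below are hypothetical illustrations.

def g(w):
    """Hypothetical concave transform of lifetime well-being."""
    return w ** 0.5

def etu_value(scenarios):
    """scenarios: list of (probability, [lifetime well-being of each life]).
    ETU value = expected value of the sum of transformed well-being."""
    return sum(p * sum(g(w) for w in lives) for p, lives in scenarios)

# A policy can change quality, quantity, and identity of future lives:
policy_a = [(0.5, [4.0, 4.0, 4.0]), (0.5, [1.0, 1.0])]  # risky population size
policy_b = [(1.0, [4.0, 4.0, 1.0])]                     # certain outcome

print(etu_value(policy_a), etu_value(policy_b))  # -> 4.0 5.0
```

Note how the sum ranges over whoever exists in each scenario, so the rule compares outcomes with different populations directly, which is exactly what makes its axiomatic justification contentious.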
-
To celebrate my sons’ graduation from Vanderbilt, I commissioned a custom set of game chips, using images drawn from the bespoke role-playing games we’ve been playing since they were three years old. Since I wanted top quality and consistency, I didn’t use AI. …
-
Recent work on the philosophy of high energy physics experiments has considerably advanced our understanding of their epistemology, for instance concerning measurements by the ATLAS collaboration at the Large Hadron Collider (Beauchemin 2017). In this paper we aim to highlight and analyze complementary low energy ‘tabletop’ experiments in particle (and other kinds of fundamental) physics. In particular, we contrast ATLAS measurements with high precision measurements of the electron magnetic moment. We find, for instance, that the simplicity of the latter experiment allows for uncertainties to be minimized materially, in the very construction of the apparatus. We also sketch how a notion of ‘frugality’ can be used, in light of considerations of simplicity, to understand the value of low energy experiments with respect to the entrenched field of high energy experiment.
-
In a recent paper, Harriet Fagerberg argues that the disease debate in the philosophy of medicine makes little sense as conceptual analysis but instead should proceed on the assumption that disease is a real kind. I propose an alternative view. The history and practice of medicine give us reasons to doubt that the category of disease forms a real kind. Instead, drawing on work by Quill R. Kukla, I argue that the disease debate makes good sense on an understanding of disease as an institutional kind. As well as explaining key features of the disease debate, this can facilitate a philosophical understanding of disease that captures the eclectic scope of medicine and the complex reasons why conditions get classified as diseases.
-
We explore the causes and outcomes of scientific conceptual change using a case study of the development of the individualized niche concept. We outline a framework for characterizing conceptual change that distinguishes between epistemically adaptive and neutral processes and outcomes of conceptual change. We then apply this framework in tracing how the individualized niche concept arose historically out of population niche thinking and how it exhibits plurality within a contemporary biological research program. While the individualized niche concept was developed adaptively to suit new research goals and empirical findings, some of its pluralistic aspects in contemporary research may have arisen neutrally, that is, for non-epistemic reasons. We suggest reasons for thinking that this plurality is unproblematic and may become useful, e.g., when it allows for the concept to be applied across differing research contexts.
-
Scientific metaphysics can inform discussions of scientific representation in a number of ways. For instance, even a relatively generic commitment to some minimal form of scientific realism suggests that the targets of scientific representations should serve as source material for one’s scientifically-informed ontology. Historical connections between commitments to realism and commitments to reductive approaches in scientific metaphysics further inform a persistent strain of reductive approach to generating scientific representations. In this discussion, I examine two recent challenges to reductive scientific metaphysics from philosophers working across a variety of scientific domains and philosophical traditions: C. Kenneth Waters’ “No General Structure Thesis” and Robert Batterman’s account of scientific metaphysics built on many-body physics. Each of these accounts has what I shall call “anti-fundamentalist” leanings: they reject the premise that fundamental physical theory is the appropriate or best source material for scientific metaphysics. Following Waters, I contrast these leanings with the methodological approach of contemporary structural realism. Additionally, both Waters’ and Batterman’s accounts foreground the role of scale in defining ontological categories, and both reject the reductionist ideal that the stuff at the smallest scale is the most fundamental, the most general, or the most real. I discuss the implications for scientific representation imparted by anti-fundamentalist approaches that emphasize the role of scale in building a scientifically-informed ontology.
-
The meta-inductive approach to induction justifies induction by proving its optimality. The argument for the optimality of induction proceeds in two steps. The first ‘a priori’ step intends to show that meta-induction is optimal, and the second ‘a posteriori’ step intends to show that meta-induction selects object-induction in our world. I critically evaluate the second step and raise two problems: the identification problem and the indetermination problem. In light of these problems, I assess the prospects of any meta-inductive approach to induction.
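For readers unfamiliar with the first, ‘a priori’ step: a meta-inductive strategy predicts by weighting competing prediction methods according to their past success. The sketch below uses a simple success-weighted average with an illustrative scoring rule; the attractivity weighting in the actual meta-inductive literature is more refined.

```python
# Minimal sketch of a meta-inductive prediction strategy: weight each
# competing method by its observed track record. The scoring rule
# (1 - absolute error, for outcomes in [0, 1]) is an illustrative choice.

def meta_predict(history, predictions):
    """history: list of (method_predictions, outcome) from past rounds.
    predictions: the competing methods' current predictions.
    Returns the success-weighted average prediction."""
    n = len(predictions)
    scores = [0.0] * n
    for past_preds, outcome in history:
        for k, p in enumerate(past_preds):
            scores[k] += 1.0 - abs(p - outcome)  # higher = more successful
    total = sum(scores)
    if total == 0:  # no track record yet: fall back to the plain average
        return sum(predictions) / n
    return sum(s * p for s, p in zip(scores, predictions)) / total

# Method 0 (say, object-induction) has been right twice; method 1 never.
history = [([1.0, 0.0], 1.0), ([1.0, 0.0], 1.0)]
print(meta_predict(history, [1.0, 0.0]))  # -> 1.0
```

The ‘a posteriori’ step then amounts to the empirical claim that, in our world, such a strategy ends up placing its weight on object-induction, which is precisely the step the paper interrogates.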
-
While causal models are introduced very much like a formal logical system, they have not yet been taken to the level of a proper logic of causal reasoning with structural equations. In this paper, we furnish causal models with a distinct deductive system and a corresponding model-theoretic semantics. Interventionist conditionals will be defined in terms of inferential relations in this logic of causal models.
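To illustrate the objects this logic ranges over: a causal model is a system of structural equations, an intervention replaces one equation with a constant, and an interventionist conditional is evaluated by solving the intervened model. The sketch below is a generic illustration of this standard picture, not the paper's own deductive system; the variables and equations are hypothetical.

```python
# Minimal sketch of structural-equation causal models and interventions.
# The model (S -> T -> C) and its equations are illustrative.

def solve(equations):
    """Evaluate an acyclic system of structural equations in listed order."""
    values = {}
    for var, fn in equations:
        values[var] = fn(values)
    return values

def do(equations, var, value):
    """Intervention do(var = value): replace var's equation with a constant."""
    return [(v, (lambda _vals, x=value: x) if v == var else fn)
            for v, fn in equations]

model = [
    ("S", lambda v: 1),            # exogenous: smoking
    ("T", lambda v: v["S"]),       # tar deposits track smoking
    ("C", lambda v: 0.8 * v["T"]), # cancer risk tracks tar
]

# Interventionist conditional "if T were 0, C would be 0":
# check the consequent in the model intervened on with do(T = 0).
print(solve(do(model, "T", 0))["C"])  # -> 0.0
```

On the interventionist reading, the conditional holds because the intervened model assigns `C = 0.0`, even though the unintervened model yields `C = 0.8`.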
-
That science is value-dependent has been taken to raise problems for the democratic legitimacy of scientifically-informed public policy. An increasingly common solution is to propose that science itself ought to be ‘democratised.’ Of the literature aiming to provide principled means of facilitating this, most has been largely concerned with developing accounts of how public values might be identified in order to resolve scientific value-judgements. Through a case-study of the World Health Organisation’s 2009 redefinition of ‘pandemic’ in response to H1N1, this paper proposes that this emphasis might be unhelpfully pre-emptive, pending more thorough consideration of the question of whose values different varieties of epistemic risk ought to be negotiated in reference to. A choice of pandemic definition inevitably involves the consideration of a particular variety of epistemic risk, described here as ontic risk. In analogy with legislative versus judicial contexts, I argue that the democratisation of ontic risk assessments could bring inductive risk assessments within the scope of democratic control without necessitating that those inductive risk assessments be independently subject to democratic processes. This possibility is emblematic of a novel strategy for mitigating the opportunity costs that successful democratisation would incur for scientists: careful attention to the different normative stakes of different epistemic risks can provide principled grounds on which to propose that the democratisation of science need only be partial.
-
The self represents a multifactorial entity made up of several interrelated constructs. It is suggested that self-talk orchestrates interactions between most self-processes—especially those entailing self-reflection. A review of the literature is performed, specifically looking for representative studies (n = 12) presenting correlations between self-report measures of self-talk and self-reflective processes. Self-talk questionnaires include the Self-Talk Scale, the Varieties of Inner Speech Questionnaire, the General Inner Speech Questionnaire, and the Inner Speech Scale. The main self-reflection measures are the Rumination and Reflection Questionnaire, the Self-Consciousness Scale, and the Philadelphia Mindfulness Scale. Most measures comprise subscales which are also discussed. Findings include: (1) positive significant correlations between self-talk used for self-management/assessment and self-reflection, arguably because the latter entails self-regulation, which itself relies on self-directed speech; (2) positive significant correlations between critical self-talk and self-rumination, as both may recruit negative, repetitive, and uncontrollable self-thoughts; (3) negative associations between self-talk and the self-acceptance aspect of mindfulness, likely because thinking about oneself in the present in a non-judgmental way is best achieved by repressing one’s inner voice. Limitations are discussed, including the selective nature of the reported correlations. Experimentally manipulating self-talk would make it possible to further explore causal associations with self-processes.
-
The de Broglie-Bohm pilot-wave theory asserts that a complete characterization of an N-particle system is given by its wave function together with the (at-all-times-defined) positions of the particles, with the wave function always satisfying the Schrödinger equation and the positions evolving according to the deterministic “guiding equation”. A complete agreement with the predictive apparatus of standard quantum mechanics, including the uncertainty principle and the probabilistic Born rule, is then said to emerge from these equations, without having to confer any special status to measurements or observers. Two key elements behind the proof of this complete agreement are absolute uncertainty and the POVM theorem. The former involves an alleged “naturally emerging, irreducible limitation on the possibility of obtaining knowledge within pilot-wave theory” and the latter establishes that the outcome distributions of all measurements are described by POVMs. Here, we argue that the derivations of absolute uncertainty and the POVM theorem depend upon the questionable assumption that “information is always configurationally grounded”. We explain in detail why the offered rationale behind such an assumption is deficient and explore the consequences of having to let go of it.
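For reference, the two dynamical laws mentioned in the abstract take the following standard form in the pilot-wave literature (notation ours, not quoted from the paper): the wave function obeys the Schrödinger equation, and the actual configuration (Q_1, …, Q_N) evolves by the guiding equation, with the Born rule ρ = |ψ|² for outcome statistics.

```latex
% Schrödinger equation for the N-particle wave function
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left( -\sum_{k=1}^{N} \frac{\hbar^{2}}{2 m_{k}} \nabla_{k}^{2} + V \right) \psi ,
\qquad
% deterministic guiding equation for the particle positions
\frac{\mathrm{d} Q_{k}}{\mathrm{d} t}
  = \frac{\hbar}{m_{k}}\,
    \operatorname{Im}\!\left( \frac{\nabla_{k} \psi}{\psi} \right)
    \Bigg|_{(Q_{1},\dots,Q_{N})} .
```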
-
In John Norton’s Material Theory of Induction (MTI), background facts provide the warrant for inductive inference and determine evidential relevance. Replication, however, is excluded as a principle of inductive logic. While Norton argues replication lacks the precision and methodological clarity to serve as a material principle of inference, I argue that replication nonetheless functions as an epistemic principle of induction. I examine how replication contributes to epistemic justification within both externalist and internalist frameworks and show that its role extends beyond procedural repetition. Replication acts as a reliable belief-forming process for identifying stable facts and inferences. This reframes MTI as a theory shaped not only by local facts but by how scientists determine which facts can function as background warrant.
-
Daniel Dennett’s view about consciousness in nonhuman animals has two parts. One is a methodological injunction that we rely on our best theory of consciousness to settle that issue, a theory that must initially work for consciousness in humans. The other part is Dennett’s application of his own theory of consciousness, developed in Consciousness Explained (1991), which leads him to conclude that nonhuman animals are likely never in conscious mental states. I defend the methodological injunction as both sound and important, and argue that the alternative approaches that dominate the literature are unworkable. But I also urge that Dennett’s theory of consciousness and his arguments against conscious states in nonhuman animals face significant difficulties. Those difficulties are avoided by a higher-order-thought theory of consciousness, which is close to Dennett’s theory, and provides leverage in assessing which kinds of mental state are likely to be conscious in nonhuman animals. Finally, I describe a promising experimental strategy for showing that conscious states do occur in some nonhuman animals, which fits comfortably with the higher-order-thought theory but not with Dennett’s.
-
Topological Data Analysis (TDA) is a relatively recent method of data analysis based on the mathematical theory of persistent homology [36], [30], [14]. TDA has proved effective in various fields of data-driven research, including the life sciences and biomedical research. As the popular idiom goes, TDA helps to identify the shape of data, which turns out to be in many ways informative. But what precisely can one learn from such shapes? How does this method apply across different scientific disciplines and practical tasks? Sarita Rosenstock in her recent article [42] provided a very valuable presentation of TDA for a general philosophical readership and explored the above epistemological questions in general terms. The present work extends Rosenstock’s study in three different ways. First, it broadens the theoretical context of the discussion by bringing in some related epistemological problems concerning today’s data analysis and data-driven research (Section 2). Second, it brings the epistemological discussion of TDA into a wider historical context by pointing to some relevant earlier developments in pure and applied mathematics (Sections 3, 4). Finally, the present chapter focuses on applications of TDA in biomedical research and tests the theoretical epistemological conclusions obtained in the earlier sections of this work against some concrete examples (Section 6).
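To make “the shape of data” concrete: the simplest case of persistent homology tracks connected components (dimension 0) as one grows balls around the data points, recording when each component is born and when it merges into another. The self-contained sketch below does this for points on a line using union-find; the data are illustrative, and real TDA libraries (e.g. GUDHI, Ripser) handle higher-dimensional features such as loops and voids.

```python
# Minimal sketch of 0-dimensional persistent homology (connected
# components only) for 1-d points, via union-find on sorted distances.

def persistence_0d(points):
    """Return (birth, death) pairs for the components of the
    Vietoris-Rips filtration of points on a line."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Process pairwise distances in increasing order of scale.
    edges = sorted((abs(points[i] - points[j]), i, j)
                   for i in range(len(points))
                   for j in range(i + 1, len(points)))
    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:               # two components merge at scale d:
            parent[ri] = rj
            pairs.append((0.0, d)) # one of them (born at 0) dies here
    pairs.append((0.0, float("inf")))  # the last component never dies
    return pairs

# Two well-separated clusters: the long-lived bar (death 9.0) reveals them.
print(persistence_0d([0.0, 1.0, 10.0, 12.0]))
# -> [(0.0, 1.0), (0.0, 2.0), (0.0, 9.0), (0.0, inf)]
```

The output is a “barcode”: short bars are noise-scale components, while the bar persisting until scale 9.0 records that the data really consist of two clusters.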
-
In Part 9 we saw, loosely speaking, that the theory of a hydrogen atom is equivalent to the theory of a massless left-handed spin-½ particle in the Einstein universe—a static universe where space is a 3-sphere. …
-
We present an account of how idealised models provide scientific understanding that is based on the notion of stability: a model provides understanding of a target behaviour when both the model and the target’s perfect model are in a class of models over which that behaviour is stable. The class is characterised in terms of what we call the model’s noetic core, which contains the features that are indispensable to both the model’s and the target’s behaviour. The account is factivist because it insists that models must get right those aspects of the target that they aim to understand, but it disagrees with extant factivist accounts about how models achieve this.
-
As part of the summer break, I’m republishing old essays that may be of interest to new subscribers. This post was originally published on March 27, 2024. If you haven’t already, do not hesitate to subscribe to receive free essays on economics, philosophy, and liberal politics in your mailbox! …
-
Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. …
-
- There is a “minimal humanly observable duration” (mhod) such that a human cannot have a conscious state—say, a pain—shorter than an mhod, but can have a conscious state that’s an mhod long. The “cannot” here is nomic possibility rather than metaphysical possibility. …