1.
    Dietrich and List on reasons. Posted on Friday, 03 Feb 2023. Let's return to my recent explorations into the formal structure of reasons. One important approach that I haven't talked about yet is that of Dietrich and List, described in Dietrich and List (2013a), Dietrich and List (2013b), and Dietrich and List (2016). …
    Found 1 hour, 59 minutes ago on wo's weblog
  2.
    Some claims of impossibility proofs in physics are known to harbour unjustified assumptions. In this paper, I show that Bell’s theorem [1] against local hidden variable theories completing quantum mechanics is no exception. It is no different, in this respect, from von Neumann’s theorem against all hidden variable theories [2], or the Coleman-Mandula theorem overlooking the possibilities of supersymmetry [3]. The implicit and unjustified assumptions underlying the latter two theorems seemed so innocuous that they escaped notice for decades. By contrast, Bell’s theorem has faced skepticism and challenges by many from its very inception (cf. footnote 1 in [4]), including by me [4–15], because it depends on a number of questionable implicit and explicit physical assumptions that are not difficult to recognize [9, 15]. In what follows, I bring out one such assumption and demonstrate that Bell’s theorem is based on a circular argument [8]. It unjustifiably assumes the additivity of expectation values for dispersion-free states of hidden variable theories for non-commuting observables involved in the Bell-test experiments [16], which is tautologous to assuming the bounds of ±2 on the Bell-CHSH sum of expectation values. It thus assumes in a different guise what it sets out to prove. As a result, what is ruled out by Bell-test experiments is not local realism but additivity of expectation values, which does not hold for non-commuting observables in dispersion-free states of hidden variable theories to begin with.
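    For reference, the Bell-CHSH sum of expectation values invoked here is standardly written (the notation is mine, not the abstract's) as
    \[ S = E(a, b) + E(a, b') + E(a', b) - E(a', b'), \]
    for detector settings a, a' and b, b'; the bound the author says is assumed rather than derived is \( |S| \le 2 \).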
    Found 2 hours, 25 minutes ago on PhilSci Archive
  3.
    This passage, confounding at first blush, contains a lot that is crucial to an understanding of Kant's project in the Deduction, and beyond. It seems to suggest circularity: combination or synthesis is the representation of the very unity that first brings forth synthesis. Synthesis is the central notion that mediates unity and plurality from early on in Kant's thought. It is therefore key to the analysis of knowledge of particulars. I provide some brief historical background from Kant's early work. What I want to do in this paper is to examine the following claims relating to this statement: (1) Combination (or synthesis) and unity, and so representation of unity and unity, are equiprimordial, co-determining features of one multifaceted original act of synthesis of the understanding.
    Found 2 hours, 48 minutes ago on PhilPapers
  4.
    The paper proposes a re-assessment of Reichenbach's 'causal' theory of time. Reichenbach's version of the theory, first proposed in 1921, is interesting because it is one of the first attempts to construct a causal theory as a relational theory of time, which fully takes the results of the Special theory of relativity into account. The theory derives its name from the cone structure of Minkowski space-time, in particular the emission of light signals. Reichenbach first defines an 'order' of time, a 'before-after' relationship between mechanical events. In his later work, he comes to the conclusion that the 'order' of time needs to be distinguished from the 'direction' of time. He therefore abandons the sole focus on light geometry and turns to Boltzmann's statistical version of thermodynamics.
    Found 3 hours, 51 minutes ago on PhilSci Archive
  5.
    It is common in machine-learning research today for scientists to design and train models to perform cognitive capacities, such as object classification, reinforcement learning, navigation and more. Neuroscientists compare the processes of these models with neuronal activity, with the purpose of learning about computations in the brain. These machine-learning models are constrained only by the task they must perform. Therefore, it is a worthwhile scientific finding that the workings of these models correlate with neuronal activity, as several prominent papers reported. This is a promising method for understanding cognition. However, I argue that, to the extent that this method's aim is to explain how cognitive capacities are performed in the brain, it is expected to succeed only when the modeled capacities are such that the brain has a sub-system dedicated to their performance. This is likely to occur when the modeled capacities are the result of a distinct adaptive or developmental process.
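    As a minimal sketch of the kind of comparison described above (the setup, data shapes, and the use of representational similarity analysis are my illustrative assumptions, not details from the paper), one common way to test whether a model's workings correlate with neuronal activity is to correlate the stimulus-by-stimulus similarity structure of a model layer with that of recorded neural responses:

      import numpy as np

      rng = np.random.default_rng(0)
      n_stimuli, n_units, n_neurons = 100, 256, 50

      # Hypothetical data: model-layer activations and neural recordings for the same stimuli.
      model_activations = rng.normal(size=(n_stimuli, n_units))
      neural_responses = rng.normal(size=(n_stimuli, n_neurons))

      def rdm(responses):
          # Representational dissimilarity: upper triangle of (1 - correlation) over stimuli.
          corr = np.corrcoef(responses)              # stimulus-by-stimulus correlation matrix
          upper = np.triu_indices_from(corr, k=1)
          return 1.0 - corr[upper]

      # One number summarizing model-brain representational agreement (near zero for random data).
      agreement = np.corrcoef(rdm(model_activations), rdm(neural_responses))[0, 1]
      print(f"model-brain RDM correlation: {agreement:.3f}")

    On real data, a reliably positive correlation of this sort is the kind of finding the abstract has in mind when it says the workings of task-trained models correlate with neuronal activity.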
    Found 3 hours, 51 minutes ago on PhilSci Archive
  6.
    In recent years, the explanatory term “scaffold” has been gaining prominence in evolutionary biology. This notion has a long history in other areas, in particular, developmental psychology. In this paper, we connect these two traditions and identify a specific type of explanatory strategy shared between them, namely scaffolding explanations. We offer a new definition of “scaffold” anchored in the explanatory practices of evolutionary biologists and developmental psychologists that has yet to be clearly articulated. We conclude by offering a systematic overview of the various dimensions of scaffolding explanations that further suggests both their usefulness and range of application.
    Found 3 hours, 52 minutes ago on PhilSci Archive
  7.
    The deep connection between entropy and information is discussed in terms of both classical and quantum physics. The mechanism of information transfer between systems via entanglement is explored in the context of decoherence theory. The concept of entropic time is then introduced on the basis of information acquisition, which is argued to be effectively irreversible and consistent with both the Second Law of Thermodynamics and our psychological perception of time. This is distinguished from the notion of parametric time, which serves as the temporal parameter for the unitary evolution of a physical state in non-relativistic quantum mechanics. The atemporal nature of the 'collapse' of the state vector associated with such information gain is discussed in light of relativistic considerations. The interpretation of these ideas in terms of both subjective and objective collapse models is also discussed. It is shown that energy is conserved under subjective collapse schemes whereas, in general, under objective collapse it is not. This is consistent with the fact that the latter is inherently non-unitary and that energy conservation arises out of time-translation symmetry in the first place.
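    For readers wanting the formulas in the background here (a standard gloss, not taken from the paper), the classical and quantum entropies whose connection to information is at issue are
    \[ S_{\text{classical}} = -k_B \sum_i p_i \ln p_i, \qquad S_{\text{quantum}} = -k_B \, \mathrm{Tr}(\rho \ln \rho), \]
    where the classical (Gibbs) expression coincides, up to the constant k_B and the base of the logarithm, with Shannon's information-theoretic entropy.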
    Found 3 hours, 52 minutes ago on PhilSci Archive
  8.
    This article addresses the contributions of the literature on the new mechanistic philosophy of science to the scientific practice of model building in ecology. This is reflected in a one-to-one interdisciplinary collaboration between an ecologist and a philosopher of science during science-in-the-making. We argue that the identification, reconstruction and understanding of mechanisms are context-sensitive, and that in this case study mechanistic modeling played a heuristic rather than a normative role. We expect our study to provide useful epistemic tools for the improvement of empirically driven work in the debates about mechanistic explanation of ecological phenomena.
    Found 3 hours, 52 minutes ago on Federica Russo's site
  9.
    I argue that when we use ‘probability’ language in epistemic contexts—e.g., when we ask how probable some hypothesis is, given the evidence available to us—we are talking about degrees of support, rather than degrees of belief. The epistemic probability of A given B is the mind-independent degree to which B supports A, not the degree to which someone with B as their evidence believes A, or the degree to which someone would or should believe A if they had B as their evidence. My central argument is that the degree-of-support interpretation lets us better model good reasoning in certain cases involving old evidence. Degree-of-belief interpretations make the wrong predictions not only about whether old evidence confirms new hypotheses, but about the values of the probabilities that enter into Bayes’ Theorem when we calculate the probability of hypotheses conditional on old evidence and new background information.
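    For concreteness, the Bayes' Theorem calculation mentioned at the end, for a hypothesis H given old evidence E and background information K, is standardly
    \[ P(H \mid E, K) = \frac{P(E \mid H, K)\, P(H \mid K)}{P(E \mid K)}, \]
    and the author's claim is that the degree-of-support reading gets the values of the probabilities on the right-hand side correct where degree-of-belief readings do not.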
    Found 1 day, 21 hours ago on PhilSci Archive
  10.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and other normative considerations, such as intersectoral vulnerabilities, at critical stages of the whole process from design and implementation to use and assessment. To connect ethics and epistemology of AI, we perform a double shift of focus. First, we move from trusting the output of an AI system to trusting the process that leads to the outcome. Second, we move from expert assessment to more inclusive assessment strategies, aiming to facilitate expert and non-expert assessment. Together, these two moves yield a framework usable for experts and non-experts when they inquire into relevant epistemological and ethical aspects of AI systems. We dub our framework ‘epistemology-cum-ethics’ to signal the equal importance of both aspects. We develop it from the vantage point of the designers: how to create the conditions to internalize values into the whole process of design, implementation, use, and assessment of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and inspectable by every salient actor involved at any moment.
    Found 1 day, 23 hours ago on Federica Russo's site
  11.
    One of the questions central to linguistics and the philosophy of language, unsurprisingly, is: what is meaning? While to an uncritical eye the answer may seem straightforward (perhaps that meanings are in the mind and expressions represent them as symbols), the discussions of linguists and philosophers over the last one and a half centuries have indicated that the situation is far from straightforward. Aside from the representational theories of meaning (which originated as critical elaborations of the intuitions just mentioned), there emerged theories that perhaps did not fit this intuition so well, but which did away with some problems of the representational theories. The so-called use theories of meaning identified the meaning of an expression with the way the expression is used within the relevant language games. This contribution discusses the kind of use theories that see the language games as rule-governed and see meaning as the role conferred on the expression by the corresponding rules.
    Found 2 days, 3 hours ago on Jaroslav Peregrin's site
  12.
    We present epistemic multilateral logic, a general logical framework for reasoning involving epistemic modality. Standard bilateral systems use propositional formulae marked with signs for assertion and rejection. Epistemic multilateral logic extends standard bilateral systems with a sign for the speech act of weak assertion (Incurvati & Schlöder, 2019) and an operator for epistemic modality. We prove that epistemic multilateral logic is sound and complete with respect to the modal logic S5 modulo an appropriate translation. The logical framework developed provides the basis for a novel, proof-theoretic approach to the study of epistemic modality. To demonstrate the fruitfulness of the approach, we show how the framework allows us to reconcile classical logic with the contradictoriness of so-called Yalcin sentences and to distinguish between various inference patterns on the basis of the epistemic properties they preserve.
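    A brief gloss (standard notation, not from the abstract): the Yalcin sentences referred to are epistemic-modal sentences of the form
    \[ p \wedge \Diamond \neg p \]
    ('p, and it might be that not-p'), whose contradictoriness the framework aims to reconcile with classical logic.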
    Found 2 days, 14 hours ago on Luca Incurvati's site
  13.
    This paper explores the prospects of employing a functional approach in order to improve our concept of actual causation. Claims of actual causation play an important role for a variety of purposes. In particular, they are relevant for identifying suitable targets for intervention, and they are relevant for our practices of ascribing responsibility. I argue that this gives rise to the challenge of purpose. The challenge of purpose arises when different goals demand adjustments of the concept that pull in opposing directions. More specifically, I argue that a common distinction between certain kinds of preempted and preempting factors is difficult to motivate from an interventionist viewpoint. This indicates that an appropriately revised concept of actual causation would not distinguish between these two kinds of factors. From the viewpoint of retributivist responsibility, however, the distinction between preempted and preempting factors sometimes is important, which indicates that the distinction should be retained.
    Found 2 days, 14 hours ago on PhilPapers
  14.
    Non-philosophers could be forgiven for thinking that philosophers are a cautious bunch. For philosophers are becoming increasingly preoccupied with prudence. Naturally, however, philosophers have something different in mind than the ordinary sense of ‘prudence’. Rather than denoting the quality of cautiousness, philosophers typically take ‘prudence’ to denote an evaluative or normative standpoint, one whose evaluations are in some sense determined by facts about what is good and bad for us; or, to use some more terminology that is apt to mislead the lay reader, facts about well-being, welfare, or self-interest.
    Found 2 days, 14 hours ago on PhilPapers
  15.
    There is a popular picture of Socrates as someone inviting us to think for ourselves. I was just re-reading the Euthyphro, and realizing that the popular picture is severely incomplete. Recall the setting. …
    Found 2 days, 18 hours ago on Alexander Pruss's Blog
  16.
    In my previous post, I showed that a continuous anti-anti-Bayesian accuracy scoring rule on probabilities defined on a sub-algebra of events satisfying the technical assumption that the full algebra contains an event logically independent of the sub-algebra is proper. …
    Found 2 days, 20 hours ago on Alexander Pruss's Blog
  17.
    Shelly Kagan notices in a recent, influential paper how philosophers of well-being tend to neglect ill-being—the part of the theory of well-being that tells us what is bad in itself for subjects—and explains why we need to give it more attention. This paper does its part by addressing the question, If desire satisfaction is good, what is the corresponding bad? The two most discussed ill-being options for theories on which desire satisfaction is a basic good are the Frustration View and the Aversion View. I aim to show that the Frustration View is more plausible than Kagan and others think; to introduce and evaluate two additional desire-oriented theories of ill-being worth considering, the Pluralist View and the Deflationary View; and to present a new line of argument for the Aversion View.
    Found 3 days, 10 hours ago on Chris Heathwood's site
  18.
    Structured Propositionalism — the view that propositions are mereologically complex structured entities — is the regnant paradigm in the philosophy of language. As Steven Schiffer says, “Virtually every propositionalist accepts [compositionality] and rejects unstructured propositions” (2003: 18), and even the “new” theories of propositions defended by Peter Hanks, Jeffrey King, Scott Soames, and Jeff Speaks take propositions to be complex, structured entities.
    Found 3 days, 12 hours ago on John A. Keller's site
  19.
    Philosophical accounts of the nature of belief, at least in the western tradition, are framed in large part by two ideas. One is that believing is a form of representing. The other is that a belief plays a causal role when a person acts on it. The standard picture of belief as a mental entity with representational properties and causal powers merges these two ideas. We are to think of beliefs as things that are true or false and that interact with desires, intentions, and emotions to bring about rational action. Both ideas, I think, are ill-founded. One effect of abandoning them is a further blurring of the distinction between what is inside and what is outside our minds.
    Found 3 days, 17 hours ago on PhilPapers
  20.
    Precise measurements of well-being would be of profound societal importance. Yet, the sceptical worry that we cannot use social science instruments and tests to measure well-being is widely discussed by philosophers and scientists. A recent and interesting philosophical argument has pointed to the psychometric procedures of construct validation to address this sceptical worry. The argument has proposed that these procedures could warrant confidence in our ability to measure well-being. The present paper evaluates whether this type of argument succeeds. The answer is that it depends on which methodological background assumptions are motivating the sceptical worry to begin with. We show this by doing two things. First, we clarify (a) the different types of well-being theories involved in the science of well-being, and (b) the general methodological dimensions of well-being theorising. Second, we apply these distinctions and argue that construct validation is an unsuccessful response to measurement scepticism if this scepticism is motivated by a form of methodological non-naturalism. In the light of this, the overall point of the paper is that philosophers and scientists, when discussing measurement of well-being, should explicate their deeper methodological commitments. We further suggest that making such commitments explicit might present philosophers with a dilemma.
    Found 3 days, 17 hours ago on PhilPapers
  21.
    Few would say that we are infallible with respect to our own states of mind. Still, our ordinary ways of thinking represent people as having an especially intimate relationship to their own attitudes, sensations, emotions, and so on. This special intimacy is suggested inter alia by the default assumption that when sincerely made, present-tense mental state self-ascriptions – statements like “My feet are aching”, “I want to leave the party”, “I think Justin is bored” – will be true. Following familiar usage, I will call such statements ‘self-ascriptions’, and will label this feature of them their ‘first-person authority’. It is natural to understand first-person authority as bound up with the fact that we are in an especially good position to know about our own states of mind, with the idea being that sincere self-ascriptions express this knowledge. Call this the ‘epistemic approach’ to understanding first-person authority. But there is also a prominent line of thought which rejects the epistemic approach. Expressivists about self-ascriptions think that it fails to capture a second feature of these statements, what I will call (following Bar-On (2004, 6 ff.)) their ‘epistemic asymmetry’ with other kinds of assertion.
    Found 3 days, 17 hours ago on PhilPapers
  22.
    Some belief systems postulate intelligent agents that are deliberately evading detection and thus sabotaging any possible investigation into their existence. These belief systems have the remarkable feature that they predict an absence of evidence in their favor, and even the discovery of counterevidence. Such ‘epistemic black holes’, as we call them, crop up in different guises and in different domains: history, psychology, religion. Because of their radical underdetermination by evidence and their extreme resilience to counterevidence, they develop and evolve in certain predictable ways. Shedding light on how epistemic black holes function can protect us against their allure.
    Found 3 days, 17 hours ago on PhilSci Archive
  23.
    Of course, the networks were smaller then. The learning was shallow, and the language models were little. As the nineties went on, progress in neural networks slowed down, and I got distracted for a few decades by thinking about consciousness. I maintained an amateurish interest in machine learning, and followed the explosion of work in this area over the last ten years as the networks got bigger and more powerful. But it was just this year (2022) that my interests in neural networks and in consciousness began to collide.
    Found 3 days, 17 hours ago on PhilPapers
  24.
    Suppose we have a probability space Ω with algebra F of events, and a distinguished subalgebra H of events on Ω. My interest here is in accuracy H-scoring rules, which take a (finitely additive) probability assignment p on H and assign to it a score function s(p) on Ω, with values in [−∞,M] for some finite M, subject to the constraint that s(p) is H-measurable. …
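    A standard concrete instance of such an accuracy scoring rule (my example, not taken from the post) is a Brier-type score on a finite subalgebra H:
    \[ s(p)(\omega) = -\sum_{A \in H} \big( p(A) - 1_A(\omega) \big)^2, \]
    which is H-measurable (each indicator 1_A is) and is bounded above by the finite value 0, as required.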
    Found 3 days, 18 hours ago on Alexander Pruss's Blog
  25.
    Can reasoning improve moral judgments and lead to moral progress? Pessimistic answers to this question are often based on caricatures of reasoning, weak scientific evidence, and flawed interpretations of solid evidence. In support of optimism, we discuss three forms of moral reasoning (principle reasoning, consistency reasoning, and social proof) that can spur progressive changes in attitudes and behavior on a variety of issues, such as charitable giving, gay rights, and meat consumption. We conclude that moral reasoning, particularly when embedded in social networks with mutual trust and respect, is integral to moral progress.
    Found 3 days, 18 hours ago on Josh May's site
  26.
    One of the most problematic aspects of some scientific practice is a cut-off, say at 95%, for the evidence-based confidence needed for publication. I just realized, with the help of a mention of p-based biases and improper scoring rules somewhere on the web, that what is going on here is precisely a reward structure that fails to yield a proper scoring rule, a proper scoring rule being one on which your current probability assignment is guaranteed to have an optimal expected score according to that very probability assignment. …
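    Stated symbolically, the propriety condition just described says that for every probability assignment p and every alternative assignment q,
    \[ \mathbb{E}_p\!\left[ s(p) \right] \ \ge\ \mathbb{E}_p\!\left[ s(q) \right], \]
    i.e. by the lights of p itself, reporting p has an expected score at least as good as reporting anything else.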
    Found 4 days, 11 hours ago on Alexander Pruss's Blog
  27.
    Scientists imagine constantly. They do this when generating research problems, designing experiments, interpreting data, troubleshooting, drafting papers and presentations, and giving feedback. But when and how do scientists learn how to use imagination? Across six years of ethnographic research, we have found that advanced-career scientists feel comfortable using and discussing imagination, while graduate and undergraduate students of science often do not. In addition, members of marginalized and vulnerable groups tend to express negative views about the strength of their own imaginations, and the general usefulness of imagination in science. After introducing these findings and discussing the typical relationship between a scientist and their imagination across a career, we argue that reducing the number or power of active imaginations in science is epistemically counterproductive, and we suggest a number of ways to bring imagination back into science in a more inclusive way, especially through courses on imagination for scientists, role models, and exemplar-based learning.
    Found 4 days, 14 hours ago on PhilSci Archive
  28.
    Practical ability manifested through robust and reliable task performance, information relevance, and well-structured representation are key factors indicative of understanding in the philosophical literature. We explore these factors in the context of deep learning, identifying prominent patterns in how the results of these algorithms represent information. While the estimation applications of modern neural networks do not qualify as the mental activity of persons, we argue that coupling analyses from philosophical accounts with the empirical and theoretical basis for identifying these factors in deep learning representations provides a framework for discussing and critically evaluating potential machine understanding, given the continually improving task performance enabled by such algorithms.
    Found 4 days, 14 hours ago on PhilSci Archive
  29.
    Strictly speaking, I am not a realist about “the standard model of particle physics.” The standard model is a partial description, a representation that captures some aspects of how reality behaves, and only an approximate representation at that – extremely accurate within a certain domain, but completely inapplicable in others. What I am a realist about is reality, by which I mean the totality of the physical universe. The standard model of particle physics, like general relativity or Newtonian mechanics, provides useful ways of talking about reality in certain circumstances, but I would not describe any of them as fundamentally “real.” The same, I would argue, goes for mathematics generally, about which I am not a realist.
    Found 4 days, 14 hours ago on PhilPapers
  30.
    This paper traces the origin of renormalization group concepts back to two strands of 1950s high energy physics: the causal perturbation theory programme, which gave rise to the Stueckelberg-Petermann renormalization group, and the debate about the consistency of quantum electrodynamics, which gave rise to the Gell-Mann-Low renormalization group. Recognising the different motivations that shaped these early approaches sheds light on the formal and interpretive diversity we find in contemporary renormalization group methods.
    Found 5 days, 6 hours ago on PhilSci Archive