1.
    Here is a widespread but controversial idea: those animals who represent correctly are likely to be selected over those who misrepresent. While various versions of this claim have traditionally been endorsed by the vast majority of philosophers of mind, it has recently been argued that it is just plainly wrong. My aim in this paper is to argue for an intermediate position: the correctness of some but not all representations is indeed selectively advantageous. It is selectively advantageous to have correct representations that are directly involved in bringing about and guiding the organism’s action. I start with the standard objection to the claim that it is selectively advantageous to represent correctly, the ‘better safe than sorry’ argument, and then generalize it with the help of Peter Godfrey-Smith’s distinction between Cartesian and Jamesian reliability and the trade-off between them. This generalized argument rules out a positive answer to our question at least as far as the vast majority of our representational apparatus is concerned.
    Found 13 hours, 2 minutes ago on PhilPapers
  2.
    © 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.
    Found 22 hours, 38 minutes ago on Alasdair Richmond's site
  3.
    It would be hard to overestimate the amount of progress on causation and causal explanation since Woodward’s first book, Making Things Happen (2005). It is unusual in philosophy to pronounce so confidently that a discussion has not merely changed, but genuinely progressed. It’s easy to say that this book has been long anticipated, will be widely read, and will shape the discussion for years to come. Given that, the book’s direction may come as a surprise: current discussions of causation have been moving toward formal methods, machine learning, and automated causal discovery, and this book goes almost completely the other way.
    Found 22 hours, 39 minutes ago on PhilSci Archive
  4.
    Gallow on causal counterfactuals without miracles and backtracking Posted on Friday, 27 Jan 2023. Gallow (2023) spells out an interventionist theory of counterfactuals that promises to preserve two apparently incompatible intuitions. …
    Found 1 day, 22 hours ago on wo's weblog
  5.
    Proponents of an “extended evolutionary synthesis” (EES) criticize standard evolutionary theory on the grounds that it overlooks the causal roles of developmental and ecological phenomena. On this view, processes such as niche construction and phenotypic plasticity are as much causes of adaptive evolution as they are products. By generating variation, as well as biasing evolutionary processes themselves, these phenomena participate with natural selection in episodes of “reciprocal causation.” To ignore the feedback between ecology, development, and evolution in our theoretical synthesis, proponents argue, is to impede biological progress. The way we conceptualize evolution influences the way we investigate it—the questions we ask, the empirical tools we use, and the assumptions we take for granted. Therefore, according to the proponent of an EES, conceptual revision is warranted.
    Found 2 days, 12 hours ago on PhilSci Archive
  6.
    The Hole Argument presents a formidable challenge against spacetime substantivalism. The doctrine of substantivalism, roughly, holds that spacetime exists independently from matter. In the theory of General Relativity (GR), fields are represented as functions f(x) over a base manifold M, so f(p) represents the value of f at point p. In vacuum GR, the sole field is the metric g(x).
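    The abstract’s setup can be put in the standard symbols of the hole argument (a conventional textbook presentation, supplied here rather than drawn from the paper itself):

    ```latex
    % Let d : M \to M be a diffeomorphism that is the identity outside
    % an open region H \subset M (the ``hole'').
    \text{If } R_{\mu\nu}[g] = 0 \text{ on } M, \text{ then also } R_{\mu\nu}[d_{*}g] = 0,
    \qquad \text{yet } (d_{*}g)(p) \neq g(p) \text{ for some } p \in H .
    ```

    If the substantivalist must count \((M, g)\) and \((M, d_{*}g)\) as physically distinct possibilities, the field equations fail to determine the metric inside \(H\), which is the challenge to substantivalism the abstract names.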
    Found 2 days, 13 hours ago on PhilPapers
  7.
    The main idea expressed in this thesis is that phenomenal character can somehow be understood in terms of representational content. This, if true, represents substantial progress toward closing the mind-body explanatory gap: if we can give a naturalistic account of representational content, we need only plug in Intentionalism to get a naturalistic account of phenomenal experience.
    Found 2 days, 19 hours ago on Manolo Martínez's site
  8.
    There is broad agreement among researchers on the senses that sensory systems have evolved to facilitate adaptive responses to the environment. There is also considerable agreement on the issue of how sensory systems promote biologically successful behavior: they do so by conveying information about states of the environment that make a difference to the success of the organism’s outputs. Should we conclude that the senses are similar in nature to our everyday carbon monoxide alarms and smoke detectors, systems limited to the role of conveying information relevant to the user’s practical interests? Or does nature sometimes favor sensory systems more akin to photometers and thermometers designed by physicists to provide a disinterested or detached perspective on the world? To answer this question, we need to get clear about what kinds of problems sensory systems have evolved to confront and how they go about confronting them. The thesis defended in this paper is that, while specialist sensory systems are similar in function to smoke alarms and carbon monoxide detectors, many generalist sensory systems have evolved to impart a more disinterested or objective point of view on the world.
    Found 3 days, 22 hours ago on Todd Ganson's site
  9.
    Determinism is a centrally important notion for physics: it links time to laws and connects events along spatial surfaces to events along the temporal dimension. In the context of space-time theories, failures of determinism have been viewed as pathologies and used to identify superfluous structure. In philosophy, determinism has played its most important role in discussions of free will, where a certain picture of what determinism entails has a strong grip on the imagination. According to that picture, a deterministic universe unfolds with physical necessity from an initial condition that was set long ago. This presents a strong challenge to your sense of agency because it takes two very basic commitments — the idea that the laws of physics place fundamental constraints on what can happen (you throw a ball in the air or set a pendulum in motion and you know exactly what is going to happen) and that the past is fixed — and it uses the laws to leverage the fixity of the past into the fixity of the future. Neither of those commitments seems negotiable. There’s a famous argument that makes this explicit that goes, in simple terms, like this: the past is fixed and out of our control; the laws are fixed and out of our control.
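    The ‘famous argument’ gestured at here is standardly formalized as van Inwagen’s consequence argument; a sketch (supplied here, with \(N\phi\) read as ‘\(\phi\) is true and no one has, or ever had, any choice about \(\phi\)’):

    ```latex
    \begin{array}{ll}
    1.\; N P_{0} & \text{(the past is fixed and out of our control)}\\
    2.\; N L & \text{(the laws are fixed and out of our control)}\\
    3.\; \Box\big((P_{0} \wedge L) \rightarrow F\big) & \text{(determinism: past plus laws entail the future)}\\
    \therefore\; N F & \text{(no one has any choice about the future)}
    \end{array}
    ```

    The leverage the abstract describes is exactly step 3: the laws transmit the fixity of the initial condition \(P_{0}\) to any future fact \(F\).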
    Found 4 days ago on Jenann Ismael's site
  10.
    Breeds are classifications of domestic animals that share, to a certain degree, a set of conventional phenotypic traits. We will defend the claim that, despite classifying biological entities, animal breeds are social kinds. We will adopt Godman’s view of social kinds as classifications with predictive power based on social learning processes. We will show that, although the folk concept of animal breed refers to a biological kind, there is no way to define it. The expert definitions of breeds are instead based on socially learnt conventions and skills (artificial selection), yielding groupings in which scientific predictions are possible. We will discuss in what sense breeds are social, but not human, kinds and in what sense the concept of a breed is necessary to make them real.
    Found 5 days, 4 hours ago on PhilSci Archive
  11.
    Cancer biology features the ascription of normal functions to parts of cancers. At least some ascriptions of function in cancer biology track local normality of parts within the global abnormality of the aberration to which those parts belong. That is, cancer biologists identify as functions activities that, in some sense, parts of cancers are supposed to perform, despite cancers themselves having no purpose. The present paper provides a theory to accommodate these normal function ascriptions—I call it the Modeling Account of Normal Function (MA). MA comprises two claims.
    Found 5 days, 21 hours ago on PhilPapers
  12.
    What differentiates scientific research from non-scientific inquiry? Philosophers addressing this question have typically been inspired by the exalted social place and intellectual achievements of science. They have hence tended to point to some epistemic virtue or methodological feature of science that sets it apart. Our discussion, on the other hand, is motivated by the case of commercial research, which we argue is distinct from (and often epistemically inferior to) academic research. We consider a deflationary view in which science refers to whatever is regarded as epistemically successful, but find that this does not leave room for the important notion of scientific error and fails to capture distinctive social elements of science. This leads us to the view that a demarcation criterion should be a widely upheld social norm without immediate epistemic connotations. Our tentative answer is the communist norm, which calls on scientists to share their work widely for public scrutiny and evaluation.
    Found 5 days, 22 hours ago on Remco Heesen's site
  13.
    Humans can think about possible states of the world without believing in them, an important capacity for high-level cognition. Here we use fMRI and a novel “shell game” task to test two competing theories about the nature of belief and its neural basis. According to the Cartesian theory, information is first understood, then assessed for veracity, and ultimately encoded as either believed or not believed. According to the Spinozan theory, comprehension entails belief by default, such that understanding without believing requires an additional process of “unbelieving”. Participants (N=70) were experimentally induced to have beliefs, desires, or mere thoughts about hidden states of the shell game (e.g., believing that the dog is hidden in the upper right corner). That is, participants were induced to have specific “propositional attitudes” toward specific “propositions” in a controlled way. Consistent with the Spinozan theory, we found that thinking about a proposition without believing it is associated with increased activation of the right inferior frontal gyrus (IFG). This was true whether the hidden state was desired by the participant (due to reward) or merely thought about. These findings are consistent with a version of the Spinozan theory whereby unbelieving is an inhibitory control process. We consider potential implications of these results for the phenomena of delusional belief and wishful thinking.
    Found 6 days, 1 hour ago on Dillon Plunkett's site
  14.
    The philosophy of science can usefully be divided into two broad areas. On the one hand is the epistemology of science, which deals with issues relating to the justification of claims to scientific knowledge. Philosophers working in this area investigate such questions as whether science ever uncovers permanent truths, whether objective decisions between competing theories are possible and whether the results of experiment are clouded by prior theoretical expectations. On the other hand are topics in the metaphysics of science, topics relating to philosophically puzzling features of the natural world described by science. Here philosophers ask such questions as whether all events are determined by prior causes, whether everything can be reduced to physics and whether there are purposes in nature. You can think of the difference between the epistemologists and the metaphysicians of science in this way. The epistemologists wonder whether we should believe what the scientists tell us. The metaphysicians worry about what the world is like, if the scientists are right. Readers will wish to consult chapters on Epistemology (Chapter 1), Metaphysics (Chapter 2), Philosophy of Mathematics (Chapter 11), Philosophy of Social Science (Chapter 12) and Pragmatism (Chapter 36).
    Found 1 week ago on David Papineau's site
  15.
    This paper concerns the recent revival of entity realism. Initiated by the work of Ian Hacking, Nancy Cartwright and Ronald Giere, the project of entity realism has recently been developed by Matthias Egg, Markus Eronen, and Bence Nanay. The paper opens a dialogue among these recent views on entity realism and integrates them into a more advanced view. The result is an epistemological criterion for reality: the property-tokens of a certain type may be taken as real insofar as only they can be materially inferred from the evidence obtained in a variety of independent ways of detection.
    Found 1 week, 1 day ago on PhilPapers
  16.
    Adaptationism is often taken to be the thesis that most traits are adaptations. In order to assess this thesis, it seems we must be able to establish either an exhaustive set of all traits or a representative sample of this set. Either task requires a more systematic and principled way of individuating traits than is currently available. Moreover, different trait individuation criteria can make adaptationism turn out true or false. For instance, individuation based on natural selection may render adaptationism true, but may do so by presupposing adaptationism. In this paper, we show how adaptationism depends on trait individuation and that the latter is an open and unsolved problem.
    Found 1 week, 2 days ago on PhilSci Archive
  17.
    In this paper, I critically assess Mark Richard’s interesting and important development of the claim that linguistic meanings can be fruitfully analogized with biological species. I argue that linguistic meanings qua cluster of interpretative presuppositions need not and often do not display the population-level independence and reproductive isolation that is characteristic of the biological species concept. After developing these problems in some detail, I close with a discussion of their implications for the picture …
    Found 1 week, 2 days ago on PhilPapers
  18.
    Kocurek on chance and would Posted on Friday, 20 Jan 2023. A lot of rather technical papers on conditionals have come out in recent years. Let's have a look at one of them: Kocurek (2022). The paper investigates Al Hajek's argument (e.g. …
    Found 1 week, 2 days ago on wo's weblog
  19.
    In a recent paper, Sprenger (2019) advances what he calls a “suppositional” answer to the question of why a Bayesian agent’s degrees of belief should align with the probabilities found in statistical models. We show that Sprenger’s account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.
    Found 1 week, 2 days ago on PhilSci Archive
  20.
    Human languages vary in terms of which meanings they lexicalize, but there are important constraints on this variation. It has been argued that languages are under pressure to be simple (e.g., to have a small lexicon size) and to allow for informative (i.e., precise) communication with their lexical items, and that which meanings get lexicalized may be explained by languages finding a good way to trade off between these two pressures ([ ] and much subsequent work). However, in certain semantic domains, it is possible to reach very high levels of informativeness even if very few meanings from that domain are lexicalized. This is due to productive morphosyntax, which may allow for the construction of meanings which are not lexicalized. Consider the semantic domain of natural numbers: many languages lexicalize few natural number meanings as monomorphemic expressions, but can precisely convey any natural number meaning using morphosyntactically complex numerals. In such semantic domains, lexicon size is not in direct competition with informativeness. What explains which meanings are lexicalized in such semantic domains? We will propose that in such cases, languages are (near-)optimal solutions to a different kind of trade-off problem: the trade-off between the pressure to lexicalize as few meanings as possible (i.e., to minimize lexicon size) and the pressure to produce as morphosyntactically simple utterances as possible (i.e., to minimize the average morphosyntactic complexity of utterances).
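    The proposed trade-off can be illustrated with a toy model (this is my own illustrative sketch, not the authors’ model; the function names, the greedy additive decomposition, and the weight `alpha` are all assumptions):

    ```python
    # Toy trade-off for number words: a lexicon of monomorphemic numerals
    # vs. the average morphosyntactic complexity of expressing each meaning.
    def morpheme_cost(n, lexicon):
        """Morphemes needed to express n by greedily summing lexicalized numerals."""
        cost, remaining = 0, n
        for base in sorted(lexicon, reverse=True):
            while remaining >= base:
                remaining -= base
                cost += 1
        # inf marks meanings this lexicon cannot express additively
        return cost if remaining == 0 else float("inf")

    def objective(lexicon, needs, alpha=1.0):
        """alpha * lexicon size + average utterance complexity over needed meanings."""
        avg = sum(morpheme_cost(n, lexicon) for n in needs) / len(needs)
        return alpha * len(lexicon) + avg

    needs = range(1, 21)            # meanings speakers need to convey
    small = {1}                     # one lexical item, long utterances
    rich = {1, 2, 3, 4, 5, 10}      # larger lexicon, short utterances
    print(objective(small, needs))  # 1 item + average cost 10.5
    print(objective(rich, needs))   # 6 items, but much lower average cost
    ```

    Both lexicons are fully informative (every meaning in `needs` is expressible precisely), so what varies across them is exactly the lexicon-size versus utterance-complexity trade-off the abstract describes.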
    Found 1 week, 3 days ago on Jakub Szymanik's site
  21.
    Einstein made a distinction between principle theories, like Newtonian mechanics, and constructive theories, like the kinetic theory of gases. Are these two types of theories fundamentally different from each other, or can they be regarded as belonging to just one type? We explore this issue with respect to the theory of scientific study and come to the conclusion that there is only one type of (scientific) theory: a constructive theory is a principle theory with only one principle, which we call a default-principle theory rather than a constructive theory. One reason for regarding constructive theories as default-principle theories is that this provides a natural progression from a default-principle theory to a principle theory as science progresses. It also avoids treating constructive and principle theories as completely distinct entities without any interaction with each other, a view which may hinder scientific progress.
    Found 1 week, 4 days ago on PhilSci Archive
  22.
    Cynthia rises from the couch to go get that beer. If we accept industrial-strength representationalism, in particular the Kinematics and Specificity theses, then there must be a fact of the matter exactly which representations caused this behavior. …
    Found 1 week, 4 days ago on The Splintered Mind
  23.
    We should be dispositionalists rather than representationalists about belief. According to dispositionalism, a person believes when they have the relevant pattern of behavioral, phenomenal, and cognitive dispositions. According to representationalism, a person believes when the right kind of representational content plays the right kind of causal role in their cognition. Representationalism overcommits on cognitive architecture, reifying a cartoon sketch of the mind. In particular, representationalism faces three problems: the Problem of Causal Specification (concerning which specific representations play the relevant causal role in governing any particular inference or action), the Problem of Tacit Belief (concerning which specific representations any one person has stored, among the hugely many approximately redundant possible representations we might have for any particular state of affairs), and the Problem of Indiscrete Belief (concerning how to model gradual belief change and in-between cases of belief). Dispositionalism, in contrast, is flexibly minimalist about cognitive architecture, focusing appropriately on what we do and should care about in belief ascription.
    Found 1 week, 4 days ago on Eric Schwitzgebel's site
  24.
    Mass is defined as the measure of the (experimentally established) resistance a particle offers to its acceleration, and it is also an experimental fact that a particle’s resistance to its acceleration increases as its velocity increases. It follows that, like mass itself, the concept of relativistic mass reflects an experimental fact. This means that the rejection of the relativistic velocity dependence of mass amounts both to rejecting the experimental evidence and to refusing to face and deal with one of the deepest open questions in fundamental physics – the origin and nature of the inertial resistance of a particle to its acceleration, i.e., the origin and nature of its inertial mass.
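    The velocity dependence being defended is the familiar special-relativistic relation (standard textbook form, supplied here for reference, not quoted from the paper):

    ```latex
    m(v) \;=\; \gamma\, m_{0} \;=\; \frac{m_{0}}{\sqrt{1 - v^{2}/c^{2}}},
    \qquad p \;=\; m(v)\, v ,
    ```

    so \(m(v) \to \infty\) as \(v \to c\), matching the observed growth in resistance to acceleration; critics of relativistic mass instead reserve ‘mass’ for the invariant \(m_{0}\) and absorb the factor \(\gamma\) into momentum and energy.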
    Found 1 week, 4 days ago on PhilSci Archive
  25.
    We somewhat provocatively titled our target review article “The Markov Blanket Trick” (Raja et al., 2021) and then went on to elaborate why we thought it was a trick at least as deployed in much of the free energy principle (FEP) literature. But perhaps that was unfair. Perhaps it is instead a treat, a delicious boost of energy that can do real work. In the commentaries on the target review article, we find support for both attitudes. Those who are largely content with framing the FEP and active inference deployment of Markov blankets as a trick are, along with us, Di Paolo and Aguilera, and, maybe to a lesser extent, van Es & Hipólito and Palacios. Those who think of Markov blankets as a treat include Albarracin & Pitilya, Friston, Parr, and Ramstead. For these latter authors, this sort of modeling has the power to make otherwise informal ways of understanding cognition more formal and therefore more useful. In the following sections we will tackle the main topics that appear in the commentaries, with the above framing in mind. Much of our discussion will consist of clarifications, but we will also expand and extend our views in light of the topics raised by the commentators. In particular, we will further establish why, and in what ways, we think the Markov blanket trick is a trick, and doesn’t do what the treat proponents hope.
    Found 1 week, 5 days ago on PhilSci Archive
  26.
    It has become common in foundational discussions to say that we have a variety of possible interpretations of quantum mechanics available to us and therefore we are faced with a problem of underdetermination. In ref [1] Wallace argues that this is not so, because several popular approaches to the measurement problem can’t be fully extended to relativistic quantum mechanics and quantum field theory (QFT), and thus they can’t reproduce many of the empirical phenomena which are correctly predicted by QFT. Wallace thus contends that as things currently stand, only the unitary-only approaches can reproduce all the predictions of quantum mechanics, so at present only the unitary-only approaches are acceptable as solutions to the measurement problem.
    Found 1 week, 6 days ago on PhilSci Archive
  27.
    How should we account for the extraordinary regularity in the world? Humeans and Non-Humeans sharply disagree. According to Non-Humeans, the world behaves in an extraordinarily regular way because of certain necessary connections in nature. However, Humeans have thought that Non-Humean views are metaphysically objectionable. In particular, there are two general metaphysical principles that Humeans have found attractive that are incompatible with all existing versions of Non-Humeanism. My goal in this paper is to develop a novel version of Non-Humeanism that is consistent with (and even entails) both of these general metaphysical principles. By endorsing such a view, one can have the explanatory benefits of Non-Humeanism while at the same time avoiding two of the major metaphysical objections towards Non-Humeanism.
    Found 1 week, 6 days ago on David Builes's site
  28.
    In [1] it is claimed that, based on radiation emission measurements described in [2], a certain “variant” of the Orch OR theory has been refuted. I agree with this claim. However, the significance of this result for Orch OR per se is unclear. After all, the refuted “variant” was never advocated by anyone, and it contradicts the views of Hameroff and Penrose (hereafter: HP), who invented Orch OR [3].
    Found 2 weeks ago on Kelvin J. McQueen's site
  29.
    Despite the once-common idea that a universal ideography would have numerous advantages, attempts to develop such ideographies have failed. Here, we make use of the biological idea of fitness landscapes to help us understand the non-evolution of such a universal ideographic code as well as how we might reach this potential global fitness peak in the design space.
    Found 2 weeks, 2 days ago on PhilSci Archive
  30.
    Simply stated, this book bridges the gap between statistics and philosophy. It does this by delineating the conceptual cores of various statistical methodologies (Bayesian/frequentist statistics, model selection, machine learning, causal inference, etc.) and drawing out their philosophical implications. Portraying statistical inference as an epistemic endeavor to justify hypotheses about a probabilistic model of a given empirical problem, the book explains the role of ontological, semantic, and epistemological assumptions that make such inductive inference possible. From this perspective, various statistical methodologies are characterized by their epistemological nature: Bayesian statistics by internalist epistemology, classical statistics by externalist epistemology, model selection by pragmatist epistemology, and deep learning by virtue epistemology. Another highlight of the book is its analysis of the ontological assumptions that underpin statistical reasoning, such as the uniformity of nature, natural kinds, real patterns, possible worlds, causal structures, etc. Moreover, recent developments in deep learning indicate that machines are carving out their own “ontology” (representations) from data, and better understanding this—a key objective of the book—is crucial for improving these machines’ performance and intelligibility.
    Found 2 weeks, 2 days ago on Jun Otsuka's site