1. 44707.905639
    A high heritability estimate usually corresponds to a situation where trait variation is largely caused by genetic variation. However, in some cases of gene-environment covariance, causal intuitions about the sources of trait difference can vary, leading experts to disagree about how the heritability estimate should be interpreted. We argue that the source of contention in these cases is an inconsistency in the interpretation of the concepts ‘genotype’, ‘phenotype’ and ‘environment’. We propose an interpretation of these terms under which trait variance initially caused by genetic variance is subsumed into the heritability estimate for all cases of gene-environment covariance.
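The variance decomposition behind this debate can be made concrete. As a rough sketch (my illustration, not the paper's formalism), the standard quantitative-genetics identity Var(P) = Var(G) + Var(E) + 2·Cov(G, E) shows where gene-environment covariance enters a heritability estimate:

```python
# Sketch (not from the paper): standard quantitative-genetics variance
# decomposition with a gene-environment covariance term.
# Phenotypic variance: Var(P) = Var(G) + Var(E) + 2*Cov(G, E).
# Broad-sense heritability H2 = Var(G) / Var(P); the paper's question is
# whether trait variance arising via Cov(G, E) should count toward Var(G).

def phenotypic_variance(var_g, var_e, cov_ge):
    return var_g + var_e + 2 * cov_ge

def heritability(var_g, var_e, cov_ge):
    return var_g / phenotypic_variance(var_g, var_e, cov_ge)

# With no G-E covariance, heritability is the familiar ratio:
print(heritability(4.0, 4.0, 0.0))   # 0.5
# Positive covariance inflates Var(P), lowering the naive estimate
# unless the covariance term is attributed to the genetic side:
print(heritability(4.0, 4.0, 2.0))
```

The disagreement the abstract describes is, on this sketch, a disagreement about which side of the ledger the 2·Cov(G, E) term belongs to.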
    Found 12 hours, 25 minutes ago on Pierrick Bourrat's site
  2. 44821.905699
    The issue of independent evidence is of central importance to hypothesis testing in evolutionary biology. Suppose you wanted to test the hypothesis that long fur is an adaptation to cold climate and short fur is an adaptation to warm climate. You look at 20 bear species; 10 live in a cold climate and have long fur, and 10 live in a warm climate and have short fur. Is there any reason to think that the data do not confirm the adaptive hypothesis? One worry is that the species in each group resemble each other merely because they inherited their fur length from a common ancestor of the group (and that the temperatures experienced by ancestors and descendants are similar). This influence of ancestor on descendant is often called phylogenetic inertia (e.g., see Harvey and Pagel 1991).
    Found 12 hours, 27 minutes ago on Elliott Sober's site
  3. 44883.905725
    There are many reasons for objecting to quantifying the ‘proof beyond reasonable doubt’ standard of criminal law as a percentage probability. They are divided into ethical and policy reasons, on the one hand, and reasons arising from the nature of logical probabilities, on the other. It is argued that these reasons are substantial and suggest that the criminal standard of proof should not be given a precise number. But those reasons do not rule out a minimal imprecise number. ‘Well above 80%’ is suggested as a standard, implying that any attempt by a prosecutor or jury to take the ‘proof beyond reasonable doubt’ standard to be 80% or less should be ruled out as a matter of law.
    Found 12 hours, 28 minutes ago on James Franklin's site
  4. 77661.905749
    Some vegetative state patients show fMRI responses similar to those of healthy controls when instructed to perform mental imagery tasks. Many authors have argued that this provides evidence that such patients are in fact conscious, as response to commands requires intentional agency. I argue for an alternative reading, on which responsive patients have a deficit similar to that seen in severe forms of akinetic mutism. Akinetic mutism is marked by the inability to form and maintain intentions to act. Responsive patients are likely still conscious. However, the route to this conclusion does not support attributions of intentional agency. I argue that aspects of consciousness, rather than broad diagnostic categories, are the more appropriate target of empirical investigation. Investigating aspects of consciousness provides a better method for investigating profound disorders of consciousness.
    Found 21 hours, 34 minutes ago on PhilPapers
  5. 87368.905773
    Charlie Dunbar Broad (1887–1971) was an English philosopher who for most of his life was associated with Trinity College, Cambridge. Broad’s early interests were in science and mathematics. Despite being successful in these, he came to believe that he would never be a first-rate scientist, and turned to philosophy. Broad’s interests were exceptionally wide-ranging. He devoted his philosophical acuity to the mind-body problem; the nature of perception, memory, introspection, and the unconscious; and the nature of space, time and causation. He also wrote extensively on the philosophy of probability and induction, ethics, the history of philosophy and the philosophy of religion.
  6. 89023.905789
    After a sketch of the optimism and high aspirations of History and Philosophy of Science when I first joined the field in the mid 1960s, I go on to describe the disastrous impact of "the strong programme" and social constructivism in history and sociology of science. Despite Alan Sokal's brilliant spoof article, and the "science wars" that flared up partly as a result, the whole field of Science and Technology Studies (STS) is still adversely affected by social constructivist ideas. I then go on to spell out how in my view STS ought to develop. It is, to begin with, vitally important to recognize the profoundly problematic character of the aims of science. There are substantial, influential and highly problematic metaphysical, value and political assumptions built into these aims. Once this is appreciated, it becomes clear that we need a new kind of science which subjects problematic aims - problematic assumptions inherent in these aims - to sustained imaginative and critical scrutiny as an integral part of science itself. This needs to be done in an attempt to improve the aims and methods of science as science proceeds. The upshot is that science, STS, and the relationship between the two, are all transformed. STS becomes an integral part of science itself. And becomes a part of an urgently needed campaign to transform universities so that they become devoted to helping humanity create a wiser world.
    Found 1 day ago on PhilSci Archive
  7. 89056.905804
    Kantian philosophy of space, time and gravity is significantly affected in three ways by particle physics. First, particle physics deflects Schlick’s General Relativity-based critique of synthetic a priori knowledge. Schlick argued that since geometry was not synthetic a priori, nothing was—a key step toward logical empiricism. Particle physics suggests a Kant-friendlier theory of space-time and gravity presumably approximating General Relativity arbitrarily well, massive spin-2 gravity, while retaining a flat space-time geometry that is indirectly observable at large distances. The theory’s roots include Seeliger and Neumann in the 1890s and Einstein in 1917 as well as 1920s-30s physics. Such theories have seen renewed scientific attention since 2000 and especially since 2010 due to breakthroughs addressing early 1970s technical difficulties.
    Found 1 day ago on PhilSci Archive
  8. 89097.905819
    The paper aims to show how mathematical practice, in particular practice admitting visual representations, can lead to new mathematical results. The argument is based on a case study from a relatively recent and promising mathematical subject—geometric group theory. The paper discusses how the representation of groups by Cayley graphs made it possible to discover new geometric properties of groups.
    Found 1 day ago on PhilSci Archive
  9. 89141.905834
    The idea that a serious threat to scientific realism comes from unconceived alternatives has been proposed by van Fraassen, Sklar, Stanford and Wray, among others. Peter Lipton’s critique of this threat from underconsideration is examined briefly in terms of its logic and its applicability to the case of space-time and particle physics. The example of space-time and particle physics indicates a generic heuristic for quantitative sciences for constructing potentially serious cases of underdetermination, involving a one-parameter family of rivals T_m (m real and small) that work as a team rather than as a single rival against the default theory T. In important examples this new parameter has a physical meaning (e.g., particle mass) and makes a crucial conceptual difference, shrinking the symmetry group and in some cases putting gauge freedom, formal indeterminism vs. determinism, the presence of the hole argument, etc., at risk. Methodologies akin to eliminative induction or tempered subjective Bayesianism are more demonstrably reliable than the custom of attending only to “our best theory”: they can lead either to a serious rivalry or to improved arguments for the favorite theory. The example of General Relativity (massless spin-2 in particle physics terminology) vs. massive spin-2 gravity, a recent topic in the physics literature, is discussed. Arguably the General Relativity and philosophy literatures have ignored the most serious rival to General Relativity.
    Found 1 day ago on PhilSci Archive
  10. 96903.905849
    Okasha, in *Evolution and the Levels of Selection*, convincingly argues that two rival statistical decompositions of covariance, namely contextual analysis and the neighbour approach, are better causal decompositions than the hierarchical Price approach. However, he claims that this result cannot be generalized in the special case of soft selection and argues that the Price approach represents in this case a better option. He provides several arguments to substantiate this claim. In this paper, I demonstrate that these arguments are flawed and argue that neither the Price equation nor the contextual and neighbour partitionings sensu Okasha are adequate causal decompositions in cases of soft selection. The Price partitioning is generally unable to detect cross-level by-products and this naturally also applies to soft selection. Both contextual and neighbour partitionings violate the fundamental principle of determinism that the same cause always produces the same effect. I argue that a fourth partitioning widely used in the contemporary social sciences, under the generic term of ‘hierarchical linear model’ and related to contextual analysis understood broadly, addresses the shortcomings of the three other partitionings and thus represents a better causal decomposition.
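For readers unfamiliar with the Price approach discussed here, a minimal numeric sketch (my own toy example and code, not Okasha's or the paper's) of the Price equation's covariance partition:

```python
# Toy illustration of the Price equation's covariance partition
# (my numbers and code, not Okasha's or the paper's):
#   mean trait change = Cov(w, z)/w_bar + E[w * (z' - z)]/w_bar,
# i.e. a "selection" term plus a "transmission" term.

def price_decomposition(w, z, z_offspring):
    """Return (selection term, transmission term) of the Price equation."""
    n = len(w)
    w_bar = sum(w) / n
    z_bar = sum(z) / n
    cov_wz = sum(wi * zi for wi, zi in zip(w, z)) / n - w_bar * z_bar
    dz = [zo - zi for zo, zi in zip(z_offspring, z)]
    e_w_dz = sum(wi * di for wi, di in zip(w, dz)) / n
    return cov_wz / w_bar, e_w_dz / w_bar

# Two individuals: the fitter one (w = 2) carries the higher trait value,
# and offspring inherit the parental trait exactly (transmission term = 0).
sel, trans = price_decomposition(w=[1.0, 2.0], z=[0.0, 1.0],
                                 z_offspring=[0.0, 1.0])
# sel = Cov(w, z)/w_bar = 0.25/1.5, matching the realized change
# in the mean trait (from 1/2 to 2/3, i.e. 1/6); trans = 0.
```

The paper's dispute concerns what such statistical partitions do and do not reveal causally, especially once group structure and soft selection enter the picture.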
    Found 1 day, 2 hours ago on Pierrick Bourrat's site
  11. 96948.905864
    For evolution by natural selection to occur, it is classically admitted that the three ingredients of variation, difference in fitness and heredity are necessary and sufficient. In this paper, I show, using simple individual-based models, that evolution by natural selection can occur in populations of entities in which neither heredity nor reproduction is present. Furthermore, I demonstrate by complexifying these models that both reproduction and heredity are predictable Darwinian products (i.e. complex adaptations) of populations initially lacking these two properties but in which new variation is introduced via mutations. Later on, I show that replicators are not necessary for evolution by natural selection, but are rather the ultimate product of such processes of adaptation. Finally, I assess the value of these models in three relevant domains for Darwinian evolution.
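A minimal sketch in the spirit of such models (my construction; the paper's actual models are not reproduced here): differential persistence alone, with no reproduction and no heredity, suffices to change type frequencies:

```python
# Minimal individual-based sketch (my construction, not the paper's models):
# a population of non-reproducing entities whose two types differ only in
# per-step persistence probability. Type frequencies still change
# systematically -- differential persistence alone produces selection.
import random

random.seed(0)
# 500 entities of type 'A' (persists with prob 0.99 per step)
# and 500 of type 'B' (persists with prob 0.95 per step).
population = ['A'] * 500 + ['B'] * 500
persist = {'A': 0.99, 'B': 0.95}

for _ in range(50):
    population = [t for t in population if random.random() < persist[t]]

freq_a = population.count('A') / len(population)
# After 50 steps, type A is expected at roughly
# 0.99**50 / (0.99**50 + 0.95**50) ~ 0.89 of the survivors,
# up from an initial frequency of 0.5 -- with no reproduction at all.
```

The design point: nothing here copies or transmits anything; the frequency change is driven entirely by viability differences, which is the wedge the abstract uses against the classical three-ingredient recipe.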
    Found 1 day, 2 hours ago on Pierrick Bourrat's site
  12. 96979.905879
    The religious phenomenon is a complex one in many respects. In recent years an increasing number of theories on the origin and evolution of religion have been put forward. Each of these theories rests on a Darwinian framework, but there is much disagreement about which bits of the framework best account for the evolution of religion. Is religion primarily a by-product of some adaptation? Is it itself an adaptation, and if it is, does it benefit individuals or groups? In this chapter, I review a number of theories that link religion to cooperation and show that these theories, contrary to what is often suggested in the literature, are not mutually exclusive. As I present each theory, I delineate an integrative framework that allows the explanandum of each theory to be distinguished. Once this is done, it becomes clear that some theories provide good explanations for the origin of religion but not such good explanations for its maintenance, and vice versa. Similarly, some explanations are good explanations for the evolution of religious individual-level traits but not such good explanations for traits hard to define at the individual level. I suggest that to fully understand the religious phenomenon, integrating the different theories and the data in a systematic way is a more successful approach.
    Found 1 day, 2 hours ago on Pierrick Bourrat's site
  13. 97005.905894
    In this paper, I identify two major problems with the model of evolutionary transitions in individuality (ETIs) developed by Michod and colleagues, and extended by Okasha, commonly referred to as the “export-of-fitness view”. First, it applies the concepts of viability and fertility inconsistently across levels of selection. This leads Michod to claim that once an ETI is complete, lower-level entities composing higher-level individuals have nil fitness. I argue that this claim is mistaken, propose a correct way to translate the concepts of viability and fertility from one level to the other and show that once an ETI is complete, neither viability nor fertility of the lower level entities is nil. Second, the export-of-fitness view does not sufficiently take the parameter of time into account when estimating fitness across levels of selection. As a result fitness is measured over different periods of time at each level. This ultimately means that fitness is measured in different environmental conditions at each level and misleads Okasha into making the claim that the two levels are ontologically distinct levels of selection. I show that once fitness is measured over the same period of time across levels, the claim about two levels of selection can only be an epistemic one.
    Found 1 day, 2 hours ago on Pierrick Bourrat's site
  14. 97112.905909
    In this critical notice of Robert Wright’s The Evolution of God, we focus on the question of whether Wright’s God can be said to be an adaptation in a well-defined sense. We thus evaluate the likelihood of different models of adaptive evolution of cultural ideas at their different levels of selection. Our result is an emphasis on the plurality of mechanisms that may lead to adaptation. By way of conclusion we assess epistemologically some of Wright’s more controversial claims concerning the directionality of evolution and moral progress.
    Found 1 day, 2 hours ago on Pierrick Bourrat's site
  15. 97132.905926
    Altruism is one of the most studied topics in theoretical evolutionary biology. The debate surrounding the evolution of altruism has generally focused on the conditions under which altruism can evolve and whether it is better explained by kin selection or multilevel selection. This debate has occupied the forefront of the stage and left behind a number of equally important questions. One of them, which is the subject of this paper, is whether the word “selection” in “kin selection” and “multilevel selection” necessarily refers to “evolution by natural selection”. I show, using a simple individual-centered model, that once clear conditions for natural selection and altruism are specified, one can distinguish two kinds of evolution of altruism, only one of which corresponds to the evolution of altruism by natural selection, the other resulting from other evolutionary processes.
    Found 1 day, 2 hours ago on Pierrick Bourrat's site
  16. 97201.905941
    Drift is often characterized in statistical terms. Yet such a purely statistical characterization is ambiguous, for it can accept multiple physical interpretations. Because of this ambiguity it is important to distinguish what sorts of processes can lead to this statistical phenomenon. After presenting a physical interpretation of drift originating from the most popular interpretation of fitness, namely the propensity interpretation, I propose a different one starting from an analysis of the concept of drift made by Godfrey-Smith. Further on, I show how my interpretation relates to previous attempts to make sense of the notion of expected value in deterministic setups. The upshot of my analysis is a physical conception of drift that is compatible with both a deterministic and an indeterministic world.
    Found 1 day, 3 hours ago on Pierrick Bourrat's site
  17. 108156.905956
    In What a Plant Knows, Daniel Chamovitz reports what plant biologists apparently have known for a long time: although plants generally stay in one place (they’re sessile), they actively negotiate their environments. …
    Found 1 day, 6 hours ago on The Brains Blog
  18. 215932.905971
    Plants don’t have minds. At least, that’s what most people think. A few years ago, that’s also what I thought. Then, reflecting on the work of Ruth Millikan and Fred Dretske, I started wondering why it seemed obvious, and whether it should. …
    Found 2 days, 11 hours ago on The Brains Blog
  19. 215969.905986
    By Aris Spanos. One of R. A. Fisher’s (17 February 1890 — 29 July 1962) most remarkable, but least recognized, achievements was to initiate the recasting of statistical induction. Fisher (1922) pioneered modern frequentist statistics as a model-based approach to statistical induction anchored on the notion of a statistical model, formalized by: M_θ(x) = {f(x; θ), θ ∈ Θ}, x ∈ R^n, Θ ⊂ R^m, m < n, (1) where the distribution of the sample f(x; θ) ‘encapsulates’ the probabilistic information in the statistical model. …
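A concrete instance of the definition (my illustration, not Spanos's): the family of i.i.d. Normal(θ, 1) densities for a sample x in R^n, with Θ = R, so m = 1 < n as the definition requires:

```python
# A concrete instance (my illustration) of Fisher's statistical model
# M_theta(x) = { f(x; theta) : theta in Theta }: the family of i.i.d.
# Normal(theta, 1) densities for a sample x in R^n, Theta = R (m = 1 < n).
import math

def log_likelihood(theta, x):
    """log f(x; theta) for i.i.d. Normal(theta, 1) observations."""
    n = len(x)
    return (-0.5 * n * math.log(2 * math.pi)
            - 0.5 * sum((xi - theta) ** 2 for xi in x))

x = [1.2, 0.8, 1.1, 0.9]
# For this family, the maximum-likelihood estimate of theta is the
# sample mean -- the kind of induction the model M_theta(x) underwrites.
mle = sum(x) / len(x)
```

Here the model is the whole parametric family, not any single density: inference proceeds by asking which member of the family best accounts for the observed sample.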
    Found 2 days, 11 hours ago on D. G. Mayo's blog
  20. 280422.906002
    In its most abstract form, an ontology is an account of the fundamental degrees of freedom in nature. The metaphysician asks: what are the independently varying components of nature, their internal degrees of freedom and the configurations they can assume? The rationalist metaphysician supposes that we have some form of rational insight into the nature of reality. The naturalistic metaphysician relies on observation and experiment. Her task is to infer ontology from data. Given an ontology and a set of laws, one can generate a range of possible behavior, so the naturalistic metaphysician faces an inverse problem: how does she infer backwards from a range of observed behavior to underlying ontology?
    Found 3 days, 5 hours ago on Jenann Ismael's site
  21. 284022.906017
    Before the development of quantum mechanics, most of the philosophical discussion of probability focused on statistical probabilities. Philosophers of science have a particular interest in statistical probabilities because they play an important role in the testing and confirmation of theories, and they played a central role in the statistical mechanics of Boltzmann and Gibbs developed in the 19th century. Since the introduction of quantum mechanics, however, much of the philosophical attention has become focused on the interpretation of chances. These are the probabilities assigned to particular events (the detection of a photon at a certain location on a photographic plate, or the registration of the result of a spin experiment on a particular electron) by applications of the Born Rule. The appearance of chances in quantum mechanics marked the first time that probabilities made an explicit appearance in a fundamental theory. They raise new kinds of ontological questions. Unlike statistical probabilities (which pertain to classes of events), chances are single-case probabilities. And unlike credences (which represent the epistemic states of believers), chances purport to represent features of the physical world.
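For reference, the Born Rule mentioned here assigns single-case chances via squared amplitudes; in standard notation (supplied by me, not taken from the abstract), the chance of outcome a given quantum state ψ is:

```latex
\Pr(a \mid \psi) \;=\; |\langle a \mid \psi \rangle|^{2}
```

It is this assignment of a probability to one particular event, rather than to a class of trials, that generates the single-case ontological questions the abstract describes.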
    Found 3 days, 6 hours ago on Jenann Ismael's site
  22. 360747.906032
    This paper examines a constellation of ethical and editorial issues that have arisen since philosophers started to conduct, submit and publish empirical research. These issues encompass concerns over responsible authorship, fair treatment of human subjects, ethicality of experimental procedures, availability of data, unselective reporting and publishability of research findings. This study aims to assess whether the philosophical community has as yet successfully addressed such issues. To do so, the instructions for authors, submission process and published research papers of 29 main journals in philosophy have been considered and analyzed. In light of the evidence reported here, it is argued that the philosophical community has as yet failed to properly tackle such issues. The paper also delivers some recommendations for authors, reviewers and editors in the field.
    Found 4 days, 4 hours ago on PhilPapers
  23. 364712.906046
    Several authors have claimed that prediction is essentially impossible in the general theory of relativity, the case being particularly strong, it is said, when one fully considers the epistemic predicament of the observer. Each of these claims rests on the support of an underdetermination argument and a particular interpretation of the concept of prediction. I argue that these underdetermination arguments fail and depend on an implausible explication of prediction in the theory. The technical results adduced in these arguments can be related to certain epistemic issues, but can only be misleadingly or mistakenly characterized as related to prediction.
    Found 4 days, 5 hours ago on PhilPapers
  24. 372974.90606
    Computational complexity theory is a branch of computer science dedicated to classifying computational problems in terms of their difficulty. Unlike computability theory, whose object is to determine what we can compute in principle, the object of complexity theory is to inform us of our practical limits. It thus serves as a natural conceptual bridge between the study of mathematics and the study of technology, in the sense that computational complexity theory …
    Found 4 days, 7 hours ago on PhilSci Archive
  25. 373041.906075
    According to the hierarchy of models (HoM) account of scientific experimentation developed by Patrick Suppes and elaborated by Deborah Mayo, theoretical considerations about the phenomena of interest are involved in an experiment through theoretical models that in turn relate to experimental data through data models, via the linkage of experimental models. In this paper, I dispute the HoM account in the context of present-day high-energy physics (HEP) experiments. I argue that even though the HoM account aims to characterize experimentation as a model-based activity, it does not involve a modeling concept for the process of data acquisition and thus fails to provide a model-based characterization of the theory-experiment relationship underlying this process. In order to characterize the foregoing relationship, I propose the concept of a model of data acquisition and illustrate it in the case of the ATLAS experiment at CERN’s Large Hadron Collider, where the Higgs boson was discovered in 2012. I show that the process of data acquisition in the ATLAS experiment is performed according to a model of data acquisition that specifies and organizes the experimental procedures necessary to select the data according to a predetermined set of selection criteria. I also point out that this data acquisition model is theory-laden, in the sense that the underlying data selection criteria are determined in accordance with the testable predictions of the theoretical models that the ATLAS experiment is aimed to test. I take the foregoing theory-ladenness to indicate that the relationship between the procedures of the ATLAS experiment and the theoretical models of the phenomena of interest is first established, prior to the formation of data models, through the data acquisition model of the experiment, thus not requiring the intermediary of other types of models as suggested by the HoM account. 
I therefore conclude that in the context of HEP experiments, the HoM account does not consistently extend to the process of data acquisition so as to include models of data acquisition.
    Found 4 days, 7 hours ago on PhilSci Archive
  26. 373056.90609
    Electromagnetism is one of the oldest natural phenomena studied by modern science. The laws of electromagnetism gradually evolved from various experimental observations. In this process, Coulomb’s law was the first to establish the 1/r² dependence of the force between two charges. Then Faraday discovered the induction of voltage by a changing magnetic field, and Ampère quantified the magnetic field generated by an electric current. Finally, Maxwell introduced the concept of displacement current (i.e. a time-varying electric field giving rise to a magnetic field) and wrote all the laws of electromagnetism in an elegant form, commonly known as Maxwell’s equations. These equations in differential form are given by,
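The display itself was lost in extraction; the equations referred to are the standard Maxwell equations in differential (SI) form, reconstructed here from standard physics rather than recovered from the source:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
```

The last term of the final equation is Maxwell's displacement current, the addition the abstract singles out.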
    Found 4 days, 7 hours ago on PhilSci Archive
  27. 374143.906105
    Formal learning theory is the mathematical embodiment of a normative epistemology. It deals with the question of how an agent should use observations about her environment to arrive at correct and informative conclusions. Philosophers such as Putnam, Glymour and Kelly have developed learning theory as a normative framework for scientific reasoning and inductive inference. Terminology. Cognitive science and related fields typically use the term “learning” for the process of gaining information through observation— hence the name “learning theory”. To most cognitive scientists, the term “learning theory” suggests the empirical study of human and animal learning stemming from the behaviourist paradigm in psychology.
    Found 4 days, 7 hours ago on Stanford Encyclopedia of Philosophy
  28. 374192.906119
    It is not news that we often make discoveries or find reasons for a mathematical proposition by thinking alone. But does any of this thinking count as conducting a thought experiment? The answer to that question is “yes”, but without refinement the question is uninteresting. Suppose you want to know whether the equation 8x + 12y = 6 has a solution in the integers. You might mentally substitute some integer values for the variables and calculate. In that case you would be mentally trying something out, experimenting with particular integer values, in order to test the hypothesis that the equation has no solution in the integers. Not getting a solution first time, you might repeat the thought experiment with different integer inputs.
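The thought experiment in the abstract can also be settled mechanically (a standard number-theory fact, not a claim from the paper): a linear equation a·x + b·y = c has integer solutions iff gcd(a, b) divides c, and gcd(8, 12) = 4 does not divide 6.

```python
# Settling the abstract's example mechanically: 8x + 12y = 6 has an
# integer solution iff gcd(8, 12) divides 6. Since gcd(8, 12) = 4 and
# 6 % 4 != 0, no integer solution exists.
import math

def has_integer_solution(a, b, c):
    """Bezout: a*x + b*y = c is solvable in integers iff gcd(a, b) | c."""
    return c % math.gcd(a, b) == 0

print(has_integer_solution(8, 12, 6))   # False: no integer solution exists
print(has_integer_solution(8, 12, 4))   # True, e.g. x = -1, y = 1
```

Mentally substituting integer values, as the abstract describes, is experimenting with instances of exactly this divisibility fact.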
    Found 4 days, 7 hours ago on PhilSci Archive
  29. 374225.906134
    In this paper we argue that the different positions taken by Dyson and Feynman on Feynman diagrams’ representational role depend on different styles of scientific thinking. We begin by criticizing the idea that Feynman diagrams can be considered pictures or depictions of actual physical processes. We then show that the best interpretation of the role they play in quantum field theory and quantum electrodynamics is captured by Hughes’ Denotation, Demonstration and Interpretation (DDI) theory of models, where “models” are to be interpreted as inferential, non-representational devices constructed in given social contexts by the community of physicists.
    Found 4 days, 7 hours ago on PhilSci Archive
  30. 374253.906151
    According to pancomputationalism, all physical systems – atoms, rocks, hurricanes, and toasters – perform computations. Pancomputationalism seems to be increasingly popular among some philosophers and physicists. In this paper, we interpret pancomputationalism in terms of computational descriptions of varying strength—computational interpretations of physical microstates and dynamics that vary in their restrictiveness. We distinguish several types of pancomputationalism and identify essential features of the computational descriptions required to support them. By tying various pancomputationalist theses directly to notions of what counts as computation in a physical system, we clarify the meaning, strength, and plausibility of pancomputationalist claims. We show that the force of these claims is diminished when weaknesses in their supporting computational descriptions are laid bare. Specifically, once computation is meaningfully distinguished from ordinary dynamics, the most sensational pancomputationalist claims are unwarranted, whereas the more modest claims offer little more than recognition of causal similarities between physical processes and the most primitive computing processes.
    Found 4 days, 7 hours ago on PhilSci Archive