1.
    Scary news from Australia: • Marc Rigby, Insect population decline leaves Australian scientists scratching for solutions, ABC Far North, 23 February 2018. I’ll quote the start: A global crash in insect populations has found its way to Australia, with entomologists across the country reporting lower than average numbers of wild insects. …
    Found 3 hours, 38 minutes ago on Azimuth
  2.
    Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as they initially seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett’s work on backward causation and Lewis’s on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis’s own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
    Found 1 day, 3 hours ago on PhilSci Archive
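    For readers without the setup to hand, the standard Newcomb payoff matrix is as follows (the abstract’s twist concerns whether declining the $1,000 can be what causes the $1m to be in the opaque box):

                                Predictor foresaw one-boxing    Predictor foresaw two-boxing
        Take opaque box only              $1,000,000                          $0
        Take both boxes                   $1,001,000                        $1,000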
  3.
    Fisher/Neyman. This continues my previous post: “Can’t take the fiducial out of Fisher…” in recognition of Fisher’s birthday, February 17. I supply a few more intriguing articles you may find enlightening to read and/or reread on a Saturday night. Move up 20 years to the famous 1955/56 exchange between Fisher and Neyman. …
    Found 1 day, 3 hours ago on D. G. Mayo's blog
  4.
    Hardin’s (1988) empirically-grounded argument for color eliminativism has defined the color realism debate for the last thirty years. By Hardin’s own estimation, phenomenal structure – the unique/binary hue distinction in particular – poses the greatest problem for color realism. Examination of relevant empirical findings shows that claims about the unique hues which play a central role in the argument from phenomenal structure should be rejected. Chiefly, contrary to widespread belief amongst philosophers and scientists, the unique hues do not play a fundamental role in determining all color appearances. Among the consequences of this result is that greater attention should be paid to certain proposals for putting the structure of phenomenal color into principled correspondence with surface reflectance properties. While color realism is not fully vindicated, it has much greater empirical plausibility than previously thought.
    Found 1 day, 11 hours ago on Wayne Wright's site
  5.
    I propose a division of the literature on natural kinds into metaphysical worries, semantic worries, and methodological worries. I argue that the latter set of worries, which concern how classification influences scientific practices, should occupy centre stage in philosophy of science discussions about natural kinds. I apply this methodological framework to the problems of classifying chemical species and nanomaterials. I show that classification in nanoscience differs from classification in chemistry because the latter relies heavily on compositional identity, whereas the former must consider additional properties, namely, size, shape, and surface chemistry. I use this difference to argue for a scale-dependent theory of scientific classification.
    Found 1 day, 15 hours ago on PhilPapers
  6.
    Shagrir ([2001]) and Sprevak ([2010]) explore the apparent necessity of representation for the individuation of digits (and processors) in computational systems. I will first offer a response to Sprevak’s argument that does not mention Shagrir’s original formulation, which was more complex. I then extend my initial response to cover Shagrir’s argument, thus demonstrating that it is possible to individuate digits in non-representational computing mechanisms. I also consider the implications that the non-representational individuation of digits would have for the broader theory of computing mechanisms.
    Found 1 day, 15 hours ago on PhilPapers
  7.
    ‘Modus Darwin’ is the name given by Elliott Sober to a form of argument that he attributes to Darwin in the Origin of Species, and to subsequent evolutionary biologists who have reasoned in the same way. In short, the argument form goes: similarity, ergo common ancestry. In this article, I review and critique Sober’s analysis of Darwin’s reasoning. I argue that modus Darwin has serious limitations that make the argument form unsuitable for supporting Darwin’s conclusions, and that Darwin did not reason in this way.
    Found 1 day, 15 hours ago on PhilPapers
  8.
    When scientists seek further confirmation of their results, they often attempt to duplicate the results using diverse means. To the extent that they are successful in doing so, their results are said to be ‘robust’. This article investigates the logic of such ‘robustness analysis’ (RA). The most important and challenging question an account of RA can answer is what sense of evidential diversity is involved in RAs. I argue that prevailing formal explications of such diversity are unsatisfactory. I propose a unified, explanatory account of diversity in RAs. The resulting account is, I argue, truer to actual cases of RA in science; moreover, this account affords us a helpful new foothold on the logic undergirding RAs.
    Found 1 day, 15 hours ago on PhilPapers
  9.
    Ruetsche ([2011]) claims that an abstract C*-algebra of observables will not contain all of the physically significant observables for a quantum system with infinitely many degrees of freedom. This would signal that in addition to the abstract algebra, one must use Hilbert space representations for some purposes. I argue to the contrary that there is a way to recover all of the physically significant observables by purely algebraic methods.
    Found 1 day, 15 hours ago on PhilPapers
  10.
    Symmetry plays a number of central roles in modern physics. As the physicist Philip Anderson famously remarked, “it is only slightly overstating the case to say that physics is the study of symmetry” (1972, p. 394). Here I discuss just one role of symmetry: its use as a guide to superfluous structure, with a particular eye on its application to metaphysics. What is symmetry? Generally speaking, a symmetry is an operation that leaves its object unchanged in a certain respect. Rotation by 90 degrees is a symmetry of a square piece of paper, insofar as the paper’s extension through space is the same after the rotation as before. But we will focus on symmetries of physical theories, not paper. Roughly speaking, these are operations on possible physical systems that leave some aspect of the theory unchanged. Which aspect? That depends: different symmetries preserve different aspects. But an important class of symmetries are those that leave the dynamical laws of the theory unchanged; these are known as dynamical symmetries.
    Found 2 days, 1 hour ago on Shamik Dasgupta's site
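    A textbook illustration of a dynamical symmetry (not drawn from the excerpt above): a Galilean boost shifts every velocity by a constant amount yet leaves Newton’s second law in the same form,
    \[
    x \mapsto x' = x + vt, \qquad t \mapsto t' = t
    \quad\Longrightarrow\quad
    \frac{d^{2}x'}{dt'^{2}} = \frac{d^{2}x}{dt^{2}},
    \]
    so \(F = m\ddot{x}\) is unchanged, provided the forces depend only on relative positions. On the line of thought sketched above, the absolute velocities that this symmetry varies freely are a candidate piece of superfluous structure.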
  11.
    R.A. Fisher: February 17, 1890 – July 29, 1962. Continuing with posts in recognition of R.A. Fisher’s birthday, I post one from a couple of years ago on a topic that had previously not been discussed on this blog: Fisher’s fiducial probability. …
    Found 2 days, 5 hours ago on D. G. Mayo's blog
  12.
    Phenotypic flexibility includes systems such as individual learning, social learning, and the adaptive immune system. Since the evolution of genes by natural selection is a relatively slow process, mechanisms of phenotypic flexibility are evolved to adapt to contingencies on the time scales ranging …
    Found 2 days, 10 hours ago on Peter Richerson's site
  13.
    How does language (spoken or written) impact thought? One useful way to approach this important but elusive question may be to consider language itself as a cognition-enhancing animal-built structure. To take this perspective is to view language as a kind of self-constructed cognitive niche. These self-constructed cognitive niches play, I suggest, three distinct but deeply interlocking roles in human thought and reason.
    Found 3 days, 14 hours ago on Andy Clark's site
  14.
    Hohwy (Hohwy 2016, Hohwy 2017) argues there is a tension between the free energy principle and leading depictions of mind as embodied, enactive, and extended (so-called ‘EEE cognition’). The tension is traced to the importance, in free energy formulations, of a conception of mind and agency that depends upon the presence of a ‘Markov blanket’ demarcating the agent from the surrounding world. In what follows I show that the Markov blanket considerations do not, in fact, lead to the kinds of tension that Hohwy depicts. On the contrary, they actively favour the EEE story. This is because the Markov property, as exemplified in biological agents, picks out neither a unique nor a stationary boundary. It is this multiplicity and mutability – rather than the absence of agent-environment boundaries as such – that EEE cognition celebrates.
    Found 5 days, 10 hours ago on Andy Clark's site
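    For readers unfamiliar with the term: a Markov blanket \(b\) of a system’s internal states \(\mu\) is a set of variables that screens \(\mu\) off from the external states \(\eta\). In the standard conditional-independence formulation (not Clark’s own wording),
    \[
    p(\mu \mid b, \eta) = p(\mu \mid b),
    \]
    and in a graphical model the blanket of a node comprises its parents, its children, and its children’s other parents. Clark’s point is that, for a biological agent, which variables play the role of \(b\) is neither unique nor fixed over time.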
  15.
    In the following, I explain the “Twin Paradox”, which is supposed to be a paradoxical consequence of the Special Theory of Relativity (STR). I give the correct resolution of the “paradox,” explaining why STR is not inconsistent as it appears at first glance. I also debunk two common, incorrect responses to the paradox. This should help the reader to understand Special Relativity and to see how the theory is coherent.
    Found 5 days, 10 hours ago on Michael Huemer's site
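    A worked instance of the standard calculation (illustrative numbers, not Huemer’s): if the travelling twin moves at \(v = 0.8c\) on the outbound and return legs while the stay-at-home twin ages \(T = 10\) years, the traveller’s elapsed proper time is
    \[
    \tau = T\sqrt{1 - v^{2}/c^{2}} = 10\,\text{yr} \times \sqrt{1 - 0.64} = 6\,\text{yr}.
    \]
    The apparent paradox is that each twin seems entitled to treat the other as the one in motion; the asymmetry that dissolves it is that only the traveller changes inertial frames at the turnaround.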
  16.
    In March, I’ll be talking at Spencer Breiner’s workshop on Applied Category Theory at the National Institute of Standards and Technology. I’ll be giving a joint talk with John Foley about our work using operads to design networks. …
    Found 6 days, 10 hours ago on Azimuth
  17.
    As part of the week of recognizing R.A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012/2017. The comments from 2017 lead to a troubling issue that I will bring up in the comments today. …
    Found 6 days, 21 hours ago on D. G. Mayo's blog
  18.
    The distribution of matter in our universe is strikingly time asymmetric. Most famously, the Second Law of Thermodynamics says that entropy tends to increase toward the future but not toward the past. But what explains this time-asymmetric distribution of matter? In this paper, I explore the idea that time itself has a direction by drawing from recent work on grounding and metaphysical fundamentality. I will argue that positing such a direction of time, in addition to time-asymmetric boundary conditions (such as the so-called “past hypothesis”), enables a better explanation of the thermodynamic asymmetry than is available otherwise.
    Found 1 week, 1 day ago on PhilPapers
  19.
    There are two standard responses to the discrepancy between observed galactic rotation curves and the theoretical curves calculated on the basis of luminous matter: postulate dark matter, or modify gravity. Most physicists accept the former as part of the concordance model of cosmology; the latter encompasses a family of proposals, of which MOND is perhaps the best-known example. Don Saari, however, claims to have found a third alternative: to explain this discrepancy as a result of approximation methods which are unfaithful to the underlying Newtonian dynamics. If he is correct, eliminating the problematic approximations should allow physicists and astronomers to preserve the validity of Newtonian dynamics in galactic systems without invoking dark matter.
    Found 1 week, 2 days ago on PhilSci Archive
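    For context, the discrepancy at issue: for a star on a circular orbit of radius \(r\), Newtonian dynamics gives
    \[
    \frac{v(r)^{2}}{r} = \frac{G\,M(r)}{r^{2}}
    \quad\Longrightarrow\quad
    v(r) = \sqrt{\frac{G\,M(r)}{r}},
    \]
    so beyond the bulk of the luminous mass, where \(M(r)\) is roughly constant, \(v(r)\) should fall off as \(r^{-1/2}\); observed rotation curves instead stay roughly flat. Dark matter inflates \(M(r)\), MOND modifies the dynamics, and Saari’s claim is that the theoretical curve itself rests on unfaithful approximations.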
  20.
    We defend the many-worlds interpretation of quantum mechanics (MWI) against the objection that it cannot explain why measurement outcomes are predicted by the Born probability rule. We understand quantum probabilities in terms of an observer’s self-location probabilities. We formulate a probability postulate for the MWI: the probability of self-location in a world with a given set of outcomes is the absolute square of that world’s amplitude. We provide a proof of this postulate, which assumes the quantum formalism and two principles concerning symmetry and locality. We also show how a structurally similar proof of the Born rule is available for collapse theories. We conclude by comparing our account to the recent account offered by Sebens and Carroll.
    Found 1 week, 2 days ago on PhilSci Archive
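    In symbols (my notation, not necessarily the authors’), the probability postulate described above reads
    \[
    |\Psi\rangle = \sum_i c_i\,|w_i\rangle
    \quad\Longrightarrow\quad
    \Pr(\text{self-location in } w_i) = |c_i|^{2},
    \]
    where the \(|w_i\rangle\) are the orthogonal, normalized post-measurement world states. The paper’s contribution is a derivation of this postulate from the quantum formalism together with a symmetry principle and a locality principle, rather than taking it as primitive.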
  21.
    Use of ‘representation’ pervades the literature in cognitive science. But do representations actually play a role in cognitive-scientific explanation, or is such talk merely colorful commentary? Are, for instance, patterns of cortical activity in motion-sensitive visual area MT or strings of symbols in a language-processing parser genuine representations? Do they have content? And if they do, can a naturalist assign such contents in a well-motivated and satisfying way?
    Found 1 week, 3 days ago on PhilPapers
  22.
    Modern medicine is often said to have originated with 19th century germ theory, which attributed diseases to particular bacterial contagions. The success of this theory is often associated with an underlying principle referred to as the “doctrine of specific etiology,” which refers to the theory’s specificity at the level of disease causation or etiology. Despite the perceived importance of this doctrine, the literature lacks a clear account of the types of specificity it involves and why exactly they matter. This paper argues that the 19th century germ theory model involves two types of specificity at the level of etiology. One type receives significant attention in the literature, but its influence on modern medicine has been misunderstood. A second type is present in this model, but it has been overlooked in the extant literature. My analysis clarifies how these types of specificity led to a novel conception of etiology, which continues to figure in medicine today.
    Found 1 week, 4 days ago on PhilSci Archive
  23.
    Humean accounts of natural lawhood (such as Lewis’s) have often been criticized as unable to account for the laws’ characteristic explanatory power in science. Loewer (Philos Stud 160:115–137, 2012) has replied that these criticisms fail to distinguish grounding explanations from scientific explanations. Lange (Philos Stud 164:255–261, 2013) has replied by arguing that grounding explanations and scientific explanations are linked by a transitivity principle, which can be used to argue that Humean accounts of natural law violate the prohibition on self-explanation. Lange’s argument has been sharply criticized by Hicks and van Elswyk (Philos Stud 172:433–443, 2015), Marshall (Philos Stud 172:3145–3165, 2015), and Miller (Philos Stud 172:1311–1332, 2015). This paper shows how Lange’s argument can withstand these criticisms once the transitivity principle and the prohibition on self-explanation are properly refined. The transitivity principle should be refined to accommodate contrasts in the explanans and explanandum. The prohibition on self-explanation should be refined so that it precludes a given fact p from helping to explain why some other fact q helps to explain why p. In this way, the transitivity principle avoids having counterintuitive consequences in cases involving macrostates having multiple possible microrealizations. The transitivity principle is perfectly compatible with the irreducibility of macroexplanations to microexplanations and with the diversity of the relations that can underwrite scientific explanations.
    Found 1 week, 4 days ago on Marc Lange's site
  24.
    I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a “real pattern”. Plausibly we are inclined to acknowledge the existence of my cat and my table but not of their fusion, because the first two have a kind of internal organizational coherence that their putative fusion lacks. Kolmogorov complexity theory supplies the needed rigorous sense of “internal organizational coherence”.
    Found 1 week, 5 days ago on PhilPapers
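    The Kolmogorov-complexity idea at the end can be made concrete with a small sketch. Kolmogorov complexity itself is uncomputable, so a common stand-in (purely illustrative, not the paper’s own formal proposal) is the length of a compressed encoding: data with internal organizational coherence compresses well, noise does not.

        import os
        import zlib

        def compressed_length(data: bytes) -> int:
            # Length of the zlib-compressed data: a crude, computable proxy
            # for Kolmogorov complexity (which is itself uncomputable).
            return len(zlib.compress(data, 9))

        def compressibility(data: bytes) -> float:
            # Ratio of compressed length to raw length; lower means more pattern.
            return compressed_length(data) / len(data)

        patterned = b"catcatcat" * 200      # high internal organizational coherence
        noise = os.urandom(len(patterned))  # no exploitable structure

        print(f"patterned: {compressibility(patterned):.2f}")  # small, e.g. ~0.02
        print(f"noise:     {compressibility(noise):.2f}")      # roughly 1.0

    On a patternist reading, the patterned string is the sort of aggregate whose parts plausibly compose something, while the noise is not.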
  25.
    Optogenetic techniques are described as “revolutionary” for the unprecedented causal control they allow neuroscientists to exert over neural activity in awake behaving animals. In this paper, I demonstrate by means of a case study that optogenetic techniques will only illuminate causal links between the brain and behavior to the extent that their error characteristics are known and, further, that determining these error characteristics requires (1) comparison of optogenetic techniques with techniques having well known error characteristics (methodological pluralism) and (2) consideration of the broader neural and behavioral context in which the targets of optogenetic interventions are situated (perspectival pluralism).
    Found 1 week, 5 days ago on PhilPapers
  26.
    Comparativism is the position that the fundamental doxastic state consists in comparative beliefs (e.g., believing p to be more likely than q), with partial beliefs (e.g., believing p to degree x) being grounded in and explained by patterns amongst comparative beliefs that exist under special conditions. In this paper, I develop a version of comparativism that originates with a suggestion made by Frank Ramsey in his ‘Probability and Partial Belief’ (1929). By means of a representation theorem, I show how this ‘Ramseyan comparativism’ can be used to weaken the (unrealistically strong) conditions required for probabilistic coherence that comparativists usually rely on, while still preserving enough structure to let us retain the usual comparativists’ account of quantitative doxastic comparisons.
    Found 1 week, 5 days ago on Edward Elliott's site
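    The shape of the representation result being appealed to, stated schematically (the paper’s own conditions are deliberately weaker than those required for full probabilistic coherence): there is a credence function \(\mathrm{Cr}\) such that
    \[
    p \succeq q \iff \mathrm{Cr}(p) \ge \mathrm{Cr}(q),
    \]
    where \(\succeq\) is the agent’s comparative “at least as likely as” ordering. The comparativist reads the right-hand side as grounded in the left-hand side, not the other way around.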
  27.
    A number of naturalistic philosophers of mind endorse a realist attitude towards the results of Bayesian cognitive science. This realist attitude is currently unwarranted, however. It is not obvious that Bayesian models possess special epistemic virtues over alternative models of mental phenomena involving uncertainty. In particular, the Bayesian approach in cognitive science is not more simple, unifying and rational than alternative approaches; and it is not obvious that the Bayesian approach is more empirically adequate than alternatives. It is at least premature, then, to assert that mental phenomena involving uncertainty are best explained within the Bayesian approach. To continue with exclusive praise for Bayes would be dangerous, as it risks monopolizing the center of attention, leading to the neglect of different but promising formal approaches. Naturalistic philosophers of mind would be wise instead to endorse an agnostic, instrumentalist attitude towards Bayesian cognitive science to correct their mistake.
    Found 1 week, 5 days ago on PhilSci Archive
  28.
    The ontic conception of explanation, according to which explanations are "full-bodied things in the world," is fundamentally misguided. I argue instead for what I call the eikonic conception, according to which explanations are the product of an epistemic activity involving representations of the phenomena to be explained. What is explained in the first instance is a particular conceptualization of the explanandum phenomenon, contextualized within a given research program or explanatory project. I conclude that this eikonic conception has a number of benefits, including making better sense of scientific practice and allowing for the full range of normative constraints on explanation.
    Found 1 week, 5 days ago on PhilSci Archive
  29.
    People often talk about the synchronic Dutch Book argument for Probabilism and the diachronic Dutch Strategy argument for Conditionalization. But the synchronic Dutch Book argument for the Principal Principle is mentioned less. …
    Found 1 week, 5 days ago on M-Phi
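    A minimal worked example of the synchronic Dutch Book behind Probabilism (my illustration, not from the post): suppose an agent’s credences are \(\mathrm{Cr}(A) = 0.6\) and \(\mathrm{Cr}(\lnot A) = 0.6\), which sum to 1.2 and so violate Probabilism. The agent then regards $0.60 as a fair price for a bet paying $1 if \(A\), and likewise for a bet paying $1 if \(\lnot A\). Buying both,
    \[
    \text{cost} = \$0.60 + \$0.60 = \$1.20, \qquad \text{payout} = \$1.00, \qquad \text{net} = -\$0.20
    \]
    in every possible state: a sure loss licensed purely by the incoherent credences.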
  30.
    [The following is a guest post by Bob Lockie. — JS] He who says that all things happen of necessity can hardly find fault with one who denies that all happens by necessity; for on his own theory this very argument is voiced by necessity (Epicurus 1964: XL). Lockie, Robert. …
    Found 1 week, 5 days ago on The Brains Blog