1.
    The New England Journal of Medicine (NEJM) announced new guidelines for authors on statistical reporting yesterday*. The ASA describes the change as “in response to the ASA Statement on P-values and Statistical Significance and subsequent The American Statistician special issue on statistical inference” (ASA I and II, in my abbreviation). …
    Found 6 hours, 42 minutes ago on D. G. Mayo's blog
  2.
    The dispute between defenders and opponents of extended cognition (EC) has come to a dead end, as no agreement could be found on what the mark of the cognitive is. Recently, many authors have therefore pursued a different strategy: they focus on the notion of constitution rather than the notion of cognition to determine whether constituents of cognitive phenomena can be external to the brain. One common strategy is to apply the new mechanists’ mutual manipulability account (MM). In this paper, I will analyze whether this strategy can be successful, focusing on David Kaplan’s (2012) version of it. It will turn out that MM alone is insufficient for answering the question of whether EC is true. What I call the Challenge of Trivial Extendedness arises because mechanisms for cognitive behaviors are extended in ways that nobody would want to count as cases of EC. I will argue that this challenge can be met by adding a further necessary condition: cognitive constituents of mechanisms satisfy MM and are what I call behavior-unspecific.
    Found 9 hours, 15 minutes ago on PhilPapers
  3.
    It has seemed, to many, that there is an important connection between the ways in which some theoretical posits explain our observations, and our reasons for being ontologically committed to those posits. One way to spell out this connection is in terms of what has become known as the explanatory criterion of ontological commitment. This is, roughly, the view that we ought to posit only those entities that are indispensable to our best explanations. The motivation for a criterion such as this is clear: it aims to rule out commitment to ‘ontologically dubious’ entities—entities such as undetectable fairies at the bottom of one’s garden. The explanatory criterion is sometimes framed as a fairly strong thesis: that we ought (epistemically) to posit all and only those entities that are indispensable to the best available explanations of our observations.
    Found 1 day, 1 hour ago on PhilPapers
  4.
    According to a now widely discussed analysis by Itamar Pitowsky, the theoretical problems of QT originate in two ‘dogmas’: the first forbids the use of the notion of measurement in the fundamental axioms of the theory; the second imposes an interpretation of the quantum state as representing a system’s objectively possessed properties and evolution. In this paper I argue that, contrary to Pitowsky’s analysis, depriving the quantum state of its ontological commitment is not sufficient to solve the conceptual issues that affect the foundations of QT.
    Found 1 day, 1 hour ago on PhilSci Archive
  5.
    Convergent and divergent thought are promoted as key constructs of creativity. Convergent thought is defined and measured in terms of the ability to perform on tasks where there is one correct solution, and divergent thought is defined and measured in terms of the ability to generate multiple solutions. However, these characterizations of convergent and divergent thought present inconsistencies and do not capture the reiterative processing, or ‘honing’, of an idea that characterizes creative cognition. Research on formal models of concepts and their interactions suggests that different creative outputs may be projections of the same underlying idea at different phases of a honing process. This leads us to redefine convergent thought as thought in which the relevant concepts are considered from conventional contexts, and divergent thought as thought in which they are considered from unconventional contexts. Implications for the assessment of creativity are discussed.
    Found 1 day, 4 hours ago on Liane Gabora's site
  6.
    Although Darwinian models are rampant in the social sciences, social scientists do not face the problem that motivated Darwin’s theory of natural selection: the problem of explaining how lineages evolve despite the fact that any traits they acquire are regularly discarded at the end of the lifetime of the individuals that acquired them. While the rationale for framing culture as an evolutionary process is correct, it does not follow that culture is a Darwinian or selectionist process, or that population genetics provides viable starting points for modeling cultural change. This paper lays out step-by-step arguments as to why a selectionist approach to cultural evolution is inappropriate, focusing on the lack of randomness and the lack of a self-assembly code. It summarizes an alternative evolutionary approach to culture: self-other reorganization via context-driven actualization of potential.
    Found 1 day, 4 hours ago on Liane Gabora's site
  7.
    The paper discusses two contemporary views about the foundation of statistical mechanics and deterministic probabilities in physics: one that regards a measure on the initial macro-region of the universe as a probability measure that is part of the Humean best system of laws (Mentaculus) and another that relates it to the concept of typicality. The first view is tied to Lewis’ Principal Principle, the second to a version of Cournot’s principle. We will defend the typicality view and address open questions about typicality and the status of typicality measures.
    Found 1 day, 10 hours ago on Dustin Lazarovici's site
  8.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
    Found 1 day, 17 hours ago on PhilPapers
  9.
    In his 1961 paper, “Irreversibility and Heat Generation in the Computing Process,” Rolf Landauer speculated that there exists a fundamental link between heat generation in computing devices and the computational logic in use. According to Landauer, this heating effect is the result of a connection between the logic of computation and the fundamental laws of thermodynamics. The minimum heat generated by computation, he argued, is fixed by rules independent of its physical implementation. The limits are fixed by the logic and are the same no matter the hardware, or the way in which the logic is implemented. His analysis became the foundation for both a new literature, termed “the thermodynamics of computation” by Charles Bennett, and a new physical law, Landauer’s principle.
    Found 2 days, 1 hour ago on John Norton's site
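    As a quick numerical gloss on the Landauer bound discussed in the entry above (my own back-of-the-envelope sketch, not drawn from Norton’s paper), the claimed minimum dissipation of k_B·T·ln 2 per erased bit can be evaluated directly:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the current SI)
T = 300.0           # room temperature in kelvin (illustrative choice)

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2) of heat.
landauer_limit_per_bit = K_B * T * math.log(2)
print(f"Landauer limit at {T} K: {landauer_limit_per_bit:.3e} J per bit")
# ~2.87e-21 J per bit at 300 K

# For scale: the minimum heat to erase one gigabyte (8e9 bits) of memory.
bits_in_gigabyte = 8e9
print(f"Erasing 1 GB: {landauer_limit_per_bit * bits_in_gigabyte:.3e} J minimum")
# ~2.3e-11 J -- many orders of magnitude below what real hardware dissipates.
```

    The arithmetic only shows how small the bound is; whether it really is fixed by the logic independently of the hardware is exactly what the entry above is about.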
  10.
    Radical Embodied Cognitive Science (REC) tries to understand as much cognition as it can without positing contentful mental entities. Thus, in one prominent formulation, REC claims that content is involved neither in visual perception nor in any more elementary form of cognition. Arguments for REC tend to rely heavily on considerations of ontological parsimony, with authors frequently pointing to the difficulty of explaining content in naturalistically acceptable terms. However, many classic concerns about the difficulty of naturalizing content likewise threaten the credentials of intentionality, which even advocates of REC take to be a fundamental feature of cognition. In particular, concerns about the explanatory role of content and about indeterminacy can be run on accounts of intentionality as well. Issues about explanation can be avoided, intriguingly if uncomfortably, by dramatically reconceptualizing or even renouncing the idea that intentionality can explain. As for indeterminacy, Daniel Hutto and Erik Myin point the way toward a response, appropriating an idea from Ruth Millikan. I take it a step further, arguing that attention to the ways that beliefs’ effects on behavior are modulated by background beliefs can help illuminate the facts that underlie their intentionality and content.
    Found 2 days, 10 hours ago on PhilSci Archive
  11.
    Let me begin with an admission: I am neither a Neo-Kantian myself nor a historian of philosophy. I became aware of Cassirer’s work through my search for precedents for the kind of structural realism that Ladyman was developing, as captured in the slogan ‘The world is structure’. As is now well-known, this differs from Worrall’s form of structural realism in that the latter maintains ‘All that we know is structure’ and in his early writings, Worrall followed Poincaré in his insistence that the nature of the world, beyond this structure, was unknown to us. Subsequently he adopted a kind of agnosticism with regard to this ‘nature’ but in that earlier form we find certain Kantian resonances, which is not surprising of course, given its ancestry in Poincaré’s work. One might initially think that the neo-Kantian would find Ladyman’s collapse of ‘nature’ into ‘structure’ to be unfortunate but, of course, if one takes ‘the world’ of the realist to be the phenomenal world, with the noumena taken negatively and not regarded as the world of determinate but unknowable objects, in the way that Worrall conceives of it (and here I recognise that I am stepping into a minefield!) then there may not be such a chasm between these two views as might at first appear.
    Found 2 days, 10 hours ago on PhilSci Archive
  12.
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We argue that this type of autonomous social machine has provided a new paradigm for the design of intelligent systems, marking a new phase in the field of AI. The consequences of this observation range from the methodological and philosophical to the ethical. On the one hand, it emphasises the role of Human-Computer Interaction in the design of intelligent systems; on the other, it draws attention to the risks both for individual human beings and for a society relying on mechanisms that are not necessarily controllable. The difficulty companies face in regulating the spread of misinformation, and the difficulty authorities face in protecting task-workers managed by a software infrastructure, may be just some of the effects of this technological paradigm.
    Found 2 days, 13 hours ago on PhilPapers
  13.
    Children acquire complex concepts like DOG earlier than simple concepts like BROWN, even though our best neuroscientific theories suggest that learning the former is harder than learning the latter and, thus, should take more time (Werning 2010). This is the Complex-First Paradox. We present a novel solution to the Complex-First Paradox. Our solution builds on a generalization of Xu and Tenenbaum’s (2007) Bayesian model of word learning. By focusing on a rational theory of concept learning, we show that it is easier to infer the meaning of complex concepts than that of simple concepts.
    Found 3 days, 2 hours ago on PhilSci Archive
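    A toy illustration of the Bayesian mechanism gestured at in the entry above (my own sketch of the Xu-and-Tenenbaum-style “size principle”, not the authors’ actual model; the hypothesis names and extension sizes are invented): hypotheses with smaller extensions assign higher likelihood to consistent examples, so a few observations can favour a specific concept like DOG over a broad one like BROWN THING.

```python
# Toy Bayesian concept learner illustrating the "size principle":
# the likelihood of n examples drawn from a hypothesis h is (1/|h|)^n,
# so specific (small-extension) hypotheses win quickly once data are consistent.
# Hypothesis names, sizes, and priors are made up for illustration.

hypotheses = {
    "DOG":         {"size": 10,   "prior": 0.2},   # few things are dogs
    "ANIMAL":      {"size": 100,  "prior": 0.3},
    "BROWN THING": {"size": 1000, "prior": 0.5},   # very many things are brown
}

def posterior(n_examples: int) -> dict:
    """Posterior over hypotheses after n examples consistent with all of them."""
    unnormalized = {
        name: h["prior"] * (1.0 / h["size"]) ** n_examples
        for name, h in hypotheses.items()
    }
    z = sum(unnormalized.values())
    return {name: p / z for name, p in unnormalized.items()}

for n in (1, 3):
    print(n, {name: round(p, 4) for name, p in posterior(n).items()})
# After 3 examples the small-extension hypothesis DOG dominates,
# even though its prior was not the largest.
```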
  14.
    Although interest in emergence has grown in recent years, there does not seem to be consensus on whether it is a non-trivial, interesting notion and whether the concept of reduction is relevant to its characterization. Another key issue is whether emergence should be understood as an epistemic notion or whether there is a plausible ontological concept of emergence. The aim of this work is to propose an epistemic notion of contextual emergence on the basis of which one may tackle those issues.
    Found 3 days, 2 hours ago on PhilSci Archive
  15.
    From the point of view of cognitive development, the present paper by Bart Geurts is highly relevant, welcome and timely. It speaks to a fundamental puzzle in developmental pragmatics that was once recognized as such, was later considered resolved by many researchers, but may now return with its full puzzling force.
    Found 3 days, 7 hours ago on Hannes Rakoczy's site
  16.
    On two blog posts of Jerry Coyne: A few months ago, I got to know Jerry Coyne, the recently-retired biologist at the University of Chicago who writes the blog “Why Evolution Is True.” The interaction started when Jerry put up a bemused post about my thoughts on predictability and free will, and I pointed out that if he wanted to engage me on those topics, there was more to go on than an 8-minute YouTube video. …
    Found 6 days, 7 hours ago on Scott Aaronson's blog
  17.
    Multiple realisation prompts the question: how is it that multiple systems all exhibit the same phenomena despite their different underlying properties? In this paper I develop a framework for addressing that question and argue that multiple realisation can be reductively explained. I defend this position by applying the framework to a simple example – the multiple realisation of electrical conductors. I go on to compare my position to those advocated in Polger & Shapiro (2016), Batterman (2018), and Sober (1999). Contra these respective authors I claim that multiple realisation is commonplace, that it can be explained, but that it requires a sui generis reductive explanatory strategy. As such, multiple realisation poses a non-trivial challenge to reduction which can, nonetheless, be met.
    Found 6 days, 10 hours ago on PhilSci Archive
  18.
    In this chapter I urge a fresh look at the problem of explaining equilibration. The process of equilibration, I argue, is best seen, not as part of the subject matter of thermodynamics, but as a presupposition of thermodynamics. Further, the relevant tension between the macroscopic phenomena of equilibration and the underlying microdynamics lies not in a tension between time-reversal invariance of the microdynamics and the temporal asymmetry of equilibration, but in a tension between preservation of distinguishability of states at the level of microphysics and the continual effacing of the past at the macroscopic level. This suggests an open systems approach, where the puzzling question is not the erasure of the past, but the question of how reliable prediction, given only macroscopic data, is ever possible at all. I suggest that the answer lies in an approach that has not been afforded sufficient attention in the philosophical literature, namely, one based on the temporal asymmetry of causal explanation.
    Found 6 days, 10 hours ago on PhilSci Archive
  19.
    Lyons’s (2003, 2018) axiological realism holds that science pursues true theories. I object that despite its name, it is a variant of scientific antirealism, and is susceptible to all the problems with scientific antirealism. Lyons (2003, 2018) also advances a variant of surrealism as an alternative to the realist explanation for success. I object that it does not give rise to understanding because it is an ad hoc explanans and because it gives a conditional explanation. Lyons might use axiological realism to account for the success of a theory. I object that some alternative axiological explanations are better than the axiological realist explanation, and that the axiological realist explanation is teleological. Finally, I argue that Putnam’s realist position is more elegant than Lyons’s.
    Found 6 days, 10 hours ago on PhilSci Archive
  20.
    For many years, national and international science organizations have recommended the inclusion of philosophy, history, and ethics courses in science curricula at universities. Chemists may rightly ask: What is that good for? Don’t primary and secondary school provide enough general education? Do they want us back to an antiquated form of higher education? Or do they want us to learn some “soft skills” that can at best improve our eloquence at the dinner table but are entirely useless in our chemical work? … Do not take what has been taught to you to be the edifice of science; take it only as a provisional state in the course of the ongoing research process of which your work is meant to become a part. Next, let’s see what kind of philosophy, history, and ethics is needed for chemical research, and what not.
    Found 6 days, 21 hours ago on Joachim Schummer's site
  21.
    Work on chance has, for some time, focused on the normative nature of chance: the way in which objective chances constrain what partial beliefs, or credences, we ought to have. On my account, an agent is an expert if and only if their credences are maximally accurate; they are an analyst expert with respect to a body of evidence if and only if their credences are maximally accurate conditional on that body of evidence. I argue that the chances are maximally accurate conditional on local, intrinsic information. This matches nicely with a requirement that Schaffer (2003, 2007) places on chances, called at different times (and in different forms) the Stable Chance Principle and the Intrinsicness Requirement. I call my account the Accuracy-Stability account. I then show how the Accuracy-Stability account underlies some arguments for the New Principle, and show how it revives a version of Van Fraassen’s calibrationist approach. But two new problems arise: first, the Accuracy-Stability account risks collapsing into simple frequentism, which is a bad view. I argue that the same reasoning which motivates the Stability requirement motivates a continuity requirement, which avoids at least some of the problems of frequentism. I conclude by considering an argument from Briggs (2009) that Humean chances aren’t fit to be analyst experts; I argue that the Accuracy-Stability account overcomes Briggs’ difficulties.
    Found 6 days, 23 hours ago on Michael Townsen Hicks's site
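    To unpack “maximally accurate” in a concrete, if simplified, way (my own illustration, not Hicks’s formalism): under a proper scoring rule such as the Brier score, the credence that minimizes expected inaccuracy with respect to a chance p is p itself.

```python
import numpy as np

def expected_brier(credence: float, chance: float) -> float:
    """Expected Brier score of a credence for a binary event with the given chance."""
    # With probability `chance` the event occurs (outcome 1), else it does not (outcome 0).
    return chance * (1 - credence) ** 2 + (1 - chance) * credence ** 2

chance = 0.3
credences = np.linspace(0, 1, 101)
scores = [expected_brier(q, chance) for q in credences]
best = credences[int(np.argmin(scores))]
print(f"Chance = {chance}; expected-Brier-minimizing credence = {best:.2f}")
# The minimizer coincides with the chance itself -- a simplified version of the
# sense in which chances can serve as accuracy experts for our credences.
```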
  22.
    Humeans are often accused of positing laws which fail to explain or which are involved in explanatory circularity. Here, I will argue that these arguments are confused, but not because of anything to do with Humeanism: rather, they rest on false assumptions about causal explanation. I’ll show how these arguments can be neatly sidestepped if one takes on two plausible commitments which are motivated independently of Humeanism: first, that laws don’t directly feature in scientific explanation (a view defended recently by Ruben (1990) and Skow (2016)), and second, that explanation is contrastive. After outlining and motivating these views, I show how they bear on explanation-based arguments against Humeanism.
    Found 6 days, 23 hours ago on Michael Townsen Hicks's site
  23.
    We propose a new account of calibration according to which calibrating a technique shows that the technique does what it is supposed to do. To motivate our account, we examine an early 20th century debate about chlorophyll chemistry and Mikhail Tswett’s use of chromatographic adsorption analysis to study it. We argue that Tswett’s experiments established that his technique was reliable in the special case of chlorophyll without relying on either a theory or a standard calibration experiment. We suggest that Tswett broke the Experimenters’ Regress by appealing to material facts in the common ground for chemists at the time.
    Found 1 week ago on PhilSci Archive
  24.
    Lyons (2016, 2017, 2018) formulates Laudan’s (1981) historical objection to scientific realism as a modus tollens. I present a better formulation of Laudan’s objection, and then argue that Lyons’s formulation is supererogatory. Lyons rejects scientific realism (Putnam, 1975) on the grounds that some successful past theories were (completely) false. I reply that scientific realism is not the categorical hypothesis that all successful scientific theories are (approximately) true, but rather the statistical hypothesis that most successful scientific theories are (approximately) true. Lyons rejects selectivism (Kitcher, 1993; Psillos, 1999) on the grounds that some working assumptions were (completely) false in the history of science. I reply that selectivists would say not that all working assumptions are (approximately) true, but rather that most working assumptions are (approximately) true.
    Found 1 week ago on PhilSci Archive
  25.
    In 2012, CERN scientists announced the discovery of the Higgs boson, claiming their experimental results finally achieved the 5σ criterion for statistical significance. Although particle physicists apply especially stringent standards for statistical significance, their use of “classical” (rather than Bayesian) statistics is not unusual at all. Classical hypothesis testing—a hybrid of techniques developed by Fisher, Neyman and Pearson—remains the dominant form of statistical analysis, and p-values and statistical power are often used to quantify evidential strength.
    Found 1 week ago on PhilSci Archive
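    For reference, the conversion between the 5σ convention mentioned above and a p-value is a one-line calculation (a sketch assuming the usual one-sided Gaussian-tail convention used in particle physics):

```python
from scipy.stats import norm

# One-sided tail probability of a standard normal beyond z standard deviations.
for z in (2, 3, 5):
    p = norm.sf(z)  # survival function = 1 - CDF
    print(f"{z} sigma -> one-sided p-value = {p:.2e}")
# 5 sigma corresponds to p ~ 2.9e-7, far more stringent than the p < 0.05
# threshold conventional in many other fields.
```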
  26.
    The study of psychological and cognitive mechanisms is an interdisciplinary endeavor, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this paper, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means adopting deliberate metaphysical positions when studying mechanisms that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, determine which variables to manipulate under experimental conditions, and which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since they can easily be led astray if invoked carelessly. On the other hand, if we are cautious in the application of our metaphysical commitments, and careful with the inferences we draw from them, then they can provide new insights into how we might find connections between models and theories of mechanisms that appear incompatible.
    Found 1 week ago on PhilSci Archive
  27.
    It is well known that there is a freedom-of-choice loophole or superdeterminism loophole in Bell’s theorem. Since no experiment can completely rule out the possibility of superdeterminism, it seems that a local hidden variable theory consistent with relativity can never be excluded. In this paper, we present a new analysis of local hidden variable theories. The key is to notice that a local hidden variable theory assumes the universality of the Schrödinger equation, and it permits that a measurement can in principle be undone, in the sense that the wave function of the composite system after the measurement can be restored to the initial state. We propose a variant of the EPR-Bohm experiment with reset operations that can undo measurements. We find that according to quantum mechanics, when Alice’s measurement is undone after she obtained her result, the correlation between the results of Alice’s and Bob’s measurements depends on the time order of these measurements, which may be spacelike separated. Since a local hidden variable theory consistent with relativity requires that relativistically non-invariant relations such as the time order of space-like separated events have no physical significance, this result means that a local hidden variable theory cannot explain the correlation and reproduce all predictions of quantum mechanics even when assuming superdeterminism. This closes the major superdeterminism loophole in Bell’s theorem.
    Found 1 week ago on PhilSci Archive
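    As background to “reproduce all predictions of quantum mechanics”: the standard singlet-state correlations that violate the CHSH bound of 2 can be computed directly. This is a sketch of the textbook EPR-Bohm setup only, not of the paper’s proposed reset-operation variant.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state |psi> = (|01> - |10>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(angle):
    """Spin measurement operator along a direction at `angle` in the x-z plane."""
    return np.cos(angle) * sz + np.sin(angle) * sx

def correlation(a, b):
    """E(a, b) = <psi| A(a) (x) B(b) |psi>, which equals -cos(a - b) for the singlet."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH settings giving the maximal quantum violation.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)
print(f"|S| = {abs(S):.3f}  (quantum maximum 2*sqrt(2) ~ 2.828; local bound 2)")
```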
  28.
    We compare and contrast two distinct approaches to understanding the Born rule in de Broglie-Bohm pilot-wave theory, one based on dynamical relaxation over time (advocated by this author and collaborators) and the other based on typicality of initial conditions (advocated by the ‘Bohmian mechanics’ school). It is argued that the latter approach is inherently circular and physically misguided. The typicality approach has engendered a deep-seated confusion between contingent and law-like features, leading to misleading claims not only about the Born rule but also about the nature of the wave function. By artificially restricting the theory to equilibrium, the typicality approach has led to further misunderstandings concerning the status of the uncertainty principle, the role of quantum measurement theory, and the kinematics of the theory (including the status of Galilean and Lorentz invariance). The restriction to equilibrium has also made an erroneously-constructed stochastic model of particle creation appear more plausible than it actually is. To avoid needless controversy, we advocate a modest ‘empirical approach’ to the foundations of statistical mechanics. We argue that the existence or otherwise of quantum nonequilibrium in our world is an empirical question to be settled by experiment.
    Found 1 week ago on PhilSci Archive
  29.
    Extended cognition occurs when cognitive processes extend beyond the brain and nervous system of the subject and come to properly include such ‘external’ devices as technology. This paper explores what relevance extended cognitive processes might have for humility, and especially for the specifically cognitive aspect of humility—viz., intellectual humility. As regards humility in general, it is argued that there are no in-principle barriers to extended cognitive processes helping to enable the development and manifestation of this character trait, but that there may be limitations to the extent to which one’s manifestation of humility can be dependent upon these processes, at least insofar as we follow orthodoxy and treat humility as a virtue. As regards the cognitive trait of intellectual humility in particular, the question becomes whether this can itself be an extended cognitive process. It is argued that this wouldn’t be a plausible conception of intellectual humility, at least insofar as we treat intellectual humility (like humility in general) as a virtue.
    Found 1 week ago on Duncan Pritchard's site
  30.
    Brian Haig, Professor Emeritus, Department of Psychology, University of Canterbury, Christchurch, New Zealand. The American Statistical Association’s (ASA) recent effort to advise the statistical and scientific communities on how they should think about statistics in research is ambitious in scope. …
    Found 1 week ago on D. G. Mayo's blog