1.
    [Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada, on 24 May 2019. …
    Found 1 day, 15 hours ago on John Danaher's blog
  2.
    One of the central philosophical debates prompted by general relativity concerns the status of the metric field. A number of philosophers have argued that the metric field should no longer be regarded as part of the background arena in which physical fields evolve; it should be regarded as a physical field itself. Earman and Norton write, for example, that the metric tensor in general relativity ‘incorporates the gravitational field and thus, like other physical fields, carries energy and momentum’. Indeed, they baldly claim that according to general relativity ‘geometric structures, such as the metric tensor, are clearly physical fields in spacetime’. On such a view, spacetime itself—considered independently of matter—has no metrical properties, and the mathematical object that best represents spacetime is a bare topological manifold. As Rovelli puts the idea: ‘the metric/gravitational field has acquired most, if not all, the attributes that have characterized matter (as opposed to spacetime) from Descartes to Feynman...
    Found 3 days, 5 hours ago on PhilSci Archive
  3.
    Fuchs and Peres (2000) claimed that standard Quantum Mechanics needs no interpretation. In this essay, I show the flaws of the arguments presented in support of this thesis. Specifically, it will be claimed that the authors conflate QM with Quantum Bayesianism (QBism), the most prominent subjective formulation of quantum theory; thus, they endorse a specific interpretation of the quantum formalism. Secondly, I will explain the main reasons for which QBism should not be considered a physical theory, since it is concerned exclusively with agents’ beliefs and silent about the physics of the quantum regime. Consequently, the solutions to the quantum puzzles provided by this approach cannot be satisfactory from a physical perspective. Thirdly, I evaluate Fuchs and Peres’s arguments contra the non-standard interpretations of QM, showing again the fragility of their claims. Finally, the importance of interpretational work in the context of quantum theory will be stressed.
    Found 4 days, 13 hours ago on PhilSci Archive
  4.
    Within the context of the Quine-Putnam indispensability argument, one discussion about the status of mathematics is concerned with the ‘Enhanced Indispensability Argument’, which makes explicit in what way mathematics is supposed to be indispensable in science, namely explanatory. If there are genuine mathematical explanations of empirical phenomena, an argument for mathematical platonism could be extracted by using inference to the best explanation. The best explanation of the primeness of the life cycles of Periodical Cicadas is genuinely mathematical, according to Baker (2005, 2009). Furthermore, the result is then also used to strengthen the platonist position (e.g. Baker 2017a). We pick up the circularity problem brought up by Leng (2005) and Bangu (2008). We will argue that Baker’s attempt to solve this problem fails, if Hume’s Principle is analytic. We will also provide the opponent of the Enhanced Indispensability Argument with the so-called ‘interpretability strategy’, which can be used to come up with alternative explanations in case Hume’s Principle is non-analytic.
    Found 4 days, 13 hours ago on PhilSci Archive
  5.
    I am delighted to announce the next symposium in our series on articles from Neuroscience of Consciousness. Neuroscience of Consciousness is an interdisciplinary journal focused on the philosophy and science of consciousness, and gladly accepts submissions from both philosophers and scientists working in this fascinating field. We have two types of symposia. …
    Found 6 days, 2 hours ago on The Brains Blog
  6.
    The combination of panpsychism and priority monism leads to priority cosmopsychism, the view that the consciousness of individual sentient creatures is derivative of an underlying cosmic consciousness. It has been suggested that contemporary priority cosmopsychism parallels central ideas in the Advaita Vedānta tradition. The paper offers a critical evaluation of this claim. It argues that the Advaitic account of consciousness cannot be characterized as an instance of priority cosmopsychism, points out the differences between the two views, and suggests an alternative positioning of the Advaitic canon within the contemporary debate on monism and panpsychism.
    Found 1 week, 2 days ago on PhilPapers
  7.
    Curiously, people assign less punishment to a person who attempts and fails to harm somebody if their intended victim happens to suffer the harm for coincidental reasons. This “blame blocking” effect provides important evidence in support of the two-process model of moral judgment (Cushman, 2008). Yet, recent proposals suggest that it might be due to an unintended interpretation of the dependent measure in cases of coincidental harm (Prochownik, 2017; also Malle, Guglielmo, & Monroe, 2014). If so, this would deprive the two-process model of an important source of empirical support. We report and discuss results that speak against this alternative account.
    Found 1 week, 3 days ago on Fiery Cushman's site
  8.
    The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
    Found 1 week, 3 days ago on PhilPapers
  9.
    We argue that comparative psychologists have been too quick to jump to metacognitive interpretations of their data. We examine two such cases in some detail. One concerns so-called “uncertainty monitoring” behavior, which we show to be better explained in terms of first-order estimates of risk. The other concerns informational search, which we argue is better explained in terms of a first-order curiosity-like motivation that directs questions at the environment.
    Found 1 week, 3 days ago on Peter Carruthers's site
  10.
    Paul Busch has emphasized on various occasions the importance for physics of going beyond a merely instrumentalist view of quantum mechanics. Even if we cannot be sure that any particular realist interpretation describes the world as it actually is, the investigation of possible realist interpretations helps us to develop new physical ideas and better intuitions about the nature of physical objects at the micro level. In this spirit, Paul Busch himself pioneered the concept of “unsharp quantum reality”, according to which there is an objective non-classical indeterminacy—a lack of sharpness—in the properties of individual quantum systems. We concur with Busch’s motivation for investigating realist interpretations of quantum mechanics and with his willingness to move away from classical intuitions. In this article we try to take some further steps on this road. In particular, we pay attention to a number of prima facie implausible and counter-intuitive aspects of realist interpretations of unitary quantum mechanics. We shall argue that from a realist viewpoint, quantum contextuality naturally leads to “perspectivalism” with respect to properties of spatially extended quantum systems, and that this perspectivalism is important for making relativistic covariance possible.
    Found 1 week, 5 days ago on PhilSci Archive
  11.
    Cultural evolutionary theory has been alternatively compared to a theory of forces, such as Newtonian mechanics, or the kinetic theory of gases. In this article, I clarify the scope and significance of these metatheoretical characterisations. First, I discuss the kinetic analogy, which has been recently put forward by Tim Lewens. According to it, cultural evolutionary theory is grounded on a bottom-up methodology, which highlights the additive effects of social learning biases on the emergence of large-scale cultural phenomena. Lewens supports this claim by arguing that it is a consequence of cultural evolutionists’ widespread commitment to population thinking. While I concur with Lewens that cultural evolutionists often actually conceive cultural change in aggregative terms, I think that the kinetic framework does not properly account for the explanatory import of population-level descriptions in cultural evolutionary theory. Starting from a criticism of Lewens’ interpretation of population thinking, I argue that the explanatory role of such descriptions is best understood within a dynamical framework – that is, a framework according to which cultural evolutionary theory is a theory of forces. After having spelled out the main features of this alternative interpretation, I elucidate in which respects it helps to outline a more accurate characterisation of the overarching structure of cultural evolutionary theory.
    Found 1 week, 5 days ago on PhilSci Archive
  12.
    Is it possible to introduce a small number of agents into an environment, in such a way that an equilibrium results in which almost everyone (including the original agents) cooperates almost all the time? This is a compelling question for those interested in the design of beneficial game-theoretic AI, and it may also provide insights into how to get human societies to function better. We investigate this broad question in the specific context of finitely repeated games, and obtain a mostly positive answer. Our main novel technical tool is the use of limited altruism (LA) types, which behave altruistically towards other LA agents but not towards selfish agents. The uncertainty about which type of agent one is facing turns out to be essential in establishing cooperation. We provide characterizations in several families of games of which LA types are effective for our purposes.
    Found 1 week, 5 days ago on Vincent Conitzer's site
  13.
    According to a conventional view, there exists no common cause model of quantum correlations satisfying locality requirements. Indeed, Bell’s inequality is derived from some locality requirements and the assumption that the common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argued that in the derivation of the inequality, the existence of a common common cause for multiple correlations is implicitly assumed and that the assumption is unreasonably strong. According to their idea, what is necessary for explaining the quantum correlation is a common cause for each correlation. However, Graßhoff et al. showed that when there are three pairs of perfectly correlated events and a common cause of each correlation exists, we cannot construct a common cause model that is consistent with quantum mechanical prediction and also meets several locality requirements. In this paper, first, as a consequence of the fact shown by Graßhoff et al., we will confirm that there exists no local common cause model when a two-particle system is in any maximally entangled state. After that, based on Hardy’s famous argument, we will prove that there exists no local common cause model when a two-particle system is in any non-maximally entangled state. Therefore, it will be concluded that for any entangled state, there exists no local common cause model. It will be revealed that the non-existence of a common cause model satisfying locality is not limited to a particular state like the singlet state.
    Found 2 weeks ago on PhilSci Archive
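    The locality constraint that drives results like this can be made concrete with the standard CHSH form of Bell’s inequality. The sketch below is not the paper’s construction (which concerns common-cause models and Hardy-type states); it only illustrates, by brute-force enumeration, why any local deterministic assignment of outcomes is bounded by 2 while quantum mechanics predicts up to 2√2.

```python
import itertools
import math

# Each side has two measurement settings; a local deterministic "common
# cause" fixes an outcome (+1 or -1) for every setting in advance.
# Enumerate all such assignments and maximize the CHSH combination
#   S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1).
best = max(
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in itertools.product([1, -1], repeat=4)
)
print(best)                 # local deterministic models never exceed 2
print(2 * math.sqrt(2))     # Tsirelson bound, attained by entangled states
```

Since every stochastic local model is a mixture of these deterministic ones, the bound S ≤ 2 holds for all of them, which is the inequality the abstract refers to.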
  14.
    Absolutism about mass within Newtonian Gravity claims that mass ratios obtain in virtue of absolute masses. Comparativism denies this. Defenders of comparativism promise to recover all the empirical and theoretical virtues of absolutism, but at a lower ‘metaphysical cost’. This paper develops a Machian form of comparativism about mass in Newtonian Gravity, obtained by replacing Newton’s constant in the law of Universal Gravitation by another constant divided by the sum over all masses. Although this form of comparativism is indeed empirically equivalent to the absolutist version of Newtonian Gravity—thereby meeting the challenge posed by the comparativist’s bucket argument—it is argued that the explanatory power and metaphysical parsimony of comparativism (and especially its Machian form) are highly questionable.
    Found 2 weeks ago on PhilSci Archive
  15.
    There’s been a lot of excitement about the new gene-editing tool CRISPR-Cas9. Discussion of the technology has largely focused on its precision, accuracy, customizability, and affordability. But the CRISPR-Cas system from which the technology was derived has a fascinating life of its own. The work of Eugene V. Koonin’s lab is mapping the rich histories of CRISPR-Cas systems in microbial populations. In “CRISPR: A New Principle of Genome Engineering Linked to Conceptual Shifts in Evolutionary Biology,” Koonin argues that fundamental research studying adaptive immune mechanisms has (among other things) illuminated “fundamental principles of genome manipulation.” I think Koonin’s discussion provides important philosophical insights for how we should understand the significance of CRISPR-Cas systems, and the technologies derived from them. Yet the analysis he provides is only part of a larger story that fully captures the biological significance that CRISPR-Cas systems represent. There is also a human element to the CRISPR-Cas story that concerns its development as a technology. Accounting for the human history of CRISPR-Cas reveals that the story Koonin provides requires greater nuance. I’ll show how CRISPR-Cas technologies are not “natural” genome editing systems but are partly artifacts of human ingenuity. Furthermore, I’ll argue that when it comes to the story of CRISPR-Cas, fundamental and applied research are importantly intertwined.
    Found 2 weeks ago on PhilSci Archive
  16.
    In research on action explanation, philosophers and developmental psychologists have recently proposed a teleological account according to which we typically don’t explain an agent’s action by appealing to her mental states but by referring to the objective, publicly accessible facts of the world that count in favor of performing the action. Advocates of the teleological account claim that this strategy is our main way of understanding people’s actions. I argue that common motivations mentioned to support the teleological account are insufficient to sustain its generalization from children to adults. Moreover, social psychological studies, combined with theoretical considerations, suggest that we do not explain actions mainly by invoking publicly accessible, reason-giving facts alone but by ascribing mental states to the agent. The point helps advance the theorizing on the teleological account and on the nature of action explanation.
    Found 2 weeks, 2 days ago on PhilPapers
  17.
    In this article, it is argued that, for a classical Hamiltonian system which is closed, the ergodic theorem emerges from the Gibbs-Liouville theorem in the limit that the system has evolved for an infinitely long period of time. In this limit, from the perspective of an ignorant observer, who does not have perfect knowledge about the complete set of degrees of freedom for the system, distinctions between the possible states of the system, i.e. the information content, are lost, leading to the notion of statistical equilibrium where states are assigned equal probabilities. Finally, by linking the concept of entropy, which gives a measure for the amount of uncertainty, with the concept of information, the second law of thermodynamics is expressed in terms of the tendency of an observer to lose information over time.
    Found 2 weeks, 2 days ago on PhilSci Archive
  18.
    In this article, it is argued that the Gibbs-Liouville theorem is a mathematical representation of the statement that closed classical systems evolve deterministically. From the perspective of an observer of the system, whose knowledge about the degrees of freedom of the system is complete, the statement of deterministic evolution is equivalent to the notion that the physical distinctions between the possible states of the system, or, in other words, the information possessed by the observer about the system, are never lost. Thus, it is proposed that the Gibbs-Liouville theorem is a statement about the dynamical evolution of a closed classical system valid in such situations where information about the system is conserved in time. Furthermore, in this article it is shown that the Hamilton equations and the Hamilton principle on phase space follow directly from the differential representation of the Gibbs-Liouville theorem, i.e. that the divergence of the Hamiltonian phase flow velocity vanishes. Thus, considering that the Lagrangian and Hamiltonian formulations of classical mechanics are related via the Legendre transformation, it follows that these two standard formulations are both logical consequences of the statement of deterministic evolution, or, equivalently, information conservation.
    Found 2 weeks, 2 days ago on PhilSci Archive
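    The divergence claim at the heart of this abstract can be checked in one line. With the phase-flow velocity given by Hamilton’s equations, $v = (\dot{q}, \dot{p}) = (\partial H/\partial p,\, -\partial H/\partial q)$, one has (for a single degree of freedom; the general case sums over all $q_i, p_i$ pairs):

```latex
\nabla \cdot v
  = \frac{\partial \dot{q}}{\partial q} + \frac{\partial \dot{p}}{\partial p}
  = \frac{\partial^2 H}{\partial q\,\partial p}
    - \frac{\partial^2 H}{\partial p\,\partial q}
  = 0 ,
```

which is the differential form of the Gibbs-Liouville theorem invoked above: phase-space volume is preserved along the flow.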
  19.
    The human being is a paradox. We, a result of evolution, have developed the theory of evolution. Namely, the evolutionary process, in an unprecedented attempt, has been thought by one of its products — the bootstrapping is in place: the explanandum nominates itself as the explanans. Yet, the concept of evolution is one thing, while evolution itself is another. Upfront, this is an attempt to rescue Bergson’s intuitions on heterogeneous continuity, his notion of multiplicity, so as to recover that which, being at the core of evolution, has been lost by our habitual ways of thinking about it.
    Found 2 weeks, 2 days ago on PhilSci Archive
  20.
    Character judgments play an important role in our everyday lives. However, decades of empirical research on trait attribution suggest that the cognitive processes that generate these judgments are prone to a number of biases and cognitive distortions. This gives rise to a skeptical worry about the epistemic foundations of everyday characterological beliefs that has deeply disturbing and alienating consequences. In this paper, I argue that these skeptical worries are misplaced: under the appropriate informational conditions, our everyday character-trait judgments are in fact quite trustworthy. I then propose a mindreading-based model of the socio-cognitive processes underlying trait attribution that explains both why these judgments are initially unreliable, and how they eventually become more accurate.
    Found 2 weeks, 2 days ago on PhilPapers
  21.
    With a few exceptions, the literature on evolutionary transitions in individuality (ETIs) has mostly focused on the relationships between lower-level (particle-level) and higher-level (collective-level) selection, leaving aside the question of the relationship between particle-level and collective-level inheritance. Yet, without an account of this relationship, our hope to fully understand the evolutionary mechanisms underlying ETIs is impeded. To that effect, I present a highly idealized model to study the relationship between particle-level and collective-level heritability both when a collective-level trait is a linear function and when it is a nonlinear function of a particle-level trait. I first show that when a collective trait is a linear function of a particle-level trait, collective-level heritability is a by-product of particle-level heritability. It is equal to particle-level heritability, whether the particles interact randomly or not to form collectives. Second, I show that one effect of population structure is the reduction in variance in offspring collective-level character for a given parental collective. I propose that this reduction in variance is one dimension of individuality. Third, I show that even in the simple case of a nonlinear collective-level character, collective-level heritability is not only weak but also highly dependent on the frequency of the different types of particles in the global population. Finally, I show that population structure, because one of its effects is to reduce the variance in offspring collective-level character, not only allows for an increase in collective-level character but also renders it less context dependent. This in turn permits a stable collective-level response to selection. The upshot is that population structure is a driver for ETIs. These results are particularly significant in that the relationship between population structure and collective-level heritability has, to my knowledge, not been previously explored in the context of ETIs.
    Found 2 weeks, 2 days ago on Pierrick Bourrat's site
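    The linear-trait result is easy to reproduce numerically. The sketch below is my own toy version, not Bourrat’s actual model: particle traits pass to offspring with fidelity 0.8, collectives are formed at random, and the collective trait is the mean of its members’ traits. The parent–offspring regression slope at the collective level then recovers the particle-level fidelity.

```python
import random

random.seed(0)

def make_collective(n, pool):
    """Form a collective by sampling n particle traits at random."""
    return [random.choice(pool) for _ in range(n)]

def collective_trait(members):
    """Linear collective-level character: the mean of particle traits."""
    return sum(members) / len(members)

def offspring(members, fidelity=0.8):
    """Particle-level inheritance: each trait is copied with noise chosen
    so that the trait variance is preserved across generations."""
    sd = (1 - fidelity ** 2) ** 0.5
    return [fidelity * m + random.gauss(0.0, sd) for m in members]

pool = [random.gauss(0.0, 1.0) for _ in range(10_000)]     # particle traits
parents = [make_collective(10, pool) for _ in range(2_000)]

xs = [collective_trait(p) for p in parents]                # parent trait
ys = [collective_trait(offspring(p)) for p in parents]     # offspring trait

# Parent-offspring regression slope ~ collective-level heritability;
# in the linear case it tracks the particle-level fidelity (0.8 here).
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))
```

With a nonlinear collective trait (e.g. the product of member traits), the same regression degrades and becomes frequency dependent, which is the contrast the abstract draws.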
  23.
    Cancer is a worldwide epidemic. It is the first or second leading cause of death before age 70 in ninety-one countries, as of 2015. According to the International Agency for Research on Cancer, “there will be an estimated 18.1 million new cancer cases and 9.6 million cancer deaths in 2018,” and cancer is expected to be the “leading cause of death in every country of the world in the 21st century” (Bray, et al., 2018). While overall cancer mortality has declined in the U.S. annually since 2005, progress has been slow in some cases, and mortality is rising in others. In particular, “death rates rose from 2010 to 2014 by almost 3% per year for liver cancer and by about 2% per year for uterine cancer,” and, “pancreatic cancer death rates continued to increase slightly (by 0.3% per year) in men” (Siegel, et al., 2017).
    Found 2 weeks, 4 days ago on Wes Morriston's site
  24.
    Agents make predictions based on similar past cases, while also learning the relative importance of various attributes in judging similarity. We ask whether the resulting "empirically optimal similarity function" (EOSF) is unique, and how easy it is to find it. We show that with many observations and few relevant variables, uniqueness holds. By contrast, when there are many variables relative to observations, non-uniqueness is the rule, and finding the EOSF is computationally hard. The results are interpreted as providing conditions under which rational agents who have access to the same observations are likely to converge on the same predictions, and conditions under which they may entertain different probabilistic beliefs.
    Found 2 weeks, 5 days ago on Itzhak Gilboa's site
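    A minimal sketch of the setup, under my own simplifying assumptions (a Gaussian similarity kernel, leave-one-out error as the empirical criterion, and a crude grid search; the paper itself works with abstract similarity functions): the "empirically optimal" attribute weights are those that best predict each past case from the remaining ones.

```python
import math

# Four past cases: ((attribute_1, attribute_2), outcome).  The outcome
# depends strongly on attribute 1 and only weakly on attribute 2.
cases = [((0.0, 0.0), 0.0), ((1.0, 0.0), 1.0),
         ((0.0, 1.0), 0.1), ((1.0, 1.0), 1.1)]

def similarity(a, b, w):
    """Gaussian similarity with per-attribute weights w."""
    return math.exp(-sum(wi * (ai - bi) ** 2
                         for wi, ai, bi in zip(w, a, b)))

def predict(x, data, w):
    """Similarity-weighted average of past outcomes."""
    sims = [(similarity(x, xi, w), yi) for xi, yi in data]
    return sum(s * y for s, y in sims) / sum(s for s, _ in sims)

def loo_error(w):
    """Leave-one-out squared prediction error of the weights w."""
    return sum((predict(x, [c for c in cases if c != (x, y)], w) - y) ** 2
               for x, y in cases)

# Grid search for the empirically optimal similarity function (EOSF).
grid = [0.1, 1.0, 10.0]
best = min(((w1, w2) for w1 in grid for w2 in grid), key=loo_error)
print(best)
```

Here the data single out weights that stress attribute 1 over attribute 2, so the EOSF is effectively unique; with many attributes and few cases, many weight vectors fit the data equally well, which is the non-uniqueness regime the abstract describes.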
  25.
    This paper briefly discusses some of David Bohm’s views on mind and matter and suggests that they allow for a stronger possibility for conscious free will to influence quantum dynamics than Henry Stapp’s approach.
    Found 2 weeks, 5 days ago on PhilPapers
  26.
    There is conflicting experimental evidence about whether the “stakes” or importance of being wrong affect judgments about whether a subject knows a proposition. To date, judgments about stakes effects on knowledge have been investigated using binary paradigms: responses to “low” stakes cases are compared with responses to “high stakes” cases. However, stakes or importance are not binary properties—they are scalar: whether a situation is “high” or “low” stakes is a matter of degree. So far, no experimental work has investigated the scalar nature of stakes effects on knowledge: do stakes effects increase as the stakes get higher? Do stakes effects only appear once a certain threshold of stakes has been crossed? Does the effect plateau at a certain point? To address these questions, we conducted experiments that probe for the scalarity of stakes effects using several experimental approaches. We found evidence of scalar stakes effects using an “evidence-seeking” experimental design, but no evidence of scalar effects using a traditional “evidence-fixed” experimental design. In addition, using the evidence-seeking design, we uncovered a large, but previously unnoticed framing effect on whether participants are skeptical about whether someone can know something, no matter how much evidence they have. The rate of skeptical responses and the rate at which participants were willing to attribute “lazy knowledge”—that someone can know something without having to check—were themselves subject to a stakes effect: participants were more skeptical when the stakes were higher, and more prone to attribute lazy knowledge when the stakes were lower. We argue that the novel skeptical stakes effect provides resources to respond to criticisms of the evidence-seeking approach that argue that it does not target knowledge.
    Found 2 weeks, 5 days ago on PhilPapers
  27.
    Metaphysical orthodoxy holds that a privileged minority of properties carve reality at its joints. These are the so-called fundamental properties. This thesis concerns the contemporary philosophical debate about the nature of fundamental properties. In particular, it aims to answer two questions: (1) What is the most adequate conception of fundamental properties? (2) What is the “big picture” world-view that emerges by adopting such a conception? I argue that a satisfactory answer to both questions requires us to embrace a novel conception of powerful qualities, according to which properties are at once dispositional and qualitative. By adopting the proposed conception of powerful qualities, an original theory of fundamental properties comes to light. I call it Dual-Aspect Account. In this thesis, I defend the Dual-Aspect Account and its superiority with respect to rival views of fundamental properties. I illustrate this claim by examining Dispositionalism, the view defended among others by Alexander Bird and Stephen Mumford, Categoricalism, which has been advocated notably by David Lewis and David Armstrong, and the Identity Theory of powerful qualities, primarily championed by C. B. Martin and John Heil. The latter is the standard conception of powerful qualities. However, in the literature, the Identity Theory faces the charge of contradiction. A preliminary task is therefore to show that a conception of powerful qualities is coherent. To accomplish this aim, I introduce the notion of an aspect of a property. On this interpretation, powerful qualities can be thought of as having dispositional and qualitative aspects. I show that such a conception allows us to disambiguate the claim that a property’s dispositionality is identical with its qualitativity, and evade the charge of contradiction. Aspects bring us other theoretical benefits. I illustrate this claim by showing how the Dual-Aspect Account offers us a promising theory of resemblance. 
I then compare its merits with David Armstrong’s theory of partial identity. The conclusion of this thesis is that the Dual-Aspect Account is better suited to capturing the world as we find it in everyday life and scientific investigation as compared to the theoretical positions examined.
    Found 2 weeks, 5 days ago on PhilPapers
  28.
    We identify several ongoing debates related to implicit measures, surveying prominent views and considerations in each. First, we summarize the debate regarding whether performance on implicit measures is explained by conscious or unconscious representations. Second, we discuss the cognitive structure of the operative constructs: are they associatively or propositionally structured? Third, we review debates about whether performance on implicit measures reflects traits or states. Fourth, we discuss the question of whether a person's performance on an implicit measure reflects characteristics of the person who is taking the test or characteristics of the situation in which the person is taking the test. Finally, we survey the debate about the relationship between implicit measures and (other kinds of) behavior.
    Found 2 weeks, 5 days ago on Alex Madva's site
  29.
    We’ve reached our last Tour (of SIST)*: Pragmatic and Error Statistical Bayesians (Excursion 6), marking the end of our reading with Souvenir Z, the final Souvenir, as well as the Farewell Keepsake in 6.7. …
    Found 3 weeks ago on D. G. Mayo's blog
  30.
    In this paper I claim that perceptual discriminatory skills rely on a suitable type of environment as an enabling condition for their exercise. This is because of the constitutive connection between environment and perceptual discriminatory skills, insofar as this connection is construed from an ecological approach. The exercise of a discriminatory skill yields knowledge of affordances of objects, properties, or events in the surrounding environment. This is practical knowledge in the first-person perspective. An organism learns to perceive an object by becoming sensitized to its affordances. I call this position ecological disjunctivism. A corollary of this position is that a case of perception and its corresponding case of hallucination—which is similar to the former only in some respects—are different in nature. I then show how the distinguishability problem is addressed by ecological disjunctivism.
    Found 3 weeks, 1 day ago on PhilPapers