What if your brain could talk to you? ‘That’s a silly question’, I hear you say, ‘My brain already talks to me.’
To the best of our current knowledge, the mind is the brain, and the mind is always talking. …
Creativity is the production of things that are novel and valuable (whether physical artefacts, actions, or ideas). Humans are unique in the extent of their creativity, which plays a central role in innovation and problem solving, as well as in the arts. But what are the cognitive sources of novelty? More particularly, what are the cognitive sources of stochasticity in creative production? I will argue that they belong to two broad categories. One is associative, enabling the selection of goal-relevant ideas that have become activated by happenstance in an unrelated context. The other relies on selection processes that leverage stochastic fluctuations in neural activity. While the components appealed to in these accounts are well established, the ways in which I combine them are new.
Loop quantum gravity has formalized a robust scheme for resolving classical singularities in a variety of symmetry-reduced models of gravity. In this essay, we demonstrate that the same quantum correction which is crucial for singularity resolution is also responsible for the phenomenon of signature change in these models, whereby one effectively transitions from a ‘fuzzy’ Euclidean space to a Lorentzian space-time in deep quantum regimes. As long as one uses a quantization scheme which respects covariance, holonomy corrections from loop quantum gravity generically lead to non-singular signature change, thereby giving an emergent notion of time in the theory. The robustness of this mechanism is established by comparison across a large class of midisuperspace models and by allowing for diverse quantization ambiguities. Conceptual and mathematical consequences of such an underlying quantum-deformed space-time are briefly discussed.
Harvey Brown’s Physical Relativity defends a view, the dynamical perspective, on the nature of spacetime that goes beyond the familiar dichotomy of substantivalist/relationist views. A full defense of this view requires attention to the way that our use of spacetime concepts connects with the physical world. Reflection on such matters, I argue, reveals that the dynamical perspective affords the only possible view about the ontological status of spacetime, in that putative rivals fail to express anything, either true or false. I conclude with remarks aimed at clarifying what is and isn’t in dispute with regard to the explanatory priority of spacetime and dynamics, at countering an objection raised by John Norton to views of this sort, and at clarifying the relation between background and effective spacetime structure.
In his paper, ‘Regarding the “Hole Argument”’, Weatherall suggests that models of general relativity related by a hole diffeomorphism must be regarded as being physically equivalent. At a later stage in the paper, however, he also argues that there is a sense in which two such models may be regarded as being empirically distinct—a fortiori physically distinct. We attempt to delineate the logic behind these two prima facie contradictory claims. We argue that the latter sense rests upon a misunderstanding of the import of shift arguments in the foundations of spacetime theories.
According to Michael Friedman’s theory of explanation, a law X explains laws Y1, Y2, ..., Yn precisely when X unifies the Y’s, where unification is understood in terms of reducing the number of independently acceptable laws. Philip Kitcher criticized Friedman’s theory but did not analyze the concept of independent acceptability. Here we show that Kitcher’s objection can be met by modifying an element in Friedman’s account. In addition, we argue that there are serious objections to the use that Friedman makes of the concept of independent acceptability.
There are good reasons to believe that the classical structure of space-time, as it appears in general relativity, breaks down at small length scales of the order of the Planck scale. This poses a problem in particular for any theory of quantum gravity, which should extend to such short length scales. Assuming that the classical concept of space-time (described as a manifold) is no longer viable as a fundamental concept in such a theory, one needs to explain how it emerges as an approximate concept in the appropriate (long distance) limit.
This paper attempts to reconcile critics and defenders of inclusive fitness by constructing a synthesis that does justice to the insights of both. I argue that criticisms of the regression-based version of Hamilton’s rule, although they undermine its use for predictive purposes, do not undermine its use as an organizing framework for social evolution research. I argue that the assumptions underlying the concept of inclusive fitness, conceived as a causal property of an individual organism, are unlikely to be exactly true in real populations, but they are approximately true given a specific type of weak selection that Hamilton took, on independent grounds, to be responsible for the cumulative assembly of complex adaptation. Finally, I reflect on the uses and limitations of “design thinking” in social evolution research.
In a series of recent papers, two of which appeared in this journal, a group of philosophers, physicists, and climate scientists have argued that something they call the ‘hawkmoth effect’ poses insurmountable difficulties for those who would use nonlinear models, including climate simulation models, to make quantitative predictions or to produce ‘decision-relevant probabilities.’ Such a claim, if it were true, would undermine much of climate science, among other things. Here, we examine the two lines of argument the group has used to support their claims. The first comes from a set of results in dynamical systems theory associated with the concept of ‘structural stability.’ The second relies on a mathematical demonstration of their own, using the logistic equation, that they present using a hypothetical scenario involving two apprentices of Laplace’s omniscient demon. We prove two theorems that are relevant to their claims, and conclude that both of these lines of argument fail. There is nothing out there that comes close to matching the characteristics this group attributes to the ‘hawkmoth effect.’
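The logistic equation at the center of the apprentices' demonstration is x_{n+1} = r x_n (1 − x_n). The abstract does not reproduce their construction, so the sketch below, with assumed parameter values, only illustrates the contrast at issue: a tiny initial-condition error (the familiar butterfly effect) versus a tiny structural error in the model equation itself (the alleged hawkmoth effect).

```python
# Minimal sketch, not the authors' own demonstration: compare how a tiny
# initial-condition error and a tiny structural (parameter) error grow
# under the logistic map x -> r*x*(1-x). All numbers are assumed values.

def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def max_divergence(traj_a, traj_b):
    """Largest pointwise gap between two trajectories."""
    return max(abs(a - b) for a, b in zip(traj_a, traj_b))

base = logistic_trajectory(4.0, 0.2, 100)

# Initial-condition error: same model, starting point perturbed by 1e-9.
ic_err = max_divergence(base, logistic_trajectory(4.0, 0.2 + 1e-9, 100))

# Structural error: parameter r perturbed by 0.001, same starting point.
st_err = max_divergence(base, logistic_trajectory(3.999, 0.2, 100))

print(ic_err, st_err)  # both gaps grow to order 1 in the chaotic regime
```

In the chaotic regime (r near 4) both perturbations are amplified until the trajectories decorrelate, which is why the dispute turns on whether structural error behaves relevantly differently from initial-condition error, not on whether either grows.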
On a very intuitive way of thinking, if it is already determined that some event will happen, then there is no non-trivial chance (no chance between 0 and 1) of it failing to happen, and if it is already determined that some event will not happen, then there is no non-trivial chance of it happening. On this way of thinking, it does not make sense to claim both that it is already determined that Always Dreaming will win this year’s Kentucky Derby and that the chance of Classic Empire winning instead is 1/2.
We reflect on the information paradigm in quantum and gravitational physics and on how it may assist us in approaching quantum gravity. We begin by arguing, using a reconstruction of its formalism, that quantum theory can be regarded as a universal framework governing an observer’s acquisition of information from physical systems taken as information carriers. We continue by observing that the structure of spacetime is encoded in the communication relations among observers and more generally the information flow in spacetime. Combining these insights with an information-theoretic Machian view, we argue that the quantum architecture of spacetime can operationally be viewed as a locally finite network of degrees of freedom exchanging information. An advantage – and simultaneous limitation – of an informational perspective is its quasi-universality, i.e. quasi-independence of the precise physical incarnation of the underlying degrees of freedom. This suggests exploiting these informational insights to develop a largely microphysics-independent top-down approach to quantum gravity to complement extant bottom-up approaches by closing the scale gap between the unknown Planck scale physics and the familiar physics of quantum (field) theory and general relativity systematically from two sides. While some ideas have been pronounced before in similar guise and others are speculative, the way they are strung together and justified is new and supports approaches attempting to derive emergent spacetime structures from correlations of quantum degrees of freedom.
According to the Fine-Tuning Argument (FTA), the existence of life in our universe confirms the Multiverse Hypothesis (HM). A standard objection to FTA is that it violates the Requirement of Total Evidence (RTE). I argue that RTE should be rejected in favor of the Predesignation Requirement, according to which, in assessing the outcome of a probabilistic process, we should only use evidence characterizable in a manner available prior to observing the outcome. This produces the right verdicts in some simple cases in which RTE leads us astray; and, when applied to FTA, it shows that our evidence does confirm HM.
I’m visiting the University of Genoa and talking to two category theorists: Marco Grandis and Giuseppe Rosolini. Grandis works on algebraic topology and higher categories, while Rosolini works on the categorical semantics of programming languages. …
In this paper, I will argue that metaphysicians ought to utilize quantum theories of gravity (QG) as incubators for a future metaphysics. In §2, I will explain why this ought to be done. In §3, I will present case studies from the history of science where physical theories have challenged both the dogmatic and speculative metaphysician. In §4, I will present two theories of QG and demonstrate the challenge they pose to certain aspects of our current metaphysics; in particular, how they challenge our understanding of the abstract-concrete distinction. The central goal of this paper is to encourage metaphysicians to look to physical theories, especially those involving cosmology such as string theory and loop quantum gravity, when doing metaphysics.
Kant saw science as presupposing that the natural laws bring maximal diversity under maximal unity. Many philosophers, such as David Lewis, have regarded objective chances as upshots of science’s aim at systematic unity—as ideal credences projected onto the world. This Kantian projectivism has seemed the only possible way to account for the rational constraint (codified by the ‘Principal Principle’) that our credences about chances impose on our credences regarding what they are chances of. This paper examines three ways of elaborating Lewis’s Kantian strategy for explaining this rational constraint. After arguing that none of these three approaches is unproblematic, the paper proposes a non-Kantian alternative account according to which a chance measures the strength of a causal tendency.
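The Principal Principle mentioned in the abstract is standardly rendered, following Lewis, in roughly the schematic form below; the exact formulation, including the admissibility clause on background evidence, varies across the literature, so this rendering is an assumption for illustration.

```latex
% One common schematic form of the Principal Principle (formulations vary):
% C is a reasonable initial credence function, ch_t(A) the objective chance
% of A at time t, and E any admissible background evidence.
\[
  C\bigl(A \,\big|\, \mathrm{ch}_t(A) = x \wedge E\bigr) \;=\; x .
\]
```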
In this paper, I identify two general positions with respect to the relationship between environment and natural selection. These positions consist in claiming that selective claims need and, respectively, need not be relativized to homogeneous environments. I then show that adopting one or the other position makes a difference with respect to the way in which the effects of selection are to be measured in certain cases in which the focal population is distributed over heterogeneous environments. Moreover, I show that these two positions lead to two different interpretations – the Pricean and contextualist ones – of a type of selection scenario in which multiple groups varying in properties affect the change in the metapopulation mean of individual-level traits. Showing that these two interpretations stem from different attitudes towards environmental homogeneity allows me to argue: a) that, unlike the Pricean interpretation, the contextualist interpretation can only claim that drift or selection is responsible for the change in frequency of the focal trait in a given metapopulation if details about whether or not group formation is random are specified; b) that the traditional main objection against the Pricean interpretation – consisting in arguing that the latter takes certain side-effects of individual selection to be effects of group selection – is unconvincing. This leads me to suggest that the ongoing debate about which of the two interpretations is preferable should concentrate on different issues than previously thought.
The Enhanced Indispensability Argument (EIA) appeals to the existence of Mathematical Explanations of Physical Phenomena (MEPPs) to justify mathematical Platonism, following the principle of Inference to the Best Explanation. In this paper, I examine one example of a MEPP —the explanation of the 13-year and 17-year life cycle of magicicadas— and argue that this case cannot be used to justify mathematical Platonism. I then generalize my analysis of the cicada case to other MEPPs, and show that these explanations rely on what I will call ‘optimal representations’, which are representations that capture all that is relevant to explain a physical phenomenon at a specified level of description. In the end, because the role of mathematics in MEPPs is ultimately representational, they cannot be used to support mathematical Platonism. I finish the paper by addressing the claim, advanced by many EIA defenders, that quantification over mathematical objects results in explanations that have more theoretical virtues, especially that they are more general and modally stronger than alternative explanations. I will show that the EIA cannot be successfully defended by appealing to these notions.
I blogged this exactly 2 years ago here, seeking insight for my new book (Mayo 2017). Over 100 (rather varied) interesting comments ensued. This is the first time I’m incorporating blog comments into published work. …
Does perceptual consciousness require cognitive access? Ned Block argues that it does not. Central to his case are visual memory experiments that employ post-stimulus cueing—in particular, Sperling’s classic partial report studies, change-detection work by Lamme and colleagues, and a recent paper by Bronfman and colleagues that exploits our perception of ‘gist’ properties. We argue contra Block that these experiments do not support his claim. Our reinterpretations differ from previous critics’ in challenging as well a longstanding and common view of visual memory as involving declining capacity across a series of stores. We conclude by discussing the relation of probabilistic perceptual representations and phenomenal consciousness.
The so-called Many Worlds Interpretation of quantum mechanics has been extant now for nearly sixty years, beginning as H. Everett III’s doctoral dissertation in 1957, with further contributions by B. DeWitt and N. Graham in their 1973 book, The Many Worlds Interpretation of Quantum Mechanics. The Everett approach takes quantum mechanics both realistically and as a stand-alone, autonomous theory of the world, not in need of a separate theory of measurement to bridge the apparent gap between the deterministic evolution of the wave-function in a highly abstract, probabilistic space and empirically observable statistics in the laboratory. Instead, Everett proposed that all of the apparently contradictory macroscopic results assigned some finite probability by the theory are equally real, coexisting in distinct sets of relative states. DeWitt and others later identified these clusters of mutually consistent relative states with distinct and co-existing worlds or branches of the world.
Aristotle's theory of nature offered a number of advantages from a Christian point of view. It allowed for a profound difference between human beings and other material entities based on a distinction between rationality and sub-rationality, which fit nicely with the Biblical conception of humans as the unique bearers of the divine image in the physical world. At the same time, Aristotelianism conceived of human desires and aspirations as continuous with the striving of all natural entities to their essence-determined ends, providing an objective and scientific basis for objective norms in ethics, aesthetics, and politics. The Scientific Revolution of the last three hundred years, while clearly enabling an amazing degree of progress in our understanding of the physical basis of the world (both at the very small and very large ends of the scale), occasioned the unnecessary loss of many metaphysical insights of Aristotle and the Aristotelian tradition, insights which remain essential to the understanding of middle-sized objects, like human beings. The quantum revolution of the last one hundred years has gradually transformed the imaginative landscape of natural science, creating new opportunities for the recovery of those same Aristotelian themes.
In a series of at least ten books and articles over the last twenty-two years, Timothy O’Connor and his collaborators have developed one of the most rigorous, subtle, and influential accounts of the relation between mind and body, which for present purposes we can call ‘emergent individualism’. My own work has been shaped and enriched by this body of work. Consequently, the critique I offer here is a decidedly friendly one, intended to advance our understanding of the mind while building on the contributions of O’Connor and his co-authors (Wong, Churchill, Theiner, and Jacobs).
[Editor's Note: The following new entry by Thomas Nickles replaces the former entry
on this topic by the previous authors.] Many scientists, philosophers, and laypersons have regarded science as
the one human enterprise that successfully escapes the contingencies
of history to establish eternal truths about the universe, via a
special, rational method of inquiry. Historicists oppose this view. In
the 1960s several historically informed philosophers of science
challenged the then-dominant accounts of scientific method advanced by
the Popperians and the positivists (the logical positivists and
logical empiricists) for failing to fit historical scientific practice
and failing particularly to account for deep scientific change.
E.S. Pearson (11 August 1895 – 12 June 1980)
E.S. Pearson died on this day in 1980. Pearson was interested in philosophical aspects of statistical inference. A question he asked is this: Are methods with good error probabilities of use mainly to supply procedures which will not err too frequently in some long run? …
The instability of meaning under theory change, construed as instability of reference, requires a dense meaning space. That is, the space of referents is densely populated, so that a slight change in theory can shift the referent of a term to another nearby in the space. If, however, the meaning space is sparse, there are no suitable referents nearby in the space. The referents of terms can remain unchanged, even with substantial changes in the theory, simply because there are no better referents to which to attach the terms. Dense and sparse meaning spaces are associated with antirealist and realist inclinations, respectively.
This paper is about a question that many readers will think has already been settled: are there different sizes of infinity? That is, are there infinite sets of different sizes? This is one of the most natural questions that one can ask about the infinite. But it is of course generally taken to be settled by mathematical results, such as Cantor’s theorem, to the effect that there are infinite sets without bijections between them. An answer to our question is entailed by these results (which I of course do not dispute), given the following almost universally accepted principle relating size to the existence of functions.
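The results and the bridge principle the abstract alludes to can be stated schematically as below; this is a hedged reconstruction, since the paper's exact formulation of the "almost universally accepted principle" may differ.

```latex
% Cantor's theorem: no set admits a bijection with its own power set.
\[
  \text{For every set } A:\quad
  \neg\,\exists\, f\colon A \to \mathcal{P}(A)
  \ \text{such that } f \text{ is a bijection.}
\]
% The bridge principle relating size to the existence of functions
% (one common formulation; the paper's version may differ):
\[
  |A| = |B| \;\iff\; \exists\,\text{a bijection } f\colon A \to B .
\]
```

Given the bridge principle, Cantor's theorem immediately yields infinite sets of different sizes, which is why the paper's question turns on the status of that principle rather than on the mathematics itself.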
Why does Mary learn something when she leaves the room? One answer, endorsed by some physicalists as well as most dualists, is as follows. Mary learns something because phenomenal knowledge requires direct acquaintance with phenomenal properties. For this reason, there is an epistemic gap between the physical and the phenomenal: phenomenal facts cannot be deduced from physical facts. This is the acquaintance response to the Knowledge Argument. The physicalist and dualist versions of the acquaintance response diverge as to whether this epistemic gap reveals an ontological gap between the physical and the phenomenal.
Standard physicalism about consciousness faces a well-known problem. We cannot understand how soggy grey matter should necessitate technicolor phenomenology. In fact, we can easily conceive of “Zombie cases” and “altered qualia cases” where the facts about consciousness vary independently of the physical facts. Call this the conceivability problem. This suggests dualism. But dualism about consciousness has its own well-known problem: it is a decidedly uneconomical view of the world. Call this the complexity problem.
We propose a coherence account of the conjunction fallacy applicable to both of its two paradigms (the M-A paradigm and the A-B paradigm). We compare our account with a recent proposal by Tentori, Crupi and Russo (2013) that attempts to generalize earlier confirmation accounts. Their model works better than its predecessors in some respects, but it exhibits only a shallow form of generality and is unsatisfactory in other ways as well: it is strained, complex, and untestable as it stands. Our coherence account inherits the strength of the confirmation account, but in addition to being applicable to both paradigms, it is natural, simple, and readily testable. It thus constitutes the next natural step for Bayesian theorizing about the conjunction fallacy.
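The probabilistic core of the fallacy, common to both paradigms, is that a conjunction is judged more probable than one of its conjuncts, violating P(A ∧ B) ≤ P(A). A toy check with assumed numbers (not data or a model from the paper):

```python
# The conjunction rule of probability: P(A and B) <= P(A).
# The fallacy consists in judgments that violate this bound.

def conjunction_bound_respected(p_a, p_a_and_b):
    """Return True iff the judged probabilities respect P(A & B) <= P(A)."""
    return p_a_and_b <= p_a

# 'Linda'-style toy judgments (assumed numbers): 'bank teller' (A) judged
# less probable than 'feminist bank teller' (A & B) -- the fallacy.
judged_p_teller = 0.05
judged_p_feminist_teller = 0.20

print(conjunction_bound_respected(judged_p_teller, judged_p_feminist_teller))  # False
```

Confirmation and coherence accounts differ over *why* such judgments occur, not over the fact that they violate this bound.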
[Editor's Note: The following new entry by Peter Taylor
and Richard Lewontin replaces the former entry
on this topic by Richard Lewontin.] The predominant current-day meaning of genotype is some
relevant part of the DNA passed to the organism by its parents. The
phenotype is the physical and behavioral traits of the
organism, for example, size and shape, metabolic activities, and
patterns of movement. The distinction between them is
especially important in evolutionary theory, where the survival and
mating of organisms depends on their traits, but it is the DNA, held
to be unaffected by the development of the traits over the life
course, that is transmitted to the next generation.