The allegation that P-values overstate the evidence against the null hypothesis continues to be taken as gospel in discussions of significance tests. All such discussions, however, assume a notion of “evidence” that is at odds with significance tests: generally, Bayesian posterior probabilities of the sort used in the Jeffreys-Lindley disagreement (whether of the default variety or the “I’m selecting from an urn of nulls” variety). …
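As a numerical sketch of the Jeffreys-Lindley disagreement at issue (my own illustration, not the author’s; the normal-mean setup, the prior scale `tau`, and the cutoff z = 1.96 are all assumptions of the sketch): for a result fixed at p ≈ 0.05, the Bayes factor in favour of the null grows without bound as the sample size grows.

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def lindley_bayes_factor(n, z=1.96, sigma=1.0, tau=1.0):
    """Bayes factor BF01 for H0: theta = 0 against H1: theta ~ N(0, tau^2),
    when the sample mean sits exactly z standard errors from zero
    (a fixed two-sided p-value of about 0.05 for z = 1.96)."""
    xbar = z * sigma / math.sqrt(n)
    marginal_h0 = normal_pdf(xbar, 0.0, sigma ** 2 / n)
    marginal_h1 = normal_pdf(xbar, 0.0, sigma ** 2 / n + tau ** 2)
    return marginal_h0 / marginal_h1

# The same p ~ 0.05 result tells increasingly in favour of H0 as n grows:
for n in (10, 1000, 100000):
    print(n, lindley_bayes_factor(n))
```

On this sketch BF01 is below 1 at n = 10 but far above 1 at n = 100000: the fixed-p result that a significance tester reads as evidence against the null is, on this Bayesian reckoning, strong evidence for it.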
My primary aim is to defend a nonreductive solution to the problem of action. I argue that when you are performing an overt bodily action, you are playing an irreducible causal role in bringing about, sustaining, and controlling the movements of your body, a causal role best understood as an instance of agent causation. Thus, the solution that I defend employs a notion of agent causation, though emphatically not, as with most theories of agent causation, in defence of an account of free will. Rather, I argue that the notion of agent causation introduced here best explains how it is that you are making your body move during an action, thereby providing a satisfactory solution to the problem of action.
Constructive empiricism is the version of scientific anti-realism
promulgated by Bas van Fraassen in his famous book The Scientific
Image (1980). Van Fraassen defines the view as follows:
Science aims to give us theories which are empirically adequate; and
acceptance of a theory involves as belief only that it is empirically
adequate. (1980, 12)
With his doctrine of constructive empiricism, van Fraassen is widely
credited with rehabilitating scientific anti-realism. There has been a
contentious debate within the philosophy of science community over
whether constructive empiricism is true or false.
[Editor's Note: The following new entry by Helen De Cruz replaces the
former entry on this topic by the previous author.] The relationship between religion and science is the subject of
continued debate in philosophy and theology. To what extent are
religion and science compatible? Are religious beliefs sometimes
conducive to science, or do they inevitably pose obstacles to
scientific inquiry? The interdisciplinary field of “science and
religion”, also called “theology and science”, aims
to answer these and other questions. It studies historical and
contemporary interactions between these fields, and provides
philosophical analyses of how they interrelate.
The classification of the sciences is one of the most discussed and analysed aspects of Peirce’s corpus of work. I propose that Peirce’s attempt at systematising the sciences is characterised by a distinctive historicity, which I construe in two complementary senses. First, I investigate Peirce’s classification as part of a broader nineteenth-century move toward classifying the sciences, a move motivated at once by social and epistemological goals. I claim that this re-contextualisation adds an entirely new layer to the otherwise distinctively internalist readings of Peirce’s classification. I then look at how Peirce’s scheme, especially in the form it displayed in the early 1890s, relates to his own historical writings, particularly his history of science. Looking at Peirce as a historical actor in his own right through the lens of his classification, I claim, is indispensable for understanding the contemporary relevance of his contributions to the history and historiography of the sciences.
Biologist Steve Jones claims that a piece of research cannot be science if the person who did the research does not communicate their findings. He then dismisses Fermat’s proof of his last theorem as something that Fermat might as well not have done. I give reasons to reject the argument Jones offers for his communication requirement, the requirement itself and what he says about Fermat’s last theorem.
It is well known that the invocation of ‘equilibrium processes’ in thermodynamics is oxymoronic. However, their prevalence and utility, particularly in elementary accounts, present a problem. We consider a way in which their role can be played by curves carrying the property of accessibility. We also examine the vexed question of whether equilibrium processes can be considered to be reversible, and the revision of this property in relation to curves of accessibility.
There are two reasons for asking such an apparently unanswerable question. First, Max Born’s recollections of what Minkowski had told him about his research on the physical meaning of the Lorentz transformations, and the fact that Minkowski had created the full-blown four-dimensional mathematical formalism of spacetime physics before the end of 1907 (which would have been highly improbable if Minkowski had not been developing his own ideas), both indicate that Minkowski might have arrived at the notion of spacetime independently of Poincaré (who saw it as nothing more than a mathematical space) and at a deeper understanding of the basic ideas of special relativity (which Einstein merely postulated) independently of Einstein. So, had he lived longer, Minkowski might have successfully extended his program of regarding four-dimensional physics as spacetime geometry to gravitation as well. Moreover, Hilbert (Minkowski’s closest colleague and friend) derived the equations of general relativity simultaneously with Einstein. Second, even if Einstein had arrived at what is today called Einstein’s general relativity before Minkowski, Minkowski would certainly have reformulated it in terms of his program of geometrizing physics, and might have represented gravitation fully as the manifestation of the non-Euclidean geometry of spacetime (Einstein regarded the geometrical representation of gravitation as pure mathematics), exactly as he reformulated Einstein’s special relativity in terms of spacetime.
Necessitarianism, dispositionalism, and dynamical laws
Posted on Saturday, 14 Jan 2017
Necessitarian and dispositionalist accounts of laws of nature have
a well-known problem with "global" laws like the conservation of
energy, for these laws don't seem to arise from the dispositions of
individual objects, nor from necessary connections between fundamental
properties. …
How should we explain ‘what it is like’ to perceive colour? One of the reasons why naïve realist theories of colour are interesting is that they promise to contribute towards a solution to the problem of consciousness. …
Computer programs are particular kinds of texts. It is therefore
natural to ask what the meaning of a program is or, more generally,
how we can set up a formal semantic account of a programming
language. There are many possible answers to such questions, each
motivated by some particular aspect of programs. So, for instance,
the fact that programs are to be executed on some kind of computing
machine gives rise to operational semantics, whereas the
similarities of programming languages with the formal languages of
mathematical logic have motivated the denotational approach, which
interprets programs and their constituents by means of
set-theoretical models.
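A toy illustration of the contrast just drawn (my own sketch, not the entry’s): a minimal language of numerals and addition, given both a small-step operational semantics, which reduces expressions step by step as a machine would, and a denotational semantics, which maps each expression directly to the number it denotes.

```python
# Abstract syntax for a tiny language:  e ::= n | e1 + e2
# Expressions are tuples: ("num", n) or ("add", e1, e2).

def denote(e):
    """Denotational semantics: map each expression to the number it denotes."""
    if e[0] == "num":
        return e[1]
    if e[0] == "add":
        return denote(e[1]) + denote(e[2])
    raise ValueError(f"unknown expression: {e!r}")

def step(e):
    """Small-step operational semantics: perform one machine reduction,
    or return None when the expression is already a value."""
    if e[0] == "num":
        return None
    left, right = e[1], e[2]
    if left[0] != "num":
        return ("add", step(left), right)
    if right[0] != "num":
        return ("add", left, step(right))
    return ("num", left[1] + right[1])

def run(e):
    """Execute an expression to a value by repeated reduction steps."""
    while (nxt := step(e)) is not None:
        e = nxt
    return e[1]

expr = ("add", ("num", 1), ("add", ("num", 2), ("num", 3)))
print(denote(expr), run(expr))  # 6 6
```

For this tiny language the two semantics agree on every program; establishing such agreement (adequacy) for a realistic language is one of the central tasks of formal semantics.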
Explanation is a central concept in human psychology. Drawing upon philosophical theories of explanation, psychologists have recently begun to examine the relationship between explanation, probability, and causality. Our study advances this growing literature at the intersection of psychology and philosophy of science by systematically investigating how judgments of explanatory power are affected by (i) the prior credibility of a potential explanation, (ii) the causal framing used to describe the explanation, (iii) the generalizability of the explanation, and (iv) its statistical relevance for the evidence. Collectively, the results of our five experiments support the hypothesis that the prior credibility of a causal explanation plays a central role in explanatory reasoning: first, because of the presence of strong main effects on judgments of explanatory power, and second, because of the gate-keeping role it has for other factors. Highly credible explanations were not susceptible to causal framing effects. Instead, they were sensitive to the effects of factors which are usually considered relevant from a normative point of view: the generalizability of an explanation, and its statistical relevance for the evidence. These results advance the current literature in the philosophy and psychology of explanation in three ways. First, they yield a more nuanced understanding of the determinants of judgments of explanatory power, and of the interaction between these factors. Second, they illuminate the close relationship between prior beliefs and explanatory power. Third, they clarify the relationship between abductive and probabilistic reasoning.
Xenocrates (of Chalcedon, a city on the Asian side of the Bosporus
opposite Byzantium, according to Diogenes Laertius (D.L.) iv 14),
became head of the Academy after Speusippus died, in 339/338
(“in the second year of the 110th Olympiad”). D.L. says he
held that position for twenty-five years, and died at 82. So his dates
work out to 396/395–314/313. On the death of Plato, when Speusippus became head of the Academy,
Xenocrates and Aristotle may have left Athens together at the
invitation of Hermeias of Atarneus (see Strabo XIII 57, printed in
Gaiser 1988, 380–381, discussed at 384–385), and
Xenocrates returned to succeed Speusippus.
A small probability space representation of quantum mechanical probabilities is defined as a collection of Kolmogorovian probability spaces, each of which is associated with a context of a maximal set of compatible measurements, that portrays quantum probabilities as Kolmogorovian probabilities of classical events. Bell’s theorem is stated and analyzed in terms of the small probability space formalism.
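A standard illustration of what is at stake here (my own sketch, not the paper’s formalism): the CHSH form of Bell’s theorem. Any single Kolmogorovian probability space covering all four measurement settings at once bounds the CHSH combination by 2, whereas the quantum singlet-state correlations E(a, b) = -cos(a - b) yield 2√2.

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along directions
    at angles a and b: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH measurement settings (angles in radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

# A single classical (Kolmogorovian) space for all four observables
# would force this combination to be at most 2; the quantum value exceeds it.
S = abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))
print(S)  # ≈ 2.828 (= 2 * sqrt(2))
```

The small-probability-space move described in the abstract responds to exactly this: each measurement context gets its own classical space, so no single space need accommodate all four correlations simultaneously.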
Inspired by possible connections between gravity and foundational questions in quantum theory, we consider an approach for adapting objective collapse models to a general relativistic context. We apply these ideas to a list of open problems in cosmology and quantum gravity, such as the emergence of the seeds of cosmic structure, the black hole information issue, the problem of time in quantum gravity and, in a more speculative manner, the nature of dark energy and the origin of the very special initial state of the universe. We conclude that objective collapse models offer a rather promising path to dealing with all of these issues.
Both advocates and critics of experimental philosophy often describe it in narrow terms as being the empirical study of people’s intuitions about philosophical cases. This conception corresponds with a narrow origin story for the field—it grew out of a dissatisfaction with the uncritical use of philosophers’ own intuitions as evidence for philosophical claims. In contrast, a growing number of experimental philosophers have explicitly embraced a broad conception of the sub-discipline, which treats it as simply the use of empirical methods to inform philosophical problems. And this conception has a corresponding broad origin story—the field grew out of a recognition that philosophers often make empirical claims and that empirical claims call for empirical support. In this paper, I argue that the broad conception should be accepted, offering support for the broad origin story.
It is a noticeable feature of intellectual life that many people research the same topics, but do so using different conceptual and disciplinary baggage, and consequently fail to appreciate how the conclusions they reach echo or complement the conclusions reached by others. …
Suppose a Newtonian universe in which an elastic and perfectly round ball is dropped. At some point in time, the surface of the ball will no longer be spherical. If an object is F at one time and not F at another, while existing all the while, then at least normally the object changes in respect of being F. I am not claiming that that is what change in respect of F is (as I said recently in a comment, I think there is more to change than that), but only that normally this is a necessary and sufficient condition for it. …
The second main claim made by the naïve realist is that colours are distinct from the physical properties of objects. In saying that colours are distinct from the physical properties of objects, the naïve realist is not necessarily saying that they are ‘perfectly simple’ properties whose nature cannot be described further; indeed, on the face of it this is inconsistent with the claim, outlined in yesterday’s post, that colours are mind-independent properties. …
Roger White has drawn my attention to an interesting problem, having to do with what to believe in a situation in which you have evidence that the world is infinite. I will build up to the situation in stages.
According to radical versions of embodied cognition, human cognition and agency should be explained without the ascription of representational mental states. According to a standard reply, accounts of embodied cognition can explain only instances of cognition and agency that are not “representation-hungry”. Two main types of such representation-hungry phenomena have been discussed: cognition about “the absent” and about “the abstract”. Proponents of representationalism have maintained that a satisfactory account of such phenomena requires the ascription of mental representations. Opponents have denied this. I will argue that there is another important representation-hungry phenomenon that has been overlooked in this debate: temporally extended planning agency. In particular, I will argue that it is very difficult to see how planning agency can be explained without the ascription of mental representations, even if we grant, for the sake of argument, that cognition about the absent and abstract can. We will see that this is a serious challenge for the radical as well as the more modest anti-representationalist versions of embodied cognition, and we will see that modest anti-representationalism is an unstable position.
It is remarkably difficult to describe any aspect of Gottfried Leibniz’s metaphysical system in a way that is completely uncontroversial. Interpreters disagree widely, even about the most basic Leibnizian doctrines. One reason for these disagreements is the fact that Leibniz characterizes central elements of his system in multiple different ways, often without telling us how to reconcile these different accounts. Leibniz’s descriptions of the most fundamental entities in his ontology are a case in point, and they will be the focus of this paper. Even if we look only at texts from the monadological or mature period—that is, the period starting in the mid-1690s—we find Leibniz portraying the inhabitants of the metaphysical ground floor in at least three different ways. In some places, he describes them as mind-like, immaterial substances that perceive and strive, or possess perceptions and appetitions—analogous in many ways to Cartesian souls. Elsewhere, he presents them as hylomorphic compounds, each consisting of primary matter and a substantial form. In yet other passages, he characterizes them in terms of primitive and derivative forces.
This contribution explains several “roads to self-awareness”, all of them based on the natural sciences. The first one follows our bio-psychological evolution. The second road starts with the engineer’s point of view and mainly builds on information science and technology, in particular robotics. The third road taken is the most abstract; it exploits complex dynamic systems and their emergent properties.
Participants evaluated whether emotions expressed in facial displays by the self and by a stranger were responses to particular emotion-eliciting photos or not. Performance for the self was superior to that for a stranger when the paired eliciting stimuli produced different emotions (e.g. sad vs. cute), but not when they produced the same emotion (e.g. both amusing), supporting a “common code” account rather than a memory account.
Peter Adamson, host of History of Philosophy Without Any Gaps, recently posted twenty "Rules for the History of Philosophy". Mostly, they are terrific rules. I want to quibble with one. Like almost every historian of philosophy I know, Adamson recommends that we be "charitable" to the text. …
According to the naïve realist, colours are mind-independent properties of objects that are distinct from their physical properties. In today’s post I outline the argument for the first part of the view: the claim that colours are mind-independent. …
Network analysis is increasingly used to discover and represent the organization of complex systems. Focusing on examples from neuroscience in particular, I argue that whether network models explain, how they explain, and how much they explain cannot be answered for network models generally but must be answered by specifying an explanandum, by addressing how the model is applied to the system, and by specifying which kinds of relations count as explanatory.
The thesis of physical supervenience (PS) is widely understood and endorsed as the weakest assertion that all facts are tethered to the physical facts. Here I entertain a weaker tethering relation, stochastic physical supervenience (SPS), the possibility of which is suggested by analogy with the apparent failure of causal determinism (CD) in certain areas of physical science. Puzzling over this possibility helps to clarify the commitments of and the motivations for accepting the PS thesis.
Around the turn of the twenty-first century, what has come to be
called the new mechanical philosophy (or, for brevity,
the new mechanism) emerged as a framework for thinking about
the philosophical assumptions underlying many areas of science,
especially in sciences such as biology, neuroscience, and
psychology. In this entry, we introduce and summarize the distinctive
features of this framework, and we discuss how it addresses a range of
classic issues in the philosophy of science, including explanation,
metaphysics, the relations between scientific disciplines, and the
process of scientific discovery.
I examine explanations’ realist commitments in relation to dynamical systems theory. First I rebut an ‘explanatory indispensability argument’ for mathematical realism from the explanatory power of phase spaces (Lyon and Colyvan 2007). Then I critically consider a possible way of strengthening the indispensability argument by reference to attractors in dynamical systems theory. The take-home message is that understanding of the modal character of explanations (in dynamical systems theory) can undermine platonist arguments from explanatory indispensability.