Lakatos’s analysis of progress and degeneration in the Methodology of Scientific Research Programmes is well-known. Less well known, however, are his thoughts on degeneration in Proofs and Refutations. I propose and motivate two new criteria for degeneration based on the discussion in Proofs and Refutations – superfluity and authoritarianism. I show how these criteria augment the account in Methodology of Scientific Research Programmes, providing a generalized Lakatosian account of progress and degeneration. I then apply this generalized account to a key transition point in the history of entropy – the transition to an information-theoretic interpretation of entropy – by assessing Jaynes’s 1957 paper on information theory and statistical mechanics.
What is it to be a human person? Since the cognitive revolution half a century ago, the analytic philosophy of mind has interpreted the question as the mind-body problem: how are mental states that have cognitive or semantic content related to their concomitant brain states or causal neural processes? Let me call this the vertical problem. Functionalism seems to offer the most convincing account of this relationship: the mind is not the brain; the mind is what the brain does.
The present paper proposes a route to modal claims that allows us to infer to certain possibilities even if they are sensorily unimaginable and beyond the evidential capacity of stipulative imagining. After a brief introduction, Sect. 2 discusses imaginative resistance to help carve a niche for the kinds of inferences with which this essay is chiefly concerned. Section 3 provides three classic examples, along with a discussion of their similarities and differences. Section 4 recasts the notion of potential explanation in Lipton (Inference to the Best Explanation, Routledge, Abingdon, 2004) in order to accommodate inferences to possibility claims; Sect. 5 then attempts to characterise a principle underlying such inferences. Section 6 concludes by discussing how the proposal relates to other modal epistemologies, with emphasis on the potential of such inferences to produce genuinely new ideas.
We have various everyday measures for identifying the presence of consciousness, such as the capacity for verbal report and the intentional control of behaviour. However, there are many contexts in which these measures are difficult (if not impossible) to apply, and even when they can be applied one might have doubts as to their validity in determining the presence/absence of consciousness. Everyday measures for identifying consciousness are particularly problematic when it comes to ‘challenging cases’—human infants, people with brain damage, non-human animals, and AI systems. There is a pressing need to identify measures of consciousness that can be applied to challenging cases. This paper explores one of the most promising strategies for identifying and validating such measures—the natural kind strategy. The paper is in two broad parts. Part I introduces the natural kind strategy, and contrasts it with other influential approaches in the field. Part II considers a number of objections to the approach, arguing that none succeeds.
The structure of benzene is fascinating. Look at all these different attempts to depict it! Let me tell you a tiny bit of the history. In 1865, August Kekulé argued that benzene is a ring of carbon atoms with alternating single and double bonds. …
The kinds of real or natural kinds that support explanation and prediction in the social sciences are difficult to identify and track because they change through time, intersect with one another, and do not always exhibit their properties when one encounters them. As a result, conceptual practices directed at these kinds will often refer in ways that are partial, equivocal, or redundant. To improve this epistemic situation, it is important to employ open-ended classificatory concepts, to understand when different research programs are tracking the same real kind, and to maintain an ongoing commitment to interact causally with real kinds to focus reference on those kinds. A tempting view of these non-idealized epistemic conditions should be avoided: that they signal an ontological structure of the social world so plentiful that it would permit ameliorated (norm-driven, conceptually engineered) classificatory schemes to achieve their normative aims regardless of whether they defer (in ways to be described) to real-kind classificatory schemes. To ground these discussions, the essay appeals to an overlooked convergence in the systematic naturalistic frameworks of Richard Boyd and Ruth Millikan.
Recent years have seen growing interest in modifying interventionist accounts of causal explanation in order to characterise noncausal explanation. However, one surprising element of such accounts is that they have typically jettisoned the core feature of interventionism: interventions. Indeed, the prevailing opinion within the philosophy of science literature suggests that interventions exclusively demarcate causal relationships. This position is so prevalent that, until now, no one has even thought to name it. We call it “intervention puritanism”. In this paper, we mount the first sustained defence of the idea that there are distinctively noncausal explanations which can be characterized in terms of possible interventions; and thus, argue that I-puritanism is false. We call the resultant position “intervention liberalism” (I-liberalism, for short). While many have followed Woodward (Making Things Happen: A Theory of Causal Explanation, Oxford University Press, Oxford, 2003) in committing to I-puritanism, we trace support for I-liberalism back to the work of Kim (in: Kim (ed) Supervenience and mind, Cambridge University Press, Cambridge, 1974/1993). Furthermore, we analyse two recent sources of scepticism regarding I-liberalism: debate surrounding mechanistic constitution; and attempts to provide a monistic account of explanation. We show that neither literature provides compelling reasons for adopting I-puritanism. Finally, we present a novel taxonomy of available positions on the role of possible interventions in explanation: weak causal imperialism; strong causal imperialism; monist intervention puritanism; pluralist intervention puritanism; monist intervention liberalism; and finally, the specific position defended in this paper, pluralist intervention liberalism.
Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. …
This paper aims to clarify Merleau-Ponty’s contribution to an embodied-enactive account of mathematical cognition. I first identify the main points of interest in the current discussions of embodied higher cognition and explain how they relate to Merleau-Ponty and his sources, in particular Husserl’s late works. Subsequently, I explain these convergences in greater detail by more specifically discussing the domains of geometry and algebra and by clarifying the role of gestalt psychology in Merleau-Ponty’s account. Beyond that, I explain how, for Merleau-Ponty, mathematical cognition requires not only the presence and actual manipulation of some concrete perceptible symbols but, more strongly, how it is fundamentally linked to the structural transformation of the concrete configurations of symbolic systems to which these symbols appertain. Furthermore, I fill a gap in the literature by explaining Merleau-Ponty’s claim that these structural transformations are operated through motor intentionality. This makes it possible, in turn, to contrast Merleau-Ponty’s approach with ontologically idealistic and realistic views on mathematical objects. On Merleau-Ponty’s account, mathematical objects are relational entities, that is, gestalts that necessarily imply situated cognizers to whom they afford a specific type of engagement in the world and on whom they depend in their eventual structural transformations. I argue that, by attributing a strongly constitutive role to phenomenal configurations and their motor transformation in mathematical thinking, Merleau-Ponty contributes to clarifying the worldly, historical, and socio-cultural aspects of mathematical truths without compromising what we perceive as their universality, certainty, and necessity.
In this essay, I discuss what underlying principle of the philosophy of cognitive science can help us understand human nature. Reviewing the principles of science as presented by Noam Chomsky, I expand the discussion by briefly considering the computational aspect of the human mind, which I argue is the key to unifying the mental and physical aspects of the human brain/mind. The discussion leads to Aristotelian psychology (or epistemology) as a suggested way forward in understanding the nature of the human mind, whose mysteriousness has been recognized since the rationalist tradition begun by René Descartes.
Newton’s First Law of Motion is typically understood to govern only the motion of force-free bodies. This paper argues on textual and conceptual grounds that it is in fact a stronger, more general principle. The First Law limits the extent to which any body can change its state of motion, even if that body is subject to impressed forces. The misunderstanding can be traced back to an error in the first English translation of Newton’s Principia, which was published a few years after Newton’s death.
Paleontological evidence suggests that human artefacts with intentional markings might have originated as early as the Lower Paleolithic, up to 500,000 years ago and well before the advent of ‘behavioural modernity’. These markings apparently did not serve instrumental, tool-like functions, nor do they appear to be forms of figurative art. Instead, they display abstract geometric patterns that potentially testify to an emerging ability of symbol use. In a variation on Ian Hacking’s speculative account of the possible role of “likeness-making” in the evolution of human cognition and language, this essay explores the central role that the embodied processes of making and the collective practices of using such artefacts might have played in early human cognitive evolution. Two paradigmatic findings of Lower Paleolithic artefacts are discussed as tentative evidence of likenesses acting as material scaffolds in the emergence of symbolic reference-making. They might provide the link between basic abilities of mimesis and imitation and the development of modern language and thought.
Historical explanations in evolutionary biology are commonly characterized as narrative explanations. Examples include explanations of the evolution of particular traits and explanations of macroevolutionary transitions. In this paper, I present two case studies of explanations in accounts of pathogen evolution and host-pathogen coevolution, respectively, and argue that one of them is captured well by established accounts of time-sequenced narrative explanation. The other one differs from narrative explanations in important respects, even though it shares some characteristics with them as it is also a population-level historical explanation. I thus argue that the second case represents a different kind of explanation that I call historical explanation of type phenomena. The main difference between the two kinds of explanation is the conceptualization of the explanandum phenomena as particulars or type phenomena, respectively. Narrative explanations explain particulars but also deal with generalizations, regularities, and type phenomena. Historical explanations of type phenomena, on the other hand, explain multiply realizable phenomena but also deal with particulars. The two kinds of explanation complement each other because they explain different aspects of evolution.
A phenomenon resulting from a computationally irreducible (or computationally incompressible) process is supposedly unpredictable except via simulation. This notion of unpredictability has been deployed to formulate some recent accounts of computational emergence. Via a technical analysis of computational irreducibility, I show that computational irreducibility can establish the impossibility of prediction only with respect to maximum standards of precision. By articulating the graded nature of prediction, I show that unpredictability to maximum standards is not equivalent to being unpredictable in general. I conclude that computational irreducibility fails to fulfill its assigned philosophical roles in theories of computational emergence.
Newton’s Principia re-conceptualizes rational mechanics and physics, and offers a novel unification of these heretofore distinct disciplines. I argue for a reading of the Principia that insists on a strict distinction between the rational mechanics (in Books 1 and 2) and the physics (in Book 3), in which the Definitions and the Axioms/Laws play a surprising dual role that both distinguishes the rational mechanics from the physics and unifies them into a single project: a philosophical mechanics. This offers a new angle on existing questions in the secondary literature, including the sense in which Books 1 and 2 are to be understood as “mathematical”; whether or not the Principia is a text in mechanics; why Newton came to adopt the dual label “Axioms, or laws of motion”; the epistemic status of the axioms; the relationship between the axioms and the Definitions; in what sense Book 3 is incomplete as a physics; and the problem of applicability.
Separatists about grounding take explanations to be separate from their corresponding grounding-facts. Grounding-facts are supposed to underlie, or back, such explanations. However, the backing relation hasn’t received much attention in the literature. The aim of this paper is to provide an informative definition of backing. First, I examine two prominent proposals: backing as explaining (Kovacs 2017; 2019a) and backing as grounding (Sjölin Wirling 2020). I then put forward my own proposal. I argue that under plausible assumptions about the role of backing and the nature of explanation, backing should be understood as a form of truthmaking, minimally construed.
Many issues in metaphysics and philosophy of science concern the status, significance, or theoretical role of properties such as charge and mass. This paper is about the surprising differences in metaphysical character between mass and charge properties. It develops a novel, three-fold analysis of color charge, and it shows that the same analysis for electric charge is degenerate. Additionally, the formalism for mass raises a different set of considerations for its metaphysical status. Since mass, color charge, and electric charge have these differences, metaphysicians and philosophers of science must reevaluate the ways in which they are accustomed to appealing to these properties.
We present an empirically supported theoretical and methodological framework for quantifying the system-level properties of person-plus-tool interactions in order to answer the question: “Are person-plus-tool-systems extended cognitive systems?” Nineteen participants provided perceptual judgments regarding their ability to pass through apertures of various widths while using visual information, blindfolded wielding a rod, or blindfolded wielding an Enactive Torch—a vibrotactile sensory-substitution device for detecting distance. Monofractal, multifractal, and recurrence quantification analyses were conducted to assess features of person-plus-tool movement dynamics. Trials where people utilized the rod or Enactive Torch demonstrated stable “self-similarity,” or indices of healthy and adaptive single systems, regardless of aperture width, trial order, features of the participants’ judgments, and participant characteristics. Enactive Torch trials exhibited a somewhat greater range of dynamic fluctuations than the rod trials, as well as less movement recurrence, suggesting that the Enactive Torch allowed for more exploratory movements. Findings provide support for the notion that person-plus-tool systems can be classified as extended cognitive systems and a framework for quantifying system-level properties of these systems. Implications concerning future research on extended cognition are discussed.
A wide range of problems of the relationship between consciousness and matter are discussed. Particular attention is paid to the analysis of the structure and properties of consciousness in the framework of information evolution. The role of specific (non-computational) properties of consciousness in the procedure of classical and quantum measurements is analyzed. In particular, the issue of "cloning" of consciousness (the possibility of copying its properties onto a new material carrier) is discussed in detail. We hope that the generalized principle of complementarity formulated by us will open up new ways for studying the problems of consciousness within the framework of the fundamental physical picture of the world.
Capgras delusion is generally defined as the belief that close relatives have been replaced by strangers. But such replacement beliefs have also been reported to occur in response to encountering an acquaintance, or the voice of a familiar person, or a pet, or some personal possession. All five scenarios involve believing something familiar has been replaced by something unfamiliar. So should these five kinds of delusional belief all count as subtypes of the same delusion – that is, should all be referred to as Capgras delusion? We argue in favour of this position.
In recent years, theories of social understanding have moved away from arguing that just one epistemic strategy, such as theory-based inference or simulation, constitutes our ability for social understanding. Empirical observations speak against any monistic view and have given rise to pluralistic accounts arguing that humans rely on a large variety of epistemic strategies in social understanding. We agree with this promising pluralist approach, but highlight two open questions: what is the residual role of mindreading, i.e. the indirect attribution of mental states to others, within this framework, and how do different strategies of social understanding relate to each other? In a first step, we aim to clarify the arguments that might be considered in evaluating the role that epistemic strategies play in a pluralistic framework. On this basis, we argue that mindreading constitutes a core epistemic strategy in human social life that opens new central spheres of social understanding. In a second step, we provide an account of the relation between different epistemic strategies which integrates and demarcates the important role of mindreading for social understanding.
The new mechanists and the autonomy approach both aim to account for how biological phenomena are explained. One appeals to how components of a mechanism are organized so that their activities produce a phenomenon. The other directs attention towards the whole organism and focuses on how it achieves self-maintenance. This paper discusses challenges each confronts and how each could benefit from collaboration with the other: the new mechanistic framework can gain by taking into account what happens outside individual mechanisms, while the autonomy approach can ground itself in biological research into how the actual components constituting an autonomous system interact and contribute in different ways to realize and maintain the system. To press the case that these two traditions should be constructively integrated we describe how three recent developments in the autonomy tradition together provide a bridge between the two traditions: (1) a framework of work and constraints, (2) a conception of function grounded in the organization of an autonomous system, and (3) a focus on control.
I define two metaphysical positions that anti-physicalists can take in response to Jonathan Schaffer’s ground functionalism. Ground functionalism is a version of physicalism where explanatory gaps are everywhere. If ground functionalism is true, arguments against physicalism based on the explanatory gap between the physical and experiential facts fail. In response, first, I argue that some anti-physicalists are already safe from Schaffer’s challenge. These anti-physicalists reject an underlying assumption of ground functionalism: the assumption that macrophysical entities are something over and above the fundamental entities. I call their position “lightweight anti-physicalism.” Second, I go on to argue that even if anti-physicalists accept Schaffer’s underlying assumption, they can still argue that the consciousness explanatory gap is especially mysterious and thus requires a special explanation. I call the resulting position “heavyweight anti-physicalism.” In both cases, the consciousness explanatory gap is a good way to argue against physicalism.
Mainstream epistemology has typically taken for granted a traditional picture of the metaphysics of mind, according to which cognitive processes (e.g. memory storage and retrieval) play out entirely within the bounds of the skull and skin. But this simple ‘intracranial’ picture is falling increasingly out of step with contemporary thinking in the philosophy of mind and cognitive science. Likewise, though, proponents of active externalist approaches to the mind—e.g. the hypothesis of extended cognition (HEC)—have proceeded by and large without asking what epistemological ramifications should arise once cognition is understood as criss-crossing the bounds of brain and world. This paper aims to motivate a puzzle that arises only once these two strands of thinking are brought in contact with one another. In particular, we want to first highlight a kind of condition of epistemological adequacy that should be accepted by proponents of extended cognition; once this condition is motivated, the remainder of the paper demonstrates how attempts to satisfy this condition seem to inevitably devolve into a novel kind of epistemic circularity. At the end of the day, proponents of extended cognition have a novel epistemological puzzle on their hands.
“Al-Fârâbî’s metaphysics”, as understood here, means not just his views, and arguments for those views, on a series of metaphysical topics, but his project of reconstructing and reviving metaphysics as a science. This is part of his larger project of reconstructing and reviving “the sciences of the ancients”: his scientific project in metaphysics is inseparable from his interpretation and assimilation of Aristotle’s Metaphysics. We start with some motivation for Fârâbî’s larger project of reconstructing “the sciences of the ancients”, then turn to what he says about metaphysics as one such science and about Aristotle’s Metaphysics, and then to details of his reconstruction of metaphysics as a science, both in his account of maximally universal concepts such as being and unity, and in his account of God as the first cause of existence.
Discussion of cognitive scaffolding is dominated by attention to ways that external structure can support cognitive activity or augment an agent’s cognitive capacities. We call instances where the interests of the user are served benign scaffolding, and argue for the possibility of hostile scaffolding. This is scaffolding which depends on the same capacities of an agent to rely on external structure, but that undermines or exploits that agent while serving the interests of another. We offer one defence of hostile scaffolding by developing an account of a neglected complementarity between extended phenotype thinking and extended functionalism. We support this with a second defence, an account of design features of electronic gambling machines and casino management systems that shows how they exemplify hostile scaffolding.
At the Topos Institute this summer, a group of folks started talking about thermodynamics and category theory. It probably started because Spencer Breiner and my former student Joe Moeller, both working at NIST, were talking about thermodynamics with some people there. …
Philosophers of biology have recently been worried about the question: what is a biological individual? This worry is prompted by the new salience of the microbiome in biology and medicine. How should we conceptualize the relationship between individual organisms like birds or mammals and the microscale life forms – millions of bacteria – that inhabit their bodies and perform functions necessary for their survival? Are those life forms biological individuals? Or does their dependence on a host make them something less than a full-fledged individual? But, if the host bodies are equally dependent on the microbiome, in what sense could they count as individuals? How should we then define full-fledged individuality in order to encompass those entities we want to include and those we want to exclude? C. K. Waters takes the pluralist-pragmatist view, arguing that we should not ask what biological individuals are, but how the concept is deployed, what work it does, in different biological contexts (Waters 2018). There is not one thing that biological individuals are, but different contexts require different distinctions and boundaries. But there is another question we might ask: why do we care about defining individuality in a metaphysically robust way? This is a question that deserves a genealogical answer: how did individuality come to play such a key role in our various analytical endeavors? Put differently: why do individuals, their behavior, and their properties constitute the subject matter of our investigations?
I argue that high level causal relationships are often more fundamental than low level causal relationships. My argument is based on some general principles governing when one causal relationship will metaphysically ground another—a phenomenon I term derivative causation. These principles are in turn based partly on our intuitive judgments concerning derivative causation in a series of representative examples, and partly on some powerful theoretical considerations in their favour. I show how these principles entail that low level causation can derive from high level causation, and in particular that neural causation can derive from mental causation. I then draw out several important consequences of this result. Most immediate among these are the implications the result has for aspirations to reduce high level causation to its low level counterpart. But the result also bears on the possibility of downward causation, the relationship between counterfactuals and causation, and the idea—familiar from both the literature on the exclusion problem and the literature on proportionality constraints on causation—that causal relationships at different levels compete for their existence.
The scientific community takes for granted a view of science that may be called standard empiricism. This holds that the basic intellectual aim of science is truth, nothing being presupposed about the truth, the basic method being to assess theories with respect to evidence. A basic tenet of the view is that science must not accept any thesis about the world as a part of scientific knowledge independent of evidence, let alone in violation of evidence. But physics only accepts unified theories, and persistently rejects infinitely many ad hoc rivals that fit the phenomena even better. In persistently rejecting these infinitely many empirically more successful rival theories, physics thereby makes a substantial assumption about the universe – it is such that all ad hoc theories are false – an assumption that is accepted implicitly independently of evidence, even in a sense against the evidence. That contradicts standard empiricism. The scientific community needs to adopt a new conception of science that represents the assumption of physics as a hierarchy of assumptions, thus facilitating the improvement of the assumption that is made, as science proceeds.