In this paper I develop a concept of behavioural ecological individuality. Using findings from a case study which employed qualitative methods, I argue that individuality in behavioural ecology should be defined as phenotypic and ecological uniqueness, a concept that is operationalised in terms of individual differences such as animal personality and individual specialisation. This account makes sense of how the term “individuality” is used in relation to intrapopulation variation in behavioural ecology. The concept of behavioural ecological individuality can sometimes be used to identify individuals. It also shapes research agendas and methodological choices in behavioural ecology, leading researchers to account for individuals as sources of variation. Overall, this paper draws attention to a field that has been largely overlooked in philosophical discussions of biological individuality and highlights the importance of individual differences and uniqueness for individuality in behavioural ecology.
Robustness analysis (RA) is the prescription to consider a diverse range of evidence and only regard a hypothesis as well-supported if all the evidence agrees on it. In contexts like climate science, the evidence in support of a hypothesis often comes from scientific models. This leads to model-based RA (MBRA), whose core notion is that a hypothesis ought to be regarded as well-supported on grounds that a sufficiently diverse set of models agrees on the hypothesis. This chapter, which is the second part of a two-part review of MBRA, addresses the thorny issue of justifying the inferential steps taking us from the premises to the conclusions. We begin by making explicit what exactly the problem is. We then turn to a discussion of two broad families of justificatory strategies, namely top-down and bottom-up justifications. In the latter group we distinguish between the likelihood approach, independence approaches, and the explanatory approach. This discussion leads us to the sober conclusion that multi-model situations raise issues that are not yet fully understood, and that the methods and approaches of MBRA have not yet reached a stage of maturity. Important questions remain open, and these will have to be addressed in future research.
Robustness analysis (RA) is the prescription to consider a diverse range of evidence and only regard a hypothesis as well-supported if all the evidence agrees on it. In contexts like climate science, the evidence in support of a hypothesis often comes in the form of model results. This leads to model-based RA (MBRA), whose core notion is that a hypothesis ought to be regarded as well-supported on grounds that a sufficiently diverse set of models agrees on the hypothesis. This chapter, which is the first part of a two-part review of MBRA, begins by providing a detailed statement of the general structure of MBRA. This statement will make visible the various parts of MBRA and will structure our discussion in the remainder of the chapter. We explicate the core concepts of independence and agreement, and we discuss what they mean in the context of climate modelling. Our statement shows that MBRA is based on three premises, which concern robust properties, common structures, and so-called robust theorems. We analyse what these involve and what problems they raise in the context of climate science. In the next chapter, which is the second part of the review, we analyse how the conclusions of MBRA can be justified.
This paper examines how Plato’s rejection of the friends of the forms at 248a–249b in the Sophist is continuous with the arguments that he develops shortly after this part of the dialogue for the interrelatedness of the forms. I claim that the interrelatedness of the forms implies that they are changed, and that this explains Plato’s rejection of the friends of the forms. Much here turns on the kind of change that Plato wants to attribute to the forms. I distinguish my view of the sort of change that the forms experience from other kinds of change—such as ‘Cambridge change’—that scholars have believed Plato has in mind in rejecting the friends of the forms. On the view that I advance, a form experiences a change (which I call ‘perfect change’) in its association with another form that distinguishes it as the distinctive being that it is—that is, through its possession of its distinctive properties.
Many scientists routinely generalize from study samples to larger populations. It is commonly assumed that this cognitive process of scientific induction is a voluntary inference in which researchers assess the generalizability of their data and then draw conclusions accordingly. Here we challenge this view and argue for a novel account. The account describes scientific induction as involving by default a generalization bias that operates automatically and frequently leads researchers to unintentionally generalize their findings without sufficient evidence. The result is unwarranted, overgeneralized conclusions. We support this account of scientific induction by integrating a range of disparate findings from across the cognitive sciences that have until now not been connected to research on the nature of scientific induction. The view that scientific induction involves by default a generalization bias calls for a revision of our current thinking about scientific induction and highlights an overlooked cause of the replication crisis in the sciences. Commonly proposed interventions to tackle scientific overgeneralizations that may feed into this crisis need to be supplemented with cognitive debiasing strategies to most effectively improve science.
Science is a cultural practice, and cultural practices tend to change over time via processes of cultural selection and social learning. There is a long history of philosophers of science arguing that scientific theories evolve through a “critical” evolutionary process where new hypotheses are criticized, modified, eliminated, or replaced (Popper 1972; Hull 1988). More recent work has suggested that other features of science such as methodologies, beliefs, and norms may develop likewise. Such features of science exhibit key characteristics that make them suitable for evolutionary analysis. They are reliably transmitted via pedagogy and cultural imitation, and produce non-random variation that leads to differential success in subsequent transmission. For this reason, a new body of work has emerged looking at cultural evolutionary processes in science. This research addresses topics ranging from the persistence of poor statistical practices, to conservatism in science, to the ideal communication structure for scientific communities.
This article develops a new account of the relation “before” between events. It does so by taking the set of all states of an object, irrespective of any presupposed order, and then ordering it by exploiting a characteristic asymmetry which appears on this set. It is shown that this asymmetry both implies temporal order, and is arguably also necessary for defining it. The upshot is that temporal ordering is a local phenomenon and requires no global temporal structure of spacetime.
London School of Economics and Political Science, London, UK

Whereas economists once regarded markets as instances of ‘spontaneous order’, antithetical to design, they now design markets to achieve specific purposes. This paper reconstructs how this change in what markets are and can do came about and considers some consequences. Two decisive developments in economic theory are identified: first, Hurwicz’s view of institutions as mechanisms, which should be designed to align incentives with social goals; and second, the notion of marketplaces – consisting of infrastructure and algorithms – which should be designed to exhibit stable properties. These developments have empowered economists to create marketplaces for specific purposes, by designing appropriate algorithms. I argue that this power to create marketplaces requires a shift in ethical reasoning, from whether markets should reach into certain spheres of life, to how market algorithms should be designed. I exemplify this shift, focusing on bias, and arguing that transparency should become a goal of market design.
Meaningful predicates come in two kinds. Predicates of the first kind characterize ways in which objects can resemble each other; two examples are ‘electron’ and ‘red’. Predicates of the second kind don’t correspond to any real dimension of similarity; two examples are ‘electron or red’ and ‘such that something is red’. Underlying this distinction between predicates is a distinction in reality: predicates of the first kind express natural properties and predicates of the second kind express unnatural or gerrymandered properties.
Taking a pragmatist stance toward the practices and products of science shapes our answers to central philosophical questions. In this paper, I will explicate how scientists’ conceptual and representational practices work in concert with their observational and experimental ones to stabilize acceptance of scientific realism.
Drawing on the epistemology of logic literature on anti-exceptionalism about logic, we set out to investigate the following metaphilosophical questions empirically: Is philosophy special? Are its methods (dis)continuous with science? More specifically, we test the following metaphilosophical hypotheses empirically: philosophical deductivism, philosophical inductivism, and philosophical abductivism. Using indicator words to classify arguments by type (namely, deductive, inductive, and abductive arguments), we searched through a large corpus of philosophical texts mined from the JSTOR database (n = 435,703) to find patterns of argumentation. The results of our quantitative, corpus-based study suggest that deductive arguments are significantly more common than abductive arguments and inductive arguments in philosophical texts overall, but they are gradually and steadily giving way to non-deductive (i.e., inductive and abductive) arguments in academic philosophy.
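The indicator-word method described in this abstract can be sketched in a few lines of code. The phrase lists and the simple counting rule below are illustrative assumptions on my part, not the study’s actual lexicon or pipeline:

```python
# A minimal sketch of indicator-word argument classification.
# The indicator phrases and counting rule are illustrative assumptions,
# not the lexicon actually used in the corpus study.
import re
from collections import Counter

INDICATORS = {
    "deductive": ["therefore it follows", "necessarily", "entails"],
    "inductive": ["probably", "in most cases", "likely"],
    "abductive": ["best explanation", "best explains", "most plausible"],
}

def classify_arguments(text: str) -> Counter:
    """Count occurrences of each argument type's indicator phrases."""
    text = text.lower()
    counts = Counter()
    for arg_type, phrases in INDICATORS.items():
        for phrase in phrases:
            counts[arg_type] += len(re.findall(re.escape(phrase), text))
    return counts

sample = ("The premises entail the conclusion; therefore it follows "
          "that P. Still, the best explanation of the data is Q.")
print(classify_arguments(sample))
```

Applied to a large corpus, counts of this kind could then be tracked over publication year to reveal the diachronic trend the abstract reports.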
Developing tools is a crucial aspect of experimental practice, yet most discussions of scientific change traditionally emphasize theoretical over technological change. To elaborate on the role of tools in scientific change, I offer an account that shows how scientists use tools in exploratory experiments to form novel concepts. I apply this account to two cases in neuroscience and show how tool development and concept formation are often intertwined in episodes of tool-driven change. I support this view by proposing common normative principles that specify when exploratory concept formation and tool development succeed (rather than fail) to initiate scientific change.
Economic theory comprises three types of inquiry. One examines economic phenomena, one develops analytical tools, and one studies the scientific endeavor in economics in general and in economic theory in particular. We refer to the first as economics, the second as the development of economic methods, and the third as the methodology of economics. The same mathematical result can often be interpreted as contributing to more than one of these categories. We discuss and clarify the distinctions between these categories, and argue that drawing the distinctions more sharply can be useful for economic research.
Many of our best scientific explanations incorporate idealizations, that is, false assumptions. Philosophers of science disagree about whether and to what extent we must as a result give up on truth as a prerequisite for explanation and thus understanding. Here I propose reframing this debate. Factivism or veritism about explanation is not, I think, an obvious and preferable view to be given up only under duress. Rather, it is philosophically fruitful to emphasize how departures from the truth facilitate explanation (and understanding). I begin by motivating one version of the idea that idealizations positively contribute to understanding, and then I make the case that it is philosophically important to emphasize this contribution of idealizations. I conclude with a positive account of what theorists about science stand to gain by acknowledging, even emphasizing, how certain departures from the truth benefit our scientific explanations.
Common philosophical accounts of creativity align creative products and processes with a particular kind of agency: namely, that deserving of praise or blame. Considering evolutionary examples, we explore two ways of denying that creativity requires forms of agency. First, we argue that decoupling creativity from praiseworthiness comes at little cost: although evolutionary processes are non-agential, they nonetheless exhibit many of the same characteristics and value associated with creativity. Second, we develop a ‘product-first’ account of creativity by which a process is creative just in case it gives rise to products deserving of certain forms of aesthetic engagement.
Jan Westerhoff’s The Non-Existence of the Real World ambitiously and admirably shows the relevance of certain developments in contemporary analytic philosophy and cognitive neuroscience in illuminating a radical but less known form of non-foundationalism associated with the Madhyamaka (‘Middle Way’) school of thought championed by the Indian Buddhist philosopher Nagarjuna. Westerhoff deploys these developments to critique both epistemic foundationalism, the view that knowledge ultimately rests on a foundation of non-inferential beliefs, and ontological priority foundationalism, the view that certain entities and certain relations between them are basic. In four meticulously argued chapters, Westerhoff considers various arguments against the existence of an external world of mind-independent objects (Chapter 1) and against the existence of an internal world of enduring subjects (Chapter 2), various beliefs in the existence of an ultimate foundation that grounds all things (Chapter 3) and lastly, various reasons against the assumption that an ultimately true theory of the world is possible (Chapter 4). Taken together, these chapters advance one of the most thoroughgoing and sustained defenses of global anti-realism to date.
We develop a category-theoretic criterion for determining the equivalence of causal models having different but homomorphic directed acyclic graphs over discrete variables. Following Jacobs et al. (2019), we define a causal model as a probabilistic interpretation of a causal string diagram, i.e., a functor from the “syntactic” category SynG of graph G to the category Stoch of finite sets and stochastic matrices. The equivalence of causal models is then defined in terms of a natural transformation or isomorphism between two such functors, which we call a Φ-abstraction and Φ-equivalence, respectively. It is shown that when one model is a Φ-abstraction of another, the intervention calculus of the former can be consistently translated into that of the latter. We also identify the condition under which a model accommodates a Φ-abstraction, when transformations are deterministic.
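The central condition can be stated compactly. The notation below is my reconstruction from the abstract, not the paper’s own formulation:

```latex
% Sketch of the naturality condition behind a \Phi-abstraction
% (notation reconstructed from the abstract, not the paper's own).
% Let \Phi : \mathsf{Syn}_G \to \mathsf{Syn}_H be a functor induced by a
% graph homomorphism, and let M : \mathsf{Syn}_G \to \mathbf{Stoch} and
% M' : \mathsf{Syn}_H \to \mathbf{Stoch} be two causal models.
% A \Phi-abstraction is a natural transformation
\[
  \alpha : M \Rightarrow M' \circ \Phi,
\]
% i.e., for every morphism $f : X \to Y$ in $\mathsf{Syn}_G$,
\[
  \alpha_Y \circ M(f) \;=\; M'(\Phi f) \circ \alpha_X ,
\]
% and a \Phi-equivalence is the special case in which every component
% \alpha_X is an isomorphism.
```

On this reading, the naturality squares are what guarantee that interventions computed in one model translate consistently into the other.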
Proponents of the extended mind have suggested that phenomenal transparency may be important to the way we evaluate putative cases of cognitive extension. In particular, it has been suggested that in order for a bio-external resource to count as part of the machinery of the mind, it must qualify as a form of transparent equipment or transparent technology. The present paper challenges this claim. It also challenges the idea that phenomenological properties can be used to settle disputes regarding the constitutional (versus merely causal) status of bio-external resources in episodes of extended cognizing. Rather than regard phenomenal transparency as a criterion for cognitive extension, we suggest that transparency is a feature of situations that support the ascription of certain cognitive/mental dispositional properties to both ourselves and others. By directing attention to the forces and factors that motivate disposition ascriptions, we arrive at a clearer picture of the role of transparency in arguments for extended cognition and the extended mind. As it turns out, transparency is neither necessary nor sufficient for cognitive extension, but this does not mean that it is entirely irrelevant to our understanding of the circumstances in which episodes of extended cognizing are apt to arise.
The problem of variability concerns the fact that empirical data do not support the existence of a coordinated set of biological markers, either in the body or the brain, which correspond to our folk emotion categories; categories like anger, happiness, sadness, disgust and fear. Barrett (2006a, b, 2013, 2016, 2017a, b) employs this fact to argue (i) against the faculty psychology approach to emotion, i.e., the view that emotions are the products of emotion-specific mechanisms, or “modules”, and (ii) for the view that emotions are constructed from domain-general “core systems” with the aid of our folk concepts. The conjunction of (i) and (ii), she argues, heralds a paradigm shift in our understanding of emotion: emotions aren’t triggered but made. In this paper, I argue that such a shift is premature, for a faculty psychology framework can accommodate the neurobiological variability of emotion. This can be done by treating emotions as developmental modules: non-innate systems which behave like modules but form as a product of ontogenetic development.
Peirce’s Sign Theory, or Semiotic, is an account of signification, representation, reference and meaning. Although sign theories have a long history, Peirce’s accounts are distinctive and innovative for their breadth and complexity, and for capturing the importance of interpretation to signification. For Peirce, developing a thoroughgoing theory of signs was a central philosophical and intellectual preoccupation. The importance of semiotic for Peirce is wide ranging. As he himself said, “[…] it has never been in my power to study anything, – mathematics, ethics, metaphysics, gravitation, thermodynamics, optics, chemistry, comparative anatomy, astronomy, psychology, phonetics, economics, the history of science, whist, men and women, wine, metrology, except as a study of semiotic” (SS 1977, 85–6).
Social norms are commonly understood as rules that dictate which behaviors are appropriate, permissible, or obligatory in different situations for members of a given community.
Pandemics do take place. When exactly they begin and end, and why, is harder to determine, as demonstrated both in early 2020 at the start of the Covid pandemic and in the many debates in 2022 about declaring it over. To determine these points, one has to know which criteria have to be satisfied and which do not. This requires a clear definition of what a pandemic is, with at least its necessary and sufficient characteristics. There is no such crisp and clear definition, neither in the expert documentation nor in domain ontologies. In this paper, we assess mentions of ‘pandemic’ in domain ontologies, evaluate the argument that foundational ontologies may provide guidance, and examine the characteristics that domain experts have put forward for pandemics. The guidance from foundational ontologies is underwhelming when taken together, although tooling greatly simplified the alignment. The assessment of characteristics shows that a pandemic is not itself the bearer of all of them, some being borne by attendant entities instead; it elucidates which characteristics are dependent and which essential; and it demonstrates why one may compute more than one unique start and end of a pandemic. Considering these complexities, it may be of use to develop an ontology of pandemics.
This paper is concerned with the question of how this gap can be bridged. The leading idea is that a suitable conceptual framework can be culled from Wittgenstein’s work on the philosophy of mathematics and, more generally, on epistemic practices. Wittgenstein’s analyses combine observations on natural abilities and (broadly) cultural dimensions in a unified framework and connect with a 4E approach to cognition that transcends some of the limitations of neurocognitive research. By viewing the results from cognitive neuroscience from this perspective, we gain insight both into the content and scope of neuroscientific results and into the potential relevance of a Wittgensteinian naturalistic approach to the analysis of mathematics.
Scientific revolution has been one of the most controversial topics in the history and philosophy of science. Yet there is no consensus on the best unit of analysis in the historiography of scientific revolutions. Nor is there a consensus on what best explains the nature of scientific revolutions. This chapter provides a critical examination of the historiography of scientific revolutions. It begins with a brief introduction to the historical development of the concept of scientific revolution, followed by an overview of the five main philosophical accounts of scientific revolutions. It then challenges two historiographical assumptions of the philosophical analyses of scientific revolutions.
Why does time reversal involve two operations, a temporal reflection and the operation of complex conjugation? Why is it that time reversal preserves position and reverses momentum and spin? This puzzle of time reversal in quantum mechanics has been with us since Wigner’s first presentation. In this paper, I propose a new approach to solving this puzzle. First, I argue that the standard account of time reversal can be derived from the requirement that the continuity equation in quantum mechanics is time reversal invariant. Next, I analyze the physical meaning of the continuity equation and explain why it should be time reversal invariant. Finally, I discuss how this new analysis helps solve the puzzle of time reversal in quantum mechanics.
In a well-known episode from 19th century medicine, Ignaz Semmelweis puzzled over a correlation between the clinic in which a woman gave birth (the First Clinic vs. the Second Clinic of the Vienna General Hospital), and her probability of succumbing to Puerperal Fever after the birth (10% vs. less than 4%). Expectant mothers (among others) seemed to accept that there was some causal relationship between giving birth in the First Clinic and the increased maternal mortality – indeed, women begged to be admitted to the Second Clinic, and Semmelweis entertained a variety of hypotheses about the relevant causal factor. Although the evidence for a causal relationship was reasonably strong, what seemed to be missing was an explanation: why were women who gave birth in the First Clinic at greater risk?
Open texture is a kind of semantic indeterminacy first systematically studied by Waismann. In this paper, extant definitions of open texture will be compared and contrasted, with a view towards the consequences of open-textured concepts in mathematics. It has been suggested that these would threaten the traditional virtues of proof, primarily the certainty bestowed by proof-possession, and this suggestion will be critically investigated using recent work on informal proof. It will be argued that informal proofs have virtues that mitigate the danger posed by open texture. Moreover, it will be argued that while rigor in the guise of formalisation and axiomatisation might banish open texture from mathematical theories through implicit definition, it can do so only at the cost of restricting the tamed concepts in certain ways.
Bertrand Russell famously argued that causation plays no role in science: it is ‘a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.’ Cartwright and later writers moderated this conclusion somewhat, and it is now largely accepted that in a macroscopic setting causal concepts are an important part of the assessments we make about possible strategies for action. But the view that causation in the usual sense of the term is not present in fundamental physics, or at least that not all fundamental physical processes are causal, remains prevalent [3, 4]; for example, Norton writes that ‘(causes and causal principles) are heuristically useful notions, licensed by our best sciences, but we should not mistake them for the fundamental principles of nature’. Furthermore, many influential philosophical analyses of causation posit that causation arises only at a macroscopic level, as a result of the thermodynamic gradient [6, 7], interventions [8, 9], the perspectives of agents, or some such feature of reality which plays no role in fundamental physics.
There is an important analogy between languages and games. Just as a scoresheet records features of the evolution of a game to determine the effect of a move in that game, a conversational score records features of the evolution of a conversation to determine the effect of the linguistic moves that speakers make. Chess is particularly interesting for the study of conversational dynamics because it has language-like notations, and so serves as a simplified study in how the effect of an assertion depends on, as well as evolves, the scoreboard. In this paper, we offer a compositional semantics for chess notation and a simple formal picture for determining the full information conveyed by an entry. We will also discuss an alternative model resembling accounts of centered assertion.
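The compositional reading of chess notation gestured at in this abstract can be illustrated with a small sketch: each entry of standard algebraic notation decomposes into meaningful parts whose contributions jointly determine the move’s content. The regex and field names below are illustrative assumptions, not the paper’s semantics:

```python
# A minimal sketch of decomposing a standard-algebraic-notation entry
# into semantically significant parts. The pattern and field names are
# illustrative assumptions; castling and promotion are not handled.
import re

SAN = re.compile(
    r"^(?P<piece>[KQRBN]?)"        # piece letter; empty means pawn
    r"(?P<origin>[a-h]?[1-8]?)"    # optional disambiguating file/rank
    r"(?P<capture>x?)"             # capture marker
    r"(?P<square>[a-h][1-8])"      # destination square
    r"(?P<check>[+#]?)$"           # check or checkmate marker
)

def parse_move(entry: str) -> dict:
    """Decompose a SAN entry into its component contributions."""
    m = SAN.match(entry)
    if m is None:
        raise ValueError(f"not a simple SAN entry: {entry!r}")
    parts = m.groupdict()
    parts["piece"] = parts["piece"] or "P"  # pawn moves omit the letter
    return parts

print(parse_move("Nxf3+"))
```

The analogy to conversational score is that what an entry like “Nxf3+” conveys depends on the current board state, and each parsed move in turn updates that state for the interpretation of subsequent entries.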
Every so often, I give a brief overview of my perspective on belief to audiences of psychologists. After the 2021 Creditions conference, I was asked to write up my thoughts and publish them in a special issue of Frontiers in Psychology (ed. …