guest post by Amy Kind
Is there something that it’s like to be a plant? I suspect that most people hearing this question would unhesitatingly answer in the negative. In this respect, plants seem quite different from animals. …
Yesterday, I showed that an artifact’s function isn’t defined by the maker’s intention that it be used for that function. For instance, a historical weapons recreationist might make a halberd without intending that it kill anyone, even though killing is the function of the halberd, and we might make a nuclear weapon without intending that it kill, but only as a deterrent—and yet, once again, killing is the function of the weapon. …
In a recent paper, S. Gao has claimed that, under the assumption that the initial state of the universe is a pure quantum state, only the many worlds interpretation can account for the observed arrow of time. We show that his argument is untenable and that, if endorsed, it potentially undermines the search for a scientific explanation of certain phenomena.
Multiscale modeling techniques have attracted increasing attention from philosophers of science, but the resulting discussions have almost exclusively focused on issues surrounding explanation (e.g., reduction and emergence). In this paper, I argue that besides explanation, multiscale techniques can serve important exploratory functions when scientists model systems whose organization at different scales is ill-understood. My account distinguishes explanatory and descriptive multiscale modeling based on which epistemic goal scientists aim to achieve when using multiscale techniques. In explanatory multiscale modeling, scientists use multiscale techniques to select information that is relevant to explain a particular type of behavior of the target system. In descriptive multiscale modeling, scientists use multiscale techniques to explore lower-scale features which could be explanatorily relevant to many different types of behavior, and to determine which features of a target system an upper-scale data pattern could refer to. Using multiscale models from data-driven neuroscience as a case study, I argue that descriptive multiscale models have an exploratory function because they are a source of potential explanations and serve as tools to reassess our conception of the target system.
Some authors, inspired by the theoretical requirements for the formulation of a quantum theory of gravity, have proposed a relational reconstruction of the quantum parameter-time (the time of the unitary evolution), which would make quantum mechanics compatible with relativity. The aim of the present work is to follow the lead of those relational programs by proposing a relational reconstruction of the event-time, which orders the detection of the definite values of the system’s observables. Such a reconstruction will be based on the modal-Hamiltonian interpretation of quantum mechanics, which provides a clear criterion to select which observables acquire a definite value and to specify in what situations they do so.
The existence and fundamentality of spacetime have been questioned in quantum gravity, where spacetime is frequently described as emerging from a more fundamental nonspatiotemporal ontology. This is supposed to lead to various philosophical issues, such as the problem of empirical coherence. Yet those issues assume beforehand that we actually understand and agree on the nature of spacetime. Reviewing popular conceptions of spacetime, we find that there is substantial disagreement on this matter, and little hope of resolving it. However, we argue that this should not trouble us, as these issues, which seem to suggest the need for an account of spacetime in quantum gravity, can be addressed without one.
This chapter provides a systematic overview of topological explanations in the philosophy of science literature. It does so by presenting an account of topological explanation that I (Kostić and Khalifa 2021; Kostić 2020a; 2020b; 2018) have developed in other publications and then comparing this account to other accounts of topological explanation. Finally, this appraisal is opinionated: it highlights some problems in alternative accounts of topological explanation, and it also outlines responses to some of the main criticisms raised by the so-called new mechanists.
Conspiracy theories and conspiracy theorists have been accused of a great many sins, but are the conspiracy theories conspiracy theorists believe epistemically problematic? Well, according to some recent work (such as that of Quassim Cassam, Keith Harris, and M. Giulia Napolitano), yes, they are. Yet a number of other philosophers (myself included), such as Brian L. Keeley, Charles Pigden, Kurtis Hagen, and Lee Basham, have argued ‘No!’ I will argue that there are features of certain conspiracy theories which license suspicion of such theories. I will also argue that these features only license a limited suspicion of these conspiracy theories, and thus we need to be careful about generalising from such suspicions to a view of the warrant of conspiracy theories more generally. To understand why, we need to get to the bottom of what exactly makes us suspicious of certain conspiracy theories, and how being suspicious of a conspiracy theory does not always tell us anything about how likely the theory in question is to be false.
The Kepler problem is the study of a particle moving in an attractive inverse square force. In classical mechanics, this problem shows up when you study the motion of a planet around the Sun in the Solar System. …
John MacFarlane has wondered whether relativism is expressivism done right. ... In contrast, I would venture ... that it is worth taking seriously the idea that expressivism is relativism done right. (Schroeder 2015, 25).
Artifacts have defining functions. It is tempting to think of these functions as coming from their maker’s intention that they be used for those functions. But that is actually incorrect. Modern-day blacksmiths routinely make weapons of war such as swords and halberds (cf. …
(i) probabilistic incorrectness in the (over)rating of the subject, (ii) the possibility of imagining non-quantum scenarios that are completely similar to that experiment, and (iii) the lack of ratified practical tests having a genuine (i.e., non-counterfeit) essence. So, the aforesaid experiment appears to be a simplistic thought exercise without any notable significance for quantum physics.
Reverse inference is a crucial inferential strategy used in cognitive neuroscience to derive conclusions about the engagement of cognitive processes from patterns of brain activation. While widely employed in experimental studies, it is now viewed with increasing scepticism within the neuroscience community. One problem with reverse inference is that it is logically invalid, being an instance of abduction in Peirce’s sense. In this paper, we offer the first systematic analysis of reverse inference as a form of abductive reasoning and highlight some relevant implications for the current debate. We start by formalising an important distinction that has been entirely neglected in the literature, namely the distinction between weak (strategic) and strong (justificatory) reverse inference. Then, we rely on case studies from recent neuroscientific research to systematically discuss the role and limits of both strong and weak reverse inference; in particular, we offer the first exploration of weak reverse inference as a discovery strategy within cognitive neuroscience.
About thirty years ago I attended a summer ‘retreat’ of the Harvard Medical School MD/PhD programme in neuroscience and I vividly remember one lab director’s opening remarks in the first session. ‘In our lab, if you work on one neuron, that’s neuroscience; if you work on two neurons, that’s psychology’ — a term of abuse in his corner of the world. This bottom-up approach to neuroscience is still advocated in many quarters, and there are not a few neuroscientists who dismiss the ‘cognitive’ neurosciences as mere headline-grabbing speculation, but while the brute empirical researchers have provided a wealth of hard-won data in recent years, they have had next to nothing of importance to say about the mind or consciousness. This is not surprising when you confront the fact that the brain has literally trillions of moving parts. Billions of individual cells, each a complicated and rather autonomous micro-agent with an agenda, and no two exactly alike, are somehow coordinated to produce impressively accurate intelligence on the world outside the skulls they labour in, generating appropriate behaviour under most circumstances. How can anybody think responsibly and creatively about such a complicated organ? Clearly, one needs a model at a higher level which can systematize and rationalize the astronomical number of transactions and interactions between the parts.
Some animal research is arguably morally wrong, and some animal research is morally bad but could be improved. Who is most likely to be able to identify wrong or bad animal research and advocate for improvements? I argue that philosophical ethicists have the expertise that makes them the likely best candidates for these tasks. I review the skills, knowledge, and perspectives that philosophical ethicists tend to have that makes them ethical experts. I argue that, insofar as Institutional Animal Care and Use Committees are expected to ensure that research is ethical, they must have philosophical ethicists as members.
Hate speech is a concept that many people find intuitively easy to
grasp, while at the same time many others deny it is even a coherent
concept. A majority of developed, democratic nations have enacted hate
speech legislation—with the contemporary United States being a
notable outlier—and so implicitly maintain that it is coherent,
and that its conceptual lines can be drawn distinctly enough. Nonetheless, the concept of hate speech does indeed raise many
difficult questions: What does the ‘hate’ in hate speech
refer to? Can hate speech be directed at dominant groups, or is it by
definition targeted at oppressed or marginalized communities?
This paper defends a relational account of personhood. I argue that the structure of personhood consists of dyadic relations between persons who can wrong or be wronged by one another, even if some of them lack moral competence. I draw on recent work on directed duties to outline the structure of moral communities of persons. The upshot is that we can construct an inclusive theory of personhood that can accommodate nonhuman persons based on shared community membership. I argue that, once we unpack the internal relation between directed duties, moral status, and flourishing, relations can ground personhood. Both the basis and the form of personhood are relational, and both can eschew anthropocentrism.
If the laws of nature are deterministic, then it seems possible that a Laplacean intelligence that knows the initial conditions and the laws would be able to accurately predict everything that will ever happen. However, it would be easy to construct a counterpredictive device that falsifies any revealed prediction about its future behavior. What would then occur if a Laplacean intelligence encountered a counterpredictive device? This is the paradox of predictability. A number of philosophers have proposed solutions to it, though part of my aim here is to argue that the paradox is more pernicious than has thus far been appreciated, and therefore that extant solutions are inadequate. My broader aim is to argue that the paradox motivates Humeanism about laws of nature.
Some divine “command” theories do not ground obligations in commands as such, but in divine mental states, such as God’s willings, intentions, or desires. It’s occurred to me that there is a downside to such theories. …
In my previous post, I argued against divine desire versions of divine command theory. Reflecting on that post, I saw that there is a simple variant of the divine desire view that helps with some of the problems raised in that post. …
The hierarchy of life is the result of a succession of evolutionary transitions in individuality (ETIs). During an ETI, individuals at a particular level of organization interact in such a way as to produce larger-level entities that become individuals in their own right. These new individuals are defined by their capacity to exhibit Darwinian properties of variation, differences in fitness, and heredity. One difficulty in accounting for ETIs is articulating how these properties are acquired at a higher level from the lower ones. Collaborators and I recently proposed the ‘ecological scaffolding’ model in which imposing an ecological scaffold (i.e., a structure in the environment) on lower-level entities initiates an ETI. Here, I present a new model that extends this work. Within this new model, I propose a mechanism of scaffold endogenization, demonstrating that collectives can become resilient to the ecological scaffold being removed. This type of resilience is not observed in the ecological scaffolding model. However, classically, a biological individual would be regarded as an entity capable of withstanding environmental changes. Thus, the new model proposed here represents a step towards a more complete explanation for ETIs.
Despite being widely used in both biology and psychology as if it were a single notion, heritability is not a unified concept. This is also true in evolutionary theory, in which the word ‘heritability’ has at least two technical definitions that only partly overlap. These yield two approaches to heritability: the ‘variance approach’ and the ‘regression approach.’ In this paper, I aim to unify these two approaches. After presenting them, I argue that a general notion of heritability ought to satisfy two desiderata: ‘general applicability’ and ‘separability of the causes of resemblance.’ I argue that neither the variance nor the regression approach satisfies these two desiderata concomitantly. From there, I develop a general definition of heritability that relies on the distinction between intrinsic and extrinsic properties. I show that this general definition satisfies the two desiderata. I then illustrate the potential usefulness of this general definition in the context of microbiome research. The term ‘heritability’ is used across evolutionary biology, genetics, and psychology. When asking whether a trait is heritable in everyday language, one typically wants to know whether this trait is transmitted across generations so that offspring resemble their parents (Fox Keller, 2010, chap. 3). Following a more gene-centred usage of the term, heritability is often associated with the process of genetic transmission. That it recurs over generations does not entail that a trait is heritable—sharing genes with one’s parents must be the reason the phenotype is transmitted (Lynch & Walsh, 1998, pp. 170–175).
Philosophers are often credited with particularly well-developed conceptual skills. The ‘expertise objection’ to experimental philosophy builds on this assumption to challenge inferences from findings about laypeople to conclusions about philosophers. We draw on psycholinguistics to develop and assess this objection. We examine whether philosophers are less or differently susceptible than laypersons to cognitive biases that affect how people understand verbal case descriptions and judge the cases described. We examine two possible sources of difference: Philosophers could be better at deploying concepts, and this could make them less susceptible to comprehension biases (‘linguistic expertise objection’). Alternatively, exposure to different patterns of linguistic usage could render philosophers vulnerable to a fundamental comprehension bias, the linguistic salience bias, at different points (‘linguistic usage objection’). Together, these objections mount a novel ‘master argument’ against experimental philosophy. To develop and empirically assess this argument, we employ corpus analysis and distributional semantic analysis and elicit plausibility ratings from academic philosophers and psychology undergraduates. Our findings suggest philosophers are better at deploying concepts than laypeople but are susceptible to the linguistic salience bias to a similar extent and at similar points. We identify methodological consequences for experimental philosophy and for philosophical thought experiments.
In this paper I discuss two features of laws in physics and ask to what extent these features are compatible with different philosophical accounts of laws of nature. These features are (i) that laws in physics fit what Richard Feynman has dubbed the “Babylonian conception” of physics, according to which laws in physics form an interlocking set of ‘theorems’; and (ii) that the distinction between dynamics and kinematics is to some extent contextual. These features, I argue, put pressure on any philosophical account of laws that presupposes that the laws of physics have a unique quasi-axiomatic structure, such as the Mill-Ramsey-Lewis account of laws and metaphysical accounts of laws that assume that there is a privileged explanatory nomological hierarchy.
In his 1956 book ‘The Direction of Time’, Hans Reichenbach offered a comprehensive analysis of the physical ground of the direction of time, the notion of physical cause, and the relation between the two. I review his conclusions and argue that, in the light of recent advances, Reichenbach’s analysis provides the best account of the physical underpinning of these notions. I integrate into Reichenbach’s account recent results in cosmology and on the physical underpinning of records and agency, and discuss which questions it leaves open.
Are there causal explanations in physics? Answers to this question range from the claim that there are no causal explanations in physics, since the notion of cause plays no legitimate role in physics (and, perhaps, elsewhere), to the claim that all explanations in physics are causal in virtue of the fact that all explanations in general (or at least all scientific explanations) are causal. In addition to these two polar-opposite positions, some philosophers have argued for pluralist views that allow for both causal explanations and non-causal explanations in physics.
Recent research on thick terms like ‘rude’ and ‘friendly’ has revealed a polarity effect, according to which the evaluative content of positive thick terms like ‘friendly’ and ‘courageous’ can be more easily cancelled than the evaluative content of negative terms like ‘rude’ and ‘selfish’. In this paper, we study the polarity effect in greater detail. We first demonstrate that the polarity effect is insensitive to manipulations of embeddings (Study 1). Second, we show that the effect occurs not only for thick terms but also for thin terms such as ‘good’ or ‘bad’ (Study 2). We conclude that the polarity effect is indicative of a pervasive linguistic asymmetry that holds between positive and negative evaluative terms.
In epistemology and in ordinary life, we make many normative claims about beliefs. We say that you ought to believe in the reality of climate change; that there are strong reasons to believe that there was no significant fraud in the November 2020 US election; that you should believe the testimony of victims of domestic violence; and so on. (And, of course, some people say the converse things about each of these matters.) These are perfectly ordinary and normal uses of language. As with all normative claims, philosophical questions arise about what – if anything – underwrites these kinds of normative claims. On one view, epistemic instrumentalism, facts about what we (epistemically) ought to believe, or about what is an (epistemic, normative) reason to believe what, obtain at least partly in virtue of our goals (or aims, ends, intentions, desires, etc.). More particularly, our having certain goals makes it the case that we have epistemic reasons to believe certain things, because doing so would instrumentally serve those goals. The converse view, anti-instrumentalism, denies this, and holds that the facts about what we ought or have reasons to believe are independent of our goals.
On the divine desire version of divine command theory, the right thing to do is what God wants us to do. But what if God’s desires conflict? God doesn’t want us to commit murder. But suppose a truthful evildoer tells me that if I don’t murder one innocent person, then a thousand persons will be given a choice to murder an innocent person or die. …
Humeans about laws maintain that laws of nature are nothing over and above the complete distribution of non-modal, categorical properties in spacetime. ‘Humean compatibilists’ argue that if Humeanism about laws is true, then agents in a deterministic world can do otherwise than they are lawfully determined to do because of the distinctive nature of Humean laws. More specifically, they reject a central premise of the Consequence argument by maintaining that deterministic laws of nature are ‘up to us’. In this paper, we present a new argument for Humean compatibilism. We argue that Humeans about laws indeed have resources for defending compatibilism that non-Humeans lack (though not for the reasons typically discussed in the literature). Moreover, we show that utilizing these resources does not lead to objectionable consequences. Humeans about laws should thus embrace Humean compatibilism.