Pretense is often characterized as a form of imagination, more specifically as a sort of enactive imagination. But for the most part, pretending and imagining interact differently with one’s evaluative/affective systems. One tends to respond to imagined content with emotions similar to (albeit more attenuated than) those one would feel if that content were real. When pretending, however, one’s affective responses are often much more generalized and insensitive to the content of the pretense. We suggest that this is because one’s attentional focus in pretense is on the actions themselves and their correspondence with the scripts or roles being used to generate the pretense. Moreover, because pretense is intrinsically motivated, pretending is generally fun, irrespective of what, in particular, is being pretended.
Uncertainty in climate science has drawn increasing attention in recent years (e.g., Parker 2006, 2010, 2011, 2013; Stainforth et al. 2007; Knutti 2008; Frigg et al. 2013, 2014; Parker and Risbey 2015). The topic is important both epistemically and politically: epistemically, because scientists have only limited abilities to validate and confirm the output of climate models; and politically, because policymakers have to take into account current knowledge of the climate and its uncertainty.
Big Data promises to revolutionise the production of knowledge within and beyond science, by enabling novel, highly efficient ways to plan, conduct, disseminate and assess research. The last few decades have witnessed the creation of novel ways to produce, store, and analyse data, culminating in the emergence of the field of data science, which brings together computational, algorithmic, statistical and mathematical techniques towards extracting knowledge from big data. At the same time, the Open Data movement—emerging from policy trends such as the push for Open Government and Open Science—has encouraged the sharing and interlinking of heterogeneous research data via large digital infrastructures.
According to traditional Aristotelianism, what makes you and me be distinct entities is that although we are of the same species, we’re made of distinct chunks of matter. Here is a quick initial problem with this. …
Many macroscopic physical processes are known to occur in a time-directed way despite the apparent time-symmetry of the known fundamental laws. A popular explanation is to postulate an unimaginably atypical state for the early universe — a ‘Past Hypothesis’ (PH) — that seeds the time-asymmetry from which all others follow. I will argue that such a PH faces serious new difficulties. First I strengthen the grounds for existing criticism by providing a systematic analytic framework for assessing the status of the PH. I outline three broad categories of criticism that put into question a list of essential requirements of the proposal. The resulting analysis paints a grim picture for the prospects of providing an adequate formulation for an explicit PH. I then provide a new argument that substantively extends this criticism by showing that any time-independent measure on the space of models of the universe must necessarily break one of its gauge symmetries. The PH then faces a new dilemma: reject a gauge symmetry of the universe and introduce a distinction without difference or reject the time-independence of the measure and lose explanatory power.
The assertion by Yu and Nikolic that the delayed choice quantum eraser experiment of Kim et al. empirically falsifies the consciousness-causes-collapse hypothesis of quantum mechanics is based on the unfounded and false assumption that the failure of a quantum wave function to collapse implies that an interference pattern must appear. It should therefore not be surprising that the distribution recorded at D is the sum of two closely-spaced single-slit Fraunhofer distributions. In other words, the detection of which-path information by detectors D1 and D2 guarantees no interference distribution at D.
FIG. 1. When which-path information of idler photons is recorded by detectors D1 and D2, detector D does not produce an interference pattern.
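The claim that the distribution at D is a sum of two closely-spaced single-slit Fraunhofer distributions can be sketched numerically. This is an illustrative toy model of my own, not taken from the source; the slit width, separation, and units are arbitrary assumptions:

```python
import math

def single_slit(x, center, width=1.0):
    # Fraunhofer single-slit envelope ~ sinc^2, centered at `center`
    # (width and units are arbitrary, for illustration only)
    u = math.pi * width * (x - center)
    return 1.0 if u == 0 else (math.sin(u) / u) ** 2

# Incoherent sum of two closely-spaced envelopes: no cos^2 fringe
# modulation appears, in contrast to a coherent double-slit pattern.
xs = [i * 0.01 - 5 for i in range(1001)]
total = [single_slit(x, -0.25) + single_slit(x, +0.25) for x in xs]
```

The resulting curve is a smooth, symmetric envelope peaked between the two slit centers, with side lobes but no interference fringes.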
In this talk, I propose to sketch the contents of Noether’s 1918 article, “Invariante Variationsprobleme”, as it may be seen against the background of the work of her predecessors and in the context of the debate on the conservation of energy that had arisen in the general theory of relativity.
The neocortex figures importantly in human cognition, but it is not the only locus of cognitive activities or even at the top of a hierarchy of cognitive processing areas in the central nervous system. Moreover, the form of information processing employed in the neocortex is not representative of information processing elsewhere in the nervous system. In this paper, we articulate and argue against cortico-centrism in cognitive science, contending instead that the nervous system constitutes a heterarchical network of diverse types of information processing systems. To press this perspective, we examine neural information processing in both invertebrates and vertebrates, including examples of cognitive processing in the vertebrate hypothalamus and basal ganglia.
This paper challenges a common assumption about decision-making mechanisms in humans: that decision-making is a distinctively high-level cognitive activity implemented by mechanisms concentrated in the higher-level areas of the cortex. We argue instead that human behavior is controlled by a multiplicity of highly distributed, heterarchically organized decision-making mechanisms. We frame decision-making in terms of control mechanisms that procure and evaluate information to select activities of controlled mechanisms, and we adopt a phylogenetic perspective, showing how decision-making is realized in control mechanisms in a variety of species. We end by discussing this picture's implications for high-level cognitive decision-making.
The concept of emergence is commonly invoked in modern physics but rarely defined. Building on recent influential work by Butterfield (2011a,b), I provide precise definitions of emergence concepts as they pertain to properties represented in models, applying them to some basic examples from spacetime and thermostatistical physics. The chief formal innovation I employ, similarity structure, consists in a structured set of similarity relations among those models under analysis—and their properties—and is a generalization of topological structure. Although motivated from physics, this similarity-structure-based account of emergence applies to any science that represents its possibilia with (mathematical) models.
Merely approximate symmetry is mundane enough in physics that one rarely finds any explication of it. Among philosophers it has also received scant attention compared to exact symmetries. Herein I invite further consideration of this concept that is so essential to the practice of physics and interpretation of physical theory. After motivating why it deserves such scrutiny, I propose a minimal definition of approximate symmetry—that is, one that presupposes as little structure on a physical theory to which it is applied as seems needed. Then I apply this definition to three topics: first, accounting for or explaining the symmetries of a theory emeritus in intertheoretic reduction; second, explicating and evaluating the Curie-Post principle; and third, a new account of accidental symmetry.
I provide a formally precise account of diachronic emergence of properties as described within scientific theories, extending a recent account of synchronic emergence using similarity structure on the theories’ models. This similarity structure approach to emergent properties unifies the synchronic and diachronic types by revealing that they only differ in how they delineate the domains of application of theories. This allows it to apply also to cases where the synchronic/diachronic distinction is unclear, such as spacetime emergence from theories of quantum gravity. In addition, I discuss two further case studies—finite periodicity in van der Pol oscillators and two-dimensional quasiparticles in the fractional quantum Hall effect—to facilitate comparison of this approach to others in the literature on concepts of emergence applicable to the sciences. My discussion of the fractional quantum Hall effect in particular may be of independent interest to philosophers of physics concerned with its interpretation.
According to phenomenal functionalism, whether some object or event has a given property is determined by the kinds of sensory experiences such objects or events typically cause in normal perceivers in normal viewing conditions. This paper challenges this position and, more specifically, David Chalmers’s use of it in arguing for what he calls virtual realism.
The possibility that normative motivations are basic or psychologically primitive is an intriguing one worthy of more attention. On the one hand, there is a powerful case that human minds are equipped with a psychological system dedicated to norms and norm-guided behavior (Setman and Kelly forthcoming). On the other hand, there has not yet been a convincing case made that there are any distinct, sui generis motivational resources that are unique or exclusive to this system. To the extent that the issue is addressed, many discussions simply proceed as if the motivations that drive different norm-guided behaviors are drawn from a number of different and more basic psychological sources. However, I do not think the possibility that some normative motivations are psychologically primitive has been ruled out.
Hill (2014) argues that perceptual qualia, i.e. the ways in which things look from a viewpoint, are physical properties of objects. They are relational in nature, that is, they are functions of objects’ intrinsic properties, viewpoints, and observers. Hill also claims that his kind of representationalism is the only view capable of “naturalizing qualia”. After discussing a worry with Hill’s account, I put forward an alternative, which is just as “naturalization-friendly”. I build upon Chirimuuta’s color adverbialism (2015), and I argue that we would better serve the “naturalizing project” if we abandoned representationalism and preferred a broadly adverbialist view of perceptual qualia.
Some non-reductionists claim that so-called ‘exclusion arguments’ against their position rely on a notion of causal sufficiency that is particularly problematic. I argue that such concerns about the role of causal sufficiency in exclusion arguments are relatively superficial since exclusionists can address them by reformulating exclusion arguments in terms of physical sufficiency. The resulting exclusion arguments still face familiar problems, but these are not related to the choice between causal sufficiency and physical sufficiency. The upshot is that objections to the notion of causal sufficiency can be answered in a straightforward fashion and that such objections therefore do not pose a serious threat to exclusion arguments.
The nomic structure of our world spans many levels of description. The explanatory and predictive success of the ‘special sciences’ – biology, psychology, geology, and so on – reveals the existence of robust regularities (sometimes called ‘special science laws’) that knit non-fundamental phenomena into intelligible levels of description. There are two conceptions of how these robust regularities fit into the physical world. On a foundationalist conception, the physical laws (or physical properties) are the source of all other nomic facts, including the robustness of these macro-regularities. On an egalitarian conception, the physical laws are no more fundamental than the laws describing the behavior of genes, ecosystems, or societies.
I survey from a modern perspective what spacetime structure there is according to the general theory of relativity, and what of it determines what else. I describe in some detail both the “standard” and various alternative answers to these questions. Besides bringing many underexplored topics to the attention of philosophers of physics and of science, metaphysicians of science, and foundationally minded physicists, I also aim to cast other, more familiar ones in a new light.
In physics the concept of reduction is often used to describe how features of one theory can be approximated by those of another under specific circumstances. In such circumstances physicists say the former theory reduces to the latter, and often the reduction will induce a simplification of the features in question. (By contrast, the standard terminology in philosophy is to say that the less encompassing, approximating theory reduces the more encompassing theory being approximated.) Accounts of reductive relationships aspire to generality, as broader accounts provide a more systematic understanding of the relationships between theories and which of their features are relevant under which circumstances.
Surplus structure arguments famously identify elements of a theory regarded as excess or superfluous. If there is an otherwise analogous theory that does without such elements, a surplus structure argument prompts adopting it over the one with those elements. Despite their prominence, the form, justification, and range of applicability of such arguments are disputed. I provide an account of these, following Dasgupta for the form, which makes plain the role of observables and observational equivalence. However, I diverge on the justification: instead of demanding that the symmetries of the theory relevant for surplus structure arguments be defined without recourse to any interpretation of those theories, I suggest that the process of identifying what is observable and its consequences for symmetries work in dialog. They settle through a reflective equilibrium that is responsible to new experiments, arguments, and examples. Besides better aligning with paradigmatic uses of the surplus structure argument, this position also has some broader consequences for the scope of these arguments and for the relationship between symmetry and interpretation more generally.
Based on three common interpretive commitments in general relativity, I raise a conceptual problem for the usual identification, in that theory, of timelike curves as those that represent the possible histories of (test) particles in spacetime. This problem affords at least three different solutions, depending on different representational and ontological assumptions one makes about the nature of (test) particles, fields, and their modal structure. While I advocate for a cautious pluralism regarding these options, I also suggest that re-interpreting (test) particles as field processes offers the most promising route for natural integration with the physics of material phenomena, including quantum theory.
Christian List has recently proposed a category-theoretic model of a system of levels, applying it to various pertinent metaphysical questions. We modify and extend this framework to correct some minor defects and better adapt it to application in philosophy of science. This includes a richer use of category theoretic ideas and some illustrations using social choice theory.
Recently, Horsman et al. (2014) have proposed a new framework, Abstraction/Representation (AR) theory, for understanding and evaluating claims about unconventional or non-standard computation. Among its attractive features, the theory in particular implies a novel account of what it means to be a computer. After expounding on this account, I compare it with other accounts of concrete computation, finding that it does not quite fit the standard categorization: while it is most similar to some semantic accounts, it is not itself a semantic account. Then I evaluate it according to the six desiderata for accounts of concrete computation proposed by Piccinini (2015). Finding that it does not clearly satisfy some of them, I propose a modification, which I call Agential AR theory, that does, yielding an account that could be a serious competitor to other leading accounts of concrete computation.
If one is interested in reasoning counterfactually within a physical theory, one cannot adequately use the standard possible world semantics. As developed by Lewis and others, this semantics depends on entertaining possible worlds with miracles, worlds in which laws of nature, as described by physical theory, are violated. Van Fraassen suggested instead to use the models of a theory as worlds, but gave up on determining the needed comparative similarity relation for the semantics objectively. I present a third way, in which this similarity relation is determined from properties of the models contextually relevant to the truth of the counterfactual under evaluation. After illustrating this with a simple example from thermodynamics, I draw some implications for future work, including a renewed possibility for a viable deflationary account of laws of nature.
A “stopping rule” in a sequential experiment is a rule or procedure for determining when the experiment should end. For example, consider a pair of experiments designed to obtain evidence about the proportion of fruit flies in a given population with red eyes [Savage, 1962, pp. 17–18]. In both experiments, flies are caught, observed, and released sequentially and fairly, and the number of red-eyed flies is reported at the end. In the first, the experiment is designed to stop after observing 100 flies; the second is designed to stop after observing 6 red-eyed flies. In general the data from these experiments could be very different, but they could also be the same: in that case, 100 total flies would be observed in both experiments, of which 6 (including the last) would have red eyes. Is the evidence that each of the two experiments would then provide for or against a hypothesis about the proportion of red-eyed flies the same? The stopping rule principle (SRP) states that it is: Stopping Rule Principle: The evidential relationship between the data from a completed sequential experiment and a statistical hypothesis never depends on the experiment’s stopping rule.
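As a minimal numerical sketch (mine, not the source's), the likelihoods the two designs assign to this shared outcome differ only by a constant factor, which is the standard likelihood-theoretic motivation for the SRP:

```python
from math import comb

def fixed_n_lik(p, n=100, k=6):
    # Design 1: stop after n flies; binomial likelihood of k red-eyed flies
    return comb(n, k) * p**k * (1 - p)**(n - k)

def fixed_k_lik(p, n=100, k=6):
    # Design 2: stop at the k-th red-eyed fly, which arrives as fly n;
    # negative binomial likelihood (the last fly is red-eyed by design)
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

# The ratio is constant in p, so the two likelihood functions are
# proportional: they agree on all likelihood-based comparisons.
ratios = [fixed_n_lik(p) / fixed_k_lik(p) for p in (0.02, 0.06, 0.2)]
```

Whether this proportionality settles the evidential question is precisely what the SRP asserts and what frequentist critics deny.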
Amalgamating evidence from heterogeneous sources and across levels of inquiry is becoming increasingly important in many pure and applied sciences. This special issue provides a forum for researchers from diverse scientific and philosophical perspectives to discuss evidence amalgamation, its methodologies, its history, its pitfalls and its potential. We situate the contributions therein within six themes from the broad literature on this subject: the variety-of-evidence thesis, the philosophy of meta-analysis, the role of robustness/sensitivity analysis for evidence amalgamation, its bearing on questions of extrapolation and external validity of experiments, its connection with theory development, and its interface with causal inference, especially regarding causal theories of cancer.
This review concerns the notions of physical possibility and necessity as they are informed by contemporary physical theories and the reconstructive explications of past physical theories according to present standards. Its primary goal is twofold: first, to motivate and introduce a range of accessible issues of philosophical relevance around these notions; and second, to provide extensive references to the research literature on them. Although I will have occasion to comment on the direction and shape of this literature, pointing out certain lacunae in argument or scholarly attention, I intend to advance no overriding thesis or point of view, aside from the selection of issues I deem most interesting.
I review and amplify on some of the many uses of representing a scientific theory in a particular context as a collection of models endowed with a similarity structure, which encodes the ways in which those models are similar to one another. This structure, which is related to topological structure, proves fruitful in the analysis of a variety of issues central to the philosophy of science. These include intertheoretic reduction, emergent properties, the epistemic connections between modeling and inference, the semantics of counterfactual conditionals, and laws of nature. The morals are twofold: first, the further adoption of formal methods for describing similarity (and related topological) structure has the potential to aid in decisive progress in philosophy of science; and second, the selection and justification of such structure is not a matter of technical convenience, but rather often involves great conceptual and philosophical subtlety. I conclude with various directions for future research.
Recent work on the hole argument in general relativity by Weatherall (2016b) has drawn attention to the neglected concept of (mathematical) models’ representational capacities. I argue for several theses about the structure of these capacities, including that they should be understood not as many-to-one relations from models to the world, but in general as many-to-many relations constrained by the models’ isomorphisms. I then compare these ideas with a recent argument by Belot (2017) for the claim that some isometries “generate new possibilities” in general relativity. Philosophical orthodoxy, by contrast, denies this. Properly understanding the role of representational capacities, I argue, reveals how Belot’s rejection of orthodoxy does not go far enough, and makes better sense of our practices in theorizing about spacetime.
The Collapsing Leviathan
I was seriously depressed for the last week, by noticeably more than my baseline amount for the new pandemic-ravaged world. The depression seems to have been triggered by two pieces of news:
The US Food and Drug Administration—yes, the same FDA whose failure to approve covid tests in February infamously set the stage for the deaths of 100,000 Americans—has now also banned the Gates Foundation’s program for at-home covid testing. …