Quantum Field Theory (QFT) is the mathematical and conceptual
framework for contemporary elementary particle physics. It is also a
framework used in other areas of theoretical physics, such as
condensed matter physics and statistical mechanics. In a rather
informal sense QFT is the extension of quantum mechanics (QM), dealing
with particles, over to fields, i.e. systems with an infinite number
of degrees of freedom. (See the entry on
quantum mechanics.) In the last decade QFT has become a more widely discussed
topic in philosophy of science, with questions ranging from
methodology and semantics to ontology.
Neurophysiology and neuroanatomy limit the set of possible computations that can be performed in a brain circuit. Although detailed data on individual brain microcircuits are available in the literature, cognitive modellers seldom take these constraints into account. One reason for this is the intrinsic complexity of accounting for mechanisms when describing function. In this paper, we present multiple extensions to the Neural Engineering Framework that simplify the integration of low-level constraints such as Dale’s principle and spatially constrained connectivity into high-level, functional models. We apply these techniques to a recent model of temporal representation in the Granule-Golgi microcircuit in the cerebellum, extending it towards higher degrees of biological plausibility. We perform a series of experiments to analyze the impact of these changes on a functional level. The results demonstrate that our chosen functional description can indeed be mapped onto the target microcircuit under biological constraints. Further, we gain insights into why these parameters are as observed by examining the effects of parameter changes. While the circuit discussed here only describes a small section of the brain, we hope that this work inspires similar attempts at bridging low-level biological detail and high-level function. To encourage the adoption of our methods, we have published the software developed for building our model as an open-source library.
[Editor's Note: The following new entry by Timothy O’Connor replaces the former entry
on this topic by the previous authors.]
The world appears to contain diverse kinds of objects and
systems—planets, tornadoes, trees, ant colonies, and human
persons, to name but a few—characterized by distinctive features
and behaviors. This casual impression is deepened by the success of
the special sciences, with their distinctive taxonomies and laws
characterizing astronomical, meteorological, chemical, botanical,
biological, and psychological processes, among others. But
there’s a twist, for part of the success of the special sciences
reflects an effective consensus that the features of the composed
entities they treat do not “float free” of features and
configurations of their components, but are rather in some way(s)
dependent on them.
My aims in this essay are two. First (§§1-4), I want to get clear on the very idea of a theory of the history of philosophy, the idea of an overarching account of the evolution of philosophical reflection since the inception of written philosophy. And secondly (§§5-8), I want to actually sketch such a global theory of the history of philosophy, which I call the two-streams theory.
Psycholinguistic studies have repeatedly demonstrated that downward entailing (DE) quantifiers are more difficult to process than upward entailing (UE) ones. We contribute to the current debate on cognitive processes causing the monotonicity effect by testing predictions about the underlying processes derived from two competing theoretical proposals: two-step and pragmatic processing models. We model reaction times and accuracy from two verification experiments (a sentence-picture and a purely linguistic verification task), using the diffusion decision model (DDM). In both experiments, verification of UE quantifier more than half was compared to verification of DE quantifier fewer than half. Our analyses revealed the same pattern of results across tasks: Both non-decision times and drift rates, two of the free model parameters of the DDM, were affected by the monotonicity manipulation. Thus, our modeling results support both two-step (prediction: non-decision time is affected) and pragmatic processing models (prediction: drift rate is affected).
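For readers unfamiliar with the DDM, the mechanism underlying the two affected parameters can be illustrated with a minimal simulation sketch. This is not the fitting procedure or the parameter values of the study; all names and numbers below are illustrative assumptions. A noisy evidence signal accumulates at a given drift rate toward one of two boundaries, and the predicted response time is the boundary-crossing time plus a non-decision component:

```python
import random

def simulate_ddm_trial(drift, boundary=1.0, non_decision=0.3,
                       dt=0.001, noise=1.0, rng=random):
    """Simulate one diffusion-decision trial (Euler-Maruyama scheme).

    Evidence starts at 0 and drifts toward +boundary (correct response)
    or -boundary (error). Returns (response_time, correct).
    """
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # noise scales with sqrt(dt)
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return non_decision + t, x >= boundary

def mean_rt(drift, n=500, seed=0, **kw):
    """Mean response time over n simulated trials (fixed seed)."""
    rng = random.Random(seed)
    times = [simulate_ddm_trial(drift, rng=rng, **kw)[0] for _ in range(n)]
    return sum(times) / n
```

In this toy setup, a lower drift rate lengthens the decision portion of the response time while a larger non-decision time shifts the whole distribution, which is how the two model parameters separate the competing theoretical predictions.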
The cerebellum is classically described in terms of its role in motor control. Recent evidence suggests that the cerebellum supports a wide variety of functions, including timing-related cognitive tasks and perceptual prediction. Correspondingly, deciphering cerebellar function may be important to advance our understanding of cognitive processes. In this paper, we build a model of eyeblink conditioning, an extensively studied low-level function of the cerebellum. Building such a model is of particular interest, since, as of now, it remains unclear how exactly the cerebellum manages to learn and reproduce the precise timings observed in eyeblink conditioning that are potentially exploited by cognitive processes as well. We employ recent advances in large-scale neural network modeling to build a biologically plausible spiking neural network based on the cerebellar microcircuitry. We compare our simulation results to neurophysiological data and demonstrate how the recurrent Granule-Golgi subnetwork could generate the dynamic representations required for triggering motor trajectories in the Purkinje cell layer. Our model is capable of reproducing key properties of eyeblink conditioning, while generating neurophysiological data that could be experimentally verified.
Inspired by the work of Stefano Zambelli on these topics, this paper examines the complex nature of the relation between technology and computability. This involves reconsidering the role of computational complexity in economics and then applying it to a particular formulation of the nature of technology as conceived within the Sraffian framework. A crucial element of this is to expand the concept of technique clusters. This expansion allows one to see that the set of possible techniques has a higher cardinality of infinity than the set of points on a wage-profit frontier. This is associated with potentially deep discontinuities in production functions and a higher form of uncertainty involved in technological change and growth.
Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky-integrate-and-fire neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task. These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.
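The leaky-integrate-and-fire units mentioned above follow a standard membrane equation. The following is a minimal, self-contained sketch of a single such neuron; the parameter values are common textbook defaults, not those of the model described in the abstract:

```python
def lif_spike_times(input_current, tau_rc=0.02, v_th=1.0,
                    t_ref=0.002, dt=0.001, t_max=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    Membrane dynamics: dV/dt = (input_current - V) / tau_rc.
    When V crosses v_th the neuron spikes, resets to 0, and is held
    refractory for t_ref seconds. Returns the list of spike times.
    """
    v, t, refractory_until = 0.0, 0.0, 0.0
    spikes = []
    while t < t_max:
        if t >= refractory_until:
            v += dt * (input_current - v) / tau_rc  # forward-Euler step
            if v >= v_th:
                spikes.append(t)
                v = 0.0
                refractory_until = t + t_ref
        t += dt
    return spikes
```

Stronger input currents yield higher firing rates, while sub-threshold input produces no spikes at all; large-scale models of the kind described above wire millions of such units together and read out population activity as a distributed representation.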
Spatial cognition relies on an internal map-like representation of space provided by hippocampal place cells, which in turn are thought to rely on grid cells as a basis. Spatial Semantic Pointers (SSP) have been introduced as a way to represent continuous spaces and positions via the activity of a spiking neural network. In this work, we further develop the SSP representation to replicate the firing patterns of grid cells. This adds biological realism to the SSP representation and links biological findings with a larger theoretical framework for representing concepts. Furthermore, replicating grid cell activity with SSPs results in greater accuracy when constructing place cells. This improved accuracy is a result of grid cells forming the optimal basis for decoding positions and place cell output. Our results have implications for modelling spatial cognition and, more generally, cognitive representations over continuous variables.
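The SSP encoding mentioned above rests on fractional binding: a unitary base vector is raised to a real-valued power equal to the encoded position, which in the Fourier domain amounts to scaling each phase by the position. A minimal sketch under that assumption follows; the phases here are random for illustration, whereas producing grid-cell firing patterns requires the structured phase choices developed in the paper:

```python
import cmath
import math
import random

def make_base_phases(d, seed=0):
    """Random phases defining a unitary base vector in the Fourier domain."""
    rng = random.Random(seed)
    return [rng.uniform(-math.pi, math.pi) for _ in range(d)]

def encode(phases, x):
    """SSP for position x: the base vector raised to the fractional
    power x, i.e. each Fourier phase scaled by x."""
    return [cmath.exp(1j * p * x) for p in phases]

def similarity(a, b):
    """Normalized dot product between two encodings."""
    return sum((u.conjugate() * v).real for u, v in zip(a, b)) / len(a)
```

Here `similarity(encode(phases, x), encode(phases, y))` is exactly 1 when x = y and decays as the positions separate, which is what allows a downstream population (e.g. a place-cell layer) to decode position from the representation.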
Here’s a paper on categories where the morphisms are open physical systems, and composing them describes gluing these systems together:
• John C. Baez, David Weisbart and Adam Yassine, Open systems in classical mechanics. …
Dishonest signals are displays, calls, or performances that would ordinarily convey certain information about some state of the world, but where the signal being sent does not correspond to the true state. Manipulation is the sending of signals in a way that takes advantage of default receiver responses to such signals, to influence their behavior in ways favorable to the sender. Manipulative signals are often dishonest, and dishonest signals are often manipulative, though this need not be the case. Some theorists have defined signaling in such a way that evolutionarily reinforced signals are essentially manipulative.
Relying on some auxiliary assumptions, usually considered mild, Bell’s theorem proves that no local theory can reproduce all the predictions of quantum mechanics. In this work, we introduce a fully local, superdeterministic model that, by explicitly violating settings independence—one of these auxiliary assumptions, requiring statistical independence between measurement settings and systems to be measured—is able to reproduce all the predictions of quantum mechanics. Moreover, we show that, contrary to widespread expectations, our model can break settings independence without an initial state that is too complex to handle, without visibly losing all explanatory power and without outright nullifying all of experimental science. Still, we argue that our model is unnecessarily complicated and does not offer true advantages over its non-local competitors. We conclude that, while our model does not appear to be a viable contender to its non-local counterparts, it provides the ideal framework to advance the debate over violations of statistical independence via the superdeterministic route.
The relation between causal structure and cointegration and long-run weak exogeneity is explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.
The giving and requesting of explanations is central to normative practice. When we tell children that they must act in certain ways, they often ask why, and often we are able to answer them. Sentences like ‘Kicking dogs is wrong because it hurts them’, and ‘You should eat your vegetables because they’re healthy’, are meaningful and ubiquitous.
Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: (i) the quantum state is a complete description of a physical system; (ii) the quantum (superposition) state is grounded in its terms; and (iii) the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realist account of the quantum state. This raises the question of whether there are other viable forms of quantum fragmentalism.
This paper sketches, in a very partial and preliminary way, an approach to philosophy of science that I believe has some important affinities with philosophical positions that are often regarded as versions of “pragmatism”. However, pragmatism in both its classical and more modern forms has taken on many different commitments. I will be endorsing some of these and rejecting others—in fact, I will suggest that some elements prominent in some recent formulations of pragmatism are quite contrary in spirit to a genuine pragmatism. Among the elements I will retain from many if not all varieties of pragmatism are an emphasis on what is useful, where this is understood in a means/ends framework, a rejection of spectator theories of knowledge, skepticism about certain ways of thinking about “representation” in science, and skepticism about ambitious forms of metaphysics. Elements found in some previous versions of pragmatism that I will reject include proposals to understand (or replace) truth with some notion of community assent, and skepticism about causal and physical modality.
In this paper, I critically evaluate several related, provocative claims made by proponents of data-intensive science and “Big Data” which bear on scientific methodology, especially the claim that scientists will soon no longer have any use for familiar concepts like causation and explanation. After introducing the issue, in section 2, I elaborate on the alleged changes to scientific method that feature prominently in discussions of Big Data. In section 3, I argue that these methodological claims are in tension with a prominent account of scientific method, often called “Inference to the Best Explanation” (IBE). Later on, in section 3, I consider an argument against IBE that will be congenial to proponents of Big Data, namely the argument due to Roche and Sober (2013) that “explanatoriness is evidentially irrelevant”. This argument is based on Bayesianism, one of the most prominent general accounts of theory-confirmation. In section 4, I consider some extant responses to this argument, especially that of Climenhaga (2017). In section 5, I argue that Roche and Sober’s argument does not show that explanatory reasoning is dispensable. In section 6, I argue that there is good reason to think explanatory reasoning will continue to prove indispensable in scientific practice. Drawing on Cicero’s oft-neglected De Divinatione, I formulate what I call the “Ciceronian Causal-nomological Requirement”, (CCR), which states roughly that causal-nomological knowledge is essential for relying on correlations in predictive inference. I defend a version of the CCR by appealing to the challenge of “spurious correlations”, chance correlations which we should not rely upon for predictive inference. In section 7, I offer some concluding remarks.
If the Past Hypothesis underlies the arrows of time, what is the status of the Past Hypothesis? In this paper, I examine the role of the Past Hypothesis in the Boltzmannian account and defend the view that the Past Hypothesis is a candidate fundamental law of nature. Such a view is known to be compatible with Humeanism about laws, but as I argue it is also supported by a minimal non-Humean “governing” view. Some worries arise from the non-dynamical and time-dependent character of the Past Hypothesis as a boundary condition, the intrinsic vagueness in its specification, and the nature of the initial probability distribution. I show that these worries do not have much force, and in any case they become less relevant in a new quantum framework for analyzing time’s arrows—the Wentaculus. Hence, both Humeans and minimalist non-Humeans should embrace the view that the Past Hypothesis is a candidate fundamental law of nature and welcome its ramifications for other parts of philosophy of science.
I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: the latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.
In the year 2000, in a paper titled Quantum Theory Needs No ‘Interpretation’, Chris Fuchs and Asher Peres presented a series of instrumentalist arguments against the role played by ‘interpretations’ in QM. Since then —quite regardless of the publication of this paper— the number of interpretations has experienced a continuous growth, constituting what Adán Cabello has characterized as a “map of madness”. In this work, we discuss the reasons behind this dangerous fragmentation in understanding and provide new arguments against the need for interpretations in QM which —contrary to those of Fuchs and Peres— are derived from a representational realist understanding of theories, grounded in the writings of Einstein, Heisenberg and Pauli. Furthermore, we will argue that there are reasons to believe that the creation of ‘interpretations’ for the theory of quanta has functioned as a trap designed by anti-realists in order to imprison realists in a labyrinth with no exit. Taking as a standpoint the critical analysis by David Deutsch of the anti-realist understanding of physics, we attempt to address the references and roles played by ‘theory’ and ‘observation’. In this respect, we will argue that the key to escaping the anti-realist trap of interpretation is to recognize that —as Einstein told Heisenberg almost a century ago— it is only the theory which can tell you what can be observed. Finally, we will conclude that what QM needs is not a new interpretation but rather a consistent, coherent and unified theoretical (formal-conceptual) scheme which allows us to understand what the theory is really talking about. Keywords: interpretation, explanation, representation, quantum theory.
It has been argued in various places that measurement-induced collapses in Orthodox Quantum Mechanics yield a genuine structural (or intrinsic) quantum arrow of time. In this paper, I will critically assess this proposal. I begin by distinguishing between a structural and a non-structural arrow of time. After presenting the proposal of a collapse-based arrow of time in some detail and discussing some criticisms it has faced, I argue, first, that any quantum arrow of time in Orthodox Quantum Mechanics cannot be defined for the entire universe and, second, that it requires non-dynamical information to be established. Consequently, I conclude that any quantum arrow of time in Orthodox Quantum Mechanics is, at best, local and non-structural, deflating the original proposal.
The notion of time reversal has caused some recent controversy in philosophy of physics. In this paper, I claim that the notion is more complex than usually thought. In particular, I contend that any account of time reversal presupposes, explicitly or implicitly, an answer to the following questions: (a) What is time-reversal symmetry predicated of? (b) What sorts of transformations should time reversal perform, and upon what? (c) What role does time-reversal symmetry play in physical theories? Each question, I argue, not only admits divergent answers, but also opens a dimension of analysis that feeds the complexity of time reversal: modal, metaphysical, and heuristic, respectively. The comprehension of this multi-dimensionality, I conclude, shows how philosophically rich the notion of time reversal is in philosophy of physics.
According to the “Boltzmann brain” hypothesis, we popped into existence as a thermal fluctuation in an otherwise chaotic universe, with our brains replete with spurious memories of a fictitious, orderly past. The hypothesis extends less ambitious argumentation by Ludwig Boltzmann in the late 19th century, but it lacks the physical foundation of Boltzmann’s original arguments. We are assured of neither the recurrence nor the reversibility of the time developments of the applicable physics. The Boltzmann brain scenario is much more likely to produce a physically spurious “batty brain” whose memories fail to conform to the scientifically well-behaved regularities of our brains.
Comparative psychology came into its own as a science of animal minds, so a standard story goes, when it abandoned anecdotes in favor of experimental methods. However, pragmatic constraints significantly limit the number of individual animals included in laboratory experiments. Studies are often published with sample sizes in the single digits, and sometimes with samples of a single animal. With such small samples, comparative psychology has arguably not actually moved on from its anecdotal roots. Replication failures in other branches of psychology have received substantial attention, but have only recently been addressed in comparative psychology, and have not received serious attention in the attendant philosophical literature. I focus on the question of how to interpret findings from experiments with small samples, and whether they can be generalized to other members of the tested species. As a first step, I argue that we should view studies with extremely small sample sizes as anecdotal experiments, lying somewhere between traditional experiments and traditional anecdotes in evidential weight and generalizability.
A widespread view in physics holds that the implementation of time reversal in standard quantum mechanics must be given by an anti-unitary operator. In foundations and philosophy of physics, however, there has been some discussion about the conceptual grounds of this orthodoxy, largely relying on either its obviousness or its mathematical-physical virtues. My aim in this paper is to substantively change the traditional structure of the debate by highlighting the philosophical commitments underlying the orthodoxy. I argue that the persuasive force of the orthodoxy depends greatly on a relationalist metaphysics of time and a by-stipulation view of time-reversal invariance. Only against such a philosophical background can the orthodoxy of time reversal in standard quantum mechanics succeed and be properly justified.
Catherine Herfeld: Professor List, what comes to your mind when someone refers to rational choice theory? What do you take rational choice theory to be? Christian List: When students ask me to define rational choice theory, I usually tell them that it is a cluster of theories, which subsumes individual decision theory, game theory, and social choice theory. I take rational choice theory to be not a single theory but a label for a whole field. In the same way, if you refer to economic theory, that is not a single theory either, but a whole discipline, which subsumes a number of different, specific theories. I am actually very ecumenical in my use of the label ‘rational choice theory’. I am also happy to say that rational choice theory in this broad sense subsumes various psychologically informed theories, including theories of boundedly rational choice. We should not define rational choice theory too narrowly, and we definitely shouldn’t tie it too closely to the traditional idea of homo economicus.
Psychologists frequently use response time to study cognitive processes, but response time may also be a part of the commonsense psychology that allows us to make inferences about other agents’ mental processes. We present evidence that by age six, children expect that solutions to a complex problem can be produced quickly if already memorized, but not if they need to be solved for the first time. We suggest that children could use response times to evaluate agents’ competence and expertise, as well as to assess the value and relevance of information.
Have you ever disagreed with your government’s stance about some significant social, political, economic, or even philosophical issue? For example: Healthcare policy? Response to a pandemic? Gender inequality? Structural racism? Drilling in the Arctic? Fracking? Approving or vetoing a military intervention in a foreign country? Transgender rights? Exiting some multi-national political alliance (for instance, the European Union)? The building of a 20 billion dollar wall? We’re guessing the answer is most likely ’yes’.
Morality can often seem pretty diverse. There are moral rules governing our physical and sexual interactions with other human beings; there are moral rules relating to how we treat and respect property; there are moral rules concerning the behaviour of officials in government office; and, according to some religions, there are even moral rules for how we prepare and eat food. …