Neurophysiology and neuroanatomy limit the set of possible computations that can be performed in a brain circuit. Although detailed data on individual brain microcircuits are available in the literature, cognitive modellers seldom take these constraints into account. One reason for this is the intrinsic complexity of accounting for mechanisms when describing function. In this paper, we present multiple extensions to the Neural Engineering Framework that simplify the integration of low-level constraints such as Dale’s principle and spatially constrained connectivity into high-level, functional models. We apply these techniques to a recent model of temporal representation in the Granule-Golgi microcircuit in the cerebellum, extending it towards higher degrees of biological plausibility. We perform a series of experiments to analyze the impact of these changes on a functional level. The results demonstrate that our chosen functional description can indeed be mapped onto the target microcircuit under biological constraints. Further, we gain insights into why these parameters are as observed by examining the effects of parameter changes. While the circuit discussed here only describes a small section of the brain, we hope that this work inspires similar attempts at bridging low-level biological detail and high-level function. To encourage the adoption of our methods, we published the software developed for building our model as an open-source library.
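The abstract does not spell out what a constraint like Dale's principle means at the level of connection weights. As a loose illustrative sketch (not the authors' implementation, and the function name `satisfies_dales_principle` is my own), the constraint can be stated as: every presynaptic neuron is either purely excitatory or purely inhibitory, i.e. each row of the weight matrix has a uniform sign.

```python
import numpy as np

def satisfies_dales_principle(W):
    """Check that each presynaptic neuron (a row of W) is either
    purely excitatory (all weights >= 0) or purely inhibitory
    (all weights <= 0), as Dale's principle requires."""
    return all(np.all(row >= 0) or np.all(row <= 0) for row in W)

# A weight matrix that respects Dale's principle: rows have uniform sign.
W_ok = np.array([[0.2, 0.0, 0.5],     # excitatory presynaptic neuron
                 [-0.1, -0.3, 0.0]])  # inhibitory presynaptic neuron

# A matrix that violates it: the first neuron both excites and inhibits.
W_bad = np.array([[0.2, -0.4, 0.5],
                  [-0.1, -0.3, 0.0]])

print(satisfies_dales_principle(W_ok))   # True
print(satisfies_dales_principle(W_bad))  # False
```

Solving for functionally equivalent weights under this sign constraint (rather than merely checking it) is the harder problem the paper's NEF extensions address.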
[Editor's Note: The following new entry by Timothy O’Connor replaces the former entry on this topic by the previous authors.]
The world appears to contain diverse kinds of objects and
systems—planets, tornadoes, trees, ant colonies, and human
persons, to name but a few—characterized by distinctive features
and behaviors. This casual impression is deepened by the success of
the special sciences, with their distinctive taxonomies and laws
characterizing astronomical, meteorological, chemical, botanical,
biological, and psychological processes, among others. But
there’s a twist, for part of the success of the special sciences
reflects an effective consensus that the features of the composed
entities they treat do not “float free” of features and
configurations of their components, but are rather in some way(s)
dependent on them.
The cerebellum is classically described in terms of its role in motor control. Recent evidence suggests that the cerebellum supports a wide variety of functions, including timing-related cognitive tasks and perceptual prediction. Correspondingly, deciphering cerebellar function may be important to advance our understanding of cognitive processes. In this paper, we build a model of eyeblink conditioning, an extensively studied low-level function of the cerebellum. Building such a model is of particular interest, since, as of now, it remains unclear how exactly the cerebellum manages to learn and reproduce the precise timings observed in eyeblink conditioning that are potentially exploited by cognitive processes as well. We employ recent advances in large-scale neural network modeling to build a biologically plausible spiking neural network based on the cerebellar microcircuitry. We compare our simulation results to neurophysiological data and demonstrate how the recurrent Granule-Golgi subnetwork could generate the dynamic representations required for triggering motor trajectories in the Purkinje cell layer. Our model is capable of reproducing key properties of eyeblink conditioning, while generating neurophysiological data that could be experimentally verified.
Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky integrate-and-fire neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task. These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.
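The abstract's building block, the leaky integrate-and-fire (LIF) neuron, can be sketched in a few lines. This is a generic textbook LIF model with assumed parameters (membrane time constant `tau_rc`, refractory period `tau_ref`, unit threshold), not the authors' 8000-neuron network; it only illustrates why stronger input currents yield faster evidence-related firing.

```python
def simulate_lif(current, dt=1e-3, tau_rc=0.02, tau_ref=0.002,
                 v_th=1.0, t_total=1.0):
    """Minimal leaky integrate-and-fire neuron driven by a constant
    input current; returns the number of spikes emitted in t_total s."""
    v, refractory, spikes = 0.0, 0.0, 0
    for _ in range(int(t_total / dt)):
        if refractory > 0:          # still in the refractory period
            refractory -= dt
            continue
        # Leaky integration: dv/dt = (current - v) / tau_rc
        v += dt * (current - v) / tau_rc
        if v >= v_th:               # threshold crossed: spike and reset
            spikes += 1
            v = 0.0
            refractory = tau_ref
    return spikes

# Stronger suprathreshold input drives a higher firing rate,
# while subthreshold input never produces a spike.
print(simulate_lif(1.5) > simulate_lif(1.2))  # True
print(simulate_lif(0.5))                      # 0
```

In accumulator-style DM models, populations of such neurons integrate noisy evidence until a downstream threshold mechanism (here played by the basal ganglia populations) commits to a choice.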
This paper develops Richard Wollheim’s claim that the proper appreciation of a picture involves not only enjoying a seeing-in experience but also abiding by a standard of correctness. While scholars have so far focused on what fixes the standard, thereby discussing the alternative between intentions and causal mechanisms, the paper focuses on what the standard does, that is, establishing which kinds, individuals, features and standpoints are relevant to the understanding of pictures. It is argued that, while standards concerning kinds, individuals and features can be relevant also to ordinary perception, standards concerning standpoints are specific to pictorial experience. Drawing on all this, the paper proposes an ontology of depiction according to which a picture is constituted by both its visual appearance and its standard of correctness.
This article sheds light on a response to experimental philosophy that has not yet received enough attention: the reflection defense. According to proponents of this defense, judgments about philosophical cases are relevant only when they are the product of careful, nuanced, and conceptually rigorous reflection. We argue that the reflection defense is misguided: We present five studies (N>1800) showing that people make the same judgments when they are primed to engage in careful reflection as they do in the conditions standardly used by experimental philosophers.
The notion of time reversal has caused some recent controversy in philosophy of physics. In this paper, I claim that the notion is more complex than usually thought. In particular, I contend that any account of time reversal presupposes, explicitly or implicitly, an answer to the following questions: (a) What is time-reversal symmetry predicated of? (b) What sorts of transformations should time reversal perform, and upon what? (c) What role does time-reversal symmetry play in physical theories? Each dimension, I argue, not only admits divergent answers, but also opens a dimension of analysis that feeds the complexity of time reversal: modal, metaphysical, and heuristic, respectively. The comprehension of this multi-dimensionality, I conclude, shows how philosophically rich the notion of time reversal is in philosophy of physics.
Comparative psychology came into its own as a science of animal minds, so a standard story goes, when it abandoned anecdotes in favor of experimental methods. However, pragmatic constraints significantly limit the number of individual animals included in laboratory experiments. Studies are often published with sample sizes in the single digits, and sometimes with a sample of just one animal. With such small samples, comparative psychology has arguably not actually moved on from its anecdotal roots. Replication failures in other branches of psychology have received substantial attention, but have only recently been addressed in comparative psychology, and have not received serious attention in the attending philosophical literature. I focus on the question of how to interpret findings from experiments with small samples, and whether they can be generalized to other members of the tested species. As a first step, I argue that we should view studies with extremely small sample sizes as anecdotal experiments, lying somewhere between traditional experiments and traditional anecdotes in evidential weight and generalizability.
This article addresses three questions concerning Kant's views on non-rational animals: do they intuit spatio-temporal particulars, do they perceive objects, and do they have intentional states? My aim is to explore the relationship between these questions and to clarify certain pervasive ambiguities in how they have been understood. I first disambiguate various nonequivalent notions of objecthood and intentionality; I then look closely at several models of objectivity present in Kant's work, and at recent discussions of representational and relational theories of intentionality. I argue ultimately that, given the relevant disambiguations, the answers to all three questions will likely be positive. These results both support what has become known as the nonconceptualist reading of Kant, and make clearer the price the conceptualist must pay to sustain his or her position.
Psychologists frequently use response time to study cognitive processes, but response time may also be a part of the commonsense psychology that allows us to make inferences about other agents’ mental processes. We present evidence that by age six, children expect that solutions to a complex problem can be produced quickly if already memorized, but not if they need to be solved for the first time. We suggest that children could use response times to evaluate agents’ competence and expertise, as well as to assess the value and relevance of information.
While controversy about the nature of grounding abounds, our focus is on a question for which a particular answer has attracted something like a consensus. The question concerns the relation between partial grounding and full grounding. The apparent consensus is that the former is to be defined in terms of the latter. We argue that the standard way of doing this faces a significant problem and that we ought to pursue the reverse project of defining full grounding in terms of partial grounding. The guiding idea behind the definition we propose is that full grounding is what happens when partial grounding works in a way that ensures that the grounded is nothing over and above the grounds. We ultimately understand this idea in terms of iterated nothing-over-and-above claims.
Constitutive panpsychism is the doctrine that macro-level consciousness—that is, consciousness of the sort possessed by certain composite things such as humans—is built out of irreducibly mental (or proto-mental) features had by some or all of the basic physical constituents of reality. On constitutive panpsychism, changes in macro-level consciousness amount to changes in either the way that micro-conscious entities ‘bond’ or the way that micro-conscious qualities ‘blend’ (or both). I pose the ‘Selection Problem’ for constitutive panpsychism: the problem of explaining how high-level functional states of the brain ‘select’ micro-conscious qualities for bonding or blending. I argue that there are no empirically plausible solutions to this problem.
Just as different pairs of shoes are useful for different occasions, different masks are useful for different occasions. Here's my collection. Category 1: Likely to be significantly protective
1.1. 3M 6300 half-face mask with 2091 P100 filters
Summary: Extremely protective for inhalation. …
In June 2016, David Chalmers delivered the Petrus Hispanus Lectures at the LanCog research group, University of Lisbon, on the subject of objects, properties, and perception in virtual reality environments. The paper resulting from these lectures was subsequently published in Disputatio as “The Virtual and the Real” (vol. IX, 2017, No. 46, pp. 309–52). In it, Chalmers defends virtual realism, according to which virtual objects are bona fide digital objects with virtual counterparts of perceptible properties such as colour and shape, and perception in virtual reality environments is typically veridical rather than illusory. This special issue collects responses to Chalmers due to Claus Beisbart, Jesper Juul, Peter Ludlow, Neil McDonnell and Nathan Wildman, Alyssa Ney, Eric Schwitzgebel, and Marc Silcox; together with a detailed response by Chalmers to each paper.
In this paper I want to explain, from a physicalist point of view, why so many people are persuaded that consciousness is non-physical. I take there to be good arguments, stemming from the need to integrate conscious events into the causal workings of the world, for identifying conscious states with physical states, and in what follows I shall take these arguments as read. At the same time there is no doubt that many people have strong intuitions that consciousness cannot possibly be physical. My aim will be to explain how these intuitions arise, and why they do not discredit physicalism.
Is there such a thing as animal homosexuality? I begin this paper with a brief discussion of two case studies of homosexual behaviour in nonhuman animals, notably cockchafers and king penguins, in order to reveal the persistent attempts of some animal scientists to explain away animal homosexuality. I then go on to identify and analyse two philosophical concerns underlying these attempts: the problem of other minds and the problem of anthropomorphism. Critics of animal homosexuality seem to assume a) that there is no way of knowing whether nonhuman animals have minds; b) that even if they did in fact have minds, they would still not be capable of having the mental states that we usually associate with human homosexuality; and c) that even if they were capable of such states, there would still be the issue that same-sex mental states and behaviours are often mistakenly identified as sexual states and behaviours. By providing arguments against each of these assumptions, I support the claim that some animals exhibit homosexuality, that there are homosexual mental states in at least some nonhuman animals, and that these states may help to explain homosexual behaviours.
Counterfactual thought is an important element of our cognitive lives. In making practical decisions, we are often led to ask what would happen if we were to carry out a certain action, and we frequently support causal claims by showing that the putative effect depends counterfactually on the supposed cause. It therefore does not come as a surprise that scholars in many disciplines—from philosophy to cognitive and social psychology to computer science to linguistics—have shown a keen interest in understanding counterfactuals. One point of contention is whether causal notions should figure in a semantic account of counterfactuals. A number of philosophers and linguists, motivated by examples like those described in section 1 below, have favored such causal theories of counterfactuals. However, this approach stands opposed to a prominent philosophical tradition, going back at least to David Hume and most prominently defended by David Lewis, that aims to give a reductive analysis of causation in counterfactual terms. The two views advocate for opposite directions of analysis and are consequently mutually exclusive—combining them would lead to circularity.
The Busy Beaver function, with its incomprehensibly rapid growth, has captivated generations of computer scientists, mathematicians, and hobbyists. In this survey, I offer a personal view of the BB function 58 years after its introduction, emphasizing lesser-known insights, recent progress, and especially favorite open problems. Examples of such problems include: when does the BB function first exceed the Ackermann function? Is the value of BB(20) independent of set theory? Can we prove that BB(n + 1) > 2^BB(n) for large enough n? Given BB(n), how many advice bits are needed to compute BB(n + 1)? Do all Busy Beavers halt on all inputs, not just the 0 input? Is it decidable whether BB(n) is even or odd?
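For readers unfamiliar with the definition behind these open problems: BB(n) is the maximum number of steps an n-state, 2-symbol Turing machine can take on an all-zero tape before halting. As a minimal sketch (the simulator and rule encoding are my own, but the 2-state champion machine and the values BB(2) = 6 and Σ(2) = 4 are well established):

```python
def run_tm(rules, max_steps=10_000):
    """Simulate a Turing machine on an all-zero tape. rules maps
    (state, read_symbol) -> (write_symbol, head_move, next_state).
    Returns (steps_taken, ones_left_on_tape), or None on timeout."""
    tape, pos, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == 'H':                    # halting state reached
            return step, sum(tape.values())
    return None                             # did not halt in time

# The 2-state, 2-symbol Busy Beaver champion: it runs for
# BB(2) = 6 steps and leaves Sigma(2) = 4 ones on the tape.
bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}

print(run_tm(bb2))  # (6, 4)
```

The explosive growth arises because proving any particular machine is the champion requires ruling out non-halting behavior in all its rivals, which is exactly the halting problem in miniature.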
Perception of a property (e.g. a colour, a shape, a size) can enable thought about the property, while at the same time misleading the subject as to what the property is like. This long-overlooked claim parallels a more familiar observation concerning perception-based thought about objects, namely that perception can enable a subject to think about an object while at the same time misleading her as to what the object is like. I defend the overlooked claim, and then use it to generate a challenge for a standard way of thinking about the relationship between visual experience and rational belief formation. Put informally, that view holds that just as we can mislead others by saying something false, illusory experience misleads by misrepresenting how things stand in the world. I argue that we ought to abandon this view in favour of some radical alternative account of the relationship between visual experience and rational belief formation.
Pautz (Perceiving the World, 2010) has argued that the most prominent naive realist account of hallucination—negative epistemic disjunctivism—cannot explain how hallucinations enable us to form beliefs about perceptually presented properties. He takes this as grounds to reject both negative epistemic disjunctivism and naive realism. Our aims are two: First, to show that this objection is dialectically ineffective against naive realism, and second, to draw morals from the failure of this objection for the dispute over the nature of perceptual experience at large.
Do the senses represent causation? Many commentators read Malebranche as anticipating Hume’s negative answer to this question. I disagree with this assessment. When a yellow billiard ball strikes a red billiard ball, Malebranche holds that we see the yellow ball as causing the red ball to move. Given Malebranche’s occasionalism, he insists that the visual experience of causal interaction is illusory. Nevertheless, Malebranche holds that the senses (mis)represent finite things as causally efficacious. This experience of creaturely causality explains why Aristotelian philosophers and ordinary folk struggle to recognize occasionalism’s truth.
We explore the contribution made by oscillatory, synchronous neural activity to representation in the brain. We closely examine six prominent examples of brain function in which neural oscillations play a central role, and identify two levels of involvement that these oscillations take in the emergence of representations: enabling (when oscillations help to establish a communication channel between sender and receiver, or are causally involved in triggering a representation) and properly representational (when oscillations are a constitutive part of the representation). We show that even an idealized informational sender-receiver account of representation makes the representational status of oscillations a non-trivial matter, which depends on rather minute empirical details.
The paper analyses in some depth the distinction by Paul Humphreys between ‘epistemic opacity’—which I refer to as ‘weak epistemic opacity’ here—and ‘essential epistemic opacity’, and defends the idea that epistemic opacity in general can be made sense of as coming in degrees. The idea of degrees of epistemic opacity is then exploited to show, in the context of computer simulations, the tight relation between the concept of epistemic opacity and actual scientific (modelling and simulation) practices. As a consequence, interesting questions arise in connection with the role of agents dealing with epistemically opaque processes such as computer simulations.
Debate on the nature of representation in cognitive systems tends to oscillate between robustly realist views and various anti-realist options. I defend an alternative view, deflationary realism, which sees cognitive representation as an offshoot of the extended application to cognitive systems of an explanatory model whose primary domain is public representation use. This extended application, justified by a common explanatory target, embodies idealisations, partial mismatches between model and reality. By seeing representation as part of an idealised model, deflationary realism avoids the problems with robust realist views, whilst keeping allegiance to realism.
A perverted space-time geodesy results from the notions of variable rods and clocks, which are taken to have their length and rates affected by the gravitational field. On the other hand, what we might call a concrete geodesy relies on the notions of invariable unit-measuring rods and clocks. In fact, this is a basic assumption of general relativity. Variable rods and clocks lead to a perverted geodesy in the sense that a curved space-time might be seen as arising from the departure from Minkowskian space-time as an effect of the gravitational field on the rate of clocks and the length of rods. In the case of a concrete geodesy we have “directly” a curved space-time whose curvature can be determined using (invariable) unit-measuring rods and clocks. In this paper, we make the case that Einstein's views on geometry in relation to general relativity are plausibly permeated by a perverted geodesy.
In this essay, my aim is to explain Vātsyāyana’s solution to a problem that arises for his theory of liberation. For him and most Nyāya philosophers after him, liberation consists in the absolute cessation of pain (ātyantika-duḥkha-vimukti). Since this requires freedom from embodied existence, it also results in the absolute cessation of pleasure. How, then, can agents like us (who habitually seek pleasure) be rationally motivated to seek liberation? Vātsyāyana’s solution depends on what I will call the Pain Principle, i.e., the principle that we should treat all aspects of our embodied existence as pain. If we were to follow this advice, we would come to apply the label of pain (duḥkha-saṃjñā) to all aspects of our embodied existence, including pleasure. This would undermine our attachment to our own embodied existence. I show that this fits with Vātsyāyana’s general theory of motivation. According to this theory, by manipulating the labels (saṃjñā) using which we think about the world and ourselves, we can induce radical shifts in our patterns of motivation.
I'm a superficialist about belief. On my view, to believe something is to match, to an appropriate degree and in appropriate respects, a "dispositional stereotype" composed of various behavioral, experiential, and cognitive dispositions. …
An important part of the influential Humean doctrine in philosophy is the supervenience principle (sometimes referred to as the principle of separability). This principle asserts that the complete state of the world supervenes on the intrinsic properties of its most fundamental components and their spatiotemporal relations (the so-called Humean mosaic). There are well-known arguments in the literature purporting to show that in quantum mechanics the Humean supervenience principle is violated, due to the existence of entangled states.
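The entanglement-based argument gestured at above can be illustrated with the standard textbook example (not specific to this abstract): the singlet state of two spin-1/2 particles,

```latex
\[
  |\psi\rangle \;=\; \frac{1}{\sqrt{2}}
  \bigl(\, |\!\uparrow\rangle_A |\!\downarrow\rangle_B
        \;-\; |\!\downarrow\rangle_A |\!\uparrow\rangle_B \,\bigr),
\]
```

which cannot be written as a product $|\phi\rangle_A \otimes |\chi\rangle_B$ of states of the two subsystems. The joint state is therefore not determined by the intrinsic properties of $A$ and $B$ taken separately together with their spatiotemporal relations, which is exactly the tension with the Humean mosaic that the arguments in the literature exploit.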
Agential pathology (sometimes referred to as impaired agency or motivational pathology) is a phenomenon whereby people suffering from depressive illnesses struggle to initiate and sustain day-to-day action, in the absence of any identifiable organic motor abnormality. In this paper, I argue that all extant attempts to explain agential pathology share the same explanatory weakness: they are unable to account for why the phenomenon is typically accompanied by an experience of diminished practical significance of objects and features of the world (i.e. experiences of objects’ and features’ ‘availability’ for action or ‘invitingness’). After outlining this explanandum, I argue that the two broad classes of theory already proposed to explain agential pathology (which I term mental state theories and somatic theories) fall short of explaining it. I use this explanatory lacuna to motivate a novel theory of agential pathology (which I term the perceptual theory). This posits that those afflicted by agential pathology struggle to act and experience diminished practical significance in the world around them due to an absence of certain action-centric perceptual representations. This both fills the explanatory gap left by mental state and somatic theories, and provides evidence for the explanatory indispensability of a number of controversial kinds of high-level, action-oriented perceptual contents.
Tyler Burge notably offers a truth-first account of perceptual entitlement in terms of a priori necessary representational functions and norms: on his account, epistemic normativity turns on natural norms, which turn on representational functions. This paper has two aims: first, it criticizes Tyler Burge’s truth-first a priori derivation on functionalist and value-theoretic grounds. Second, it develops a novel, knowledge-first a priori derivation of perceptual entitlement. According to the view developed here, it is a priori that we are entitled to believe the deliverances of our perceptual belief formation system, in virtue of the latter’s constitutive function of generating knowledge.