There is a well-known version of Russell's paradox concerning the bibliography of all bibliographies which fail to list themselves. The usual analysis of this paradox leads to the conclusion that such a bibliography is self-contradictory and therefore cannot exist. However, as we show, a more searching analysis leads to a rather different conclusion.
Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky integrate-and-fire (LIF) neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task. These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.
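The LIF neuron is the basic computational unit from which such models are built. The following is a minimal, self-contained sketch of a single LIF neuron, not the paper's actual model; the parameters (`tau_rc`, `v_th`, the input currents) are illustrative assumptions. It shows the core property the abstract's evidence-accumulation machinery rests on: subthreshold input produces no spikes, and stronger suprathreshold input produces a higher firing rate.

```python
def simulate_lif(i_input, dt=1e-4, t_max=0.5,
                 tau_rc=0.02, v_th=1.0, v_reset=0.0):
    """Euler-integrate a single leaky integrate-and-fire neuron:
    dv/dt = (i_input - v) / tau_rc, with a spike and a reset to
    v_reset whenever the membrane voltage v crosses threshold v_th.
    Returns the list of spike times in seconds."""
    v = 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt * (i_input - v) / tau_rc
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Subthreshold input (0.5 < v_th) never spikes; above threshold,
# the firing rate grows monotonically with input strength.
rates = {i: len(simulate_lif(i)) / 0.5 for i in (1.5, 2.0, 3.0)}
```

In a full model of the kind described, thousands of such units are connected so that population firing rates implement valuation, accumulation, and thresholding.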
Spatial cognition relies on an internal map-like representation of space provided by hippocampal place cells, which in turn are thought to rely on grid cells as a basis. Spatial Semantic Pointers (SSPs) have been introduced as a way to represent continuous spaces and positions via the activity of a spiking neural network. In this work, we further develop the SSP representation to replicate the firing patterns of grid cells. This adds biological realism to the SSP representation and links biological findings with a larger theoretical framework for representing concepts. Furthermore, replicating grid cell activity with SSPs results in greater accuracy when constructing place cells. Improved accuracy is a result of grid cells forming the optimal basis for decoding positions and place cell output. Our results have implications for modelling spatial cognition and more general cognitive representations over continuous variables.
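To make the SSP encoding concrete, here is a minimal 1D sketch of fractional binding in the Fourier domain. This is a simplification, not the paper's construction: the dimensionality, random phase choice, and decoding grid are illustrative assumptions, and the structured phase choices needed to produce grid-cell firing patterns are omitted. A position x is encoded by raising a fixed unitary base vector to the power x (scaling every Fourier phase by x), and decoded by finding the candidate position whose encoding is most similar.

```python
import numpy as np

d = 128                                    # illustrative SSP dimensionality
rng = np.random.default_rng(0)

# Phases of the positive-frequency Fourier coefficients of the base vector.
# Unit-magnitude coefficients make the base vector unitary, so fractional
# powers (and hence continuous positions) are well defined.
phases = rng.uniform(-np.pi, np.pi, d // 2 + 1)
phases[0] = 0.0     # DC coefficient must be real
phases[-1] = 0.0    # Nyquist coefficient must be real (d is even)

def encode(x):
    """SSP for position x: fractional binding scales every phase by x."""
    return np.fft.irfft(np.exp(1j * phases * x), n=d)

def decode(ssp, candidates):
    """Return the candidate position whose encoding best matches `ssp`."""
    sims = [ssp @ encode(x) for x in candidates]
    return float(candidates[int(np.argmax(sims))])

xs = np.linspace(-5.0, 5.0, 201)
recovered = decode(encode(1.7), xs)   # recovers 1.7 up to the grid spacing
```

The similarity `encode(x) @ encode(y)` is a kernel sharply peaked at x = y, which is what makes a population of such encodings usable as a basis for constructing place-cell-like responses.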
I argue that ‘consent’ language carries the conventional presupposition that the contemplated action is or would be at someone else’s behest. When one does something for another reason — for example, when one elects independently to do something, or when one accepts an invitation to do something — it is linguistically inappropriate to describe the actor as ‘consenting’ to it; but it is also inappropriate to describe them as ‘not consenting’ to it. A consequence of this idea is that ‘consent’ is poorly suited to play its canonical central role in contemporary sexual ethics. But this does not mean that nonconsensual sex can be morally permissible. One implication will be a new kind of support for feminist critiques of consent theory in sexual ethics.
Suppose we have a countably infinite fair lottery, in John Norton’s sense of label independence: in other words, probabilities are not changed by any relabeling—i.e., any permutation—of tickets. In classical probability, it’s easy to generate a contradiction from the above assumptions, given the simple assumption that there is at least one set A of tickets that has a well-defined probability (i.e., that the probability that the winning ticket is from A is well-defined) and that has the property that both A and its complement are infinite. …
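One standard way the contradiction can be generated, sketched here under the assumptions just stated (finite additivity, plus one set A with a well-defined probability such that both A and its complement are infinite), runs as follows. Any two infinite-coinfinite ticket sets are related by a permutation, so label independence forces them all to share a single probability p:

```latex
% A and its complement are both infinite-coinfinite, so
\[
  P(A) = P(A^{c}) = p, \qquad P(A) + P(A^{c}) = 1
  \;\Longrightarrow\; p = \tfrac{1}{2}.
\]
% But splitting A into two disjoint infinite halves A_1, A_2
% (each again infinite-coinfinite) and using finite additivity gives
\[
  p = P(A) = P(A_1) + P(A_2) = 2p
  \;\Longrightarrow\; p = 0,
\]
% contradicting p = 1/2.
```

This is a schematic reconstruction of the classical argument, not a quotation of the paper's own derivation.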
In a recent series of papers, Jane Friedman argues that suspended judgment is a sui generis first-order attitude, with a question (rather than a proposition) as its content. In this paper, I offer a critique of Friedman’s project. I begin by responding to her arguments against reductive higher-order propositional accounts of suspended judgment, and thus undercut the negative case for her own view. Further, I raise worries about the details of her positive account, and in particular about her claim that one suspends judgment about some matter if and only if one inquires into this matter. Subsequently, I use conclusions drawn from the preceding discussion to offer a tentative account: S suspends judgment about p iff (i) S believes that she neither believes nor disbelieves that p, (ii) S neither believes nor disbelieves that p, and (iii) S intends to judge that p or not-p.
This paper is primarily an advertisement for a research program, and for some particular, so far under-explored research questions within that research program. It’s an advertisement for the program of constructing fragmented models of subjects’ propositional attitudes, and theorizing about and by means of such models. I’ll aim to do two things: First, motivate a fragmentationist research program by identifying a cluster of problems that such a research program is well-positioned to address or resolve. Second, identify what I take to be some of the challenges and research questions that the fragmentationist program will need to address, and where the space of possible answers is not yet well-charted.
Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, your future, better informed self, or the objective chances. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn’t defer to your doctor by aligning your credence in the de se thought ‘I am sick’ with the doctor’s credence in that same de se thought. Nor should you defer to the objective chances by setting your credence in the thought ‘The actual winner wins’ equal to the objective chance that the actual winner wins. Here, I generalize principles of expert deference to handle these kinds of problem cases.
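Schematically, a deference principle of the kind at issue can be written as follows (a generic textbook formulation, not the paper's own generalization):

```latex
\[
  Cr\big(A \mid Cr_{\mathrm{exp}}(A) = x\big) = x,
\]
```

where \(Cr\) is your credence function and \(Cr_{\mathrm{exp}}\) the expert's. The problem cases arise because, for a de se thought such as ‘I am sick’, the thought the doctor entertains has different truth-conditions from yours, so matching the numerical values \(x\) is not genuine deference.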
Here’s a paper on categories where the morphisms are open physical systems, and composing them describes gluing these systems together:
• John C. Baez, David Weisbart and Adam Yassine, Open systems in classical mechanics. …
Consumption decisions are partly influenced by values and ideologies. Consumers care about global warming as well as about child labor and fair trade. Incorporating values into the consumer’s utility function will often violate monotonicity, in case consumption hurts cherished values in a way that isn’t offset by the hedonic benefits of material consumption. We distinguish between intrinsic and instrumental values, and argue that the former tend to introduce discontinuities near zero. For example, a vegetarian’s preferences would be discontinuous near zero amount of animal meat. We axiomatize a utility representation that captures such preferences and discuss the measurability of the degree to which consumers care about such values.
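As an illustration of the kind of discontinuity at issue (a toy functional form, not the representation axiomatized in the paper): let \(m \ge 0\) be meat consumption and \(c\) other consumption, and suppose the vegetarian's intrinsic value imposes a fixed penalty whenever any meat at all is consumed:

```latex
\[
  U(m, c) = u(c) + v(m) - \kappa \,\mathbf{1}\{m > 0\}, \qquad \kappa > 0.
\]
% Then
\[
  \lim_{m \to 0^{+}} U(m, c) = u(c) + v(0) - \kappa \;<\; U(0, c),
\]
```

so \(U\) is discontinuous at \(m = 0\), and monotonicity fails for small \(m > 0\) whenever the hedonic gain \(v(m) - v(0)\) is smaller than \(\kappa\).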
Art works are artefacts and, like all artefacts, are the product of agency. How important is that for our engagement with them? For many artefacts, agency hardly matters. The paperclips on my desk perform their function without me having to think of them as the outputs of agency, though I might on occasion admire their design. But for those artefacts we categorise as works of art, the connection is important: if I treat something as art I need to see how it manifests the choices, preferences, actions and sensibilities of the maker. I am not asked to see it simply as a record of those things. The work is not valuable merely as a conduit to the qualities of the maker; it has final value and not merely instrumental value. Its value depends on its relation to the maker; in Korsgaard’s terms it is value that is final and extrinsic.
Dishonest signals are displays, calls, or performances that would ordinarily convey certain information about some state of the world, but where the signal being sent does not correspond to the true state. Manipulation is the sending of signals in a way that takes advantage of default receiver responses to such signals, to influence their behavior in ways favorable to the sender. Manipulative signals are often dishonest, and dishonest signals are often manipulative, though this need not be the case. Some theorists have defined signaling in such a way that evolutionarily reinforced signals are essentially manipulative.
Relying on some auxiliary assumptions, usually considered mild, Bell’s theorem proves that no local theory can reproduce all the predictions of quantum mechanics. In this work, we introduce a fully local, superdeterministic model that, by explicitly violating settings independence—one of these auxiliary assumptions, requiring statistical independence between measurement settings and systems to be measured—is able to reproduce all the predictions of quantum mechanics. Moreover, we show that, contrary to widespread expectations, our model can break settings independence without an initial state that is too complex to handle, without visibly losing all explanatory power and without outright nullifying all of experimental science. Still, we argue that our model is unnecessarily complicated and does not offer true advantages over its non-local competitors. We conclude that, while our model does not appear to be a viable contender against its non-local counterparts, it provides the ideal framework to advance the debate over violations of statistical independence via the superdeterministic route.
Games are a distinctive form of art — and very different from many traditional arts. Games work in the medium of agency. Game designers don’t just tell stories or create environments. They tell us what our abilities will be in the game. They set our motivations, by setting the scoring system and specifying the win-conditions. Game designers sculpt temporary agencies for us to occupy. And when we play games, we adopt these designed agencies, submerging ourselves in them, and taking on their specified ends for a while.
The relation between causal structure and cointegration and long-run weak exogeneity is explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.
The giving and requesting of explanations is central to normative practice. When we tell children that they must act in certain ways, they often ask why, and often we are able to answer them. Sentences like ‘Kicking dogs is wrong because it hurts them’, and ‘You should eat your vegetables because they’re healthy’, are meaningful and ubiquitous.
In linguistics, the dominant approach to the semantics of plurals appeals to mereology. However, this approach has received strong criticisms from philosophical logicians who subscribe to an alternative framework based on plural logic. In the first part of the article, we offer a precise characterization of the mereological approach and the semantic background in which the debate can be meaningfully reconstructed. In the second part, we deal with the criticisms and assess their logical, linguistic, and philosophical significance. We identify four main objections and show how each can be addressed. Finally, we compare the strengths and shortcomings of the mereological approach and plural logic. Our conclusion is that the former remains a viable and well-motivated framework for the analysis of plurals.
Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: (i) the quantum state is a complete description of a physical system; (ii) the quantum (superposition) state is grounded in its terms, and (iii) the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realistic account of the quantum state. This raises the question about whether there are some other viable forms of quantum fragmentalism.
This paper develops Richard Wollheim’s claim that the proper appreciation of a picture involves not only enjoying a seeing-in experience but also abiding by a standard of correctness. While scholars have so far focused on what fixes the standard, thereby discussing the alternative between intentions and causal mechanisms, the paper focuses on what the standard does, that is, establishing which kinds, individuals, features and standpoints are relevant to the understanding of pictures. It is argued that, while standards concerning kinds, individuals and features can be relevant also to ordinary perception, standards concerning standpoints are specific to pictorial experience. Drawing on all this, the paper proposes an ontology of depiction according to which a picture is constituted by both its visual appearance and its standard of correctness.
According to an increasingly popular view in epistemology and philosophy of mind, beliefs are sensitive to contextual factors such as practical factors and salient error possibilities. A prominent version of this view, called credal sensitivism, holds that the context-sensitivity of belief is due to the context-sensitivity of degrees of belief or credence. Credal sensitivism comes in two variants: while credence-one sensitivism (COS) holds that maximal confidence (credence one) is necessary for belief, threshold credal sensitivism (TCS) holds that belief consists in having credence above some threshold, where this threshold doesn’t require maximal confidence. In this paper, I argue that COS has difficulties in accounting for three important features of belief: i) the compatibility between believing p and assigning non-zero credence to certain error possibilities that one takes to entail not-p, ii) the fact that outright beliefs can occur in different strengths, and iii) beliefs held by unconscious subjects. I also argue that TCS can easily avoid these problems. Finally, I consider an alleged advantage of COS over TCS in terms of explaining beliefs about lotteries. I argue that lottery cases are rather more problematic for COS than TCS. In conclusion, TCS is the most plausible version of credal sensitivism.
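The two variants can be put schematically (my notation, with \(Cr\) for the agent's credence function, \(B\) for outright belief, and \(t\) a context-determined threshold):

```latex
\[
  \text{COS:}\quad B(p) \iff Cr(p) = 1;
  \qquad
  \text{TCS:}\quad B(p) \iff Cr(p) \ge t, \;\; t < 1.
\]
```

On TCS the threshold \(t\) can shift with practical stakes and salient error possibilities, which is what delivers the context-sensitivity of belief without requiring maximal confidence.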
The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or concepts of which one was previously unaware? The economists Karni and Vierø (2013, 2015) make a proposal in this spirit. Philosophers have adopted effectively the same rule: revision in response to growing awareness should not affect the relative probabilities of propositions in one’s ‘old’ epistemic state. The rule is compelling, but only under the assumptions that its advocates introduce. It is not a general requirement of rationality, or so we argue. We provide informal counterexamples. And we show that, when awareness grows, the boundary between one’s ‘old’ and ‘new’ epistemic commitments is blurred. Accordingly, there is no general notion of conservative change in this setting.
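The conservative-change rule under growing awareness, often called ‘reverse Bayesianism’, can be stated as follows (a schematic rendering of the proposal as described above): for any propositions \(A, B\) in the agent's old algebra with \(P_{\mathrm{old}}(B) > 0\),

```latex
\[
  \frac{P_{\mathrm{new}}(A)}{P_{\mathrm{new}}(B)}
  \;=\;
  \frac{P_{\mathrm{old}}(A)}{P_{\mathrm{old}}(B)}.
\]
```

That is, growing awareness uniformly rescales the old probabilities to make room for the newly entertained propositions, leaving their relative standing untouched. It is precisely this preservation of ratios that the paper's counterexamples target.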
This paper sketches, in a very partial and preliminary way, an approach to philosophy of science that I believe has some important affinities with philosophical positions that are often regarded as versions of “pragmatism”. However, pragmatism in both its classical and more modern forms has taken on many different commitments. I will be endorsing some of these and rejecting others—in fact, I will suggest that some elements prominent in some recent formulations of pragmatism are quite contrary in spirit to a genuine pragmatism. Among the elements I will retain from many if not all varieties of pragmatism are an emphasis on what is useful, where this is understood in a means/ends framework, a rejection of spectator theories of knowledge, skepticism about certain ways of thinking about “representation” in science, and skepticism about ambitious forms of metaphysics. Elements found in some previous versions of pragmatism that I will reject include proposals to understand (or replace) truth with some notion of community assent, and skepticism about causal and physical modality.
In this paper, I critically evaluate several related, provocative claims made by proponents of data-intensive science and “Big Data” which bear on scientific methodology, especially the claim that scientists will soon no longer have any use for familiar concepts like causation and explanation. After introducing the issue, in section 2, I elaborate on the alleged changes to scientific method that feature prominently in discussions of Big Data. In section 3, I argue that these methodological claims are in tension with a prominent account of scientific method, often called “Inference to the Best Explanation” (IBE). Later on, in section 3, I consider an argument against IBE that will be congenial to proponents of Big Data, namely the argument due to Roche and Sober (2013) that “explanatoriness is evidentially irrelevant”. This argument is based on Bayesianism, one of the most prominent general accounts of theory-confirmation. In section 4, I consider some extant responses to this argument, especially that of Climenhaga (2017). In section 5, I argue that Roche and Sober’s argument does not show that explanatory reasoning is dispensable. In section 6, I argue that there is good reason to think explanatory reasoning will continue to prove indispensable in scientific practice. Drawing on Cicero’s oft-neglected De Divinatione, I formulate what I call the “Ciceronian Causal-nomological Requirement”, (CCR), which states roughly that causal-nomological knowledge is essential for relying on correlations in predictive inference. I defend a version of the CCR by appealing to the challenge of “spurious correlations”, chance correlations which we should not rely upon for predictive inference. In section 7, I offer some concluding remarks.
Proclus of Athens (*412–485 C.E.) was the most authoritative philosopher of late antiquity and played a crucial role in the transmission of Platonic philosophy from antiquity to the Middle Ages. For almost fifty years, he was head or ‘successor’ (diadochos, sc. of Plato) of the Platonic ‘Academy’ in Athens. Being an exceptionally productive writer, he composed commentaries on Aristotle, Euclid and Plato, systematic treatises in all disciplines of philosophy as it was at that time (metaphysics and theology, physics, astronomy, mathematics, ethics) and exegetical works on traditions of religious wisdom (Orphism and Chaldaean Oracles).
If the Past Hypothesis underlies the arrows of time, what is the status of the Past Hypothesis? In this paper, I examine the role of the Past Hypothesis in the Boltzmannian account and defend the view that the Past Hypothesis is a candidate fundamental law of nature. Such a view is known to be compatible with Humeanism about laws, but as I argue it is also supported by a minimal non-Humean “governing” view. Some worries arise from the non-dynamical and time-dependent character of the Past Hypothesis as a boundary condition, the intrinsic vagueness in its specification, and the nature of the initial probability distribution. I show that these worries do not have much force, and in any case they become less relevant in a new quantum framework for analyzing time’s arrows—the Wentaculus. Hence, both Humeans and minimalist non-Humeans should embrace the view that the Past Hypothesis is a candidate fundamental law of nature and welcome its ramifications for other parts of philosophy of science.
A fair lottery is going to be held. There are uncountably infinitely many players and the prize is infinitely good. Specifically, countably infinitely many fair coins will be tossed, and corresponding to each infinite sequence of heads and tails there is a ticket that exactly one person has bought. …
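One natural way to make this setup precise (an assumption about the intended formalization, since the excerpt is truncated): identify tickets with infinite head/tail sequences and use the product measure induced by the fair coin tosses.

```latex
\[
  \Omega = \{H, T\}^{\mathbb{N}}, \qquad
  P\big(\{\omega : \omega_1 = s_1, \dots, \omega_n = s_n\}\big) = 2^{-n}.
\]
```

Each individual ticket \(\omega\) lies inside cylinder sets of probability \(2^{-n}\) for every \(n\), so \(P(\{\omega\}) = 0\): each of the uncountably many players has probability zero of winning, even though exactly one of them is sure to win.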
Warning: This is speculative back-of-the-envelope discussion of public policy outside of my fields of expertise. For some time I’ve been thinking that perhaps, now that the Phase II safety trials of some coronavirus vaccines have been completed, we should just start vaccinating prior to completion of the Phase III trials. …
I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: the latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.
6. We desire love as a function of the relational nature of our being. Ontologically, we are not complete or sufficient unto ourselves. We do not and cannot provide the 'space' (both physical and emotional) we must occupy in order to be what and as we are.
In the year 2000, in a paper titled ‘Quantum Theory Needs No “Interpretation”’, Chris Fuchs and Asher Peres presented a series of instrumentalist arguments against the role played by ‘interpretations’ in QM. Since then, quite regardless of the publication of this paper, the number of interpretations has experienced a continuous growth, constituting what Adán Cabello has characterized as a “map of madness”. In this work, we discuss the reasons behind this dangerous fragmentation in understanding and provide new arguments against the need for interpretations in QM which, unlike those of Fuchs and Peres, are derived from a representational realist understanding of theories, grounded in the writings of Einstein, Heisenberg and Pauli. Furthermore, we argue that there are reasons to believe that the creation of ‘interpretations’ for the theory of quanta has functioned as a trap designed by anti-realists in order to imprison realists in a labyrinth with no exit. Taking as a standpoint David Deutsch’s critical analysis of the anti-realist understanding of physics, we attempt to address the references and roles played by ‘theory’ and ‘observation’. In this respect, we argue that the key to escaping the anti-realist trap of interpretation is to recognize that, as Einstein told Heisenberg almost a century ago, it is only the theory which can tell you what can be observed. Finally, we conclude that what QM needs is not a new interpretation but a consistent, coherent and unified theoretical (formal-conceptual) scheme which allows us to understand what the theory is really talking about. Keywords: interpretation, explanation, representation, quantum theory.