The race to develop a vaccine for COVID-19 is on. Finding a vaccine is the most promising route to lifting the public health restrictions currently in place to prevent the spread of coronavirus, which has already killed hundreds of thousands of people and infected many more. It is possible that a viable candidate may emerge in the not-too-distant future. At the height of the pandemic, Canadian Prime Minister Justin Trudeau was asked whether he would consider making vaccination for COVID-19 mandatory. He opined that “we have a fair bit of time to reflect on ... [the best vaccination protocol] in order to get it right”. But the time to reflect is now. The legislative changes needed to develop and implement a policy are complex.
Many scholars believe that it is procedurally undemocratic for the judiciary to have an active role in shaping the law. These scholars believe either that such practices as judicial review and creative statutory interpretation are unjustified, or that they are justified only because they improve the law substantively. This Article argues instead that the judiciary can play an important procedurally democratic role in the development of the law. Majority rule by legislatures is not the only defining feature of democracy; rather, a government is democratic to the extent to which it provides egalitarian forms of political participation. One such form of participation can be the opportunity to influence the law through the courts, either directly by participating in a case or indirectly by advocating litigation. Arguing from several examples, this Article shows that judicial decision-making allows different voices to be heard that may not necessarily have influence or power in majoritarian legislative structures or popular initiatives. Giving citizens the opportunity to change, to preserve, and to obtain authoritative clarification of the law through the courts can thus make a government procedurally more democratic.
Pretense is often characterized as a form of imagination, more specifically as a sort of enactive imagination. But for the most part, pretending and imagining interact with one’s evaluative/affective systems differently. One tends to respond to imagined content with emotions similar to (albeit more attenuated than) those one would feel if that content were real. When pretending, however, one’s affective responses are often much more generalized, and insensitive to the content of the pretense. We suggest that this is because one’s attentional focus in pretense is on the actions themselves, and their correspondence with the scripts or roles being used to generate the pretense. Moreover, because pretense is intrinsically motivated, pretending is generally fun, irrespective of what, in particular, is being pretended.
Given how many academic papers are out there, it would be useful to have more filtering and discovery mechanisms for helping us to find the ones we might be most interested in. One thing that could help is if authors themselves offered a concise 'overview' of what they think makes their various papers worth reading (when they are). …
I argue against sceptical invariantism on the grounds that, in common with a number of contemporary proposals in this regard, it misdiagnoses the source of radical scepticism. The nub of the matter is that the problem of radical scepticism does not essentially trade on an appeal to an austere epistemic standard for knowledge as sceptical invariantism supposes; indeed, the putative radical sceptical paradox is no less troubling if we stipulate that the operative epistemic standard for knowledge is very undemanding. As I explain, the idea that the source of radical scepticism concerns epistemic standards in this way pervades the recent treatment of this problem, and hence understanding where sceptical invariantism goes awry casts light on the wider contemporary debate about radical scepticism.
Uncertainty in climate science has drawn increasing attention in recent years (e.g., Parker 2006, 2010, 2011, 2013; Stainforth et al. 2007; Knutti 2008; Frigg et al. 2013, 2014; Parker and Risbey 2015). The topic is important epistemically and politically: epistemically, because scientists have only limited abilities to validate and confirm the output of climate models; and politically, because policymakers have to take into account the current knowledge concerning the climate and its uncertainty.
This paper focuses on the interaction of reasons and argues that reasons for an action may transmit to the necessary means of that action. Analyzing exactly how this phenomenon may be captured by principles governing normative transmission has proved an intricate task in recent years. In this paper, I assess three formulations focusing on normative transmission and necessary means: Ought Necessity, Strong Necessity, and Weak Necessity. My focus is on responding to two of the main objections raised against normative transmission for necessary means: that the principles seem to give us reasons to buy tickets to plays we have no intention of seeing, and that they give us the wrong result when the means are necessary but not sufficient. Even though these objections have been discussed previously, the counterarguments have so far relied on rejecting premises that the proponents of these objections are unlikely to concede. In this paper, I show how we may answer the objections in a way more likely to convince their proponents. The result is an argument for a key aspect of how reasons and ends-means normativity function. Normative transmission from ends to necessary means is not only interesting at the structural level; it arguably also has implications for areas as diverse as the philosophy of rationality, political philosophy, and applied ethics.
Effective political decision making, like other decision making, requires decision-makers to have accurate beliefs about the domain in which they are acting. In democratic societies, this often means that accurate beliefs must be held by a community, or at least a significant portion of a community, of voters. Voters are tasked with scrutinizing candidates and possible policy proposals and, considering their own experiences, interests, goals, knowledge, and values, with deciding which of various ballot measures is most likely to bring about their desired outcomes.
There are two debates regarding whether practical considerations play a role in determining what one ought to believe. The first concerns whether the fact that having some doxastic attitude (e.g. believing, disbelieving, withholding) would be beneficial or harmful is a genuine normative reason for or against that attitude. For example, consider the following. Beneficial Belief: Believing in an afterlife would alleviate your crippling anxiety about death. Harmful Belief: Believing that your missing child is dead would cause your spouse suffering.
The literature on normative non-naturalism is plagued by a lack of consensus about what it is for normative properties to be non-natural in the first place. Some take non-naturalism to be the claim that some normative properties are not identical to descriptive properties (e.g. Jackson (1998), Shafer-Landau (2003), and Parfit (2011)), while others take it to be the claim that some normative facts are not fully grounded in non-normative, descriptive facts (e.g. Schroeder (2007), Chang (2013), and Scanlon (2014)). But very few parties to the naturalism vs. non-naturalism debate address the taxonomical question of which is the best way to characterize the view. Most avoid this question altogether by simply stipulating what they take non-naturalism to be, and then arguing for or against that claim. While this is dialectically convenient, it has created a confusing literature wherein it’s unclear to what extent there’s genuine disagreement.
On a widely shared generic conception of inferential justification—henceforth ‘the standard conception’—an agent is inferentially justified in believing that p only if she has antecedently justified beliefs in all the non-redundant premises of a good argument for p. This paper explores three questions that haven’t been given the attention they deserve, that complicate the application of the standard conception to cases, and that reveal it to be underspecified at the core—in ways not resolved, but inherited, by more specific (extant) versions of it. The goal isn’t to answer the questions, but to articulate them, explain what turns on them, and to invite a critical re-examination of the standard conception.
François-Marie d’Arouet (1694–1778), better known by his
pen name Voltaire, was a French writer and public activist who played
a singular role in defining the eighteenth-century movement called the
Enlightenment. At the center of his work was a new conception of
philosophy and the philosopher that in several crucial respects
influenced the modern concept of each. Yet in other ways Voltaire was
not a philosopher at all in the modern sense of the term. He wrote as
many plays, stories, and poems as patently philosophical tracts, and
he in fact directed many of his critical writings against the
philosophical pretensions of recognized philosophers such as Leibniz,
Malebranche, and Descartes.
Big Data promises to revolutionise the production of knowledge within
and beyond science, by enabling novel, highly efficient ways to plan,
conduct, disseminate and assess research. The last few decades have
witnessed the creation of novel ways to produce, store, and analyse
data, culminating in the emergence of the field of data
science, which brings together computational, algorithmic,
statistical and mathematical techniques towards extrapolating
knowledge from big data. At the same time, the Open Data
movement—emerging from policy trends such as the push for Open
Government and Open Science—has encouraged the sharing and
interlinking of heterogeneous research data via large digital infrastructures.
According to traditional Aristotelianism, what makes you and me distinct entities is that although we are of the same species, we’re made of distinct chunks of matter. Here is a quick initial problem with this. …
Many macroscopic physical processes are known to occur in a time-directed way despite the apparent time-symmetry of the known fundamental laws. A popular explanation is to postulate an unimaginably atypical state for the early universe — a ‘Past Hypothesis’ (PH) — that seeds the time-asymmetry from which all others follow. I will argue that such a PH faces serious new difficulties. First I strengthen the grounds for existing criticism by providing a systematic analytic framework for assessing the status of the PH. I outline three broad categories of criticism that put into question a list of essential requirements of the proposal. The resulting analysis paints a grim picture for the prospects of providing an adequate formulation for an explicit PH. I then provide a new argument that substantively extends this criticism by showing that any time-independent measure on the space of models of the universe must necessarily break one of its gauge symmetries. The PH then faces a new dilemma: reject a gauge symmetry of the universe and introduce a distinction without difference or reject the time-independence of the measure and lose explanatory power.
The Free Choice effect—whereby ◇(p or q) seems to entail both ◇p and ◇q—has traditionally been characterized as a phenomenon affecting the deontic modal ‘may’. This paper presents an extension of the semantic account of free choice defended in Fusco (2015) to the agentive modal ‘can’, the ‘can’ which, intuitively, describes an agent’s powers. On this account, free choice is a nonspecific de re phenomenon (Fodor, 1970; Bäuerle, 1983) that—unlike typical cases—affects disjunction. I begin by sketching a model of inexact ability, which grounds a modal approach to agency (Belnap and Perloff, 1998; Belnap et al., 2001) in a Williamson (2014)-style margin of error. A classical propositional semantics combined with this framework can reflect the intuitions highlighted by Kenny (1976)’s dartboard cases, as well as the counterexamples to simple conditional views recently discussed by Mandelkern et al. (2017). In §3, I turn to an independently motivated actual-world-sensitive account of disjunction, and show how it extends free choice inferences into an object language for propositional modal logic.
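As a gloss on the pattern at issue (the notation below is mine, not the paper’s): what makes free choice puzzling is that in any normal modal logic the possibility operator distributes over disjunction, so the intuitive entailment is classically invalid:

```latex
% In any normal modal logic, \Diamond distributes over disjunction:
\Diamond(p \lor q) \;\equiv\; \Diamond p \lor \Diamond q
% So the intuitive free choice inference
\Diamond(p \lor q) \;\vdash\; \Diamond p \land \Diamond q
% is classically invalid: a model in which only p is possible
% verifies the premise but falsifies the conjunct \Diamond q.
```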
The assertion by Yu and Nikolic that the delayed choice quantum eraser experiment of Kim et al. empirically falsifies the consciousness-causes-collapse hypothesis of quantum mechanics is based on the unfounded and false assumption that the failure of a quantum wave function to collapse implies the appearance of an interference pattern. It should not be surprising that the distribution recorded at D0 is the sum of two closely spaced single-slit Fraunhofer distributions: the detection of which-path information by detectors D1 and D2 guarantees no interference distribution at D0. (Fig. 1: when which-path information of idler photons is recorded by detectors D1 and D2, detector D0 does not produce an interference pattern.)
In this talk, I propose to sketch the contents of Noether’s 1918 article, “Invariante Variationsprobleme”, as it may be seen against the background of the work of her predecessors and in the context of the debate on the conservation of energy that had arisen in the general theory of relativity.
This article is about the ontological dispute between finitists, who claim that only finitely many numbers exist, and infinitists, who claim that infinitely many numbers exist. Van Bendegem set out to solve the ‘general problem’ for finitism: how can one recast finite fragments of classical mathematics in finitist terms? To solve this problem Van Bendegem comes up with a new brand of finitism, namely so-called ‘apophatic finitism’. In this article it will be argued that apophatic finitism is unable to represent the negative ontological commitments of infinitism or, in other words, that which does not exist according to infinitism. However, there is a brand of infinitism, so-called ‘apophatic infinitism’, that is able to represent both the positive and the negative ontological commitments of apophatic finitism.
We argue that permissibility-based solutions to the paradox of supererogation encounter a nested dilemma. Such approaches solve the paradox by distinguishing moral and rational permissions. If they do not also include a bridge condition that relates these two permissions, then they violate a very plausible monotonicity condition. If they do include a bridge condition, then permissibility-based solutions either amount to rational satisficing or they collapse back into the classical account of supererogation and fail to resolve the paradox.
The neocortex figures importantly in human cognition, but it is not the only locus of cognitive activities or even at the top of a hierarchy of cognitive processing areas in the central nervous system. Moreover, the form of information processing employed in the neocortex is not representative of information processing elsewhere in the nervous system. In this paper, we articulate and argue against cortico-centrism in cognitive science, contending instead that the nervous system constitutes a heterarchical network of diverse types of information processing systems. To press this perspective, we examine neural information processing in both non-vertebrates and vertebrates, including examples of cognitive processing in the vertebrate hypothalamus and basal ganglia.
This paper challenges a common assumption about decision-making mechanisms in humans: that decision-making is a distinctively high-level cognitive activity implemented by mechanisms concentrated in the higher-level areas of the cortex. We argue instead that human behavior is controlled by a multiplicity of highly distributed, heterarchically organized decision-making mechanisms. We frame decision-making in terms of control mechanisms that procure and evaluate information to select activities of controlled mechanisms, and we adopt a phylogenetic perspective, showing how decision-making is realized in control mechanisms in a variety of species. We end by discussing this picture's implications for high-level cognitive decision-making.
The concept of emergence is commonly invoked in modern physics but rarely defined. Building on recent influential work by Butterfield (2011a,b), I provide precise definitions of emergence concepts as they pertain to properties represented in models, applying them to some basic examples from spacetime and thermostatistical physics. The chief formal innovation I employ, similarity structure, consists in a structured set of similarity relations among those models under analysis—and their properties—and is a generalization of topological structure. Although motivated from physics, this similarity-structure-based account of emergence applies to any science that represents its possibilia with (mathematical) models.
Merely approximate symmetry is mundane enough in physics that one rarely finds any explication of it. Among philosophers it has also received scant attention compared to exact symmetries. Herein I invite further consideration of this concept that is so essential to the practice of physics and interpretation of physical theory. After motivating why it deserves such scrutiny, I propose a minimal definition of approximate symmetry—that is, one that presupposes as little structure on a physical theory to which it is applied as seems needed. Then I apply this definition to three topics: first, accounting for or explaining the symmetries of a theory emeritus in intertheoretic reduction; second, explicating and evaluating the Curie-Post principle; and third, a new account of accidental symmetry.
I provide a formally precise account of diachronic emergence of properties as described within scientific theories, extending a recent account of synchronic emergence using similarity structure on the theories’ models. This similarity structure approach to emergent properties unifies the synchronic and diachronic types by revealing that they only differ in how they delineate the domains of application of theories. This allows it to apply also to cases where the synchronic/diachronic distinction is unclear, such as spacetime emergence from theories of quantum gravity. In addition, I discuss two further case studies—finite periodicity in van der Pol oscillators and two-dimensional quasiparticles in the fractional quantum Hall effect—to facilitate comparison of this approach to others in the literature on concepts of emergence applicable to the sciences. My discussion of the fractional quantum Hall effect in particular may be of independent interest to philosophers of physics concerned with its interpretation.
According to phenomenal functionalism, whether some object or event has a given property is determined by the kinds of sensory experiences such objects or events typically cause in normal perceivers in normal viewing conditions. This paper challenges this position and, more specifically, David Chalmers’s use of it in arguing for what he calls virtual realism.
One of the central areas of dispute in the reception of Kant’s
critical philosophy concerns his distinction between the cognitive
faculties of sensibility (Sinnlichkeit) and intellect
(Verstand), and their characteristic representational
outputs—viz. intuition (Anschauung) and concept
(Begriff). Though the dispute is multi-faceted, it centers on
disagreement concerning the interpretation of Kant’s conception
of the contribution made by the higher cognitive faculties (or the
“intellect” in the broadest sense of that term) to a
subject’s sensory apprehension of the world around it.
The possibility that normative motivations are basic or psychologically primitive is an intriguing one worthy of more attention. On the one hand, there is a powerful case that human minds are equipped with a psychological system dedicated to norms and norm-guided behavior (Setman and Kelly forthcoming). On the other hand, there has not yet been a convincing case made that there are any distinct, sui generis motivational resources that are unique or exclusive to this system. To the extent that the issue is addressed, many discussions simply proceed as if the motivations that drive different norm-guided behaviors are drawn from a number of different and more basic psychological sources. However, I do not think the possibility that some normative motivations are psychologically primitive has been ruled out.
Hill (2014) argues that perceptual qualia, i.e. the ways in which things look from a viewpoint, are physical properties of objects. They are relational in nature, that is, they are functions of objects’ intrinsic properties, viewpoints, and observers. Hill also claims that his kind of representationalism is the only view capable of “naturalizing qualia”. After discussing a worry with Hill’s account, I put forward an alternative, which is just as “naturalization-friendly”. I build upon Chirimuuta’s color adverbialism (2015), and I argue that we would better serve the “naturalizing project” if we abandoned representationalism and preferred a broadly adverbialist view of perceptual qualia.
We engage in normative and evaluative thought and talk throughout our lives. For example, we make claims about how we should treat other people, which movies are better than others, what kind of social/political institutions are just, and what makes a scientific theory a good one. In such thought and talk, we deploy a range of normative and evaluative concepts: e.g., SHOULD, JUSTICE, COURAGEOUS, IMPOLITE and GOOD. One possible target of normative and evaluative inquiry concerns those very normative and evaluative concepts themselves. For example, we might ask: are some of these concepts that we currently use defective in some way? Could they be improved? More generally: which normative or evaluative concepts should we be using, and why? We call normative or evaluative inquiry with this sort of target the conceptual ethics of normativity. (Henceforth, we will generally use ‘normative’ as a shorthand way to refer to both the normative and the evaluative). There are different motivations one can have for engaging in the conceptual ethics of normativity. One natural motivation is to either vindicate or improve one’s existing normative concepts. This paper aims to clarify and address what we take to be one of the deepest challenges to the conceptual ethics of normativity, where it is motivated in this way. Put roughly, the challenge arises from the fact that we need to use some of our own normative concepts in order to evaluate our normative concepts.