A simple argument proposes a direct link between realism about quantum mechanics and one kind of metaphysical holism: if elementary quantum theory is at least approximately true, then there are entangled systems with intrinsic whole states for which the intrinsic properties and spatiotemporal arrangements of salient subsystem parts do not suffice.
Think of a pointillist painting: hundreds of tiny pixels depicting a leafy scene. Each leaf is a constellation of primary colors, some expertly proportioned and arranged dots of red, yellow, and blue paint. These pixels are mutually independent: the color at one does not depend on or constrain the decoration anywhere else. Collectively, though, they determine all the contents of our painting: duplicate the geometry of the canvas and the pointwise distribution of pigments and we thereby duplicate the whole integrated scene.
Rifling through two-hundred-year-old diaries, unfurling bundles of love-letters like flowers, saying every name in an orphanage registry under my breath, getting lost in a farmer’s field, gingerly lifting leaves long folded with perfumey motes, falling asleep in my sunshine chair, drooling spittle puddles onto a crackled map of Nunsmoor. The stories I stumbled across in the archives were often painful, shocking, and occasionally joyous. At first, they seem far away but after a short while they begin to move closer (or maybe it’s we who are moving?) and I begin to comprehend, just barely, a great aliveness.
Jacques Derrida (1930–2004) was the founder of
“deconstruction,” a way of criticizing both
literary and philosophical texts as well as political institutions. Although Derrida at times expressed regret concerning the fate of the
word “deconstruction,” its popularity indicates the
wide-ranging influence of his thought, in philosophy, in literary
criticism and theory, in art and, in particular, architectural theory,
and in political theory. Indeed, Derrida nearly attained
the status of a media star, with hundreds of people filling
auditoriums to hear him speak, with films and television programs
devoted to him, and with countless books and articles devoted to his
thought.
In a recent article Martha Nussbaum identified three problems with the Stoic doctrine of respect for dignity: its exclusive focus on specifically human dignity, its indifference to the need for external goods, and its ineffectiveness as a moral motive. This article formulates a non-Stoic doctrine of respect for dignity that avoids these problems. I argue that this doctrine helps us to understand such moral phenomena as the dignity of nonhuman animals as well as the core human values of life, freedom, and equality. I end by arguing that Nussbaum underestimates the mutual support between motives of respect and other moral motives such as compassion.
The philosophy of Epicurus (341–270 B.C.E.) was a complete and
interdependent system, involving a view of the goal of human life
(happiness, resulting from absence of physical pain and mental
disturbance), an empiricist theory of knowledge (sensations, together with
the perception of pleasure and pain, are infallible criteria), a
description of nature based on atomistic materialism, and a
naturalistic account of evolution, from the formation of the world to
the emergence of human societies. Epicurus believed that, on the basis
of a radical materialism which dispensed with transcendent entities
such as the Platonic Ideas or Forms, he could disprove the possibility
of the soul’s survival after death, and hence the prospect of
punishment in the afterlife.
This is a position paper. It presents a cohesive framework that addresses some of the defining issues of contemporary metaethics, notably the nature of moral judgment, moral reality, and moral language. The framework is supposed to appeal to philosophers antecedently attracted, on the one hand, to the idea that there are no such mind-independent entities as values, and on the other hand, to the idea that there is still such a thing as substantive moral truth. §1 introduces three prominent divides in contemporary metaethics: between cognitivism and noncognitivism in moral psychology, between moral realism and antirealism in moral metaphysics, and between descriptivism and expressivism in moral semantics. §2 then presents, rather dogmatically, a comprehensive approach to the mind that I call impure intentionalism, which type-individuates mental states in terms of their intentional character, understood as a combination of content and attitude; it also presents a specific framework for understanding the attitudinal aspect of mental states. Finally, applying impure intentionalism to moral psychology, §3 distinguishes between two kinds of moral judgment, one cognitive and one noncognitive, and crafts a moral metaphysics and a moral semantics around this distinction.
It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
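The reversal-counting argument can be made concrete with a toy version of the familiar "counting" problem often used to illustrate it. The stream, the strategies, and the rival's patience parameter below are my own simplified sketch, not the paper's formal setting: an inquirer watches for new empirical effects and must conjecture how many effects exist in total, with fewer effects counting as simpler.

```python
def reversals(conjectures):
    # Count the changes of opinion in a sequence of conjectures.
    return sum(a != b for a, b in zip(conjectures, conjectures[1:]))

def ockham(stream):
    # Conjecture exactly the effects observed so far (the simplest hypothesis
    # compatible with the data), and never drop it until it is refuted.
    seen, out = 0, [0]
    for effect in stream:
        seen += effect
        out.append(seen)
    return out

def bold(stream, patience=3):
    # A rival that anticipates one as-yet-unseen effect, retreating to the
    # evidence only after `patience` uneventful stages.
    seen, quiet, out = 0, 0, [1]
    for effect in stream:
        seen += effect
        quiet = 0 if effect else quiet + 1
        out.append(seen if quiet >= patience else seen + 1)
    return out

# Two effects appear early; thereafter inquiry continues without surprises.
stream = [1, 0, 1, 0, 0, 0, 0]
print(reversals(ockham(stream)))  # Ockham reverses only when forced: 0 -> 1 -> 2
print(reversals(bold(stream)))    # the rival incurs an extra retreat: 1 -> 2 -> 3 -> 2
```

On this stream the Ockham strategy reverses twice (once per genuine effect), while the bold strategy reverses three times, mirroring the worst-case bound the paper proves in general.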
In the fall of 1998 Trent Lott used his power as Senate Majority Leader to prevent the confirmation of James C. Hormel, an openly gay San Francisco philanthropist who was then President Clinton's nominee for Ambassador to Luxembourg. Mr. Lott made it clear that his opposition to Hormel was based on his opposition to homosexuality in general. Asked by a television interviewer during the controversy whether homosexuality is a sin, Mr. Lott answered "Yes, it is"; he went on to compare gay people to alcoholics, sex addicts, and kleptomaniacs.
In the framework of Brans–Dicke theory, a cosmological model regarding the expanding universe has been formulated by considering an inter-conversion of matter and dark energy. A function of time has been incorporated into the expression of the density of matter to account for the non-conservation of the matter content of the universe. This function is proportional to the matter content of the universe. Its functional form is determined by using empirical expressions of the scale factor and the scalar field in field equations. This scale factor has been chosen to generate a signature flip of the deceleration parameter with time. The matter content is found to decrease with time monotonically, indicating a conversion of matter into dark energy. This study leads us to the expressions of the proportions of matter and dark energy of the universe. Dependence of various cosmological parameters upon the matter content has been explored.
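The signature flip described here can be illustrated with a hypothetical hybrid scale factor a(t) = a0 exp(b t^n), a common empirical ansatz in such models; the paper's actual expressions for the scale factor and scalar field are not quoted, so the form and parameter values below are assumptions for illustration only. With H = a'/a, the deceleration parameter is q = -1 - H'/H^2 = -1 - (n - 1)/(b n t^n).

```python
# Deceleration parameter for the assumed ansatz a(t) = a0 * exp(b * t**n),
# in arbitrary time units; b > 0 and 0 < n < 1 are illustrative choices.
def q(t, b=1.0, n=0.5):
    # q = -1 - Hdot / H**2, evaluated in closed form for this ansatz.
    return -1.0 - (n - 1.0) / (b * n * t ** n)

# Early times decelerate (q > 0), late times accelerate (q < 0),
# with the flip at t**n = (1 - n) / (b * n), i.e. t = 1 for these values.
print(q(0.1))   # positive: decelerating era
print(q(10.0))  # negative: accelerating era
```

The sign change of q(t) is the "signature flip" the abstract refers to; any scale factor with this qualitative behavior would serve the same illustrative purpose.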
How is it possible that models from game theory, which are typically highly idealised, can be harnessed for designing institutions through which we interact? I argue that game theory assumes that social interactions have a specific structure, which is uncovered with the help of directed graphs. The graphs make explicit how game theory encodes counterfactual information in natural collections of its models and can therefore be used to track how model-interventions change model-outcomes. For model-interventions to inform real-world design requires the truth of a causal hypothesis, namely that structural relations specified in a model approximate causal relations in the target interaction; or in other words, that the directed graph can be interpreted causally. In order to increase their confidence in this hypothesis, market designers complement their models with natural and laboratory experiments, and computational methods. Throughout the paper, the reform of a matching market for medical residents provides a case study for my proposed view, which hasn’t been previously considered in the philosophy of science.
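The resident-match case study rests on the deferred-acceptance mechanism (Gale–Shapley), the algorithm underlying the redesigned match. Below is a minimal one-to-one sketch with invented names; the real market involves quotas, couples, and many other complications this deliberately omits.

```python
from collections import deque

def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """One-to-one deferred acceptance.

    proposer_prefs / reviewer_prefs: dict mapping each agent to a complete
    ranked list of agents on the other side. Returns proposer -> reviewer.
    """
    # Precompute each reviewer's ranking for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = deque(proposer_prefs)          # proposers still unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> current proposer
    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # tentative acceptance
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # reviewer trades up; old match freed
            match[r] = p
        else:
            free.append(p)                # proposal rejected
    return {p: r for r, p in match.items()}

# Hypothetical residents and hospitals, each ranking the other side.
residents = {"ana": ["city", "mercy"], "ben": ["city", "mercy"]}
hospitals = {"city": ["ben", "ana"], "mercy": ["ana", "ben"]}
print(deferred_acceptance(residents, hospitals))
```

The resulting matching is stable: no resident and hospital both prefer each other to their assigned partners. It is intervention on structures like this (who proposes, what preferences are reportable) that the paper's causal reading of model-interventions is meant to underwrite.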
Thermodynamics makes definite predictions about the thermal behavior of macroscopic systems in and out of equilibrium. Statistical mechanics aims to derive this behavior from the dynamics and statistics of the atoms and molecules making up these systems. A key element in this derivation is the large number of microscopic degrees of freedom of macroscopic systems. Therefore, the extension of thermodynamic concepts, such as entropy, to small (nano) systems raises many questions. Here we shall reexamine various definitions of entropy for nonequilibrium systems, large and small. These include thermodynamic (hydrodynamic), Boltzmann, and Gibbs-Shannon entropies. We shall argue that, despite its common use, the last is not an appropriate physical entropy for such systems, either isolated or in contact with thermal reservoirs: physical entropies should depend on the microstate of the system, not on a subjective probability distribution. To square this point of view with experimental results of Bechhoefer we shall argue that the Gibbs-Shannon entropy of a nano particle in a thermal fluid should be interpreted as the Boltzmann entropy of a dilute gas of Brownian particles in the fluid.
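The contrast being drawn can be made vivid with a toy two-state system (units with k_B = 1); the example and its representation are my own illustration, not the paper's calculation. The Boltzmann entropy is a function of the individual microstate (via its macrostate), whereas the Gibbs–Shannon entropy is a functional of a probability distribution.

```python
from math import comb, log

def boltzmann_entropy(microstate):
    # S_B = log |Gamma_M|: the log of the number of microstates sharing this
    # microstate's macrostate (here, the count of "up" particles).
    N, n_up = len(microstate), sum(microstate)
    return log(comb(N, n_up))

def gibbs_shannon_entropy(p):
    # S_GS = -sum p log p: defined on a probability distribution, with no
    # reference to which microstate the system actually occupies.
    return -sum(q * log(q) for q in p if q > 0)

micro = [1, 1, 0, 1, 0, 0, 1, 0]           # one definite 8-particle microstate
print(boltzmann_entropy(micro))             # log C(8, 4): fixed by the microstate
print(gibbs_shannon_entropy([0.5, 0.5]))    # log 2: fixed by the distribution
```

The point of the contrast: change your probability assignment and S_GS changes while the system's microstate (and hence S_B) stays put, which is the sense in which the abstract calls the Gibbs–Shannon entropy subjective.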
We discuss an article by Steven Weinberg expressing his discontent with the usual ways to understand quantum mechanics. We examine the two solutions that he considers and criticizes, and we propose another one, which he does not discuss: the pilot wave theory or Bohmian mechanics, to which his criticisms do not apply.
Law and democracy seem oddly estranged in academic philosophical discourse. Aside from some controversies about constitutionalism, there is very little mention of democracy in most contemporary jurisprudential treatments. Likewise, one can leaf through extensive discussions of democracy that do not elaborate any distinctive, essential role that law plays in achieving democratic aims. Law tends to be treated as an instrumental afterthought.
In the previous lecture, I argued that citizens have a moral need to convey and to receive certain moral messages from each other that affirm their mutual equality, basic rights, and their belonging in a moral community. Those particular messages must take the form of collective commitments. Democratic law plays an inspiring, unique role in satisfying that need by constituting a community of equal membership that can pursue collective moral ends for and in the name of the community by producing articulate, public commitments to mandatory and discretionary ends.
This paper explores some of the ways in which agentive, deontic, and epistemic concepts combine to yield ought statements—or simply, oughts—of different characters. Consider an example. Suppose I place a coin on the table, either heads up or tails up, though the coin is covered and you do not know which. And suppose you are then asked to bet whether the coin is heads up or tails up, with $10 to win if you bet correctly. If the coin is heads up but you bet tails, there is a sense in which we would naturally say that you ought to have made the other choice—at least, things would have turned out better for you if you had. But an ought statement like this does not involve any suggestion that you should be criticized for your actual choice. Nobody could blame you, in this situation, for betting incorrectly. By contrast, imagine that the coin is placed in such a way that you can see that it is heads up, but you bet tails anyway. Again we would say that you ought to have done otherwise, but this time it seems that you could legitimately be criticized for your choice.
This paper presents a novel challenge to epistemic internalism, the view that epistemic justification supervenes on facts to which the believing agent has introspective access. The challenge rests on a new set of cases which feature subjects forming beliefs under conditions of ‘bad ideology’ – that is, conditions in which pervasively false beliefs sustain and are sustained by systems of social oppression. In such cases, I suggest, the externalistic view that justification is a matter of structural, worldly relations, rather than the internalistic view that justification is a matter of how things seem from the agent’s individual perspective, becomes the more intuitively attractive theory. But these ‘bad ideology’ cases do not merely yield intuitive verdicts that favour externalism over internalism. These cases are moreover analogous to precisely those canonical cases widely taken to be counterexamples to externalism: cases featuring brains-in-vats, clairvoyants, and dogmatists. That is, my ‘bad ideology’ cases are, in all relevant respects, just like cases that are thought to count against externalism – except that they intuitively favour externalism. This, I argue, is a serious worry for internalism, and bears interestingly on the debate over whether externalism is a genuinely ‘normative’ epistemology.
An approach to frame semantics is built on a conception of frames as finite automata, observed through the strings they accept. An institution (in the sense of Goguen and Burstall) is formed where these strings can be refined or coarsened to picture processes at various bounded granularities, with transitions given by Brzozowski derivatives.
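The transition structure given by Brzozowski derivatives can be sketched directly: the derivative of a language L with respect to a symbol a is the set of strings w with aw in L, and iterating derivatives walks an automaton's states. The representation below (tagged tuples over regular expressions) is my own minimal encoding, not the paper's institution-theoretic machinery.

```python
# Regular expressions as tagged tuples; EMPTY accepts nothing, EPS the
# empty string, ("sym", a) a single symbol, plus "alt", "cat", "star".
EMPTY, EPS = ("empty",), ("eps",)

def nullable(r):
    # Does r accept the empty string?
    tag = r[0]
    if tag == "eps" or tag == "star":
        return True
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    if tag == "cat":
        return nullable(r[1]) and nullable(r[2])
    return False  # "empty" and "sym"

def deriv(r, a):
    # Brzozowski derivative of r with respect to symbol a.
    tag = r[0]
    if tag in ("empty", "eps"):
        return EMPTY
    if tag == "sym":
        return EPS if r[1] == a else EMPTY
    if tag == "alt":
        return ("alt", deriv(r[1], a), deriv(r[2], a))
    if tag == "cat":
        left = ("cat", deriv(r[1], a), r[2])
        return ("alt", left, deriv(r[2], a)) if nullable(r[1]) else left
    if tag == "star":
        return ("cat", deriv(r[1], a), r)

def accepts(r, s):
    # Acceptance = take derivatives symbol by symbol, then test nullability;
    # each intermediate r is a state of the derivative automaton.
    for a in s:
        r = deriv(r, a)
    return nullable(r)

# (ab)* accepts "abab" but not "aba".
r = ("star", ("cat", ("sym", "a"), ("sym", "b")))
print(accepts(r, "abab"), accepts(r, "aba"))
```

Coarsening or refining the alphabet of symbols then corresponds to observing the same automaton at a different granularity, which is the move the institution formalizes.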
Beall and Murzi (J Philos 110(3):143–165, 2013) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules (over sufficiently expressive base theories). As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (Notre Dame J Form Log 58(1):1–19, 2017) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi (J Philos Log. doi:10.1007/s10992-017-9438-x, 2017), is a generalisation of Kripke’s notion of grounded truth (J Philos 72:690–716, 1975), and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic (nor, more generally, of a revisionary approach to semantic paradox), we take the notion of naïve validity, so understood, to be a coherent one.
Scientific research is almost always conducted by communities of scientists of varying size and complexity. Such communities are effective, in part, because they divide their cognitive labor: not every scientist works on the same project. Scientists manage to do this without a central authority allocating them to different projects. Thanks largely to the pioneering studies of Philip Kitcher and Michael Strevens, understanding this self-organization has become an important area of research in the philosophy of science.
In this paper, I make explicit some implicit commitments to realism and conceptualism in recent work in social epistemology exemplified by Miranda Fricker and Charles Mills. I offer a survey of recent writings at the intersection of social epistemology, feminism, and critical race theory, showing that commitments to realism and conceptualism are at once implied yet undertheorized in the existing literature. I go on to offer an explicit defense of these commitments by drawing from the epistemological framework of John McDowell, demonstrating the relevance of the metaphor of the “space of reasons” for theorizing and criticizing instances of epistemic injustice. I then point out how McDowell’s own view requires expansion and revision in light of Mills’ concept of “epistemologies of ignorance.” I conclude that, when their strengths are used to make up for each other’s weaknesses, Mills and McDowell’s positions mutually reinforce one another, producing a powerful model for theorizing instances of systematic ignorance and false belief.
When philosophers ponder whether machines could be conscious, they are generally interested in a particular form of AI: AGI, or artificial general intelligence. AGI doesn’t exist yet, but we now have domain-specific intelligences like AlphaGo and Watson, the world Go and Jeopardy! champions, respectively. These systems outperform humans in specific domains, and they are impressive. But AI seems to be developing exponentially, and within the next ten or twenty years there will likely be forms of AGI. AGI is a kind of general, flexible intelligence that can do things like make breakfast without burning the house down while thinking about mathematics and answering the phone. Its intelligence is not limited to a single domain, like Go. Because AGIs are general and flexible, integrate knowledge across domains, and exhibit human-level intelligence or beyond, they seem like better candidates for being conscious than existing systems.
Recently, the first protective measurement was realized in experiment [Nature Phys. 13, 1191 (2017)]; such a measurement can obtain the expectation value of an observable from a single quantum system. This raises an important and pressing issue of whether protective measurement implies the reality of the wave function. If the answer is yes, this will improve the influential PBR theorem [Nature Phys. 8, 475 (2012)] by removing auxiliary assumptions, and help settle the issue about the nature of the wave function. In this paper, we demonstrate that this is indeed the case. It is shown that a ψ-epistemic model and quantum mechanics make different predictions about the variance of the result of a Zeno-type protective measurement with finite N.
It is difficult for the metaphysician to not be fascinated by Stephen Hawking’s question, ‘What is it that breathes fire into the equations and makes a universe for them to govern?’ (Hawking, 1988, p. 174). Like a Tuscan countryside in the eyes of a painter, this statement inspires quite the stream of consciousness, at least in my idiosyncratic mind. For one thing, Hawking’s wording sounds as if abstract entities provide push and pull to the universe. Why would the equations govern anything, rather than merely describing how events tend to unfold? Objections aside though, I like Hawking’s question because it makes me wonder, given the mathematical nature of fundamental physical theories, what, in the realm of concreta, the lofty equations are describing. And, in another blip of consciousness, I am reminded of my Russellian monist friends, who would perhaps see, in Hawking’s question, the related question: how do we know what is ontologically fundamental, if science just details the nomological-causal structure of the world, and remains silent about its underlying categorical properties? Not quite like the rich hues of Tuscany at sunset, but alas, the mathematical nature of physics intrigues me.
This study investigated the development of intuitions about which properties are associated with the brain and which are associated with the body. A sample of 60 children aged 6, 8, and 10 years, as well as a sample of 20 adults, were told about a brain transplant between two individuals and were asked about where certain properties resided after the transplant. Adults and older children construed the characteristics associated with fine-motor behaviour, culpability, social contract and best friendships as transferring with the brain. Characteristics associated with gross-motor behaviour, physical/biological properties, ownership and familial relationships were more likely to be seen as remaining with the body. Domain-based explanations for this pattern of results are discussed. Copyright © 2011 John Wiley & Sons, Ltd.
do know that preverbal infants assume that intrinsic behaviors are associated more with distinctive insides than, say, with distinctive hats (9); however, this study suggests that inferences from social contingency may have been cognitively integrated only later. This is plausible, but it cannot explain why contingency and furriness also work if self-propulsion is a foundational part of the expectations. Perhaps instead, there are initially two different systems for predators and prey: a hair-trigger detection system for
Personality is increasingly being viewed as a complex and changing system. Self-processes are worth considering in this context because of their highly dynamic quality: they interact and influence one another in extremely intricate ways. In this chapter we first classify self-related terms and examine the following key processes in detail: self-awareness and associated processes (e.g., self-reflection, self-distancing, mindfulness), mental time travel (autobiography and prospection), and self-knowledge (including self-concept). More briefly, we also review Theory-of-Mind, self-rumination, self-esteem, and self-talk. We present information about neuroanatomy, subtypes, measurement, and functions of self-processes, as well as links with personality. Some important messages proposed are: (1) self-awareness is made up of various sub-processes and must be divided into self-reflection and self-rumination, (2) prospection depends on autobiographical knowledge, (3) our self-concept often is inaccurate, and (4) self-talk is present in most—if not all—other self-processes.
The question whether Frege’s theory of indirect reference enforces an infinite hierarchy of senses has been hotly debated in the secondary literature. Perhaps the most influential treatment of the issue is that of Burge (1979), who offers an argument for the hierarchy from rather minimal Fregean assumptions. I argue that this argument, endorsed by many, does not itself enforce an infinite hierarchy of senses. I conclude that whether or not the theory of indirect reference can avail itself of only finitely many senses is pending further theoretical development.
Examining previous discussions on how to construe the concepts of gender and race, we advocate what we call strategic conceptual engineering. This is the employment of a (possibly novel) concept for specific epistemic or social aims, concomitant with the openness to use a different concept (e.g., of race) for other purposes. We illustrate this approach by sketching three distinct concepts of gender and arguing that all of them are needed, as they answer to different social aims. The first concept serves the aim of identifying and explaining gender-based discrimination. It is similar to Haslanger’s well-known account, except that rather than offering a definition of ‘woman’ we focus on ‘gender’ as one among several axes of discrimination. The second concept of gender serves the aim of assigning legal rights and social recognition, and thus must be trans-inclusive. We argue that this cannot be achieved by previously suggested concepts that include substantial gender-related psychological features, such as awareness of social expectations. Instead, our concept counts someone as being of a certain gender solely based on the person’s self-identification with this gender. The third concept of gender serves the aim of personal empowerment by means of one’s gender identity. In this context, substantial psychological features and awareness of one’s social situation are involved. While previous accounts of concepts have focused on their role in determining extensions, we point to contexts where a concept’s role in explanation and moral reasoning can be more important.
Ongoing empirical discoveries in molecular biology have generated novel conceptual challenges and perspectives. Philosophers of biology have reacted to these trends when investigating the practice of molecular biology and contributed to scientific debates on methodological and conceptual matters. This article reviews some major philosophical issues in molecular biology. First, philosophical accounts of mechanistic explanation yield a notion of explanation in the context of molecular biology that does not have to rely on laws of nature and comports well with molecular discovery. Second, reductionism continues to be debated and is increasingly rejected by scientists. Philosophers have likewise moved away from reduction toward integration across fields or integrative explanations covering several levels of organization. Third, although the gene concept has undergone substantial transformation and even fragmentation, it still enjoys widespread use by molecular biologists, which has prompted philosophers to understand the empirical reasons for this. At the same time, it has been argued that the notion of ‘genetic information’ is largely an empty metaphor, which generates the illusion of explanatory understanding without offering an adequate explanation of molecular and developmental mechanisms.