The epistemic account and the noetic account hold that the essence of scientific progress is the increase in knowledge and understanding, respectively. Dellsén (2018) criticizes the epistemic account (Park, 2017a) and defends the noetic account (Dellsén, 2016). I argue that Dellsén’s criticisms of the epistemic account fail, and that his notion of understanding, which he claims requires neither belief nor justification, cannot explain scientific progress, although it can explain fictional progress in science fiction.
In the first essay in The Sovereignty of Good, ‘The Idea of Perfection’, Murdoch deploys a bit of arcane idiom that is easy to pass over without much hesitation. With only a few exceptions, the passage in which it appears, close to the start of the part of the essay that sees Murdoch develop her positive proposal, has drawn little critical assessment. Murdoch’s alternative ‘soul-picture’ is pitched against a neo-Kantian existentialist, behaviorist position (whose ethics and politics are utilitarian and ‘democratic’ (p.9)) which she sees as fueled by a genetic – and, for her, faulty – analysis of concepts: Concepts are understood to have a structure that is public, grasp of which is a skill; ordinary words for concepts are learned from observation of some ‘typical outward behavior pattern’; in the case of moral concepts, ‘to copy a right action is to act rightly’. This latter is the view of Stuart Hampshire, to whom Sovereignty is dedicated; but, she writes, while ‘this is all very well to say’, the question still arises: ‘What is the form that I am supposed to copy?’ (p.30). She finds she wants to attack this ‘heavily fortified’ (p.16) position: ‘I am not content’ (ibid.). Her alternative will be sketched out, though incompletely, in the rest of the paper. For now, a partial summary: Concepts have a complicated structure, a grasp of which is private and necessarily fallible. There is no skill a public display of which is a criterion of concept possession.
My aim in this paper is to propose a way to study the role of perspectives in both the production and justification of experimental knowledge claims. My starting point for this will be Anjan Chakravartty’s claim that Ronald Giere’s perspectival account of the role of instruments in the production of such claims entails relativism in the form of irreducibly incompatible truths. This led Michela Massimi to argue that perspectivism, insofar as it wants to form a realist position, is only concerned with the justification of such claims: whether they are produced reliably is, on her view, a perspective-independent fact of the matter. Following a suggestion by Giere on how scientists handle incompatible experimental results, I will then argue that Massimi’s perspectivism can be extended to also cover the production of such claims, without falling into relativism. I will elaborate this suggestion by means of Uljana Feest’s work on how scientists handle incompatible experimental results. I will argue that, if we reconceptualize perspectives …

The author would like to acknowledge the Research Foundation – Flanders (FWO) as funding institution. Part of this paper was written during a stay as a visiting researcher at the University of Edinburgh with Michela Massimi’s ERC project Perspectival Realism: Science, Knowledge and Truth from a Human Vantage Point.
Byrne’s first set of questions focuses on my central notion of perceptual capacities. He asks why I focus on capacities to discriminate and single out rather than, say, the capacity to know. In response, one reason against focusing on the capacity to know is that one goal of my account is to give an account of perceptual knowledge: it is not clear what explanatory progress would be made if knowledge were analysed in terms of the capacity to know.
Perception is our key to the world. It plays at least three different roles in our lives. It justifies beliefs and provides us with knowledge of our environment. It brings about conscious mental states. It converts informational input, such as light and sound waves, into representations of invariant features in our environment. Corresponding to these three roles, there are at least three fundamental questions that have motivated the study of perception: Epistemology Question: How does perception justify beliefs and yield knowledge of our environment?
Metacognition – the ability to represent, monitor and control ongoing cognitive processes – helps us perform many tasks, both when acting alone and when working with others. While metacognition is adaptive, and found in other animals, we should not assume that all human forms of metacognition are gene-based adaptations. Instead, some forms may have a social origin, including the discrimination, interpretation, and broadcasting of metacognitive representations. There is evidence that each of these abilities depends on cultural learning and therefore that cultural selection might shape human metacognition. The cultural origins hypothesis is a plausible and testable alternative that directs us towards a substantial new programme of research.
We investigate the value of persons. Our primary goal is to chart a path from equal and extreme value to infinite value. We advance two arguments. Each argument offers a reason to think that equal and extreme value are best accounted for if we are infinitely valuable. We then raise some difficult but fruitful questions about the possible grounds or sources of our infinite value, if we indeed have such value.
For Avicenna (Ibn Sīnā) metaphysics is a science (ʿilm), i.e., a perfectly rationally established discipline that allows human reason to achieve an authentic understanding of the inner structure of the world. Metaphysics is the science of being qua being and therefore the science that explains every being. In his interpretation, Avicenna fuses the Aristotelian tradition, which he intends to renew (Gutas 2014), with the Neo-Platonic idea of emanation, on which he builds his system: metaphysics thus includes theology, cosmology and angelology, and provides a foundation for physics, psychology, prophetology and …

It is the aim of this paper to develop and defend an interpretation of the notion of a scientific level within the truth-maker framework. In particular, I exploit the mereological relation of proper parthood, which is integral to truth-maker semantics, in order to provide an account of scientific levels.
In Plato’s Statesman, the Eleatic Visitor argues that the four apparently distinct arts of politics (πολιτική), kingship (βασιλική), slaveholding (δεσποτική), and household-management (οἰκονοµική) are in fact one and the same art. Aristotle rejects this thesis in the second sentence of his Politics.
I argue for an account of the vulnerability of trust, as a product of our need for secure social attachments to individuals and to a group. This account seeks to explain why it is true that, when we trust or distrust someone, we are susceptible to being betrayed by them, rather than merely disappointed or frustrated in our goals. What we are concerned about in matters of trust is, at the basic level, whether we matter, in a non-instrumental way, to that individual, or to the group of which they are a member. We have this concern as a result of a drive to form secure social attachments. This makes us vulnerable in the characteristic way of being susceptible to betrayal, because how the other acts in such matters can demonstrate our lack of worth to them, or to the group, thereby threatening the security of our attachment, and eliciting the reactive attitudes characteristic of betrayal.
Material constitution is the relation that holds between an object and what it is made of. For example, statues may be constituted by lumps of matter, flags by colored pieces of cloth, and human persons (according to a prominent strand of theorizing about personal identity) by biological organisms. Constitution is often thought to be a dependence relation. I will later say more about what this means, but the rough idea is this. According to a popular picture, reality is hierarchically structured: some bits of it hang on other, metaphysically prior, bits.
Aristotle famously contends that every physical object is a compound of matter and form. This doctrine has been dubbed “hylomorphism”, a portmanteau of the Greek words for matter (hulê) and form (eidos or morphê). Highly influential in the development of Medieval philosophy, Aristotle’s hylomorphism has also enjoyed something of a renaissance in contemporary metaphysics. While the basic idea of hylomorphism is easy to grasp, much remains unclear beneath the surface. Aristotle introduces matter and form, in the Physics, to account for changes in the natural world, where he is particularly interested in explaining how substances come into existence even though, as he maintains, there is no generation ex nihilo, that is, that nothing comes from nothing.
Roughly speaking, when one acts supererogatorily, one does more than one is obligated to. A typical case looks something like this:
It would be permissible to bestow a benefit x1 on an individual A at a personal cost of z1; instead, you permissibly bestow a larger benefit x2 on A at a larger personal cost z2. …
Suppose Alice is a misanthrope who lives in a universe of happy people. Suppose, too, that Alice is immortal. Then one day Alice does a really bad thing. She is unreasonably annoyed at all other people and instantly freezes everything besides herself. …
This exploratory paper discusses a somewhat heterodox metaphysical theory of consciousness: the “many-worlds theory”. The theory gives up the common assumption that all conscious experiences are features of one and the same world and asserts instead that different conscious subjects are associated with different “first-personally centred worlds”. We can think of these as distinct and “parallel” first-personal realizers of a shared “third-personal world”. This is combined with a form of modal realism, according to which different subjects’ first-personally centred worlds are all real, though only one of them is present for each subject. The relationship between first-personally centred and third-personal worlds can in turn be captured in a levelled ontology, where the first-personal level is subvenient and the third-personal supervenient. The described setup is intended to capture the irreducibly subjective nature of conscious experience without lapsing into solipsism. The paper also looks at some existing scientific theories of consciousness, such as integrated information theory, through the lens of the present metaphysical theory and discusses its implications for the hard problem of consciousness.
We are witnessing a re-emergence of the practice of public shaming, especially shaming carried out with the use of the Internet. The following two cases are typical of the phenomenon. In October 2012, Lindsey Stone was on a trip to Washington DC as a caregiver for a group of adults with learning difficulties. Stone had a running joke with a colleague, Jamie Schuh, where they took humorous photographs, such as them smoking in front of a “No Smoking” sign. While at Arlington National Cemetery, Schuh photographed Stone raising her middle finger and pretending to be shouting in front of a sign reading “Silence and Respect”. Thinking it hilarious, Schuh posted the photo on Facebook, with Stone’s consent. Four weeks later, Twitter and Facebook were abuzz with outrage at the photo. Messages ranged from “Lindsey Stone hates the military and hates soldiers who have died in foreign wars” to “Send the dumb feminist to prison” to “Hope this cunt gets raped and stabbed to death”. A “Fire Lindsey Stone” Facebook page was created, and attracted 12,000 likes overnight. The next day, Stone lost her job. As a result, “she fell into depression, became an insomniac, and barely left home for a year”. Stone applied for many other jobs as a caregiver during this time, but never heard back. Eventually she did manage to get a new job, but lived in constant fear that her new employers would discover the photo and fire her.
The success of political liberalism depends on there being an overlapping consensus among reasonable citizens—including religious citizens—upon principles of political morality. This paper explores the resources within one major religion— Christianity—that might lead individuals to endorse (or reject) political liberalism, and thus to join (or not join) the overlapping consensus. I show that there are several strands within Christian political ethics that are consonant with political liberalism and might form the basis for Christian citizens’ membership of the overlapping consensus. Nonetheless, tensions remain, and it is not clear that Christians could wholeheartedly endorse the political conception or give unreserved commitment to political liberal ideals.
Proponents of public reason views hold that the exercise of political power ought to be acceptable to all reasonable citizens. This paper elucidates the common structure shared by all public reason views, first by identifying a set of questions that all such views must answer and, second, by showing that the answers to these questions stand in a particular relationship to each other. In particular, we show that what we call the ‘rationale question’ is fundamental. This fact, and the common structure more generally, are often overlooked or distorted within the literature. As a result, we argue, several prominent argumentative moves made by both critics and defenders of public reason are unsuccessful. Our overall conclusion is that discussions of public reason views would be more fruitful if they made consistent use of the common structure we identify.
Political philosophy has witnessed a recent surge of interest in territorial rights—what they are, who holds them, what justifies them—as well as in a broader theory of territorial justice, which situates said rights in an account of distributive justice, thereby addressing the scope of the rights. This interest is hardly surprising. The state is not simply a membership organization: it exercises authority over a geographical domain and this naturally gives rise to questions about how state authority over place can be justified, and how different claims to this authority can be assessed.
One of the most desirable properties of a logical system is that it be algebraizable, in the sense that an algebraic counterpart of its deductive machinery can be found. When this happens, many logical problems can be faithfully and conservatively translated into some given algebra, and algebraic tools can then be used to tackle them. This happens so naturally with the brotherhood between classical logic and Boolean algebra that a similar relationship is expected to hold for non-standard logics as well. And indeed it holds for some, but not for all, logics. In any case, the task of finding such an algebraic counterpart is far from trivial. The intuitive idea behind the search for an algebraization of a given logical system, generalizing the pioneering proposal of Lindenbaum and Tarski, usually starts by trying to find a congruence on the set of formulas that can be used to produce a quotient algebra, defined over the algebra of formulas of the logic.
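For concreteness, the Lindenbaum–Tarski construction alluded to above can be sketched as follows for classical propositional logic (a standard textbook presentation, not taken from this paper):

```latex
Define interderivability on the set of formulas:
\[
\varphi \equiv \psi \quad\text{iff}\quad \vdash \varphi \leftrightarrow \psi .
\]
Because $\vdash$ respects the connectives (e.g.\ if
$\vdash \varphi \leftrightarrow \psi$ then
$\vdash \neg\varphi \leftrightarrow \neg\psi$, and similarly for
$\wedge$ and $\vee$), the relation $\equiv$ is a congruence on the
formula algebra $\mathbf{Fm}$. The quotient
\[
\mathbf{Fm}/{\equiv} \;=\;
\bigl\langle\, Fm/{\equiv},\; \wedge,\; \vee,\; \neg,\; [\bot],\; [\top] \,\bigr\rangle
\]
is then a Boolean algebra, ordered by
\[
[\varphi] \le [\psi] \quad\text{iff}\quad \varphi \vdash \psi .
\]
```

For non-classical logics the difficulty the passage points to is precisely that a congruence with these properties need not exist or need not be definable by a single biconditional.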
Accuracy-based arguments for conditionalization and probabilism appear to have a significant advantage over their Dutch Book rivals. They rely only on the plausible epistemic norm that one should try to decrease the inaccuracy of one’s beliefs. Furthermore, conditionalization and probabilism apparently follow from a wide range of measures of inaccuracy. However, we argue that there is an under-appreciated diachronic constraint on measures of inaccuracy which limits the measures from which one can prove conditionalization, and none of the remaining measures allow one to prove probabilism. That is, among the measures in the literature, there are some from which one can prove conditionalization, others from which one can prove probabilism, but none from which one can prove both. Hence at present, the accuracy-based approach cannot underwrite both conditionalization and probabilism.
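As a point of reference for the “measures of inaccuracy” under discussion, one widely used example from this literature is the Brier score (offered here only as an illustration, not as one of the measures the authors single out):

```latex
\[
\mathcal{I}(c, w) \;=\; \sum_{p \in \mathcal{F}} \bigl( v_w(p) - c(p) \bigr)^{2},
\]
where $\mathcal{F}$ is the set of propositions over which the agent has
credences, $c(p) \in [0,1]$ is the agent's credence in $p$, and
$v_w(p) \in \{0,1\}$ is the truth value of $p$ at world $w$. An agent's
inaccuracy at $w$ is thus the sum of squared distances between her
credences and the truth values at $w$.
```

The accuracy-based arguments the abstract mentions proceed by showing that, relative to some class of such measures, non-probabilistic credences (or non-conditionalizing update rules) are dominated or expectedly suboptimal.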
There is a large gap between the specialized knowledge of scientists and laypeople’s understanding of the sciences. The novice-expert problem arises when non-experts are confronted with (real or apparent) scientific disagreement, and when they don’t know whom to trust. Because they are not able to gauge the content of expert testimony, they rely on imperfect heuristics to evaluate the trustworthiness of scientists. This paper investigates why some bodies of scientific knowledge become polarized along political fault lines. Laypeople navigate conflicting epistemic and social demands in their acceptance of scientific testimony; this might explain their deference to scientific fringe theories, which often goes together with denying established scientific theories. I evaluate three approaches to mitigate denialism: improving the message, improving the messenger, and improving the environment in which the message is conveyed.
In prior work, we have argued that spacetime functionalism provides tools for clarifying the conceptual difficulties specifically linked to the emergence of spacetime in certain approaches to quantum gravity. We argue in this article that spacetime functionalism in quantum gravity is radically different from other functionalist approaches that have been suggested in quantum mechanics and general relativity: in contrast to these latter cases, it does not compete with purely interpretative alternatives, but is rather intertwined with the physical theorizing itself at the level of quantum gravity. Spacetime functionalism allows one to articulate a coherent realist perspective in the context of quantum gravity, and to relate it to a straightforward realist understanding of general relativity.

Keywords: spacetime functionalism, emergence of spacetime, quantum gravity, wave function monism, local beables, dynamical approach to general relativity, structural realism, scientific realism, naturalism.
For quite some time, cognitive science has offered philosophy an opportunity to address central problems with an arsenal of relevant theories and empirical data. However, even among those naturalistically inclined, it has been hard to find a universally accepted way to do so. In this article, we offer a case study of how cognitive-science input can elucidate an epistemological issue that has caused extensive debate. We explore Jason Stanley’s idea of the practical grasp of a propositional truth and present naturalistic arguments against his reductive approach to knowledge. We argue that a plausible interpretation of cognitive-science input concerning knowledge—even if one accepts that knowledge how is partly propositional—must involve an element of knowing how to act correctly upon the proposition; and this element of knowing how to act correctly cannot itself be propositional.
We critically review two extant paradigms for understanding the systematic interaction between modality and tense, as well as their respective modifications designed to do justice to the contingency of time’s structure and composition. We show that on either type of theory, as well as their respective modifications, some principles prove logically valid whose truth might sensibly be questioned on metaphysical grounds. These considerations lead us to devise a more general logical framework that allows accommodation of those metaphysical views that its predecessors rule out by fiat.
The best empirically-grounded theory of first-personal phenomenal consciousness is global workspace theory. This, combined with the success of the phenomenal concept strategy, means that consciousness can be fully reductively explained in terms of globally broadcast nonconceptual content. So there are no qualia (and there is no mental paint). As a result, the question of which other creatures besides ourselves are conscious is of no importance, and doesn’t admit of a factual answer in most cases. What is real, and what does matter, is a multidimensional similarity space of functionally organized minds.
When thinking about triage situations, it's common for people to assume that saving lives (as many of them as possible) should be our moral goal. But this is wrong, for the straightforward reason that some deaths are vastly more tragic than others. …
In The Nature of Selection (1984), Sober argued that natural selection is in principle powerless to explain why any individual organism has the traits it does rather than the very same individual having different traits. A debate ensued, in which critics have argued against Sober by laying explanations end-to-end, to form a chain of explanation that begins with selection and passes through one or more intermediate events before reaching the target explanandum. I argue that Sober’s critics misunderstand how contrastive explananda (why p rather than q) behave in such explanatory chains, and that this strategy has so far failed.
This article replies to the main objections raised by the commentators on Carruthers (1998a). It discusses the question of what evidence is relevant to the assessment of dispositional higher-order thought (HOT) theory; it explains how the actual properties of phenomenal consciousness can be dispositionally constituted; it discusses the case of pains and other bodily sensations in non-human animals and young children; it sketches the case for preferring higher-order to first-order theories of phenomenal consciousness; and it replies to some miscellaneous points and objections.