Analytic metaphysics is widely thought to be a dry discipline. I want to show how it could be used to connect with some deeply devotional theological claims. Here is a valid argument:
If artifacts exist, we created them. …
Scientific knowledge (and its transformation) is often presented in terms of models or overarching theories (Parke this volume). This chapter, in contrast, focuses on concepts as units and organizers of scientific knowledge. Concepts, on the one hand, are more fine-grained units in that a scientific theory contains many individual concepts. On the other hand—and this makes a look at concepts in biology particularly interesting—a concept can be used across several theories, and it can persist even when a theory has been discarded. The concept of a species continues to be used well after pre-Darwinian theories about species were abandoned, and this concept is used across all of biology, in such different theoretical contexts as vertebrate development and microbial ecology. The gene concept is likewise used in very different fields, and has survived despite the flaws of the original Mendelian theory of inheritance and a move toward molecular accounts.
People dispute the ontological status of robots. Some insist that they are tools: objects created by humans to perform certain tasks — little more than sophisticated hammers. …
There is a salient contrast in how theoretical representations are regarded. Some are regarded as revealing the nature of what they represent, as in familiar cases of theoretical identification in physical chemistry where water is represented as hydrogen hydroxide and gold is represented as the element with atomic number 79. Other theoretical representations are regarded as serving other explanatory aims without being taken individually to reveal the nature of what they represent, as in the representation of gold as a standard for pre-20th century monetary systems in economics or the representation of the meaning of an English sentence as a function from possible worlds to truth values in truth-conditional semantics. Call the first attitude towards a theoretical representation realist and the second attitude instrumentalist. Philosophical explanation purports to reveal the nature of whatever falls within its purview, so it would appear that a realist attitude towards its representations is a natural default. I offer reasons for skepticism about such default realism that emerge from attending to several case studies of philosophical explanation and drawing a general metaphilosophical moral from the foregoing discussion.
The philosopher wrote:
The big move in the statistics wars these days is to fight irreplication by making it harder to reject, and find evidence against, a null hypothesis. Mayo is referring to, among other things, the proposal to “redefine statistical significance” as p less than 0.005. …
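As a numerical illustration (not from the quoted passage): the practical effect of moving the significance threshold from 0.05 to 0.005 is that many conventionally "significant" results no longer count as rejections. A sketch with a two-sided z-test, using a hypothetical test statistic of z = 2.5:

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a standard-normal test statistic,
    computed via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

z = 2.5  # hypothetical result, 'significant' under the usual convention
p = two_sided_p_from_z(z)

print(round(p, 4))  # ~0.0124
print(p < 0.05)     # True  -- rejects under the p < 0.05 convention
print(p < 0.005)    # False -- fails the proposed stricter threshold
```

The same data thus yield opposite verdicts about the null hypothesis depending solely on which threshold convention is adopted.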
As is well known, Kant holds that the applicability of the moral ‘ought’ depends on a kind of agent-causal freedom that is incompatible with the deterministic structure of phenomenal nature. I argue that Kant understands this determinism to threaten not just morality but the very possibility of our status as rational beings. Rational beings exemplify “cognitive control” in all of their actions, including not just rational willing and the formation of doxastic attitudes, but also more basic cognitive acts such as judging, conceptualizing, and synthesizing.
We study dynamic multi-agent systems (dmass). These are multi-agent systems with explicitly dynamic features, where agents can join and leave the system during the evolution. We propose a general conceptual framework for modelling such dmass and argue that it can adequately capture a variety of important and representative cases. We then present a concrete modelling framework for a large class of dmass, composed in a modular way from agents specified by means of automata-based representations. We develop generic algorithms implementing the dynamic behaviour, namely the addition and removal of agents in such systems. Lastly, we state and discuss several formal verification tasks that are specific to dmass and propose general algorithmic solutions for the class of automata-representable dmass.
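The core idea can be sketched in a few lines. The following is a minimal toy model (names and transition table are invented for illustration, not the paper's formalism): each agent is a small automaton, and the system supports the dynamic feature the abstract describes, namely agents joining and leaving between steps of the evolution.

```python
class Agent:
    """A minimal automaton-style agent: a current state plus a
    transition table mapping (state, input) -> next state."""
    def __init__(self, name, transitions, start):
        self.name, self.transitions, self.state = name, transitions, start

    def step(self, symbol):
        # Stay in place on undefined transitions.
        self.state = self.transitions.get((self.state, symbol), self.state)

class DynamicSystem:
    """Agents can join and leave between steps -- the 'dynamic'
    feature distinguishing a dmass from a fixed-population system."""
    def __init__(self):
        self.agents = {}

    def add(self, agent):
        self.agents[agent.name] = agent

    def remove(self, name):
        self.agents.pop(name, None)

    def step(self, symbol):
        for agent in self.agents.values():
            agent.step(symbol)

toggle = {("idle", "go"): "busy", ("busy", "go"): "idle"}
system = DynamicSystem()
system.add(Agent("a1", toggle, "idle"))
system.step("go")
system.add(Agent("a2", toggle, "idle"))   # joins mid-evolution
system.step("go")
system.remove("a1")                       # leaves mid-evolution

print(sorted((a.name, a.state) for a in system.agents.values()))
# [('a2', 'busy')]
```

Verification tasks specific to such systems would then quantify over all possible join/leave schedules, not just over runs of a fixed agent population.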
I respond to recent criticism of my analysis of the permissive-instructive distinction and outline problems with the alternative analysis on offer. Amongst other problems, I argue that the use of formal measures is unclear and unmotivated, that the distinction is conflated with others that are not equivalent, and that no good reasons are provided for thinking the alternative model or formal measure tracks what biologists are interested in. I also clarify my own analysis where it has been misunderstood or ignored.
The “Cosmological Constant Problem” (CCP) has historically been understood as describing a conflict between cosmological observations in the framework of general relativity (GR) and theoretical predictions from quantum field theory (QFT), which a future theory of quantum gravity ought to resolve. I argue that this view of the CCP is best understood in terms of a bet about future physics made on the basis of particular interpretational choices in GR and QFT respectively. Crucially, each of these choices must be taken as itself grounded in the success of the respective theory for this bet to be justified.
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Today, I’ll discuss the epistemic role of consciousness in cognition. Could there be a cognitive zombie – that is, an unconscious creature with the capacity for cognition? …
In Principia Ethica, G. E. Moore (1903) argued that goodness is a “non-natural” property and thereby sparked the so-called “naturalism vs. non-naturalism” debate in metaethics. This debate is still live, but unwell, today because, while much ink has been spilled defending both sides, there is a lack of consensus amongst parties to the debate (even within their own camps) about what exactly it would mean for normative properties to be non-natural in the first place. In fact, most naturalists and non-naturalists simply stipulate what they take “non-naturalism” to mean, rather than get bogged down in the tricky taxonomical question of what is the best way to characterize the view. For example, Jackson (1998), Shafer-Landau (2003), and Parfit (2011) stipulate that they take non-naturalism to be the view that some normative properties are not identical to descriptive properties, while Schroeder (2007), Chang (2013), Scanlon (2014), and Dunaway (2016) take non-naturalism to be the view that some normative facts are not fully grounded in – i.e. metaphysically explained by – non-normative facts.
In much of the recent literature on the subject, autonomy is interpreted as having the capacity and freedom to be the primary judge and executor of how one’s life goes (see e.g. Dworkin 1998; Dworkin 1994; Beauchamp and Childress 2008; Korsgaard 2009; Radoilska 2013). In the case of the normal and competent human adult - sometimes identified with the enfranchised citizen of a modern democratic state - our capacity for self-governance can be thought of as grounding a constraint on what other people (including the state and its representatives) can legitimately do to us, thereby providing a rationale for consent requirements of various sorts (see e.g. Estlund 2007). On this view, failing to elicit my consent in the context of some specific interaction is to fail to respect me as the autonomous, and thereby normatively qualified, agent I am. One obvious limitation of this explanation is that the practice of constraining behaviour by eliciting consent extends far beyond the domain of agents who satisfy the standard requirements of autonomous, self-governing, rational
It is widely recognized that the process used to make observations often has a significant effect on how hypotheses should be evaluated in light of those observations. Arthur Stanley Eddington (1939, Ch. II) provides a classic example. You’re at a lake and are interested in the size of the fish it contains. You know, from testimony, that at least some of the fish in the lake are big (i.e., at least 10 inches long), but beyond that you’re in the dark. You devise a plan of attack: get a net and use it to draw a sample of fish from the lake. You carry out your plan and observe: O: 100% of the fish in the net are big.
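The point of Eddington's example is that the observation process itself can guarantee the observation. A small simulation (the lake composition here is an invented illustration, not from the text): if the net's mesh only retains fish at least 10 inches long, the catch will be 100% big fish no matter how rare big fish actually are.

```python
import random

random.seed(0)

def draw_with_net(lake, mesh_inches=10, n=100):
    """The net only retains fish at least `mesh_inches` long, so the
    sample is filtered by the observation process itself."""
    catchable = [f for f in lake if f >= mesh_inches]
    return random.choices(catchable, k=n)

# A hypothetical lake where only 20% of the fish are big (>= 10 inches).
lake = [12.0] * 200 + [4.0] * 800

catch = draw_with_net(lake)
share_big_in_lake = sum(f >= 10 for f in lake) / len(lake)
share_big_in_catch = sum(f >= 10 for f in catch) / len(catch)

print(share_big_in_lake)   # 0.2
print(share_big_in_catch)  # 1.0 -- the net guarantees this outcome
```

Since O would be observed whatever the true proportion of big fish, O carries no evidence about that proportion once the selection effect is taken into account.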
On New Year’s Eve 2016, the Cologne Police Department proudly reported via Twitter that it was currently screening hundreds of “nafris” at the main train station in Cologne. The label ‘nafri’, used by the police to refer to North Africans, had its (public) linguistic debut in this tweet, which was immediately followed by national moral outrage.
Certain metaphysical views are thought to have implications for the kinds of feelings that are appropriate to have. For instance, many philosophers maintain that we lack free will and that, as a result, reactive attitudes like resentment are inappropriate. Resentment would only be appropriate if people had genuine libertarian free will; since people lack such free will, we should not resent people even when they do us wrong (e.g., Pereboom 2001, Sommers 2007). Buddhist metaphysics also has implications for the kinds of reactive attitudes that are appropriate to have. Insofar as Buddhism denies the existence of a self, emotions that depend on a representation of self are based on a fundamental mistake.
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Today, I’ll discuss the epistemic role of consciousness in perception. Human perception is normally conscious: there is something it is like for us to perceive the world around us. …
Here is my favorite version of the “existence is not a property” objection to Anselm’s first ontological argument. It makes no sense to talk of the greatness of a nonexistent being except hypothetically as the greatness it would have if it existed. …
Here is a picture of ethics. We are designed to operate with a specific algorithm A for generating imperatives from circumstances. Unfortunately, we are broken in two ways: we don’t always follow the generated imperatives and we don’t always operate by means of A. …
We approach the problem of the extended mind from a radically non-dualist perspective. The separation between mind and matter is an artefact of the outdated mechanistic worldview, which leaves no room for mental phenomena such as agency, intentionality, or experience. We propose to replace it by an action ontology, which conceives mind and matter as aspects of the same network of processes. By adopting the intentional stance, we interpret the catalysts of elementary reactions as agents exhibiting desires, intentions, and sensations. Autopoietic networks of reactions constitute more complex super-agents, which moreover exhibit memory, deliberation and sense-making. In the specific case of social networks, individual agents coordinate their actions via the propagation of challenges. The distributed cognition that emerges from this interaction cannot be situated in any individual brain. This non-dualist, holistic view extends and operationalizes process metaphysics and Eastern philosophies. It is supported by both mindfulness experiences and mathematical models of action, self-organization, and cognition.
A fictional text is commonly viewed as constituting an invitation to play a certain game of make-believe, with the individual sentences written by the author providing the propositions we are to imagine and/or accept as true within the fiction. However, we can’t always take the text at face value. What narratologists call ‘unreliable narrators’ may present a confused or misleading picture of the fictional world. Meanwhile there has been a debate in philosophy about so-called ‘imaginative resistance’ in which we are inclined to resist imagining (or even accepting as true in the fiction) what’s explicitly stated in the text. But if we can’t take the text’s word for it, how do we determine what’s true in a fiction? We propose an account of fiction interpretation in a dynamic setting (a version of DRT with a mechanism for opening, updating, and closing temporary ‘workspaces’) and combine this framework with belief revision logic. With these tools in hand we turn to modelling imaginative resistance and unreliable narrators.
Consciousness presents a series of characteristics that have been observed throughout the years: unity, continuity, richness and robustness are some of them. It manifests itself in regions of the brain capable of processing a huge quantity of integrated information with a level of neural activity close to criticality. We argue that the physics of consciousness cannot be exclusively based on classical physics. The unity of consciousness cannot be explained classically, since classical properties are always Humean, like a mosaic. One needs an entangled quantum system that can satisfy at least part of the functions of a quantum computer in order to generate an inner aspect with the unity of consciousness, and to couple with a classical system that gives it simultaneous access to preprocessed information at the neural level and produces events that generate neural firings.
Paul Bernays Lectures
Last week, I had the honor of giving the annual Paul Bernays Lectures at ETH Zürich. My opening line: “as I look at the list of previous Bernays Lecturers—many of them Nobel physics laureates, Fields Medalists, etc.—I think to myself, how badly did you have to screw up this year in order to end up with me?”
Paul Bernays was the primary assistant to David Hilbert, before Bernays (being Jewish by birth) was forced out of Göttingen by the Nazis in 1933. …
To say that evidence is normative is to say that what evidence one possesses, and how this evidence relates to any proposition, determines which attitude among believing, disbelieving and withholding one ought to take toward this proposition if one deliberates about whether to believe it. It has been suggested by McHugh that this view can be vindicated by resting on the premise that truth is epistemically valuable. In this paper, I modify the strategy sketched by McHugh so as to overcome the initial difficulty that it is unable to vindicate the claim that on counterbalanced evidence with respect to P one ought to conclude deliberation by withholding on P. However, I describe the more serious difficulty that this strategy rests on principles whose acceptance commits one to acknowledging non-evidential reasons for believing. A way to overcome this second difficulty, against the evidentialists who deny this, is to show that we sometimes manage to believe on the basis of non-epistemic considerations. If this is so, one fundamental motivation behind the evidentialist idea that non-epistemic considerations could not enter as reasons in deliberation would lose its force. In the second part of this paper I address several strategies proposed in the attempt to show that we sometimes manage to believe on the basis of non-epistemic considerations and show that they all fail. So, I conclude that the strategy inspired by McHugh to ground the normativity of evidence on the value of truth ultimately fails.
This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Thanks to John Schwenkler for hosting me. Today, I’ll start by situating the project of the book within a broader landscape in the philosophy of mind. What is the role of phenomenal consciousness in our mental
In our commentary on Lynch et al.’s target paper (2019, this issue), we focus on decomposition as a research strategy. We argue that not only the presumptive microbial causes but also their supposed phenotypic effects need to be decomposed relative to each other. Such a dual decomposition strategy ought to improve the way in which causal claims in microbiome research can be made and understood.
In connection with free will, quantum mechanics or divine creation it is useful to talk about contrastive explanation. But there is no single generally accepted concept of contrastive explanation, and what one says about these topics varies depending on the chosen concept. …
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers & Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
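The classic example of an act without a finite expected value is the St. Petersburg game (not named in the abstract, but the standard motivating case): a fair coin is flipped until it lands heads, and landing heads on the k-th flip pays 2^k dollars. Each term of the expectation contributes exactly 1, so the partial sums grow without bound:

```python
def st_petersburg_partial_ev(n_terms):
    """Partial expected value of the St. Petersburg game: with
    probability 1/2**k the payoff is 2**k dollars, so each term
    contributes exactly 1 and the series diverges."""
    return sum((1 / 2**k) * 2**k for k in range(1, n_terms + 1))

print(st_petersburg_partial_ev(10))    # 10.0
print(st_petersburg_partial_ev(1000))  # 1000.0 -- grows without bound
```

Since the expectation is infinite, standard expected-utility comparisons between this act and its variants deliver no verdict, which is the kind of case a proposal like Difference Minimizing Theory is meant to handle.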
About 30 years ago, William Alston (1988) penned the locus classicus for a puzzle that is at the heart of contemporary debates on epistemic normativity. Alston’s puzzle, in short, comes from realizing that the most natural way of understanding talk of epistemic justification seems to be in tension with the limited control we have over our belief formation. In this paper, I want to clarify and expand this puzzle, as well as examine the nature and full consequences of a deflationary approach to its resolution.
Traditional oppositions are at least two-dimensional in the sense that they are built upon a famous two-dimensional object called the square of oppositions, or upon one of its extensions such as Blanché's hexagon. Instead of two-dimensional objects, this article proposes a construction that deals with oppositions on a one-dimensional line segment.
The view that human communication is essentially a matter of sharing mental states, especially communicative intentions, has been immensely influential in pragmatics and beyond. Drawing together and elaborating various lines of criticism, I argue that this influence has been mostly harmful; in particular, it has misdirected research on the evolution and development of language and communication.