Some philosophers have the sense that morality loses some of its significance if there is no realm of objective moral fact. If morality is just grounded in our emotions, the thought goes, then its demands cannot have absolute authority over us; only the hardness of metaphysical fact can support the hardness of the moral “ought.” And so the normative importance of morality seems to depend on moral realism, the view that there are mind-independent moral truths. As Derek Parfit put it (rather bleakly), “If there were no such truths, there would be no point in trying to make good decisions. Nothing would matter, and there would not be better or worse ways to live.”
The paper offers an account of inference. The account underwrites the idea that inference requires the reasoner to take her premises to support her conclusion. I argue that the required ‘taking’ is neither an intuition nor a belief. I sketch an alternative view on which inferring consists in attaching what I call inferential force to a structured collection of contents.
Elaborated Intrusion theory (Kavanagh, Andrade & May 2005) distinguishes between unconscious, associative processes as the precursors of desire, and controlled processes of cognitive elaboration that lead to conscious sensory images of the target of desire and associated affect. We argue that these mental images play a key role in motivating human behavior. Consciousness is functional in that it allows competing goals to be compared and evaluated. The role of effortful cognitive processes in desire helps to explain the different time courses of craving and physiological withdrawal.
This study is about Brentano’s criticism of a version of phenomenalism that he calls “mental monism” and that he attributes to positivist philosophers such as Ernst Mach and John Stuart Mill. I am interested in Brentano’s criticism of Mill’s version of mental monism, which is based on the idea of “permanent possibilities of sensation”. Brentano claims that this form of monism is characterized by the identification of the class of physical phenomena with that of mental phenomena, and that it commits itself to a form of idealism. Brentano argues instead for a form of indirect or hypothetical realism based on intentional correlations.
There’s a nice new piece on the Consistency of Arithmetic by Timothy Chow in the Mathematical Intelligencer, which the author has made freely available. As he says in a FOM posting, he has put extra effort into trying to make Gentzen’s proof accessible to the “mathematician in the street”. …
The Twin Earth thought experiment invites us to consider a liquid that has all of the superficial properties associated with water (clear, potable, etc.) but has entirely different deeper causal properties (composed of “XYZ” rather than of H2O). Although this thought experiment was originally introduced to illuminate questions in the theory of reference, it has also played a crucial role in empirically informed debates within the philosophy of psychology about people’s ordinary natural kind concepts. Those debates have sought to accommodate an apparent fact about ordinary people’s judgments: Intuitively, the Twin Earth liquid is not water. We present results from four experiments showing that people do not, in fact, have this intuition. Instead, people tend to have the intuition that there is a sense in which the liquid is not water but also a sense in which it is water. We explore the implications of this finding for debates about theories of natural kind concepts, arguing that it supports views positing two distinct criteria for membership in natural kind categories – one based on deeper causal properties, the other based on superficial, observable properties.
This paper criticizes Soames’s main argument against a variant of two-dimensionalism that he calls strong two-dimensionalism. The idea of Soames’s argument is to show that the strong two-dimensionalist’s semantics for belief ascriptions delivers wrong semantic verdicts about certain complex modal sentences that contain both such ascriptions and claims about the truth of the ascribed beliefs. A closer look at the formal semantics underlying strong two-dimensionalism reveals that there are two feasible ways of specifying the truth conditions for claims of the latter sort. Only one of the two yields the problematic semantic verdicts, so strong two-dimensionalists can avoid Soames’s argument by settling for the other way.
(Note to reader: These are lightly expanded notes for a class I once gave on freedom of speech. The notes are intended to explain the logic, structure and shortcomings of J.S. Mill’s defence of free speech. …
Excursion 2 Tour I: Induction and Confirmation (Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars)
Tour Blurb. The roots of rival statistical accounts go back to the logical Problem of Induction. …
Motivated by examples, many philosophers believe that there is a significant distinction between states of affairs that are striking and therefore call for explanation and states of affairs that are not striking. This idea underlies several influential debates in metaphysics, philosophy of mathematics, normative theory, philosophy of modality, and philosophy of science, but it has not been fully elaborated or explored. This paper aims to address this gap, first by clarifying the epistemological issue at hand. It then introduces an initially attractive account of strikingness that is inspired by the work of Paul Horwich (1982) and adopted by a number of philosophers. The paper identifies two logically distinct accounts that have both been attributed to Horwich and then argues that, when properly interpreted, they can withstand former criticisms. The final two sections present a new set of considerations against both Horwichian accounts that avoid the shortcomings of former critiques. It remains to be seen whether an adequate account of strikingness exists.
I used to much enjoy contributing to the admirable Ask Philosophers site (you can check out my efforts here!) But rather sadly I had to give that up. But it seems that I can’t resist the pedagogic imperative. …
After reading O’Connor and Churchill’s piece on emergence, one of my very smart undergraduate students commented that it follows from such emergentist views that one could know the mental facts from the physical facts. …
On March 7, 1277, the Bishop of Paris, Stephen Tempier, prohibited the
teaching of 219 philosophical and theological theses that were being
discussed and disputed in the faculty of arts under his jurisdiction. Tempier’s condemnation has gained great symbolic meaning in the minds
of modern intellectual historians, and possibly for this reason, there
is still considerable disagreement about what motivated Tempier to
promulgate his prohibition, what exactly was condemned, and who the
targets were. In addition, the effects of Tempier’s action on the
course of medieval thought in the thirteenth and fourteenth centuries,
and even beyond, have been the subject of much debate.
In 'Consequentialism and Moral Worth' (forthcoming in Utilitas), Nathaniel Sharadin discusses the idea that acts done for the "right reason" have a special normative status (moral worth / praiseworthiness). …
I distinguish two forms of pluralism about biological functions, between-discipline pluralism and within-discipline pluralism. Between-discipline pluralism holds that different theories of function are appropriate for different subdisciplines of biology and psychology (for example, that the selected effects theory of function is appropriate for some branches of evolutionary biology, and the causal role theory is appropriate for disciplines such as molecular biology, neuroscience, or psychology). I provide reasons for rejecting this view. Instead, I recommend within-discipline pluralism, which emphasizes the plurality of function concepts at play within any given subdiscipline of biology and psychology.
Because idealizations frequently advance scientific understanding, many claim that falsehoods play an epistemic role. In this paper, we argue that these positions greatly overstate idealizations’ import for understanding. We introduce work on epistemic value to the debate surrounding idealizations and understanding, arguing that idealizations qua falsehoods confer only non-epistemic value on understanding. We argue for this claim by criticizing the leading accounts of how idealizations provide understanding. For each of these approaches, we show that: (a) idealizations’ false components only promote convenience instead of understanding and (b) only the true components of idealizations have epistemic value.
Stephen Senn
The Rothamsted School
I never worked at Rothamsted but during the eight years I was at University College London (1995-2003) I frequently shared a train journey to London from Harpenden (the village in which Rothamsted is situated) with John Nelder, as a result of which we became friends and I acquired an interest in the software package Genstat®. …
Paying strict attention to Brandon Carter’s several published renditions of anthropic reasoning, we present a “nutshell” version of the Doomsday argument that is truer to Carter’s principles than the standard balls-and-urns or otherwise “naive Bayesian” versions that proliferate in the literature. At modest cost in terms of complication, the argument avoids commitment to many of the half-truths that have inspired so many to rise up against other toy versions, never adopting a posterior outside of the convex hull of one’s prior distribution over the “true chance” of Doom. The hyper-pessimistic position of the standard balls-and-urns presentation and the hyper-optimistic position of naive self-indicators are seen to arise from dubiously extreme prior distributions, leaving room for a more satisfying and plausible intermediate solution.
Descartes is right that language is a defining characteristic of humans. Without language, we could not have evolved into selves. Yet his simple dichotomy is hard to defend nowadays when, on the one hand, we have come to know of various pathologies of language that leave some humans with serious linguistic deficits and, on the other hand, we have research attributing language to other animals (and maybe even to trees!). In a recent article in Scientific American, Kenneally, for instance, points out: “The list of abilities that were formerly thought to be a unique part of human language is actually quite long. It includes parts of language, such as words. Vervet monkeys use wordlike alarm calls to signal a specific kind of danger. Another crucial aspect is structure. Because we have syntax, we can produce an infinite number of novel sentences and meanings, and we can understand sentences that we have never heard before. Yet zebra finches have complicated structures in their songs, dolphins can understand differences in word order and even some monkeys in the wild seem to use one type of call to modify another. The list extends to types of cognition, such as theory of mind, which is the ability to infer others' mental states. Dolphins and chimpanzees are excellent at guessing what an interlocutor wants.”
Max Planck Institute for Biological Cybernetics, Tübingen, Germany

I am always eager to inspect new philosophical conceptions of the mathematical sciences to see whether they have given the mathematical component what I consider to be its rightful due. All too often philosophers of science implicitly buy the logical empiricist line that mathematics is a branch of logic, broadly speaking, and thus a transparent language whose involvement in scientific theories in no sense frames or mediates our understanding of the world. Even those more sophisticated philosophers who have left behind a naïve empiricism to examine the mediating effects of our instruments and models have little to say to us on the subject of mathematics. On the other hand, when the logical empiricist attitude to mathematics is rejected and the use of mathematics is taken to involve something more than the use of a logical language, this largely amounts to a kind of literalism which worries about our being committed to the sorts of abstract entities physicalists take not to exist. Philosophies which find in the application of mathematics something of significance other than a troublesome problem are fairly rare, and experience shows that most of these owe considerable allegiance to Kantianism.
Symmetry breaking is ubiquitous in almost all areas of physics. It is a feature of everyday phenomena as well as of more specific contexts within physics: elementary particles described by quantum fields, quantum mechanical descriptions of condensed matter systems, or general relativistic descriptions of the entire universe. In all of these, symmetry breaking plays an essential role. However, one should be careful not to understand “symmetry breaking” as a single thing, a single mechanism, to be found in all the various physical systems. The reason for this is that the notion of symmetry breaking is very broad, in the sense that many very different scenarios are covered by this name, and also very misleading, as there is often not much that is really being “broken”.
Analogue experiments have attracted interest for their potential to shed light on inaccessible domains. For instance, ‘dumb holes’ in fluids and Bose-Einstein condensates, as analogues of black holes, have been promoted as means of confirming the existence of Hawking radiation in real black holes. We compare analogue experiments with other cases of experiment and simulation in physics. We argue—contra recent claims in the philosophical literature—that analogue experiments are not capable of confirming the existence of particular phenomena in inaccessible target systems. As they must assume the physical adequacy of the modelling framework used to describe the inaccessible target system, arguments to the conclusion that analogue experiments can yield confirmation for phenomena in those target systems, such as Hawking radiation in black holes, beg the question.
This paper is about Brentano’s philosophical program in Vienna and the overall architecture that binds together the main parts of his philosophy. I argue that this program is based on Brentano’s project of philosophy as science, and that it aims to account for the unity of the main branches of his philosophy. The paper is divided into six parts. The first bears on Brentano’s philosophy of history, which is an important piece of the program. The second is on the close relationship between philosophy and science, and the third is on Brentano’s classification of theoretical sciences. In the three remaining parts of the paper, I examine the two main axes of the program, i.e. psychology and metaphysics, and the question of how the three normative sciences are rooted in psychology. In the conclusion, I argue that Brentano’s theory of the four phases in the history of philosophy provides his philosophical program with a justification.
This paper is mainly about Brentano’s commentaries on Ernst Mach in his lectures “Contemporary philosophical questions”, which he delivered one year before he left Austria. I will first identify the main sources of Brentano’s interest in Comte’s and J. S. Mill’s positivism during his Würzburg period. The second section provides a short overview of Brentano’s 1893-1894 lectures and his criticism of Comte, Kirchhoff, and Mill. The next sections bear on Brentano’s criticism of Mach’s monism and Brentano’s argument against the reduction of the mental based on his theory of intentionality. The last section is about Brentano’s proposal to replace the identity relation in Mach’s theory of elements with that of intentional correlation. I conclude with a remark on the history of philosophy in Austria.
It is shown that the combination of unitary quantum theory and special relativity may lead to a contradiction when considering the EPR correlations in different inertial frames in a Gedankenexperiment. This result seems to imply that either unitary quantum theory is wrong or, if it is right, there must exist a preferred Lorentz frame.
Predicting the future behavior of complex dynamical systems with the help of nonlinear models is an important part of scientific practice. However, making predictions from nonlinear models is often affected by severe uncertainties. In recent years, there has been an extensive debate about the epistemic limitations of model-based predictions not only in the philosophy of science, but also within the scientific community (see, for instance, a recent special issue of Science on prediction and the limits of predictability in current science; Jasny and Stone, 2017).
After listening to a talk by Christopher Kaczor, and the ensuing discussion, I want to offer a defense of a moderate position on the state not compelling healthcare professionals to violate their conscience, even when their conscience is unreasonably mistaken. …
Suppose you read this exposition:
Frege’s conception of a function is closely related to his discovery that quantifiers like ∀ (“for all”) and ∃ (“for some”) operate on what are now called open expressions — expressions containing free variables. …
Every action we take leaves a trail of information that could, in
principle, be recorded and stored for future use. For instance, one
might use the older forms of information technologies of pen and paper
and keep a detailed diary listing all the things one did and thought
during the day. It might be a daunting task to record all this
information this way, but there is a growing list of technologies and
software applications that can help us collect all manner of data
which, in principle and in practice, can be aggregated for
use in building a data profile about you, a digital diary with
millions of entries.
Greek antiquity saw the development of two distinct systems of logic: Aristotle’s theory of the categorical syllogism and the Stoic theory of the hypothetical syllogism. Some ancient logicians argued that hypothetical syllogistic is more fundamental than categorical syllogistic on the grounds that the latter relies on modes of propositional reasoning such as reductio ad absurdum. Peripatetic logicians, by contrast, sought to establish the priority of categorical over hypothetical syllogistic by reducing various modes of propositional reasoning to categorical form. In the 17th century, this Peripatetic program of reducing hypothetical to categorical logic was championed by Gottfried Wilhelm Leibniz. In an essay titled Specimina calculi rationalis, Leibniz develops a theory of propositional terms that allows him to derive the rule of reductio ad absurdum in a purely categorical calculus in which every proposition is of the form A is B. We reconstruct Leibniz’s categorical calculus and show that it is strong enough to establish not only the rule of reductio ad absurdum, but all the laws of classical propositional logic. Moreover, we show that the propositional logic generated by the nonmonotonic variant of Leibniz’s categorical calculus is a natural system of relevance logic known as RMI¬→ .
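The rule at the center of the abstract above, reductio ad absurdum, can be stated in modern propositional terms as: if assuming ¬A leads to a contradiction, one may conclude A. The following minimal Lean sketch states that propositional rule only; it is not a rendering of Leibniz’s categorical calculus or of his propositional terms.

```
-- Reductio ad absurdum in modern propositional form:
-- from a proof that ¬A entails a contradiction, conclude A.
-- (Classically valid; `Classical.byContradiction` is the core Lean lemma.)
theorem reductio {A : Prop} (h : ¬A → False) : A :=
  Classical.byContradiction h
```

Leibniz’s achievement, as the abstract describes it, is to derive this rule inside a calculus whose every proposition has the categorical form “A is B”, rather than taking it as primitive in the propositional way shown here.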