It is not easy to write an introduction to e-theory. One reason is that the theory was composed many years ago, over a short period of time. In a way, my later work is really nothing other than an attempt to introduce e-theory, so this introduction risks being more of the same; I will try to avoid that. The subtitle, however, refers to my paper Scientific Ontology, in which I argued for something like e-theory. After many years of working within the field it has become familiar to me, but I appreciate that this is not the case for others. When one manipulates things like “basic assumptions,” the result will be unfamiliar by default. Another reason e-theory is difficult to introduce, therefore, is that as a whole it does not correspond in its details to anything already known. Even though some of its parts have been seen before, the theory makes assertions that may be perceived as odd. In this introduction, therefore, I take the opportunity to mention that e-theory is not something I think is true. E-theory goes beyond that: as a non-empirical structure it is a tool, useful or not. And, as Karl Popper would have said, e-theory cannot be proven, only falsified.
Humans are imperfect reasoners. In particular, humans are imperfect mathematical reasoners: they are fallible, with a nonzero probability of making a mistake at any step of their reasoning. This means that there is a nonzero probability that any conclusion they come to is mistaken, no matter how convinced they are of that conclusion. Even brilliant mathematicians behave in this way; Poincaré wrote that he was “absolutely incapable of adding without mistakes” (1910, p. 323).
One important dimension of how we evaluate anger concerns its effects. Roughly, we often want to know if someone being angry is productive or not, relative to certain values or goals. Debate on this kind of question runs through the history of political thought up until the present moment. For example, it’s long been a key part of the debate about the role of anger in political movements against a range of forms of domination, oppression, and exploitation, from campaigns to overthrow authoritarian dictatorships to the contemporary Black Lives Matter movement.
This paper explores the relation between belief-like imaginings and the establishment of imaginary worlds (often called ‘fictional worlds’). After outlining various assumptions my argument is premised on, I argue that belief-like imaginings, in themselves, do not render their content true in the imaginary world to which they pertain. I show that this claim applies not only to imaginative projects in which we are instructed or intend to imagine certain propositions, but also to spontaneous imaginative projects. After arguing that, like guided imaginative projects, spontaneous projects involve specific imaginary truths, I conclude that imaginative projects, whether spontaneous or deliberate, comprise not only imaginings, but also mental acts of determining such ‘truths.’
The Challenge from Cognitive Diversity (CCD) states that demography-specific intuitions are unsuited to play evidential roles in philosophy. The CCD has attracted much attention in recent years, in large part due to the launch of an international research effort to test for demographic variation in philosophical intuitions. In the wake of these international studies, the CCD may prove revolutionary. For, if these studies uncover demographic differences in intuitions, then, in line with the CCD, there would be good reason to challenge philosophical views that rely on those intuitions for evidential support. I argue that philosophical views that rely on demography-specific intuitions for evidential support need not be threatened by such findings. I first provide a detailed analysis of the epistemological principles driving the CCD and distinguish three formulations of this challenge. I then show that there are good reasons to reject all such formulations of the CCD.
Daniel Wegner’s theory of apparent mental causation is often misread. His aim was not to question the causal effectiveness of conscious mental states like intentions. Rather, he attempted to show that our subjective sense of agency is not a completely reliable indicator of the actual causality of action, and needs to be replaced by more objective means of inquiry.
While immersed in our fast-paced, remote NISS debate (October 15) with J. Berger and D. Trafimow, I didn’t immediately catch all that was said by my co-debaters (I will shortly post a transcript). We had all opted for no practice. …
James Mill (1773–1836) was a Scots-born political philosopher,
historian, psychologist, educational theorist, economist, and legal,
political and penal reformer. Well-known and highly regarded in his
day, he is now all but forgotten. Mill’s reputation now rests
mainly on two biographical facts. The first is that his first-born son
was John Stuart Mill, who became even more eminent than his father. The second is that the elder Mill was the collaborator and ally of
Jeremy Bentham, whose subsequent reputation also eclipsed the elder
Mill’s. Our aim here is to try, insofar as possible, to remove
Mill from these two large shadows and to reconsider him as a
formidable thinker in his own right.
When I gave a talk about Ramanujan’s easiest formula at the Whittier College math club run by my former student Brandon Coya, one of the students there asked a great question: are there any unproved formulas of Ramanujan left? …
A question of recent interest in epistemology and philosophy of mind is how belief and credence relate to each other. A number of philosophers argue for a belief-first view of the relationship between belief and credence. On the belief-first view, to have a credence just is to have a particular kind of belief, that is, a belief whose content involves probabilities or epistemic modals. Here, I argue against the belief-first view: specifically, I argue that it cannot account for agents who have credences in propositions they barely comprehend. I conclude that, however credences differ from beliefs, they do not differ in virtue of adding additional content to the believed proposition.
Relational semantics for nonclassical logics lead straightforwardly to topological representation theorems of their algebras. Ortholattices and De Morgan lattices are reducts of the algebras of various nonclassical logics. We define three new classes of topological spaces so that the lattice categories and the corresponding categories of topological spaces turn out to be dually isomorphic. A key feature of all these topological spaces is that they are ordered relational or ordered product topologies.
How long does a quantum particle take to traverse a classically forbidden energy barrier? In other words, what is the correct expression for quantum tunneling time? This seemingly simple question has inspired widespread debate in the physics literature. By drawing an analogy with the double-slit experiment, I argue that we should not even expect the standard interpretation of quantum mechanics to provide an expression for quantum tunneling time. I explain how this conclusion connects to time’s special status in quantum mechanics, the meaningfulness of classically inspired concepts in different interpretations of quantum mechanics, and the prospect of constructing experimental tests to distinguish between different interpretations.
Drawing on the example of simulation codes used in nuclear and high-energy physics, this paper seeks to highlight the ethical implications of discontinuing support for simulation codes and of losing the knowledge embodied in them. Building on the concept of trading zones and on actor-network models, the paper addresses the problem of the extinction of simulation codes and attempts to understand their evolution and development within those frameworks. We show that simulation codes of the closed type develop to the level of creoles, becoming local languages and standards of scientific centers and disappearing as their few main developers leave, whereas codes of the open type become universal languages, imposing problem-solving patterns on the entire community and crowding out other codes. The paper suggests that, because of simulations’ reliance on tacit knowledge, practices entrenched in codes cannot be exhaustively explicated or transmitted through writing alone; rather, the life cycle of a simulation code is determined by the life cycle of its trading zone. We examine the extent to which both of these phenomena pose a risk to the preservation of knowledge. Turning to intergenerational ethics, we draw analogies between the pure intergenerational problem (PIP) and the problem of preserving the knowledge implemented in simulation codes and transmitting it to future generations. We argue that for the complete transfer of knowledge it is necessary to develop and maintain the inhabitability and sustainability of simulation trading zones in a controlled way, at least until demand for these codes can be expected to cease.
[Editor's Note: The following new entry by Jennifer Flynn replaces the former entry
on this topic by the previous author.] The relation between bioethics and moral theory is a complicated one. To start, we have philosophers as major contributors to the field of
bioethics, and to many philosophers, their discipline is almost by
definition a theoretical one. So when asked to consider the role of
moral theorizing in bioethics, a natural position of such philosophers
is that moral theory has a crucial, if not indispensable, role. At the same time, there are those who call into question the
“applied ethics” model of bioethics.
Liberalism is a family of doctrines that emphasize the value of
freedom and hold that the just state ensures freedom for individuals. Liberal feminists embrace this value and this role for the state and
insist on freedom for women. A disagreement concerning how freedom
should be understood divides liberalism into two different sorts; this
disagreement also divides liberal feminism. Some liberals understand freedom as freedom from coercive
interference. The convention in the literature is to call such folks
“classical liberals”. This is fitting since the view they
embrace is historically prior.
I used to think that it is trivial and uncontroversial that if one intends something, one intends it either as an end or as a means. Some people (e.g., Aquinas, Anscombe, and O’Brien and Koons) have a broad view of intention. …
Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose. …
Martin Gustafsson, Åbo Akademi University

1. The collection of papers discussed in this issue of Iride is called Rileggere Wittgenstein – not Rileggere the Tractatus, or Rileggere the early Wittgenstein. In fact, a central theme of the collection is precisely the rejection of the idea that all that is at stake in the debate over so-called ‘resolute’ readings is, in Conant’s words, ‘the relatively parochial question concerning the proper interpretation of Wittgenstein’s work during a single, relatively early phase of his philosophical development’ (Why Worry, p. 167). Both Conant and Diamond emphasize that, properly thought through, a resolute reading of the Tractatus will undermine many established views of what Wittgenstein is doing in his later works as well, and lead to a deeper and more adequate understanding of what the continuities and discontinuities between his early and late philosophies are.
This paper argues that Wittgenstein, both early and late, rejects the idea that the logically simpler and more fundamental case is that of “the mere sign” and that what a meaningful symbol is can be explained through the elaboration of an appropriately supplemented conception of the sign: the sign plus something (say, an interpretation or an assignment of meaning). Rather the sign, in the logically fundamental case of its mode of occurrence, is an internal aspect of the symbol. The Tractatus puts this point as follows: “The sign is that in the symbol which is perceptible by the senses.” Conversely, this means that it is essential to a symbol – to what a symbol is – that it have an essentially perceptible aspect. For Wittgenstein there is no privileged direction of explanatory priority between symbol and sign here: without signs there are no symbols (hence without language there is no thought) and without some sort of relation to symbols there are no signs (hence the philosopher’s concept of the supposedly “merely linguistic” presupposes an internal relation to symbols).
The force-content distinction is the Achilles heel of the Fregean picture of propositions. The problem is not just that the distinction is poorly motivated, or that it skews our understanding of meaning and communication (although I think all of that is true). The problem is that it is incoherent. Propositions are, or can be, true or false. In order to be true or false a proposition must take a stand on how things are. If something fails to do that — if it is completely neutral about how the world is — then it cannot be evaluated as true or false. To put it slightly differently, if a proposition is true if things are thus-and-so, then it is committed to things being thus-and-so. This notion of commitment is the one found in the concepts of judgment and assertion. Commitments arise out of assertions and judgments. There is no other nonjudgmental or non-assertoric concept of commitment for the Fregean to fall back on. Hence, in order for a proposition to be true or false, it must, in some sense, judge or assert something about the world. The point is perfectly general. In order for anything to be true or false, it must in some sense judge or assert that things are a certain way. The concept of a force-neutral, truth-

[1] Unless otherwise noted, by “the force-content distinction” I mean the constitutive version of the force-content distinction, according to which propositions are non-assertoric and nonjudgmental. The other form of the distinction is the taxonomic version, according to which there is a single kind of truth-conditional content shared by all the varieties of speech acts and sentences. See Hanks (2015: 9) for this distinction.
A singular proposition is a proposition that is about an object by virtue of containing that object as a constituent. In this paper I show how singular propositions are the product of a conflation of two different conceptions of propositions, one due to Frege and the other to Russell. A consequence of this conflation is that singular propositions violate an intuitive principle of compositionality. I argue that the only way to rescue singular propositions is to identify them with certain types of actions, a view I have defended elsewhere (Hanks 2011; 2013; 2015). Another goal of the paper is to trace the source of the conflation in the concept of a singular proposition through the development of formal semantics, starting with Frege and continuing with Church, Carnap, Montague, Lewis, and ultimately Kaplan.
Eternalism is the view that all times are equally real. The relativity of simultaneity in special relativity backs this up. There is no cosmically extended, self-existing ‘now.’ This leads to a tricky problem. What makes statements about the present true? I shall approach the problem along the lines of perspectival realism and argue that the choice of the perspective does. To corroborate this point, the Lorentz transformations of special relativity are compared to the structurally similar equations of the Doppler effect. The ‘now’ is perspectivally real in the same way as a particular electromagnetic spectrum frequency. I also argue that the ontology of time licensed by perspectival realism is more credible in this context than its current alternative, the fragmentalist interpretation of special relativity.
This paper accounts for broad definitions of memory, which extend both to paradigmatic memory phenomena, like episodic memory in humans, and to phenomena in worms and sea snails. Such definitions may seem too broad, suggesting either that they extend to phenomena that don’t count as memory or that memory is not a natural kind. However, these responses fail to consider a definition as a hypothesis. Rather than construing a definition as expressing memory’s properties, treating a definition as a hypothesis makes it a basis for testing inferences about phenomena. A definition as a hypothesis is valuable when the “kinding” of phenomena is ongoing.
The aim of this paper is to argue that the (alleged) indeterminism of quantum mechanics, claimed by adherents of the Copenhagen interpretation since Born (1926), can be proved from Chaitin’s follow-up to Gödel’s (first) incompleteness theorem. In comparison, Bell’s (1964) theorem as well as the so-called free will theorem – originally due to Heywood and Redhead (1983) – left two loopholes for deterministic hidden variable theories, namely giving up either locality (more precisely: local contextuality, as in Bohmian mechanics) or free choice (i.e. uncorrelated measurement settings, as in ’t Hooft’s cellular automaton interpretation of quantum mechanics). The main point is that Bell and others did not exploit the full empirical content of quantum mechanics, which consists of long series of outcomes of repeated measurements (idealized as infinite binary sequences): their arguments only used the long-run relative frequencies derived from such series, and hence merely asked hidden variable theories to reproduce single-case Born probabilities defined by certain entangled bipartite states. If we idealize binary outcome strings of a fair quantum coin flip as infinite sequences, quantum mechanics predicts that these typically (i.e. almost surely) have a property called 1-randomness in logic, which is much stronger than uncomputability. This is the key to my claim, which is admittedly based on a stronger (yet compelling) notion of determinism than what is common in the literature on hidden variable theories.
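For readers unfamiliar with the logical notion invoked here, 1-randomness (Martin-Löf randomness) can be glossed as follows. This is the standard textbook definition from algorithmic randomness, not a formulation taken from the paper itself:

```latex
% A Martin-Löf test is a uniformly c.e. sequence (U_n) of open subsets of
% Cantor space 2^omega with uniform measure lambda(U_n) <= 2^{-n}.
% A sequence x in 2^omega is 1-random iff it passes every such test:
x \text{ is 1-random} \;\iff\; \text{for every ML-test } (U_n)_{n \in \mathbb{N}}:\; x \notin \bigcap_{n} U_n
```

Every 1-random sequence is uncomputable, but the converse fails: for instance, the characteristic sequence of a computably enumerable set such as the halting problem is uncomputable yet not 1-random. This is the sense in which 1-randomness is “much stronger than uncomputability.”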
The Borel-Kolmogorov paradox is often presented as an obscure problem that certain mathematical accounts of conditional probability must face. In this paper, we point out that the paradox arises in the physical sciences, for physical probability or chance. By carefully formulating the paradox in this setting, we show that it is a puzzle for everyone, regardless of one’s preferred probability formalism. We propose a treatment which is inspired by the approach that scientists took when confronted with these cases.
There are a variety of ways that feminists have reflected upon and
engaged with science, critically and constructively, each of which might
be thought of as a perspective on science. Feminists have detailed the
historically gendered participation in the practice of
science—the marginalization or exclusion of women from the
profession and how their contributions have disappeared when they have
participated. Feminists have also noted how the sciences have been
slow to study women’s lives, bodies, and experiences. Thus from
both the perspectives of the agents—the creators of scientific
knowledge—and from the perspectives of the subjects of
knowledge—the topics and interests focused on—the sciences
often have not served women satisfactorily.
This paper aims to study the foundations of applied mathematics, using a formalized base theory for applied mathematics: ZFCA_σ (Zermelo–Fraenkel set theory with Choice and atoms), where the subscript σ refers to a signature specific to the application. Examples are given illustrating the following five features of applied mathematics: comprehension principles, application conditionals, representation hypotheses, transfer principles, and abstract equivalents.
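As a rough illustration of two of the features named above (these schematic formulas are my own examples, not drawn from the paper), a representation hypothesis and an application conditional might take forms such as:

```latex
% Representation hypothesis (schematic): a physical ordering on a domain P of
% atoms is represented by a structure-preserving map into the real numbers.
\exists f \, \big( f : P \to \mathbb{R} \;\wedge\; \forall p, q \in P \, ( p \preceq q \leftrightarrow f(p) \leq f(q) ) \big)

% Application conditional (schematic): if the physical structure satisfies
% the axioms of a mathematical theory T, then a mixed claim C follows.
(P, \preceq) \models \mathrm{Ax}_T \;\rightarrow\; C
```

The point of such hybrid formulas is that they mix non-mathematical vocabulary from the signature σ (here, the physical relation ⪯) with pure set-theoretic and mathematical vocabulary, which is what a base theory with atoms like ZFCA_σ is designed to accommodate.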
In Economics Rules, Dani Rodrik (2015) argues that what makes economics powerful despite the limitations of each and every model is its diversity of models. Rodrik suggests that the diversity of models in economics improves its explanatory capacities, but he does not fully explain how. I offer a clearer picture of how models relate to explanations of particular economic facts or events, and suggest that the diversity of models is a means to better economic explanations.
This paper defends a new version of truthmaker non-maximalism. The central feature of the view is the notion of a default truth-value. I offer a novel explanation for default truth-values and use it to motivate a general approach to the relation between truth-value and ontology, which I call ‘Truth-Value-Maker’ theory. According to this view, some propositions are false unless made true, whereas others are true unless made false. A consequence of the theory is that negative existential truths need no truthmakers and positive existential falsehoods need no falsemakers.
In input/output (I/O) logic, one distinguishes three kinds of permission, called negative, positive static, and positive dynamic permission. These have been studied semantically and axiomatically by Makinson and van der Torre in the particular case where the underlying I/O operation for obligation is one of the standard systems. In this paper, we investigate what happens when the underlying I/O operation is one of the constrained I/O operations recently introduced by Parent and van der Torre. Their distinctive feature is twofold. First, they are not closed under logical consequence. Second, they have a built-in consistency check, which filters out excess outputs and allows them to deal properly with contrary-to-duty reasoning.
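For orientation, the simplest of the standard I/O operations and the negative-permission construction can be stated as follows. This summary follows Makinson and van der Torre’s definitions as I recall them; readers should consult the original papers for the exact formulations:

```latex
% Simple-minded output: G is a set of norms (a, x) read "x is obligatory
% given a", A is an input set of formulas, and Cn is classical consequence.
out_1(G, A) = Cn\big( G(Cn(A)) \big)

% Negative permission: x is negatively permitted given a iff the contrary
% of x is not obligatory given a.
(a, x) \in \mathrm{negperm}(G) \;\iff\; (a, \neg x) \notin out(G)
```

Positive (static and dynamic) permission is defined differently, roughly in terms of what follows when a candidate norm is added to, or tested against, the explicitly given norms in G rather than in terms of the absence of a contrary obligation.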