All of us engage in and make use of valid reasoning, but the reasoning
we actually perform differs in various ways from the inferences
studied by most (formal) logicians. Reasoning as performed by human
beings typically involves information obtained through more than one
medium. Formal logic, by contrast, has thus far been primarily
concerned with valid reasoning which is based on information in one
form only, i.e., in the form of sentences. Recently, many
philosophers, psychologists, logicians, mathematicians, and computer
scientists have become increasingly aware of the importance of
multi-modal reasoning and, moreover, much research has been undertaken
in the area of non-symbolic, especially diagrammatic, representation
systems.[1]
This entry outlines the overall directions of this new research area
and focuses on the logical status of diagrams in proofs, their
representational function and adequacy, different kinds of
diagrammatic systems, and the role of diagrams in human cognition.
Some snapshots from Excursion 3 Tour II:
Excursion 3 Tour II: It’s The Methods, Stupid
Tour II disentangles a jungle of conceptual issues at the heart of today’s statistics wars. The first stop (3.4) unearths the basis for a number of howlers and chestnuts thought to be licensed by Fisherian or N-P tests. …
Decisions are typically about outcomes that happen later in time. As such they demand comparisons of the value of outcomes now versus outcomes later. Should I buy a new car or save for retirement? Have the last piece of cake tonight or tomorrow? Lower carbon emissions now or suffer greater loss later? Intertemporal decisions have triggered hundreds of studies across many fields. Popular subjects include personal finances, addiction, nutrition, health, marketing, and environmental conservation. In many of these decisions we tend to exhibit what is called a positive time preference; that is, all else being equal, we prefer positive goods, experiences, and states of affairs to be delivered sooner rather than later. Sweets delivered to me tomorrow aren’t as valuable to me as sweets I can eat today. Descriptive and normative inquiries tackle how we make intertemporal comparisons of utility in such cases and how we should. The present paper is about the second issue, the normative question that asks how we ought to translate future utility into present utility. My focus is restricted to individuals and not societies. I want to challenge the conventional wisdom dominating the social sciences and philosophy regarding temporal discounting, the practice of discounting the value of future utility.
This is a discussion of Delia Fara’s theory of vagueness, and of its solution to the sorites paradox, criticizing some of the details of the account, but agreeing that its central insight will be a part of any solution to the problem. I also consider a wider range of philosophical puzzles that involve arguments that are structurally similar to the argument of the sorites paradox, and argue that the main ideas of her account of vagueness help to respond to some of those puzzles.
Could spacetime be derived rather than fundamental? The question is pressing because attempts to quantize gravity have led to theories in which (arguably) there are either no, or only extremely thin, spacetime structures. Moreover, recent proposals for the interpretation of quantum mechanics have suggested that 3-dimensional space may be an ‘appearance’ derived from the 3N-dimensional space in which an N-particle wavefunction lives (cross-reference). In fact, I will largely assume a positive answer, and investigate how it could be; in particular, I want to explicate the role of philosophy in producing a satisfactory explanation of spacetime, providing a roadmap for philosophical engagement with quantum gravity. First, I will explain why such a derivation can be described as ‘emergence’.
The science fiction novel Quarantine portrays a world wherein interaction with human observers is necessary to collapse quantum wavefunctions. The author, Greg Egan, amusingly puts the emphasis on the observers being human — aliens can’t do it. Aliens are therefore at a tremendous disadvantage. As we gaze at the night sky, we are constantly collapsing alien worlds, depriving them of their branch diversity. Whole civilizations are being snuffed out by our observations! Understandably the aliens grow tired of this. In response they erect an impenetrable shield around the solar system, one that blinds us to the outside universe. This shield protects the rest of the universe from harmful human observation, locking humanity into a starless Bubble.
This paper is a tribute to Delia Graff Fara. It extends her work on failures of meta-rules (conditional proof, RAA, contraposition, disjunction elimination) for validity as truth-preservation under a supervaluationist identification of truth with supertruth. She showed that such failures occur even in languages without special vagueness-related operators, for standards of deductive reasoning as materially rather than purely logically good, depending on a context-dependent background. This paper extends her argument to: quantifier meta-rules like existential elimination; ambiguity; deliberately vague standard mathematical notation. Supervaluationist attempts to qualify the meta-rules impose unreasonable cognitive demands on reasoning and underestimate her challenge.
A number of philosophers have attempted to solve the problem of null-probability possible events in Bayesian epistemology by proposing that there are infinitesimal probabilities. Hajek (2003) (more tentatively) and Easwaran (2014) (more definitively) have argued that because there is no way to specify a particular hyperreal extension of the real numbers, solutions to the regularity problem involving infinitesimals, or at least hyperreal infinitesimals, involve an unsatisfactory ineffability or arbitrariness. The arguments depend on the alleged impossibility of picking out a particular hyperreal extension of the real numbers and/or of a particular value within such an extension due to the use of the Axiom of Choice. However, it is false that the Axiom of Choice precludes a specification of a hyperreal extension—such an extension can indeed be specified. Moreover, for all we know, it is possible to explicitly specify particular infinitesimals within such an extension. Nonetheless, I prove that because any regular probability measure that has infinitesimal values can be replaced by one that has all the same intuitive features but other infinitesimal values, the heart of the arbitrariness objection remains.
The clock hypothesis is taken to be an assumption, independent of special relativity, that is necessary to describe accelerated clocks. It makes it possible to equate the time read off by a clock with the proper time. Here, we consider a physical system—the light clock—proposed by Marzke and Wheeler. Recently, Fletcher proved a theorem showing that a sufficiently small light clock has a time reading that approximates the proper time to an arbitrary degree. The clock hypothesis is not necessary to arrive at this result. We then explore the consequences of this result for the status of the clock hypothesis. It is argued in this work that there is no need for the clock hypothesis in the special theory of relativity.
The use of unrealistic assumptions in Economics is usually defended not only for pragmatic reasons, but also because of the intrinsic difficulties in determining the degree of realism of assumptions. Additionally, the criterion used for evaluating economic models is associated with their ability to provide accurate predictions.
Even though nobody thinks Strong AI has been achieved, we attribute beliefs to computer systems and software:
Microsoft Word thinks that I mistyped that word. Google knows where I’ve been shopping. The attribution is communicatively useful and natural, but is not literal. …
According to Kratzer’s influential account (1981; 1991; 2012), epistemic must and might involve quantification over domains of possibilities determined by a modal base and an ordering source. Recently, this account has been challenged by invoking contexts of ‘epistemic tension’: i.e., cases in which an assertion that must φ is conjoined with the possibility that ¬φ, and cases in which speakers try to downplay a previous assertion that must φ, after finding out that ¬φ. Epistemic tensions have been invoked from two directions. von Fintel and Gillies (2010) propose a return to a simpler modal logic-inspired account: must and might still involve universal and existential quantification, but the domains of possibilities are determined solely by realistic modal bases. In contrast, Lassiter (2016), following Swanson (2006, 2011), proposes a more revisionary account which treats must and might as probabilistic operators. In this paper, we present a series of experiments to obtain reliable data on the degree of acceptability of different contexts of epistemic tensions. Our experiments include novel variations that, we argue, are required to make progress in this debate. We show that restricted quantificational accounts a la Kratzer (e.g., Kratzer, 1981, 2012; Roberts, 2015; Giannakidou and Mari, 2016) fit the overall pattern of results better than either of their recent competitors. In addition, our results help us identify which components of restricted quantificational accounts are crucial for their success, and on that basis propose some general constraints that should be satisfied by all candidate accounts of the modal auxiliaries.
It is widely alleged that metaphysical possibility is “absolute” possibility (Kripke, Lewis, Rosen [2006, 16], Stalnaker [2005, 203], Williamson [2016, 460]). Indeed, this is arguably its metaphysical significance. Kripke calls metaphysical necessity “necessity in the highest degree” ([1980, 99]). Williamson calls metaphysical possibility the “maximal objective modality” [2016, 459]. Rosen says that “metaphysical possibility is the [most inclusive] sort of real possibility” ([2006, 16]). And Stalnaker writes, “we can agree with Frank Jackson, David Chalmers, Saul Kripke, David Lewis, and most others who allow themselves to talk about possible worlds at all, that metaphysical necessity is necessity in the widest sense [2003, 203].” What exactly does the thesis that metaphysical possibility is absolute amount to? Is it true? In this article, I argue that, assuming that the thesis is not merely terminological, and lacking in any metaphysical interest, it is an article of faith. I conclude with the suggestion that metaphysical possibility may lack the metaphysical significance that is widely attributed to it.
To people already familiar with the argument, this may be old news. It’s an analogy to help people feel the intuitive force of the argument. Suppose the following: New diseases appear inside people all the time. …
Wrongdoing is an inescapable fact of life. We all do wrong and are wronged from time to time and in response we often blame one another. In the broadest sense, moral blame is a personal response to wrongdoing or wrongbeing, which can manifest in a variety of mental states— e.g., judgments, desires, dispositions, and emotions—as well as in behavior. We blame for a variety of wrongs, in a variety of ways, and with a variety of consequences: one expresses disappointment with an unfaithful partner who then apologizes, another rants about injustice thereby alienating part of her Facebook community, a third turns inward in frustration with a neglectful parent who in turn mistakes her withdrawal for indifference. Such conflicts are not the whole or even the greater part of our shared social existence, but they are a defining feature of it.
Ascriptions of irrationality typically constitute a form of criticism, while ascriptions of rationality are a form of praise.1 More specifically, charges of irrationality involve personal criticism, in which the agent is negatively evaluated for having responded in certain ways.2 It is often thought that being criticizable is evidence that one has done something one ought not to do— something one had decisive normative reasons not to do.3 Assuming that this is so, the fact that charges of irrationality constitute a form of criticism suggests that there is a close connection between rationality and what we have reasons to do. This provides motivation for a normative, reasons-based account of rationality.4
The burgeoning debate about the metaethical implications of the Darwinist view of morality focuses on which epistemic principle(s) allegedly support debunking arguments against moral objectivism.1 Moral objectivism is the view that (at least some) moral truths are metaphysically necessary as well as constitutively and causally independent of human attitudes or beliefs.2 Though objectivists must, of course, explain how objectivist moral beliefs can be justified in the first place, a central question is whether objectivist moral beliefs can be undercut, assuming that they are at least prima facie justified.3 So, what is that “something” in virtue of which a Darwinist view of morality creates a problem for objectivist moral beliefs? It has been claimed that evolutionary explanations of morality might show that moral beliefs are prone to error or fail to be modally secure, or that the best explanation of moral beliefs does not entail that they are (mostly) true.4 None of these theses has found widespread support.
Latin American feminism, which in this entry includes Caribbean
feminism, is rooted in the social and political context defined by
colonialism, the enslavement of African peoples, and the
marginalization of Native peoples. Latin American feminism focuses on
the critical work that women have undertaken in reaction to the forces
that created this context. At present, the context is dominated by
neoliberal economic policies that, in the environment of
globalization, have disproportionately impacted the most vulnerable
segments of society. Against this political backdrop, Latin American
feminism is grounded in the material lives of people, often women, as
it explores the tensions engendered by the confluence of histories
that generate relationships among gender, citizenship, race/ethnicity,
sexuality, class, community, and religion.
Moral, legal, and political philosophers have spent a great deal of time thinking about what consent is, and how consent can play the apparently transformative role that it appears to with respect to making some otherwise impermissible and objectionable conduct permissible and unobjectionable. These are hard and important metaphysical and moral questions.
Although a prominent question in ancient Greek political philosophy, the question of political expertise or political skill is one that has received little recent philosophical discussion—particularly outside of debates about exactly how to read and interpret Plato. This is unfortunate, as the idea of political expertise or skill relevant to politics continues to be prominent in popular discussions of political candidates, in empirical research relating to voter and political official competence, and, implicitly, in discussions of what have come to be called technocratic or epistocratic political systems.
Bernard Bolzano (1781–1848) was a Catholic priest, a professor
of the doctrine of Catholic religion at the Philosophical Faculty of
the University of Prague, an outstanding mathematician and one of the
greatest logicians or even (as some would have it) the
greatest logician who lived in the long stretch of time between
Leibniz and Frege. As far as logic is concerned, Bolzano anticipated,
almost exactly 100 years before Tarski and Carnap, their semantic
definitions of logical truth and logical consequence; and in
mathematics he is not only known for his famous Paradoxes of the
Infinite, but also for certain results that have become and still
are standard in textbooks of mathematics, such as the
Bolzano–Weierstrass theorem.
Plato’s shorter ethical works show Socrates at work on topics related
to virtue, which he believes we should seek for the sake of the soul as we should
seek health for the body. Works in this group show stylistic as well
as philosophic affinities and are generally considered to have been
written early in Plato’s career. The dialogues in this group are our
main source for the philosophical style and teaching of the Platonic
Socrates, who is thought by some scholars to be, to a reasonable
approximation, the historical figure. In this article,
“Socrates” always refers to the Platonic figure in the
works under discussion here.
At the second United States Presidential Debate in 2004, President Bush was asked whom he would choose to fill a vacancy on the United States Supreme Court. He replied: I would pick somebody who would not allow their personal opinion to get in the way of the law. ... I would pick people that would be strict constructionists. We’ve got plenty of lawmakers in Washington, D.C. Legislators make law; judges interpret the Constitution.1
Tour II It’s the Methods, Stupid
There is perhaps in current literature a tendency to speak of the Neyman–Pearson contributions as some static system, rather than as part of the historical process of development of thought on statistical theory which is and will always go on. …
From the very beginning the main point under debate has been the attitude to take to the departure from customary principles of natural philosophy characteristic of the novel development of physics which was initiated in the first year of this century by Planck's discovery of the universal quantum of action.
The problem of the reduction of chemistry to physics has been traditionally addressed in terms of classical structural chemistry and standard quantum mechanics. In this work, we will study the problem from the perspective of the Quantum Theory of Atoms in Molecules (QTAIM), proposed by Richard Bader in the nineties. The purpose of this article is to unveil the role of QTAIM in the inter-theoretical relations between chemistry and physics. We argue that, although the QTAIM solves two relevant obstacles to reduction by providing a rigorous definition of chemical bond and of atoms in a molecule, it appeals to concepts that are unacceptable in the quantum-mechanical context. Therefore, the QTAIM fails to provide the desired reduction. On the other hand, we will show that the QTAIM is more similar to Bohmian mechanics and that the basic elements of both theories are closely related.
This paper introduces some basic ideas and formalism of physics in non-commutative geometry. It is a draft (written back in 2011) of a chapter of Out of Nowhere, a book on quantum gravity that I am co-authoring with Christian Wuthrich. Although it has long been suggested that quantizing gravity – imposing canonical commutations in some way – will lead to the coordinate commutation relations of non-commutative geometry, there is no known formal requirement that this be so. Nevertheless, such relations do show up in theories of quantum gravity, for instance as the result of a possible Planck scale non-locality in the interactions of the D-branes of string theory.
In his recent Editorial Article, Jeffrey Seeman (2017) calls for the promotion of collaborative work among different disciplines, focusing on the case of the interaction between chemistry, the history of chemistry and the philosophy of chemistry. From a general viewpoint, it is difficult to disagree with this claim; moreover, the interest of scientists in the history and the philosophy of science is always welcome. However, the devil is in the details: there are several points that, we think, must be discussed more carefully with the aim of arriving at far-reaching conclusions.
According to classical molecular chemistry, molecules have a structure, that is, they are sets of atoms with a definite arrangement in space, held together by chemical bonds. The concept of molecular structure is central to modern chemical thought given its impressive predictive power. It is also a very useful concept in chemistry education, due to its role in the rationalization and visualization of microscopic phenomena. However, such a concept seems to find no place in the ontology described by quantum mechanics, since it appeals to classical notions such as the position of the atomic nuclei or the individuality of electrons. Although this problem has attracted the attention of several authors, the discussion is far from settled. Some authors adopt an explicitly reductionist position and advocate reconstructing the concept of molecular structure within the framework of the quantum theory. Others, although acknowledging the conceptual discontinuity between quantum mechanics and molecular chemistry, keep the hope of future reduction alive. From an explicitly non-reductionist position, by contrast, other authors conceive molecular structure as an emergent phenomenon.
The locus classicus of the philosophical literature on the hole argument is the 1987 paper by Earman & Norton [“What Price Space-time Substantivalism? The Hole Story,” Br. J. Phil. Sci.]. This paper has a well-known back-story, concerning work by Stachel and Norton on Einstein’s thinking in the years 1913–15. Less well-known is a connection between the hole argument and Earman’s work on Leibniz in the 1970s and 1980s, which in turn can be traced to an argument first presented in 1975 by Howard Stein. Remarkably, this thread originates with a misattribution: the argument Earman attributes to Stein, which ultimately morphs into the hole argument, was not the argument Stein gave. The present paper explores this episode and presents some reflections on how it bears on the subsequent literature.