Philosophical skepticism is interesting because there are intriguing
arguments for it despite its initial implausibility. Many contemporary
epistemological positions can be fruitfully presented as responding to
some aspect of those arguments. For example, questions regarding
principles of epistemic closure and transmission are closely related
to the discussion of what we will call Cartesian Skepticism, as are
views according to which we are entitled to dismiss skeptical
hypotheses even though we do not have evidence against them. The
traditional issue of the structure of knowledge and justification,
engendering Foundationalism, Coherentism, and Infinitism, can be seen
as resulting from one main argument for what we will call Pyrrhonian
Skepticism.
Sophie de Grouchy (1764–1822) was a French philosopher whose
book The Letters on Sympathy offers clear and original
perspectives on a number of important moral, political, and legal
philosophical issues. In addition to this book, which she published
together with her translation of Smith’s The Theory of
Moral Sentiments in 1798, Grouchy wrote and published other texts
pseudonymously and anonymously. In particular, Grouchy published
articles defending republicanism and participated in the writing and
editing of her husband Condorcet’s last work, the Sketch
of Human Progress.
Fictionalism about a region of discourse can provisionally be
characterized as the view that claims made within that discourse are
not best seen as aiming at literal truth but are better regarded as a
sort of ‘fiction’. As we will see, this first
characterization of fictionalism is in several ways rough. But it is a
useful point of departure.

This entry is divided into five main sections. The first section
contains a brief history and overview of fictionalist views. The
second section describes more carefully what different kinds of
fictionalist theses there are. In the third and fourth sections,
important arguments for and against fictionalism are summarized.
When someone’s walking speed is two miles per hour, there are not two things, “one mile per hour walkings”, that are present. When we say that a sculpture has three dimensions, we are not saying there are exactly three things—dimensions?—that are present in it. …
Suppose that Arthur is about to give a lecture on trope theory but the lecture is canceled due to Platonist protests. It is intuitive to say that unjust epistemic harms have been perpetrated. But on whom? …
Heidegger’s famous critique of technology is widely recognized as the most concrete and practically relevant dimension of his later thought. I have no desire to contest that view, for it is right as far as it goes. Indeed, much of my own work has sought to demonstrate the continuing relevance of Heidegger’s ontotheological understanding of technology by defending his insightful views from the most formidable objections raised against them (by Andrew Feenberg and others) and by developing the important implications of his groundbreaking understanding of technology for the future of both higher education and environmentalism. What I shall show here, however, is that Heidegger’s widely celebrated understanding of technology also leads back to the very core of his later philosophical views. In fact, the insight and relevance of Heidegger’s understanding of technology, which continues to impress so many, follow from some of the deepest, most mysterious, and most difficult of his later ideas, ideas which still remain very little understood. Fortunately, the endeavour to understand, critically appropriate, and apply the insights at the core of Heidegger’s prescient philosophy of technology continues unabated. To help advance and inspire this important project, I shall seek to illuminate some of the deeper and more mysterious philosophical views behind Heidegger’s celebrated critique of technology.
What would be an adequate theory of social understanding? In the last decade, the philosophical debate has focused on Theory Theory, Simulation Theory and Interaction Theory as the three possible candidates. In the following, we look carefully at each of these and describe its main advantages and disadvantages. Based on this critical analysis, we formulate the need for a new account of social understanding. We propose the Person Model Theory as an independent new account which has greater explanatory power compared to the existing theories.
University education is a highly valuable good. Those who receive it benefit from developing their skills and often increasing their expected earnings and job opportunities. Wider society also benefits from the public goods that university education produces, such as more widely disseminated knowledge and increases in productivity. In addition, universities shape our society and its major institutions: graduates disproportionately end up in positions of influence in society.
Morally supererogatory acts are those that go above and beyond the call of duty. More specifically: they are acts that, on any individual occasion, are good to do and also both permissible to do and permissible to refrain from doing. We challenge the way in which discussions of supererogation typically consider our choices and actions in isolation. Instead we consider sequences of supererogatory acts and omissions and show that some such sequences are themselves problematic. This gives rise to the following puzzle: what problem can we have with a sequence of actions if each individual act or omission is itself permissible? In this paper, we develop a response to this question by exploring whether solutions analogous to those proposed in the rational choice literature are available in the case of supererogatory sequences. Our investigation leads us to the view that making sense of the supererogatory requires accepting that there are global moral norms that apply to sequences of acts alongside the local moral norms that apply to individual acts.
Are women (simply) adult human females? It might surprise the woman on the Clapham Omnibus to learn that philosophers almost always answer no. This paper argues that they are wrong. The orthodox view among philosophers who have considered the matter is that the category woman is a social category, like the categories wife, firefighter, and shoplifter. It is not a biological category, like the categories vertebrate, mammal, or adult human female. (Similar remarks go for man, girl, and boy; following the literature the focus will be on woman.) This (alleged) distinction between adult human female and woman is sometimes said to be the distinction between “sex” and “gender”: Speakers ordinarily seem to think that ‘gender’ and ‘sex’ are coextensive: women and men are human females and males, respectively, and the former is just the politically correct way to talk about the latter. Feminists typically disagree and many have historically endorsed a sex/gender distinction. Its standard formulation holds that ‘sex’ denotes human females and males, and depends on biological features (chromosomes, sex organs, hormones, other physical features). Then again, ‘gender’ denotes women and men and depends on social factors (social roles, positions, behavior, self-ascription). (Mikkola 2016: 23, first emphasis added) §2 makes the positive case that women are adult human females; after that, the final section tries to defuse objections. But first, some preliminaries.
This paper offers a novel account of how know-how improves to expertise in a way that is structurally analogous to how propositional knowledge improves to understanding. A payoff of developing this analogy is a better grip not only on how know-how and expertise differ, but also on why this difference is important.
What is the future of the automotive industry? If you’ve been paying attention over the past decade, you’ll know the answer: self-driving (a.k.a. autonomous) vehicles. Instead of relying on imperfect, biased, lazy and reckless human beings to get us from A to B, we will rely on sophisticated and efficient computer programs. …
In this article I raise a new problem for quantum mechanics, which I call the control problem. Like the measurement problem, the control problem places a fundamental constraint on quantum theories. The characteristic feature of the problem is its focus on state preparation. In particular, whereas the measurement problem turns on a premise about the completeness of the quantum state (‘no hidden variables’), the control problem turns on a premise about our ability to prepare or control quantum states. After raising the problem, I discuss some applications. I suggest that it provides a useful new lens through which to view existing theories or interpretations, in part because it draws attention to aspects of those theories which the measurement problem does not (such as the role of conditional and relative states). I suggest that it also helps clarify the physical significance of the well-known no-go result—the no-cloning theorem—on which it is based.
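The no-cloning theorem that the control problem builds on admits a compact linearity argument; the following sketch is standard textbook material, included only to make the basis of the problem concrete, and is not drawn from this paper:

```latex
% Standard linearity argument for no-cloning. Suppose a unitary U
% copied arbitrary states onto a blank register:
%   U |\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle  for every |\psi\rangle.
\begin{align*}
U\,|0\rangle|0\rangle &= |0\rangle|0\rangle, \qquad
U\,|1\rangle|0\rangle = |1\rangle|1\rangle.\\
\intertext{Linearity of $U$ then forces, for $|{+}\rangle = \tfrac{1}{\sqrt{2}}\,(|0\rangle + |1\rangle)$,}
U\,|{+}\rangle|0\rangle &= \tfrac{1}{\sqrt{2}}\,\bigl(|0\rangle|0\rangle + |1\rangle|1\rangle\bigr)
\;\neq\; |{+}\rangle|{+}\rangle,
\end{align*}
% contradicting the cloning assumption, so no such U exists.
```

Since preparing a desired state from an unknown one is closely related to copying, this linearity constraint is what makes state preparation, and hence control, nontrivial.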
According to commonsense morality, while we have reason to be concerned about the effects of our actions on anyone’s welfare, we also have reason to be partial towards the welfare of people to whom we have certain special relationships. I have, for example, more reason to make sure that my own child gets into a good school than that my neighbor’s child does. In this paper, I want to examine a kind of decision where the identity of people to whom we bear the special relationship depends on our action – in particular, the identity of our future children.
The dispute in philosophical decision theory between causalists and evidentialists remains unsettled. Causal decision theory appeals to many philosophers because it endorses a species of attractive dominance reasoning, and consequently gets intuitive verdicts on a range of cases with the structure of the infamous Newcomb’s Problem. But it also faces a rising wave of challenges – in the form of both counterexamples and more abstract theoretical worries. In this paper I will describe a way to save what is attractive about the causal view – a novel decision theory which captures its main advantages while avoiding its most worrying objections, and which can generalize to solve a set of related problems in other normative domains.
Three extensions of the standard PROLOG fixpoint semantics are presented (called sat, strong, and weak), using partial models, models which may fail to assign truth values to all formulas. Each of these semantics takes negation and quantification into account. All three are conservative: they agree with the conventional semantics on pure Horn clause programs. The sat and the strong semantics incorporate the domain closure assumption, but differ on whether to assign a truth value to a classically valid formula some part of which lacks a truth value. The weak semantics is similar to the strong semantics but abandons the domain closure condition, and consequently, all programs give rise to continuous operators in this semantics. For the weak semantics, a sound and complete proof procedure is given, based on semantic tableaux (or, equivalently, Gentzen sequents).
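The conventional semantics that all three extensions agree with on pure Horn clause programs is the least fixpoint of the immediate-consequence operator T_P. A minimal Python sketch for a ground Horn program (the program and atom names are invented for illustration):

```python
# Ground Horn program: each clause is (head, [body atoms]).
# Facts have empty bodies. Names are illustrative only.
program = [
    ("edge(a,b)", []),
    ("edge(b,c)", []),
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["edge(a,b)", "path(b,c)"]),
]

def tp(interp):
    """Immediate-consequence operator T_P: heads whose bodies hold in interp."""
    return {head for head, body in program if all(b in interp for b in body)}

def least_fixpoint():
    """Iterate T_P from the empty interpretation until nothing new is derived."""
    interp = set()
    while True:
        nxt = tp(interp)
        if nxt <= interp:          # no new atoms: least fixpoint reached
            return interp
        interp |= nxt

print(sorted(least_fixpoint()))    # all five atoms, including path(a,c)
```

For pure Horn programs T_P is monotone and continuous, so this iteration converges to the least model; the paper's weak semantics is designed so that continuity survives the addition of negation.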
No evidence for singular thought
Posted on Tuesday, 03 Dec 2019
Teaching for this semester is finally over. Last week I gave a talk in Umea at a workshop on singular thought. I was pleased
to be invited because I don't really understand singular thought. …
In philosophy of science, several views have been espoused on the meaning of the term ‘theory’; among these are the syntactic view and the semantic view. But even after decades of debate, no consensus has been reached on an all-encompassing, positively defined view of theories. We take this to mean that the outcome of the debate is that no such all-encompassing view exists. Correspondingly, the purpose of this paper is to present a pluralist view of theories: it is negatively defined, yet it may break the deadlock in the ongoing debate on the meaning of ‘theory’.
Recent research indicates that norms matter for ordinary causal attributions, although there is a good deal of debate concerning why they matter. One prominent account—that the impact of norms works via the salience of counterfactuals—has received support from a recent paper by Icard et al. (2017) reporting a new effect in cases where two agents perform symmetric actions that are each individually sufficient to bring about an outcome. But in four recent studies (Sytsma under review), I was unable to replicate these findings. In this paper I explore why, investigating a key difference between our studies: Icard et al. asked participants about just one agent (single evaluations), while I asked them about both agents (joint evaluations). I find that this difference helps explain the divergent findings, although the results remain problematic for Icard et al.’s view. Further, I identify two evaluation effects: there is a general trend for the causal ratings in these cases to be lower when using single evaluations than when using joint evaluations, and this difference is larger when the agent asked about violates an injunctive norm. I consider four potential explanations of the impact of using single or joint evaluations and argue that determining the correct explanation has important implications for work concerning the effect of norms on causal attributions.
Suppose substantivalism about space is correct. Imagine now that the following happens to a Swiss cheese: the space where the holes were suddenly disappears. I don’t mean that the holes close up. I mean that the space disappears: all the points and regions that used to be in the hole are no longer there (and any air that used to be there is annihilated). …
The Tarskian notion of truth-in-a-model is the paradigm formal capture of our pre-theoretical notion of truth for semantic purposes. But what exactly makes Tarski’s construction so well suited for semantics is seldom discussed. In my Semantics, Metasemantics, Aboutness (OUP 2017) I articulate a certain requirement on the successful formal modeling of truth for semantics – “locality-per-reference” – against a background discussion of metasemantics and its relation to truth-conditional semantics. It is a requirement on any formal capture of sentential truth vis-a-vis the interpretation of singular terms, and it is clearly met by the Tarskian notion. In this paper another such requirement is articulated – “locality-per-application” – which is an additional requirement on the formal capture of sentential truth, this time vis-a-vis the interpretation of predicates. This second requirement is also clearly met by the Tarskian notion. The two requirements taken together offer a fuller answer than has hitherto been available to the question of what makes Tarski’s notion of truth-in-a-model especially well suited for semantics.
We introduce, develop, and apply a new approach for dealing with the intuitive notion of function, called Flow Theory. Within our framework functions have no domain at all. Sets and even relations are special cases of functions. In this sense, functions in Flow are not equivalent to functions in ZFC. Nevertheless, we prove that both ZFC and Category Theory are naturally immersed within Flow. Moreover, our framework provides major advantages as a language for the axiomatization of standard mathematical and physical theories. Russell’s paradox is avoided without any equivalent to the Separation Scheme. Hierarchies of sets are obtained without any equivalent to the Power Set Axiom. And a clear principle of duality emerges from Flow, in a way anticipated neither by Category Theory nor by standard set theories.
Leibniz accepts causal independence, the claim that no created substance can causally interact with any other. And Leibniz needs causal independence to be true, since his well-known pre-established harmony is premised upon it. So, what is Leibniz’s argument for causal independence? Sometimes he claims that causal interaction between substances is superfluous; sometimes he claims that it would require the transfer of accidents, and that this is impossible. But when Leibniz finds himself under sustained pressure to defend causal independence, those are not the reasons that he marshals in its defense. Instead, deep into his long correspondence with Burchard de Volder, he gives a different sort of argument, one that has gone nearly unnoticed by commentators and has not yet been properly understood. In part, this is because the argument develops slowly over four years of correspondence. It emerges in early 1704, but it is formulated tersely and appears murky unless understood in light of Leibniz and De Volder’s tangled exchanges. There Leibniz argues that, on his distinctive ontology of an infinity of created substances, no two created substances could possibly causally interact, for roughly the same reasons that some Cartesians like De Volder deny interaction between minds and bodies on their substance dualist ontology. In this paper I draw out this lost argument, explain it and the metaphysics on which Leibniz builds it, and untangle Leibniz and De Volder’s exchanges concerning causation from which this argument results.
We suggest a concept of convexity of preferences that does not rely on any algebraic structure. A decision maker has in mind a set of orderings interpreted as evaluation criteria. A preference relation is defined to be convex when it satisfies the following condition: If, for each criterion, there is an element that is both inferior to b by the criterion and superior to a by the preference relation, then b is preferred to a. This definition generalizes the standard Euclidean definition of convex preferences. It is shown that under general conditions, any strict convex preference relation is represented by a maxmin of utility representations of the criteria. Some economic examples are provided. Keywords. Convex preferences, abstract convexity, maxmin utility. JEL classification. C60, D01.
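The maxmin representation in the result above can be made concrete with a toy example; the alternatives and utility numbers below are invented purely for illustration. The represented preference values each alternative by its worst criterion:

```python
# Two criteria, each with a utility representation over three
# alternatives (made-up numbers, for illustration only).
u1 = {"a": 3, "b": 1, "c": 2}   # utility under criterion 1
u2 = {"a": 1, "b": 3, "c": 2}   # utility under criterion 2

def U(x):
    """Maxmin representation: an alternative is valued by its worst criterion."""
    return min(u1[x], u2[x])

# The decision maker then chooses the alternative maximizing U.
best = max(["a", "b", "c"], key=U)
print(best, U(best))   # c: it scores 2 on both criteria, while a and b bottom out at 1
```

The example shows the characteristic caution of the maxmin form: a and b are each excellent by one criterion, but the balanced alternative c is chosen because only its minimum matters.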
Akio Matsumoto has long studied coupled dynamical systems exhibiting various forms of complex dynamics, often involving lags (Matsumoto, 1997; 1999; Matsumoto and Szidarovsky, 2015). In addition, he has had an interest in implications of such models connecting economics with environmental problems (Matsumoto et al., 2018; Ishikawa et al., 2019). A theme of his work on these topics has indeed been that both coupling and lags tend to increase the complexities arising from such systems. This might appear to run counter to another theme of his work, that sometimes chaotic dynamics “can be beneficial” (Matsumoto, 2001, 2003). However, those models involved one-dimensional systems of price dynamics without coupling or lags or other complications that could undermine their relatively sunny outcomes. Nevertheless, this insight of Matsumoto’s that chaotic dynamics are not necessarily “bad” has not been fully appreciated.
It is often argued that while biases routinely influence the generation of scientific theories (in the ‘context of discovery’), a subsequent rational evaluation of such theories (in the ‘context of justification’) will ensure that biases do not affect which theories are ultimately accepted. Against this line of thought, this paper shows that the existence of certain kinds of biases at the generation-stage implies the existence of biases at the evaluation-stage. The key argumentative move is to recognize that a scientist who comes up with a new theory about some phenomena has thereby gained an unusual type of evidence, viz. information about the space of theories that could be true of the phenomena. It follows that if there is bias in the generation of scientific theories in a given domain, then the rational evaluation of theories with reference to the total evidence in that domain will also be biased.
Conditionals are natural language sentences of the form ‘if A then C’, where A is called the antecedent and C the consequent of the conditional. They are notoriously difficult to analyse. A standard account emerged in the 1970s, however: the so-called possible world account (Lewis, 1973b; Stalnaker, 1968). This account spread into the fields of linguistics and formal semantics in the work of Kratzer (1979), and, in some form or another, into the domain of the psychology of reasoning (Over, 2009). According to this account, a conditional A > C is true in the actual world if and only if the closest A-worlds to the actual world are C-worlds. However, recent reflections and analyses suggest that the defining clause is not strong enough and that we may want to add some additional conditions. Which conditions these should be is not settled. Different approaches argue for different conditions (Crupi & Iacona, 2019; Krzyżanowska, Wenmackers, & Douven, 2013; Raidl, 2018; Rott, ms; Spohn, 2015). Some of these logics have not yet been worked out, or have been worked out only for specific models. To compare them, we need to know what kinds of logics they generate depending on the underlying closeness analysis. This article proposes a general method which generates completeness results for such strengthened conditionals. In particular, the article proves completeness for the evidential conditional introduced by Crupi and Iacona (2019).
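The possible-world truth clause can be sketched in a toy finite model; the worlds, valuations, and closeness ranks below are invented for illustration and carry no weight beyond showing how the clause is evaluated:

```python
# Toy model: worlds with atomic valuations, plus a closeness rank
# from the actual world w0 (lower rank = closer). Illustrative only.
worlds = {
    "w0": {"A": False, "C": False},  # the actual world
    "w1": {"A": True,  "C": True},
    "w2": {"A": True,  "C": False},
}
rank = {"w0": 0, "w1": 1, "w2": 2}

def conditional(ant, cons):
    """Lewis/Stalnaker clause: A > C holds iff every closest A-world is a C-world."""
    a_worlds = [w for w, v in worlds.items() if v[ant]]
    if not a_worlds:
        return True  # vacuously true when the antecedent holds nowhere
    closest = min(rank[w] for w in a_worlds)
    return all(worlds[w][cons] for w in a_worlds if rank[w] == closest)

print(conditional("A", "C"))  # the closest A-world is w1, a C-world
```

The strengthenings discussed above can be seen as adding extra conditions on top of this clause, e.g. constraints relating the antecedent to the consequent, which is why their logics depend on the underlying closeness analysis.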
Many have argued that the state should ensure that all citizens have access to some range of health care services regardless of their ability to pay (Buchanan 1984; Courtland 2017; Daniels 1985; Daniels 2008; DeGrazia 1991; Dworkin 2000; Kelleher 2014; Menzel 2011; Sachs 2008). There has been less philosophical discussion about whether the state should allow some people to purchase access to a higher level of care than the minimum level it guarantees to all. If the state provides public insurance, should it allow some to opt out, to purchase supplementary insurance, or to pay for additional services out of pocket? If the state provides universal access to health care by subsidizing the purchase of private insurance, should it allow insurance companies to offer different tiers of coverage? Many countries, such as Australia and the United Kingdom, provide public insurance but allow people to purchase supplementary insurance. By contrast, several Canadian provinces largely prohibit supplementary private insurance.
Two weeks ago, I blogged about the claim of Nathan Keller and Ohad Klein to have proven the Aaronson-Ambainis Conjecture. Alas, Keller and Klein tell me that they’ve now withdrawn their preprint (though it may take another day for that to show up on the arXiv), because of what looks for now like a fatal flaw, in Lemma 5.3, discovered by Paata Ivanishvili. …