According to Danièle Moyal-Sharrock, Wittgenstein’s On Certainty presents a theory of hinges, and hinges have a role to play in a foundationalist epistemology (2013, this journal). Michael Williams (2005) and Annalisa Coliva (2013, this journal) have claimed that the hinges are not suitable to play such a role because they are not shared universally. Moyal-Sharrock has replied that a subset of the hinges is suitable to play such a role: the “universal” hinges, an account of which she developed in her 2004 book Understanding Wittgenstein’s On Certainty (2013, this journal). I argue that for Moyal-Sharrock’s reply to be sustained, she must construe the set of universal hinges much more narrowly than she currently does. For instance, Moyal-Sharrock claims that “I have a brain” is a universal hinge, which consigns people who know nothing about brains to standing outside the bounds of sense. I also provide a novel way of thinking about the universal hinges, which I argue is better textually motivated than Moyal-Sharrock’s own, and which yields a set of hinges more suitable to play a role in a foundationalist epistemology.
Convergent and divergent thought are promoted as key constructs of creativity. Convergent thought is defined and measured in terms of the ability to perform on tasks where there is one correct solution, and divergent thought is defined and measured in terms of the ability to generate multiple solutions. However, these characterizations of convergent and divergent thought present inconsistencies, and do not capture the reiterative processing, or ‘honing’, of an idea that characterizes creative cognition. Research on formal models of concepts and their interactions suggests that different creative outputs may be projections of the same underlying idea at different phases of a honing process. This leads us to redefine convergent thought as thought in which the relevant concepts are considered from conventional contexts, and divergent thought as thought in which they are considered from unconventional contexts. Implications for the assessment of creativity are discussed.
Let me begin with an admission: I am neither a Neo-Kantian myself nor a historian of philosophy. I became aware of Cassirer’s work through my search for precedents for the kind of structural realism that Ladyman was developing, as captured in the slogan ‘The world is structure’. As is now well known, this differs from Worrall’s form of structural realism in that the latter maintains ‘All that we know is structure’, and in his early writings Worrall followed Poincaré in his insistence that the nature of the world, beyond this structure, was unknown to us. Subsequently he adopted a kind of agnosticism with regard to this ‘nature’, but in that earlier form we find certain Kantian resonances, which is not surprising of course, given its ancestry in Poincaré’s work. One might initially think that the neo-Kantian would find Ladyman’s collapse of ‘nature’ into ‘structure’ unfortunate. But if one takes ‘the world’ of the realist to be the phenomenal world, with the noumena taken negatively rather than as a world of determinate but unknowable objects in the way that Worrall conceives of it (and here I recognise that I am stepping into a minefield!), then there may not be such a chasm between these two views as might at first appear.
In this paper, I examine the decision-theoretic status of risk attitudes. I start by providing evidence showing that the risk attitude concepts do not play a major role in the axiomatic analysis of the classic models of decision-making under risk. This can be interpreted as reflecting the neutrality of these models between the possible risk attitudes. My central claim, however, is that such neutrality needs to be qualified and the axiomatic relevance of risk attitudes needs to be re-evaluated accordingly. Specifically, I highlight the importance of the conditional variation and the strengthening of risk attitudes, and I explain why they establish the axiomatic significance of the risk attitude concepts. I also present several questions for future research regarding the strengthening of risk attitudes.
I argue that our best science supports the rationalist idea that, independent of reasoning, emotions aren’t integral to moral judgment. There’s ample evidence that ordinary moral cognition often involves conscious and unconscious reasoning about an action’s outcomes and the agent’s role in bringing them about. Emotions can aid in moral reasoning by, for example, drawing one’s attention to such information. However, there is no compelling evidence for the decidedly sentimentalist claim that mere feelings are causally necessary or sufficient for making a moral judgment or for treating norms as distinctively moral. I conclude that, even if moral cognition is largely driven by automatic intuitions, these shouldn’t be mistaken for emotions or their non-cognitive components. Non-cognitive elements in our psychology may be required for normal moral development and motivation but not necessarily for mature moral judgment.
Socialism is a rich tradition of political thought and practice, the
history of which contains a vast number of views and theories, often
differing in many of their conceptual, empirical, and normative
commitments. In his 1924 Dictionary of Socialism, Angelo
Rappoport canvassed no fewer than forty definitions of socialism,
telling his readers in the book’s preface that “there are
many mansions in the House of Socialism” (Rappoport 1924: v,
34–41). To take even a relatively restricted subset of socialist
thought, Leszek Kołakowski could fill over 1,300 pages in his
magisterial survey of Main Currents of Marxism
(Kołakowski 1978).
Despite initial appearances, the paradoxes of classical logic with unrestricted comprehension do not go away even if the law of excluded middle is dropped, unless the law of noncontradiction is eliminated as well, which makes the logic much less powerful. Is there an alternative way to preserve the unrestricted comprehension of common language while retaining the power of classical logic? The answer is yes, when provability modal logic is utilized. The modal logic NL is constructed for this purpose. Unless a paradox is provable, the usual rules of classical logic follow. The main point of the modal logic NL is to tune the law of excluded middle so that both φ and its negation ¬φ are allowed to be false in case a paradox provably arises. Curry's paradox is resolved differently from the other paradoxes, but it too is resolved in modal logic NL. These changes allow for unrestricted comprehension and naïve set theory, and allow us to justify the use of common language in a formal sense.
This paper analyzes important elements in the present-day reception of Hegel’s philosophy. To that end, we discuss how analytic philosophy has received Hegel’s philosophy, reconstructing that reception through the authors who were central to this movement of reception of, and distancing from, his philosophy, namely Bertrand Russell, Frege, and Wittgenstein. Another central aim of this paper is to review Paul Redding’s book Analytic Philosophy and the Return of Hegelian Thought in light of the reception of Hegel by analytic philosophy developed here. Finally, we show how a dialogue between these apparently opposing currents can be productive.
For many years, national and international science organizations have recommended the inclusion of philosophy, history, and ethics courses in science curricula at universities. Chemists may rightly ask: What is that good for? Don’t primary and secondary school provide enough general education? Do they want us to go back to an antiquated form of higher education? Or do they want us to learn some “soft skills” that can at best improve our eloquence at the dinner table but are entirely useless in our chemical work? Do not take what has been taught to you to be the edifice of science; take it only as a provisional state in the course of the ongoing research process of which your work is meant to become a part. Next, let’s see what kind of philosophy, history, and ethics is needed for chemical research, and what is not.
We propose a new account of calibration according to which calibrating a technique shows that the technique does what it is supposed to do. To motivate our account, we examine an early 20th century debate about chlorophyll chemistry and Mikhail Tswett’s use of chromatographic adsorption analysis to study it. We argue that Tswett’s experiments established that his technique was reliable in the special case of chlorophyll without relying on either a theory or a standard calibration experiment. We suggest that Tswett broke the Experimenters’ Regress by appealing to material facts in the common ground for chemists at the time.
The study of psychological and cognitive mechanisms is an interdisciplinary endeavor, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this paper, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means adopting deliberate metaphysical positions when studying mechanisms that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, determine which variables to manipulate under experimental conditions, and which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since they can easily be led astray if invoked carelessly. On the other hand, if we are cautious in the application of our metaphysical commitments, and careful with the inferences we draw from them, then they can provide new insights into how we might find connections between models and theories of mechanisms that appear incompatible.
It is well known that there is a freedom-of-choice loophole or superdeterminism loophole in Bell’s theorem. Since no experiment can completely rule out the possibility of superdeterminism, it seems that a local hidden variable theory consistent with relativity can never be excluded. In this paper, we present a new analysis of local hidden variable theories. The key is to notice that a local hidden variable theory assumes the universality of the Schrödinger equation, and it permits that a measurement can in principle be undone, in the sense that the wave function of the composite system after the measurement can be restored to the initial state. We propose a variant of the EPR-Bohm experiment with reset operations that can undo measurements. We find that according to quantum mechanics, when Alice’s measurement is undone after she obtained her result, the correlation between the results of Alice’s and Bob’s measurements depends on the time order of these measurements, which may be spacelike separated. Since a local hidden variable theory consistent with relativity requires that relativistically non-invariant relations such as the time order of space-like separated events have no physical significance, this result means that a local hidden variable theory cannot explain the correlation and reproduce all predictions of quantum mechanics even when assuming superdeterminism. This closes the major superdeterminism loophole in Bell’s theorem.
We compare and contrast two distinct approaches to understanding the Born rule in de Broglie-Bohm pilot-wave theory, one based on dynamical relaxation over time (advocated by this author and collaborators) and the other based on typicality of initial conditions (advocated by the ‘Bohmian mechanics’ school). It is argued that the latter approach is inherently circular and physically misguided. The typicality approach has engendered a deep-seated confusion between contingent and law-like features, leading to misleading claims not only about the Born rule but also about the nature of the wave function. By artificially restricting the theory to equilibrium, the typicality approach has led to further misunderstandings concerning the status of the uncertainty principle, the role of quantum measurement theory, and the kinematics of the theory (including the status of Galilean and Lorentz invariance). The restriction to equilibrium has also made an erroneously-constructed stochastic model of particle creation appear more plausible than it actually is. To avoid needless controversy, we advocate a modest ‘empirical approach’ to the foundations of statistical mechanics. We argue that the existence or otherwise of quantum nonequilibrium in our world is an empirical question to be settled by experiment.
Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond if you were to discover one of these robots attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?
Extended cognition occurs when cognitive processes extend beyond the brain and nervous system of the subject and thereby come properly to include such ‘external’ devices as technology. This paper explores what relevance extended cognitive processes might have for humility, and especially for the specifically cognitive aspect of humility—viz., intellectual humility. As regards humility in general, it is argued that there are no in-principle barriers to extended cognitive processes helping to enable the development and manifestation of this character trait, but that there may be limitations to the extent to which one’s manifestation of humility can be dependent upon these processes, at least insofar as we follow orthodoxy and treat humility as a virtue. As regards the cognitive trait of intellectual humility in particular, the question becomes whether this can itself be an extended cognitive process. It is argued that this wouldn’t be a plausible conception of intellectual humility, at least insofar as we treat intellectual humility (like humility in general) as a virtue.
Sextus Empiricus was a Pyrrhonian Skeptic living probably in the
second or third century CE, many of whose works survive, including the
Outlines of Pyrrhonism, the best and fullest account we have
of Pyrrhonian skepticism (a kind of skepticism named for Pyrrho (see
entry on Ancient Skepticism)). Pyrrhonian skepticism involves having no beliefs
about philosophical, scientific, or theoretical matters—and
according to some interpreters, no beliefs at all, period. Whereas
modern skepticism questions the possibility of knowledge, Pyrrhonian
skepticism questions the rationality of belief: the Pyrrhonian skeptic
has the skill of finding for every argument an equal and opposing
argument, a skill whose employment will bring about suspension of
judgment on any issue which is considered by the skeptic.
Epistemic friction, as Gila Sher conceives it, is one of the two principal requirements on knowledge, the other requirement being epistemic freedom. Sher sees these requirements as universal: they apply to all areas of our knowledge, ordinary everyday knowledge as well as logical, scientific, and philosophical knowledge. Epistemic freedom, according to Sher, is freedom to “set up our epistemic goals, … devise strategies, make practical and theoretical decisions” (3). The study of this requirement Sher defers to a sequel volume (though she has a good bit to say about it in the present volume). Her concern in Epistemic Friction is with the requirement named in the title. A central friction requirement, according to Sher, is groundedness in the world, which she explains thus: “Groundedness in the world is veridicality.”
Pythagoreanism is the very surprising view that “all is number”. If Pythagoreanism is true, then when Ernie asserted that a certain episode of Sesame Street was brought to you by the number three, his assertion’s bizarre implication that the episode in question was brought to you by some number or other is true. (Of course he may still have been wrong about which number.) Very surprising indeed. Could Pythagoreanism possibly be true? And why in the world would anyone believe it? Those are good questions. But in §1 I first try to get clear on what the view is. As it turns out, there are actually several views that are all reasonable ways to precisify the basic Pythagorean idea. Then, I return to the good questions. In §2 I try to understand why in the world anyone would believe at least some version of Pythagoreanism. And in §3 I try to determine whether any version of Pythagoreanism could possibly be true. Interestingly, the best objections I uncover in §3 have no application to the versions that in §2 I argue we have some reason to believe.
It's interesting to compare the ways we talk and think about political vs non-political (civic/philanthropic or market) agents, advocacy, and organization. Consider the common objection to Effective Altruism, that it allegedly "neglects the need for systemic change." …
Since each of those acts plausibly fulfils the instruction, anyone trying to say something summary about what substantial features they share has a problem. The profusion and diversity of imagination’s putative kinds, roles, and capabilities might well lead you to think that nothing interesting or important unites them. Nonetheless, much recent work implicitly shares a quite general approach to imaginative phenomena: the imitation theory, according to which imaginative experiences are imitations of other experiences, and the attitudes they involve are likewise imitations of counterpart attitudes.
This paper concerns the first section of the fifty-fourth of Francisco Suárez’s Metaphysical Disputations (DM). At this point in the Metaphysical Disputations, all we know is that beings of reason are not real (DM 1.1.4–6, XXV, 3a–4a; 54, prol.1, XXVI, 1015a). So the first question of DM 54.1 is this: are there beings that are not real? At first glance this question seems absurd. If something is a being, how could it fail to be real? The first position reported by Suárez takes just this line. According to this negative position, a being of reason is made up [fictum], just as Pegasus is made up. But clearly such things do not have being: “it is a contradiction to say that there is such a being, since what is only made up [fingitur] does not have being [non est]” (DM 54.1.2, XXVI, 1015b).
One of the central debates in contemporary metaphysics is the debate about the persistence of substances through time. One of the most popular views in this debate is four-dimensionalism, according to which substances persist through time by having different temporal parts at different times.
Adam of Wodeham (c. 1295–1358) was one of the most significant
philosophers and theologians working at Oxford in the second quarter
of the fourteenth century. A student of Ockham, Wodeham is best known
for his theory of the complexe significabile and his
distinctively English approach to questions of philosophical theology. His philosophy and theology were influential throughout the late
medieval and early modern periods.
This paper portrays the later Wittgenstein’s conception of contradictions and his therapeutic approach to them. I will focus on, and give particular weight to, the Lectures on the Foundations of Mathematics (LFM 1976) and the Remarks on the Foundations of Mathematics (RFM 2001). First, I will explain why Wittgenstein’s attitude towards contradictions is rooted in: (a) a rejection of the debate about realism and anti-realism in mathematics; and (b) Wittgenstein’s endorsement of logical pluralism. Then, I will explain Wittgenstein’s therapeutic approach towards contradictions, and why it means that a contradiction is not a problem for logic and mathematics. Rather, contradictions are problematic when we do not know what to infer from them. Once a meaning is established through a new rule of inference, the contradiction becomes a usable expression like many others in our inferential apparatus. Thus, the apparent problem is dissolved. Finally, I will take three examples of dissolved contradictions from Wittgenstein to clarify his notion further. I will conclude by considering why his position on contradictions led him to clash with Alan Turing, and whether the latter was convinced by the Wittgensteinian proposal.
While most surveys, defenses, and critiques of embodied cognition proceed by treating it as a neatly delineated claim, such an approach soon becomes problematic due to the inherent plurality of this perspective on cognition. Embodied cognition is best treated as a research tradition, not as a single theory. This tradition has evolved in opposition to a certain kind of cognitive science, usually dubbed “cognitivism”. Cognitivism is typically characterized as a view that cognition may be fully explained in terms of transformations of mental representations, most commonly amodal symbols. The methodological and ontological commitments of embodied cognition follow research exemplars found in embodied cognitive linguistics, grounded cognition, ecological psychology, dynamical study of development, or neurophenomenology. Due to its inherent variety, this research tradition is not reducible to a single theory of cognitive phenomena (or to a single component subtradition). At the same time, all of these subtraditions share one feature: they reject cognitivism, in one way or another. They also feature fairly similar research heuristics for the discovery of how cognitive mechanisms work.
Many physical theories characterize their observables with unlimited precision. Non-fundamental theories do so needlessly: they are more precise than they need to be to capture the matters of fact about their observables. A natural expectation is that a truly fundamental theory would require unlimited precision in order to exhaustively capture all of the fundamental physical matters of fact. I argue against this expectation and I show that there could be a fundamental theory with limited precision.
Gödel's ontological proof is interpreted in a logically clear and sensible way, without empirical or theological implications, rendering it mostly tautological under this interpretation. Gödel's ontological argument thus cannot be said to prove the existence of God. The real value of Gödel's ontological proof lies in its modal collapse consequence.
Roderick Milton Chisholm is widely regarded as one of the most
creative, productive, and influential American philosophers of the
20th Century. Chisholm worked in epistemology,
metaphysics, ethics, philosophy of language, philosophy of mind, and
other areas. His work constitutes a grand philosophical system
somewhat in the manner of Leibniz or Descartes. Chisholm
continually refined — and sometimes utterly revised — his
views. He was a prolific writer. The bibliography of his
written work in [LLP] contains citations of 320 items, including
journal articles, reviews, and books. His work in epistemology
alone would probably guarantee his position as an outstanding figure.
Jacques Lefèvre d’Étaples (c. 1450–1536)
taught philosophy at the University of Paris from around 1490 to 1508,
and then applied his erudition and textual scholarship to biblical
studies and religious reform. Lefèvre traveled to Italy in
1491, 1500, and 1507. There he sought out Ermolao Barbaro, Giovanni
Pico della Mirandola, Marsilio Ficino, Angelo Poliziano, and other
famous humanists. He himself became famous for the many introductions,
commentaries, and editions relating to philosophical works he
published in Paris. These repackaged the full range of philosophical
studies, from his early interests in mathematics and natural magic, to
the entire curriculum of university logic, natural philosophy, moral
philosophy, and metaphysics.
Russellian monism is a theory in the metaphysics of mind, on which a
single set of properties underlies both consciousness and the most
basic entities posited by physics. The theory is named for Bertrand
Russell, whose views about consciousness and its place in nature were
informed by a structuralist conception of theoretical physics. On such
a structuralist conception, physics describes the world in terms of
its spatiotemporal structure and dynamics (changes within that
structure) and says nothing about what, if anything, underlies that
structure and dynamics. For example, as it is sometimes put, physics
describes what mass and charge do, e.g., how they dispose
objects to move toward or away from each other, but not what mass and charge are.