Hardin’s (1988) empirically grounded argument for color eliminativism has defined the color realism debate for the last thirty years. By Hardin’s own estimation, phenomenal structure – the unique/binary hue distinction in particular – poses the greatest problem for color realism. Examination of relevant empirical findings shows that claims about the unique hues which play a central role in the argument from phenomenal structure should be rejected. Chiefly, contrary to widespread belief amongst philosophers and scientists, the unique hues do not play a fundamental role in determining all color appearances. Among the consequences of this result is that greater attention should be paid to certain proposals for putting the structure of phenomenal color into principled correspondence with surface reflectance properties. While color realism is not fully vindicated, it has much greater empirical plausibility than previously thought.
When scientists seek further confirmation of their results, they often attempt to duplicate the results using diverse means. To the extent that they are successful in doing so, their results are said to be ‘robust’. This article investigates the logic of such ‘robustness analysis’ (RA). The most important and challenging question an account of RA can answer is what sense of evidential diversity is involved in RAs. I argue that prevailing formal explications of such diversity are unsatisfactory. I propose a unified, explanatory account of diversity in RAs. The resulting account is, I argue, truer to actual cases of RA in science; moreover, this account affords us a helpful new foothold on the logic undergirding RAs.
Ruetsche () claims that an abstract C*-algebra of observables will not contain all of the physically significant observables for a quantum system with infinitely many degrees of freedom. This would signal that in addition to the abstract algebra, one must use Hilbert space representations for some purposes. I argue to the contrary that there is a way to recover all of the physically significant observables by purely algebraic methods.
Symmetry plays a number of central roles in modern physics. As the physicist Philip Anderson famously remarked, “it is only slightly overstating the case to say that physics is the study of symmetry” (1972, p. 394). Here I discuss just one role of symmetry: its use as a guide to superfluous structure, with a particular eye on its application to metaphysics. What is symmetry? Generally speaking, a symmetry is an operation that leaves its object unchanged in a certain respect. Rotation by 90 degrees is a symmetry of a square piece of paper, insofar as the paper’s extension through space is the same after the rotation as before. But we will focus on symmetries of physical theories, not paper. Roughly speaking, these are operations on possible physical systems that leave some aspect of the theory unchanged. Which aspect? That depends: different symmetries preserve different aspects. But an important class of symmetries are those that leave the dynamical laws of the theory unchanged; these are known as dynamical symmetries.
(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)
Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. …
Malebranche argues that ideas are representative beings existing in God. He defends this thesis by an inference to the best explanation of human perception. It is well known that Malebranche’s theory of vision in God was forcefully rejected by philosophers such as Arnauld, Locke, and Berkeley. However, the notion that ideas exist in God was not the only controversial aspect of Malebranche’s approach. Another controversy centered around Malebranche’s view that ideas are to be understood as posits in an explanatory theory. Opponents of this approach, including Arnauld and Locke, held that our talk about ideas was not explanatory but instead merely descriptive: we use the word ‘idea’ to describe phenomena that we observe by reflecting on our own minds. This controversy has not received much attention from scholars, but in the present paper I will show that it was an explicit and important subject of concern for Malebranche, Arnauld, Locke, and Berkeley and that attention to this controversy can illuminate several aspects of these philosophers’ work.
One of the main themes of my book about Carnap is that a decisive component of the original motivation first to write the Aufbau and then to push forward to the radical pluralism of the Syntax (and beyond) was Carnap’s diagnosis of the political situation immediately after the First World War in Germany. …
Ontological arguments like those of Gödel (1995) and Pruss (2009; 2012) rely on premises that initially seem plausible, but on closer scrutiny are not. The premises have modal import that is required for the arguments but is not immediately grasped on inspection, and which ultimately undermines the simpler logical intuitions that make the premises seem plausible. Furthermore, the notion of necessity that they involve goes unspecified, and yet must go beyond standard varieties of logical necessity. This leaves us little reason to believe the premises, while their implausible existential import gives us good reason not to.
The Gelukpa (or Geluk) tradition of Tibetan Buddhist philosophy is
inspired by the works of Tsongkhapa (1357–1419), who set out a
distinctly nominalist Buddhist tradition that differs sharply from
other forms of Buddhist thought not only in Tibet, but elsewhere in
the Buddhist world. The negative dialectics of the Middle Way
(madhyamaka) is the centerpiece of the Geluk intellectual
tradition and is the philosophy that is commonly held in Tibet to
represent the highest view. The Middle Way, a philosophy systematized
in the second century by Nāgārjuna, seeks to chart a
“middle way” between the extremes of essentialism and
nihilism with the notion of two truths: the ultimate truth of
emptiness and the relative truth of dependent existence.
Joseph Butler is best known for his criticisms of the hedonic and
egoistic “selfish” theories associated with Hobbes and
Bernard Mandeville and for his positive arguments that self-love and
conscience are not at odds if properly understood (and indeed promote
and sanction the same actions). In addition to his importance as a
moral philosopher, Butler was also an influential Anglican theologian. Unsurprisingly, his theology and philosophy were connected — his
main writings in moral philosophy were published sermons, a work of
natural theology, and a brief dissertation attached to that work. Although most of Butler’s moral arguments make rich use of passages
from scripture and familiar Christian stories and concepts, they make
little reference to — and depend little on the reader having
— any particular religious commitments.
This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Also, Strong Types are extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes.
As part of the week of recognizing R. A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012/2017. The comments from 2017 lead to a troubling issue that I will bring up in the comments today. …
Traditionally philosophical discussions on moral responsibility have
focused on the human components in moral action. Accounts of how to
ascribe moral responsibility usually describe human agents performing
actions that have well-defined, direct consequences. In today’s
increasingly technological society, however, human activity cannot be
properly understood without making reference to technological
artifacts, which complicates the ascription of moral responsibility
(Jonas 1984; Waelbers
2009).
As we interact with and through these artifacts, they affect the
decisions that we make and how we make them (Latour 1992).
Autonomous agents are self-governing agents. But what is a
self-governing agent? Governing oneself is no guarantee that one will
have a greater range of options in the future, or the sort of
opportunities one most wants to have. Since, moreover, a person can
govern herself without being able to appreciate the difference between
right and wrong, it seems that an autonomous agent can do something
wrong without being to blame for her action. What, then, are the
necessary and sufficient features of this self-relation? Philosophers
have offered a wide range of competing answers to this question.
In ‘Freedom and Resentment’ P. F. Strawson argues that reactive attitudes like resentment and indignation cannot be eliminated altogether, because doing so would involve exiting interpersonal relationships altogether. I describe an alternative to resentment: a form of moral sadness about wrongdoing that, I argue, preserves our participation in interpersonal relationships. Substituting this moral sadness for resentment and indignation would amount to a deep and far-reaching change in the way we relate to each other – while keeping in place the interpersonal relationships, which, Strawson rightly believes, cannot be eliminated.
Louis Pierre Althusser (1918–1990) was one of the most
influential Marxist philosophers of the 20th Century. As
they seemed to offer a renewal of Marxist thought as well as to render
Marxism philosophically respectable, the claims he advanced in the
1960s about Marxist philosophy were discussed and debated
worldwide. Due to apparent reversals in his theoretical positions, to
the ill-fated facts of his life, and to the historical fortunes of
Marxism in the late twentieth century, this intense interest in
Althusser’s reading of Marx did not survive the 1970s. Despite the
comparative indifference shown to his work as a whole after these
events, the theory of ideology Althusser developed within it has been
broadly deployed in the social sciences and humanities and has
provided a foundation for much “post-Marxist” philosophy.
In Japan, Confucianism stands, along with Buddhism, as a major religio-philosophical teaching introduced from the larger Asian cultural arena at the dawn of civilization in Japanese history, roughly the mid-sixth century. Unlike Buddhism, which ultimately hailed from India, Confucianism was first and foremost a distinctly Chinese teaching. It spread, however, from Han dynasty China into Korea, and then later entered Japan via, for the most part, the Korean peninsula. In significant respects, then, Confucianism is the intellectual force defining much of the East Asian identity of Japan, especially in relation to philosophical thought and practice.
Giacomo (Jacopo) Zabarella (b. 1533 in Padua, d. 1589 in Padua) is
considered the prime representative of Renaissance Italian
Aristotelianism. Known most of all for his writings on logic and
methodology, Zabarella was an alumnus of the University of Padua,
where he received his Ph.D. in philosophy. Throughout his teaching
career at his native university, he also taught philosophy of nature
and science of the soul (De anima). Among his main works are
the collected logical works Opera logica (1578) and writings
on natural philosophy, De rebus naturalibus (1590). Zabarella
was an orthodox Aristotelian seeking to defend the scientific status
of theoretical natural philosophy against the pressures emanating from
the practical disciplines, i.e., the art of medicine and anatomy.
Modern medicine is often said to have originated with 19th century germ theory, which attributed diseases to particular bacterial contagions. The success of this theory is often associated with an underlying principle referred to as the “doctrine of specific etiology,” which refers to the theory’s specificity at the level of disease causation or etiology. Despite the perceived importance of this doctrine, the literature lacks a clear account of the types of specificity it involves and why exactly they matter. This paper argues that the 19th century germ theory model involves two types of specificity at the level of etiology. One type receives significant attention in the literature, but its influence on modern medicine has been misunderstood. A second type is present in this model, but it has been overlooked in the extant literature. My analysis clarifies how these types of specificity led to a novel conception of etiology, which continues to figure in medicine today.
Karl Leonhard Reinhold (1757–1823), Austrian philosopher and first
occupant of the chair on Critical Philosophy established at the
University of Jena in 1787, first achieved fame as a proponent of
popular Enlightenment and as an early and effective popularizer of the
Kantian philosophy. During his period at the University of Jena
(1787–94), Reinhold proclaimed the need for a more
“scientific” and systematic presentation of the Critical
philosophy, one based upon a single, self-evident first principle. In
an effort to satisfy this need, he expounded his own “Elementary
Philosophy” in a series of influential works between 1789 and
[The following is a guest post by Bob Lockie. — JS] He who says that all things happen of necessity can hardly find fault with one who denies that all happens by necessity; for on his own theory this very argument is voiced by necessity (Epicurus 1964: XL). Lockie, Robert. …
For the past few weeks, people on- and offline have spoken up to question Winston Churchill’s legacy. They generally highlight his racism, his support for the use of concentration camps, his treatment of Ireland, his complicity in the Bengal famine, and more. …
In my book Understanding Scientific Progress (Maxwell 2017), I argue that fundamental philosophical problems about scientific progress, above all the problem of induction, cannot be solved granted standard empiricism (SE), a doctrine which most scientists and philosophers of science take for granted. A key tenet of SE is that no permanent thesis about the world can be accepted as a part of scientific knowledge independent of evidence. For a number of reasons, we need to adopt a rather different conception of science which I call aim-oriented empiricism (AOE). This holds that we need to construe physics as accepting, as a part of theoretical scientific knowledge, a hierarchy of metaphysical theses about the comprehensibility and knowability of the universe, these theses becoming increasingly insubstantial as we go up the hierarchy. Fundamental philosophical problems about scientific progress, including the problems of induction, theory unity, verisimilitude and scientific discovery, which cannot be solved granted SE, can be solved granted AOE.
A proper understanding of the moral and political significance of migration requires a focus on global inequalities. More specifically, it requires a focus on those global inequalities that affect people’s ability to participate in the production of economic goods and non-economic goods (e.g., relationships of intimacy and care, opportunities for self-expression, well-functioning institutions, etc.). We use the term ‘cooperative infrastructures’ for the complex material and immaterial technologies that allow human beings to cooperate in order to generate human goods. By enabling migrants to access high-quality cooperative infrastructures, migration contributes to the diffusion of technical and socio-political innovations; in this way, it positively affects the ability of individuals from poorer countries to participate in the production of human goods. However, migration can also damage the material and immaterial components of the cooperative infrastructures accessible in both host countries and sending countries; these potential downsides of migration should not be ignored, although arguably they can often be neutralized, alleviated, or compensated.
John Austin is considered by many to be the creator of the school of
analytical jurisprudence, as well as, more specifically, the approach
to law known as “legal positivism.” Austin’s particular
command theory of law has been subject to pervasive criticism, but its
simplicity gives it an evocative power that continues to attract
Gertrude Elizabeth Margaret Anscombe was one of the most gifted
philosophers of the twentieth century. Her work continues to strongly
influence philosophers working in action theory and moral
philosophy. Like the work of her friend Ludwig Wittgenstein,
Anscombe’s work is marked by a keen analytic sensibility.
[Editor's Note: The following new entry by Ana María
Mora-Márquez replaces the former entry
on this topic by the previous author.] Simon of Faversham († 1306) was a thirteenth-century scholar,
mainly known as a commentator on Aristotle’s logic and natural
philosophy. He is considered a modist, among other things because of
his use of the notions of modi praedicandi and modi
essendi in his commentary on Aristotle’s
Categories (cf. Marmo 1999). Simon’s work as an
Aristotelian commentator heavily relies on Albert the Great’s
paraphrases on the Aristotelian corpus. Simon’s
question-commentaries often portray key medieval discussions in a
somewhat undeveloped state.
This contribution is devoted to addressing the question as to whether the methodology followed in building/assessing string theory can be considered scientific – in the same sense, say, that the methodology followed in building/assessing the Standard Model of particle physics is scientific – by focusing on the “founding” period of the theory. More precisely, its aim is to argue for a positive answer to the above question in the light of a historical analysis of the early developments of the string theoretical framework. The paper’s main claim is a simple one: there is no real change of scientific status in the way of proceeding and reasoning in fundamental physical research. Looking at the developments of quantum field theory and string theory since their very beginning, one sees the very same strategies at work both in theory building and theory assessment. Indeed, as the history of string theory clearly shows (see Cappelli et al., 2012), the methodology characterising the theoretical process leading to the string idea and its successive developments is not significantly different from the one characterising many fundamental developments in theoretical physics which have been crowned with successful empirical confirmation afterwards (sometimes after a considerable number of years, as exemplified by the story of the Higgs particle).
The central question of my paper is whether there is a coherent logical theory in which truth is construed in epistemic terms and in which also some version of the law of excluded middle is defended. Brentano in his later writings has such a theory. My first question is whether his theory is consistent. I also make a comparison between Brentano’s view and that of an intuitionist at the present day, namely Per Martin-Löf. Such a comparison might provide some insight into what is essential to a theory that understands truth in epistemic terms.