Charlie Dunbar Broad (1887–1971) was an English philosopher who for
the most part of his life was associated with Trinity College,
Cambridge. Broad’s early interests were in science and
mathematics. Despite being successful in these, he came to believe
that he would never be a first-rate scientist, and turned to
philosophy.

Broad’s interests were exceptionally wide-ranging. He devoted his
philosophical acuity to the mind-body problem; to the nature of
perception, memory, introspection, and the unconscious; and to the
nature of space, time, and causation. He also wrote extensively on the
philosophy of probability and induction, ethics, the history of
philosophy and the philosophy of religion.
After a sketch of the optimism and high aspirations of History and Philosophy of Science when I first joined the field in the mid-1960s, I go on to describe the disastrous impact of "the strong programme" and social constructivism on the history and sociology of science. Despite Alan Sokal's brilliant spoof article, and the "science wars" that flared up partly as a result, the whole field of Science and Technology Studies (STS) is still adversely affected by social constructivist ideas. I then spell out how, in my view, STS ought to develop. It is, to begin with, vitally important to recognize the profoundly problematic character of the aims of science: substantial, influential, and highly problematic metaphysical, value, and political assumptions are built into these aims. Once this is appreciated, it becomes clear that we need a new kind of science which subjects problematic aims, and the problematic assumptions inherent in them, to sustained imaginative and critical scrutiny as an integral part of science itself, in an attempt to improve the aims and methods of science as science proceeds. The upshot is that science, STS, and the relationship between the two are all transformed. STS becomes an integral part of science itself, and part of an urgently needed campaign to transform universities so that they become devoted to helping humanity create a wiser world.
Kantian philosophy of space, time, and gravity is significantly affected in three ways by particle physics. First, particle physics deflects Schlick’s General Relativity-based critique of synthetic a priori knowledge. Schlick argued that since geometry was not synthetic a priori, nothing was—a key step toward logical empiricism. Particle physics suggests a Kant-friendlier theory of space-time and gravity, massive spin-2 gravity, which presumably approximates General Relativity arbitrarily well while retaining a flat space-time geometry that is indirectly observable at large distances. The theory’s roots include Seeliger and Neumann in the 1890s and Einstein in 1917, as well as physics of the 1920s-30s. Such theories have seen renewed scientific attention since 2000, and especially since 2010, due to breakthroughs addressing technical difficulties identified in the early 1970s.
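The Seeliger-Neumann-style modification mentioned above is, in modern terms, a Yukawa-type screening of the Newtonian potential, with the graviton mass m setting the screening length (a standard textbook form, supplied here for illustration rather than quoted from this abstract):

```latex
V(r) = -\frac{G m_1 m_2}{r}\, e^{-r/\lambda}, \qquad \lambda = \frac{\hbar}{m c},
```

with the Newtonian 1/r potential, and with it the familiar weak-field behavior, recovered in the limit m → 0 (λ → ∞). This is what licenses the claim that such a theory can approximate General Relativity arbitrarily well at accessible distances.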
Résumé : Cet article cherche à montrer comment la pratique mathématique, particulièrement celle admettant des représentations visuelles, peut conduire à de nouveaux résultats mathématiques. L’argumentation est basée sur l’étude du cas d’un domaine des mathématiques relativement récent et prometteur : la théorie géométrique des groupes. L’article discute comment la représentation des groupes par les graphes de Cayley rendit possible la découverte de nouvelles propriétés géométriques de groupes. Abstract: The paper aims to show how mathematical practice, in particular practice involving visual representations, can lead to new mathematical results. The argument is based on a case study from a relatively recent and promising mathematical subject—geometric group theory. The paper discusses how the representation of groups by Cayley graphs made it possible to discover new geometric properties of groups.
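As a minimal illustration of the construction the abstract refers to (my own sketch, not taken from the paper): a Cayley graph connects each group element g to the elements g·s for each generator s. For the cyclic group Z/6 with generating set {1, 5}, the result is a 6-cycle.

```python
# Sketch: Cayley graph of Z/6 with generating set {1, 5} (i.e. +1 and -1),
# represented as an adjacency dictionary. Illustrative only.

def cayley_graph(elements, generators, op):
    """Map each vertex g to the sorted list of its neighbors g*s, one per generator s."""
    return {g: sorted(op(g, s) for s in generators) for g in elements}

Z6 = range(6)
graph = cayley_graph(Z6, [1, 5], lambda a, b: (a + b) % 6)

# Every vertex has degree equal to the number of generators;
# with generators {1, 5} the graph is a cycle on six vertices.
print(graph)
```

The geometric properties the paper discusses arise when such graphs, equipped with the word metric, are studied for infinite groups; the finite example above only shows the basic construction.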
The idea that a serious threat to scientific realism comes from unconceived alternatives has been proposed by van Fraassen, Sklar, Stanford and Wray among others. Peter Lipton’s critique of this threat from underconsideration is examined briefly in terms of its logic and its applicability to the case of space-time and particle physics. The example of space-time and particle physics indicates a generic heuristic for quantitative sciences for constructing potentially serious cases of underdetermination, involving a one-parameter family of rivals Tm (m real and small) that work as a team rather than as a single rival against the default theory T. In important examples this new parameter has a physical meaning (e.g., particle mass) and makes a crucial conceptual difference, shrinking the symmetry group and in some cases putting gauge freedom, formal indeterminism vs. determinism, the presence of the hole argument, etc., at risk. Methodologies akin to eliminative induction or tempered subjective Bayesianism are more demonstrably reliable than the custom of attending only to “our best theory”: they can lead either to a serious rivalry or to improved arguments for the favorite theory. The example of General Relativity (massless spin 2 in particle physics terminology) vs. massive spin 2 gravity, a recent topic in the physics literature, is discussed. Arguably the General Relativity and philosophy literatures have ignored the most serious rival to General Relativity.
In this critical notice of Robert Wright’s The Evolution of God, we focus on the question of whether Wright’s God can be said to be an adaptation in a well-defined sense. We thus evaluate the likelihood of different models of adaptive evolution of cultural ideas at their different levels of selection. Our result is an emphasis on the plurality of mechanisms that may lead to adaptation. By way of conclusion we epistemologically assess some of Wright’s more controversial claims concerning the directionality of evolution and moral progress.
Reputation monitoring and the punishment of cheats are thought to be crucial to the viability and maintenance of human cooperation in large groups of non-kin. However, since the cost of policing moral norms must fall to those in the group, policing is itself a public good subject to exploitation by free riders. Recently, it has been suggested that belief in supernatural monitoring and punishment may discourage individuals from violating established moral norms and so facilitate human cooperation. Here we use cross-cultural survey data from a global sample of 87 countries to show that beliefs about two related sources of supernatural monitoring and punishment — God and the afterlife — independently predict respondents' assessment of the justifiability of a range of moral transgressions. This relationship holds even after controlling for frequency of religious participation, country of origin, religious denomination and level of education. As well as corroborating experimental work, our findings suggest that, across cultural and religious backgrounds, beliefs about the permissibility of moral transgressions are tied to beliefs about supernatural monitoring and punishment, supporting arguments that these beliefs may be important promoters of cooperation in human groups.
Plato’s Sophist and Statesman use a notion of a model (paradeigma) quite different from the one with which we are familiar from dialogues like the Phaedo, Parmenides, and Timaeus. In those dialogues a paradeigma is a separate Form, an abstract perfect particular, whose nature is exhausted by its own character. Its participants are conceived as likenesses or images of it: they share with the Form the same character, but they also fall short of it because they exemplify not only that character but also its opposite. Mundane beautiful objects are plagued by various sorts of relativity—Helen is beautiful compared to other women, but not beautiful compared to a goddess; she is beautiful in her physical appearance, but not in her soul or her actions; she is beautiful in your eyes, but not in mine, and so on. The Form of the Beautiful, which is supposed to explain her beauty, is simply and unqualifiedly beautiful (Symp. 210e5-211d1).
In its most abstract form, an ontology is an account of fundamental degrees of freedom in nature. The metaphysician asks, what are the independently varying components of nature, their internal degrees of freedom and the configurations they can assume? The rationalist metaphysician supposes that we have some form of rational insight into the nature of reality. The naturalistic metaphysician relies on observation and experiment. Her task is to infer ontology from data. Given an ontology and a set of laws, one can generate a range of possible behavior, so the naturalistic metaphysician faces an inverse problem: how does she infer backwards from a range of observed behavior to underlying ontology?
Some proponents of ‘experimental philosophy’ criticize philosophers’ use of thought experiments on the basis of evidence that the verdicts vary with truth-independent factors. However, their data concern the verdicts of philosophically untrained subjects. According to the expertise defence, what matters are the verdicts of trained philosophers, who are more likely to pay careful attention to the details of the scenario and track their relevance. In a recent paper, Jonathan Weinberg and others reply to the expertise defence that there is no evidence for such expertise. I reply to them in this paper, arguing that they have misconstrued the dialectical situation. Since they have produced no evidence that philosophical training is less efficacious for thought experimentation than for other cognitive tasks for which they acknowledge that it produces genuine expertise, such as informal argumentation, they have produced no evidence for treating the former more sceptically than the latter.
Scholars disagree about the nature of the doctrinal apparatus that supports Berkeley’s case for passive obedience to the sovereign. Is he a rule-utilitarian, or natural law theorist, or ethical egoist, or some combination of some or all these elements? Here I argue that Berkeley is an act-utilitarian who thinks that one is more likely to act rightly by following certain sorts of rules. I also argue that Berkeley mischaracterizes and misevaluates Locke’s version of the social contract theory. Finally, I consider the potentially practically self-defeating nature of Berkeley’s claim that there is no obligation to submit to the rule of “madmen” or “usurpers”.
This book responds to a long-standing question about Pyrrhonian skepticism: whether the skeptics have any kind of beliefs. I address this topic in three steps. First, the question about the skeptic’s belief asks what goes on in the skeptic’s mind. Things look a certain way to the skeptics; skeptics think about things and they move through the world without, for example, bumping into walls when they leave a room. I argue that this kind of mental life does not involve beliefs, understood as judgments or truth-claims. Second, the question about the skeptic’s beliefs concerns language. Assertions are often thought of as the linguistic counterpart of beliefs: something is said to be the case. If the skeptic’s mental life looks as I reconstruct it, the skeptics need a non-assertoric language and, I argue, it is a substantial part of the skeptical project to develop this language. This side of skepticism has not received much attention in earlier publications; it is a crucial part of my analysis that the skeptics’ practices are importantly linguistic practices. Third, the question about the skeptic’s beliefs is a question about agency. Action is often thought to involve judgments or beliefs, about what is valuable or to be done on the one hand, but also about the context in which an action takes place and the situation to which it responds. If the skeptics do not form beliefs, how can they act?
This paper examines a constellation of ethical and editorial issues that have arisen since philosophers started to conduct, submit, and publish empirical research. These issues encompass concerns over responsible authorship, fair treatment of human subjects, the ethicality of experimental procedures, availability of data, unselective reporting, and the publishability of research findings. This study aims to assess whether the philosophical community has successfully addressed these issues. To do so, the instructions for authors, submission processes, and published research papers of 29 leading philosophy journals were examined and analyzed. In light of the evidence reported here, it is argued that the philosophical community has so far failed to tackle these issues properly. The paper also offers some recommendations for authors, reviewers, and editors in the field.
Electromagnetism is one of the oldest natural phenomena studied by modern science. The laws of electromagnetism gradually evolved from various experimental observations. In this process, Coulomb’s law was the first to establish the 1/r² dependence of the force between two charges. Then Faraday discovered the induction of voltage by a changing magnetic field, and Ampère quantified the magnetic field generated by an electric current. Finally, Maxwell introduced the concept of the displacement current (a time-varying electric field giving rise to a magnetic field) and wrote all the laws of electromagnetism in an elegant form commonly known as Maxwell’s equations. In differential form, these equations are given by:
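In SI units, the four equations read:

```latex
\begin{align}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{align}
```

Here E and B are the electric and magnetic fields, ρ the charge density, and J the current density; the final term is Maxwell's displacement current described in the passage above.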
As its title indicates, this book is about two kinds of properties of perceiving subjects: their phenomenal properties, and their representational properties. In particular, it focuses on three questions: What are phenomenal properties? What are representational properties? What is the relationship between phenomenal and representational properties? My answers to these questions are guided by two ideas, which have both been around for a long time. The first is that experience is transparent, in the sense that attention to one’s perceptual experiences is, or is intimately involved with, attention to the objects and properties those experiences present as in one’s environment. Though the label is due to Moore, versions of this idea can be found in earlier philosophers as well, and it has played a central role in recent work in the philosophy of perception.
Aristotle’s logic, especially his theory of the syllogism, has
had an unparalleled influence on the history of Western thought. It
did not always hold this position: in the Hellenistic period, Stoic
logic, and in particular the work of Chrysippus, took pride of place. However, in later antiquity, following the work of Aristotelian
Commentators, Aristotle’s logic became dominant, and
Aristotelian logic was what was transmitted to the Arabic and the
Latin medieval traditions, while the works of Chrysippus have not
survived. This unique historical position has not always contributed to the
understanding of Aristotle’s logical works.
Behind the various Christian ideas about heaven and hell lies the more
basic belief that our lives extend beyond the grave (see the entry on
afterlife). For suppose that our lives do not extend beyond the grave. In
addition to excluding a variety of ideas about reincarnation and
karma, this would also preclude the very possibility of future
compensation of any kind for those who experience horrendous
evil during their earthly lives. Indeed, despite their profound
differences, many Christians (though perhaps not all) and many
atheists can presumably agree on one thing at least. If a young girl
should be brutally raped and murdered and this should be the end of
the story for the child, then a supremely powerful, benevolent, and
just God would not exist.
It is not news that we often make discoveries or find reasons for a mathematical proposition by thinking alone. But does any of this thinking count as conducting a thought experiment? The answer to that question is “yes”, but without refinement the question is uninteresting. Suppose you want to know whether the equation 8x + 12y = 6 has a solution in the integers. You might mentally substitute some integer values for the variables and calculate. In that case you would be mentally trying something out, experimenting with particular integer values, in order to test the hypothesis that the equation has no solution in the integers. Not getting a solution first time, you might repeat the thought experiment with different integer inputs.
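The trial-and-error procedure described above can in fact be short-circuited: a linear equation ax + by = c has integer solutions exactly when gcd(a, b) divides c. A small sketch (my own illustration, not from the text):

```python
# Illustration: 8x + 12y = 6 has an integer solution iff gcd(8, 12) divides 6.
# Any value 8x + 12y is a multiple of gcd(8, 12) = 4, and 4 does not divide 6.
from math import gcd

def has_integer_solution(a, b, c):
    """Return True iff ax + by = c is solvable in the integers."""
    return c % gcd(a, b) == 0

print(has_integer_solution(8, 12, 6))   # → False
```

This is the general fact that the mental experiments with particular values are probing: every trial with integer x and y must yield a multiple of 4, so no trial can ever produce 6.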
In this paper we argue that the different positions taken by Dyson and Feynman on the representational role of Feynman diagrams depend on different styles of scientific thinking. We begin by criticizing the idea that Feynman diagrams can be considered pictures or depictions of actual physical processes. We then show that the best interpretation of the role they play in quantum field theory and quantum electrodynamics is captured by Hughes' Denotation, Demonstration, and Interpretation (DDI) account of models, where “models” are to be interpreted as inferential, non-representational devices constructed in given social contexts by the community of physicists.
The Univalent Foundations (UF) offer a new picture of the foundations of mathematics largely independent of set theory. In this paper I will focus on the question of whether Homotopy Type Theory (HoTT) (as a formalization of UF) can be justified intuitively as a theory of shapes in the same way that ZFC (as a formalization of set-theoretic foundations) can be justified intuitively as a theory of collections. I first clarify what I mean by an “intuitive justification” by distinguishing between formal and pre-formal “meaning explanations” in the vein of Martin-Löf. I then explain why Martin-Löf’s original meaning explanation for type theory no longer applies to HoTT. Finally, I outline a pre-formal meaning explanation for HoTT based on spatial notions like “shape”, “path”, “point” etc. which in particular provides an intuitive justification of the axiom of univalence. I conclude by discussing the limitations and prospects of such a project.
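For reference, the axiom of univalence can be stated compactly (a standard formulation, not specific to this paper): for types A and B in a universe 𝒰, the canonical map sending an identification of types to an equivalence between them,

```latex
\mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B),
```

is itself an equivalence. Informally, identity of types coincides with equivalence of types, and on the spatial reading sketched above a path between two shapes in the universe is the same thing as an equivalence between them.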
Born’s rule, which interprets the squared modulus of the wave function as the probability of obtaining a specific value in a measurement, has been accepted as a postulate in the foundations of quantum mechanics. Although there have been many attempts at deriving this rule theoretically using different approaches, such as the frequency operator approach, many-worlds theory, Bayesian probability, and envariance, the literature shows that the arguments in each of these methods are circular. In view of the absence of a convincing theoretical proof, some researchers have recently carried out experiments to validate the rule up to the maximum possible accuracy using multi-order interference (Sinha et al., Science, 329, 418). But a convincing analytical proof of Born’s rule would make us understand the basic process responsible for the exact square dependence of probability on the wave function. In this paper, by generalizing the method of calculating probability in common experience into quantum mechanics, we prove Born’s rule for the statistical interpretation of the wave function.
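As a minimal numerical illustration of the rule under discussion (my own sketch, not the paper's derivation): for a state with normalized complex amplitudes c_i, Born's rule assigns outcome i the probability |c_i|², and these probabilities sum to 1.

```python
# Born's rule illustration: outcome probabilities are the squared moduli
# of the (normalized) complex amplitudes of the state vector.
import math

amplitudes = [1 / math.sqrt(2), 1j / math.sqrt(2)]   # equal superposition of two outcomes
probs = [abs(c) ** 2 for c in amplitudes]

print(probs)                              # each outcome has probability ≈ 0.5
assert math.isclose(sum(probs), 1.0)      # normalization: probabilities sum to 1
```

The point at issue in the abstract is precisely why the exponent is 2 and not, say, 3; the computation above merely states the rule, it does not justify it.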
In this paper I argue that the consistency condition from Deutsch’s influential model of closed timelike curves (CTCs) differs significantly from the classical consistency condition found in Lewis and Novikov, as well as from the consistency condition found in the P-CTC model, the major rival to Deutsch’s approach. Both the classical consistency condition and the P-CTC consistency condition are formulable in the context of a single history of the world; Deutsch’s consistency condition relies on the existence of a structure of parallel worlds. I argue that Deutsch’s commitment to realism about parallel worlds puts his solutions to the information paradox in jeopardy: because of this commitment to his metaphysical picture, he is committed to the existence of physical situations that are in every way indistinguishable from the paradoxes he adopts the model to rule out in the first place. Deutsch’s proposed solution to the Knowledge Paradox, in particular his commitment to the actuality of the many worlds of the Everett interpretation (on which he relies to solve the paradoxes), guarantees the existence of worlds that are indistinguishable from worlds in which the genuine Knowledge Paradox arises.
Heaven is a place where at least some of us go after we die. There, it is said, we live forever in the immediate presence of God. During our natural lives, God remains distant: we cannot perceive him, or at least not in any obvious or direct way. Observant and intellectually honest people can be entirely unaware that there is any sort of divine being. But in Heaven it is no more possible to be unaware of the divine being than for someone walking in the Sahara desert on a summer’s day to be unaware of the sun. This eternal life in the presence of God is taken to be the best possible state for a human being, and attaining it is the chief goal of Muslims, Christians, and many other religious people.
Welcome again to the Prosblogion Virtual Colloquium! This week’s paper is “Proportionality, Maximization, and the Highest Good” by Craig E. Bacon. Bacon is a PhD candidate at the University of South Carolina. …
To have free will is to have what it takes to act freely. When an
agent acts freely—when she exercises free will—it is up to
her whether she does one thing or another on that occasion. A
plurality of alternatives is open to her, and she determines which she
pursues. When she does, she is an ultimate source or origin of her
action. So runs a familiar conception of free will. Incompatibilists hold that we act freely in this sense only if
determinism is false. Some say little more about what, besides
indeterminism, free will requires. And, indeed, the task of providing
an incompatibilist account is not an easy one.
Ramon Llull (1232–1316) is an amazing figure in the field of
philosophy during the Middle Ages. He is currently recognized as the
author of the Ars Magna, a combinatorial logical system to discover
the truth, conceived as an instrument to be used in interfaith
dialogue to convert infidels. In the Ars, Llull’s
theological, metaphysical, and logical conceptions are amply
illustrated, and they were developed throughout his more than 200
written works in Catalan, Arabic, and Latin. He is known for being
among the first authors to use his vernacular language, Catalan, to
communicate his thought.
[Editor's Note: The following new entry by Vittoria Perrone Compagni
replaces the former entry on this topic by the previous author.] The intellectual biography of Heinrich Cornelius Agrippa von
Nettesheim (1486–1535) provides us with significant proof of a
cultural crisis in the Renaissance. The most striking aspect of his
heritage is the seemingly paradoxical coexistence of a comprehensive
treatise on magic and occult arts, De occulta philosophia libri
tres (Three Books on Occult Philosophy), written in
1510, but then reworked, substantially enlarged, and finally published
in 1533, and a rigorous refutation of all products of human reason,
De incertitudine et vanitate scientiarum et artium atque
excellentia verbi Dei declamatio invectiva (On the
Uncertainty and Vanity of the Arts and Sciences: An Invective
Declamation), printed in 1530.
The ethical writings of the Oxford Idealists, T. H. Green and
F. H. Bradley, reflect the influence of Kant and Hegel on English
moral philosophy in the latter part of the Nineteenth Century. To the
extent that either draws on other sources it is to Aristotle that they
turn rather than to British moral philosophers such as Butler, Hume or
Reid; a point which is evident both from the fact that Green and
Bradley offer a type of perfectionist account of morality that is
articulated in terms of the concept of self-realization and from the
appearance of Aristotle's man of practical wisdom (the
phronimos) in the fifth essay of Bradley's Ethical Studies.
Of knowledge naught remained I did not know,
Of secrets, scarcely any, high or low;
All day and night for three score and twelve years,
I pondered, just to learn that naught I know. (Rubā‘iyyāt, Sa‘idī
1991, p. 125)
Umar Khayyam was a polymath, scientist, philosopher, and poet of
the 11th century CE. Whereas his mathematical works and
poetry have been the subject of much discussion, his recently edited
and published philosophical works have remained a largely neglected
area of study. In what follows, we shall review and comment on the
salient features of Khayyam’s poetry and philosophy, their
relationship with one another, and Khayyam’s pioneering views on
Robert Holkot, OP (d. 1349) belonged to the first generation of
scholars to absorb and develop the views of William Ockham. He is
particularly known for his “covenantal theology” and his
views on human freedom within the framework of a divine command
ethics. He developed an original theology grounded in Ockham’s
logic and metaphysics, and his works were influential into the