Nicod’s Principle says that the claim that all Fs are Gs is confirmed by each instance. Here’s yet another counterexample. Consider the claim:
All unicorns are male. We take this claim to be true, albeit vacuously so, since there are no unicorns. …
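The vacuous-truth point can be made concrete in code. The sketch below is illustrative only (the variable names are mine, not the author's): Python's `all` returns `True` over an empty iterable, mirroring the logical fact that a universal claim with an empty domain has no counterexamples.

```python
# "All unicorns are male" formalized: for every x, if x is a unicorn,
# then x is male. With no unicorns there is no counterexample, so the
# claim is vacuously true -- and so is "All unicorns are female".

unicorns = []  # sexes of all existing unicorns: an empty list

all_male = all(sex == "male" for sex in unicorns)
all_female = all(sex == "female" for sex in unicorns)

print(all_male, all_female)  # True True
```

That both contrary generalizations come out true is exactly what makes the empty case awkward for Nicod's instance-based picture of confirmation: there are no instances to do the confirming.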
What is the motivational profile of admiration? In this paper I will investigate what form of connection between admiration and motivation there may be good reason to accept. A number of philosophers have advocated a connection between admiration and motivation to emulate. I will start by examining this view and present three problems for it, before suggesting an expanded account of the connection between admiration and motivation, according to which admiration involves motivation to promote the value that is judged to be present in the object of admiration. Finally, I will examine the implications of this account for the use of admiration in education.
For the next two weeks, I’m in Berkeley for the Simons program “Challenges in Quantum Computation” (awesome program, by the way). If you’re in the Bay Area and wanted to meet, feel free to shoot me an email (easiest for me if you come to Berkeley, though I do have a couple planned trips to SF). …
In all those circumstances, we become used to consulting our reason as to what the best course of action is, and to settling on the one that will give us the greatest satisfaction afterwards; thus we acquire the idea of moral evil, that is, of an act that is harmful to others and prohibited by reason. …
Strengthening the prejacent
Posted on Tuesday, 12 Jun 2018
Sometimes, when we say that someone can (or cannot, or must, or
must not) do P, we really mean that they can (cannot, must, must not)
do Q, where Q is logically stronger than P. By what linguistic
mechanism does this strengthening come about? …
We all think that humans are animals, that human language is a sophisticated form of animal signaling, and that it arises spontaneously due to natural processes. From a naturalistic perspective, what is fundamental is what is common to signaling throughout the biological world -- the transfer of information. As Fred Dretske put it in 1981, "In the beginning was information, the word came later." There is a practice of signaling with information transfer that settles into some sort of a pattern, and eventually what we call meaning or propositional content crystallizes out.
Social ontology gives an account of what there is in the social world, judged from the viewpoint of presumptively autonomous human beings. Three issues are salient. The individualism issue is whether social laws impose a limit on individual autonomy from above; the atomism issue is whether social interactions serve from below as part of the infrastructure of intentional autonomy; and the singularism issue is whether groups can rival individuals, achieving intentional autonomy as corporate agents. The paper argues that individual autonomy is not under challenge from social laws, that the achievement of intentional autonomy does indeed presuppose interaction with others, and that groups of individuals can incorporate as autonomous agents. In other words, it defends individualism but argues against atomism and singularism.
Early modern experimental philosophers often appear to commit to, and utilise, corpuscular and mechanical hypotheses. This is somewhat mysterious: such hypotheses frequently appear to be simply assumed, odd for a research program which emphasises the careful experimental accumulation of facts. Isaac Newton was one such experimental philosopher, and his optical work is considered a clear example of the experimental method. Focusing on his optical investigations, I identify three roles for hypotheses. Firstly, Newton introduces a hypothesis to explicate his abstract theory. The purpose here is primarily to improve understanding or uptake of the theory. Secondly, he uses a hypothesis as a platform from which to generate some crucial experiments to decide between competing accounts. The purpose here is to suggest experiments in order to bring a dispute to empirical resolution. Thirdly, he uses a hypothesis to suggest an underlying physical cause, which he then operationalises and represents abstractly in his formal theory. The second and third roles are related in that they are both cases of scaffolding: hypotheses provide a temporary platform from which further experimental work and/or theorising can be carried out. In short, the entities and processes included in Newton’s optical hypothesis are not simply assumed hypothetical posits. Rather, they play instrumental roles in Newton’s experimental philosophy.
In computer science, formal methods are used to specify, develop, and verify hardware and software systems. Such methods hold great promise for mathematical discovery and verification of mathematics as well.
History in fast-forward. Logic and argumentation are a natural combination. Though the precise origins of logic are hidden in the mists of antiquity, reflection on patterns in legal or philosophical debate may have been one of the driving forces in the genesis of the discipline. But afterwards, the main emphasis over time shifted to consequence relations in an abstract universe of propositions, and the formal systems to which these give rise. Though contacts were never lost entirely between logic and the realities of discussion and debate, the twentieth century saw a deep split. Perelman and Olbrechts-Tyteca (1958) pointed out how actual reasoning may be more like weaving a piece of cloth from many threads than forging a chain with links in linear mathematical proof style, and rhetoric and informal logic then took their own course. Likewise, Toulmin (1958) made a powerful case for how legal procedure and functional schemas – ‘formalities’ rather than logical form – may be the best paradigm for understanding argumentation. Both critics have inspired follow-up frameworks that continue to flourish today (cf. Walton and Krabbe 1995; van Eemeren and Grootendorst 2004). But this split was not inevitable, and it was not forever. Already Lorenzen (1955) used innovative game-theoretical models of dialogue to investigate the foundations of logic, and in more recent times, Dung (1995) introduced formal models of argumentation in a setting of Artificial Intelligence, which turned out to have strong connections to computational logics.
Trope theory is the view that reality is (wholly or partly) made up
from tropes. Tropes are things like the particular shape, weight, and
texture of an individual object. Because tropes are particular, for
two objects to ‘share’ a property (for them both to
exemplify, say, a particular shade of green) is for each to contain
(instantiate, exemplify) a greenness-trope, where those
greenness-tropes, although numerically distinct, nevertheless exactly
resemble each other. Apart from this very thin core assumption—that there are
tropes—different trope theories need not have very much in
common.
Most trope theorists (but not all) believe
that—fundamentally—there is nothing but tropes.
Nick Freeman is a well-known British lawyer. He rose to fame in the 1990s when he successfully defended a number of celebrity clients from dangerous driving prosecutions. He was particularly popular among footballers. …
By Gordon Hull
One of the things that marketers like about big data is that they can personalize ads. That operation is getting increasingly sophisticated. We’ve known for a while that basic personality traits (like introversion/extraversion) can be predicted from Facebook likes. …
Naïve Realists think that the ordinary mind-independent objects that we perceive are constitutive of the character of experience. Some understand this in terms of the idea that experience is diaphanous: that the conscious character of a perceptual experience is entirely constituted by its objects. My main goal here is to argue that Naïve Realists should reject this, but I’ll also highlight some suggestions as to how Naïve Realism might be developed in a non-diaphanous direction.
As it is standardly presented, epistemological disjunctivism involves the idea that paradigm cases of visual perceptual knowledge are based on visual perceptual states which are propositional – states of seeing that p (McDowell (1982, 1995, 2008), Haddock and Macpherson (2008b), Pritchard (2012, 2016)). When I look at the crow perched in the tree, in excellent perceptual conditions, with fully functioning perceptual and cognitive capacities, I come to know that the crow is black. I know this on the basis of visual perception. And the epistemological disjunctivist spells this out as follows: I have this knowledge in virtue of the fact that I can see that the crow is black.
Over the past decade or so there has been increasing interest, in both philosophy and psychology, in the claim that we should appeal to various forms of social interaction in explaining our knowledge of other minds, where this is presented as an alternative to what is referred to as the dominant approach, usually identified as the ‘theory-theory’. Such claims are made under a variety of headings: the ‘social interaction’ approach, the ‘intersubjectivity approach’, the ‘second person approach’, the ‘collective intentionality’ approach, and more. A multitude of claims are made under these various headings, both about the kind of social interaction we should be appealing to, and about how exactly this or that interaction provides an alternative to this or that ingredient in the ‘dominant approach’. Faced with this plethora of claims and characterizations, one may well find oneself wondering whether there is an interesting, well-formulated debate to be had in this area. I believe that there is at least one such debate, and in what follows I begin to sketch out how I think it should be formulated, and why I think it reveals fundamental issues about the nature of our knowledge of other minds. The debate turns on pitting two claims against each other. I will call one the ‘Observation Claim’, a claim that does, I think, capture a very widely held view about the basis and nature of our knowledge of other minds, and is rightly labeled ‘dominant’. The other I label the ‘Communication Claim’. It says we should give particular forms of interpersonal communication a foundational role in explaining our knowledge of each other’s minds. Although I think some version of the Communication Claim is right, my main aim is not so much to argue for it but, somewhat programmatically, to put on the table some of the central claims I believe would need to be made good if it is to be an interesting and serious alternative to the Observation Claim.
Slavery is the ownership of one person by another. Since a person no more owns another than a thief owns the purloined goods, there has never been any slavery. But of course there have been institutions thought to be slavery: institutions in which a person was thought to be the property of another. …
While it is often said that robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking can be a hindrance for progress in robotics. The reason is what I call the ‘measure-target confusion’, the confusion between a measure of progress and the target of progress. Progress on a benchmark (the measure) is not identical to scientific or technological progress (the target). In the past, several academic disciplines have been led into pursuing only reproducible and measurable ‘scientific’ results – robotics should be wary of following that path, because results that can be benchmarked must be specific and context-dependent, whereas robotics targets whole complex systems for a broad variety of contexts. While it is extremely valuable to improve benchmarks so as to reduce the distance between measure and target, the general problem of measuring progress towards more intelligent machines (the target) will not be solved by benchmarks alone; we need a balanced approach with sophisticated benchmarks, plus real-life testing, plus qualitative judgment.
Consciousness matters. It’s because we know that other humans are conscious that we care so much about them. If we think that a creature (a worm, say) is not conscious, then we have fewer qualms about harming it. Yet how can we detect consciousness? You know that you are conscious, but how do you know that other people are? You could observe their behaviour and scan their brains, but you won’t feel their experiences. Indeed, it is conceivable (if very unlikely) that other people do not have conscious experiences at all. Their brains might operate just like yours, producing the normal range of human behaviour (including claiming to be conscious), yet without any conscious experience occurring. Philosophers call such imaginary beings zombies.
We care what people think of us. The thesis that beliefs wrong, although compelling, can sound ridiculous. However, the idea that we can wrong someone by what we believe reveals itself in many places. One common formulation of the Christian Eucharistic confession, “we have sinned against you in thought, word, and deed”, appeals to the idea that we can sin against God in thought, as well as in word and in deed. When loved ones believe the worst of us, it is tempting to think that we can demand an apology for the beliefs they hold, and not just their actions. Many people also think that we can wrong not only the living but also the dead when we believe the worst of them. And at least one of the distinctive wrongs committed by a racist plausibly lies in what she believes about another human being. In all of these cases, there is prima facie evidence that at least one important part of the wrong lies in the belief and not merely in the acts leading up to the belief or the acts that follow from it.
The key idea of this paper is that human communication is first and foremost a matter of negotiating commitments, rather than one of conveying intentions, beliefs, and other mental states. Every speech act causes the speaker to become committed to the hearer to act on a propositional content. Hence, commitments are relations between speakers, hearers, and propositions; their purpose is to enable speakers and hearers to coordinate their actions. To illustrate the potential of the approach, commitment-based analyses are offered for a representative sample of speech act types, conversational implicatures, and common ground.
Like most religious studies graduate students of my generation, I was assigned Clifford Geertz’s The Interpretation of Cultures in my theories and methods course. As brilliant, eloquent, and constantly re-readable as the essays collected in this volume are, something about them troubled me even back in my grad school days, and I have since come to view this work as a signpost marking the point when religious studies—like many humanistic disciplines—took a wrong turn down into the postmodern rabbit hole of interminable Verstehen. Geertz combines his celebration of Gilbert Ryle’s “thick description” as a process of endlessly uncovering semiotic turtles upon turtles (Geertz 1973: 29) with a clear disdain for “reductionistic” attempts to explain religion or other cultural forms. In the process, the grand explanatory ambitions of the early figures in our field are made to seem both culturally naïve and dangerously hegemonistic.
Approaches to quantum gravity often involve the disappearance of space and time at the fundamental level. The metaphysical consequences of this disappearance are profound, as is illustrated with David Lewis’s analysis of modality. As Lewis’s possible worlds are unified by the spatiotemporal relations among their parts, the non-fundamentality of spacetime—if borne out—suggests a serious problem for his analysis: his pluriverse, for all its ontological abundance, does not contain our world. Although the mere existence—as opposed to the fundamentality—of spacetime must be recovered from the fundamental structure in order to guarantee the empirical coherence of the non-spatiotemporal fundamental theory, it does not suffice to salvage Lewis’s theory of modality from the charge of rendering our actual world impossible.
Scientific conflicts often stem from differences in the conceptual framework through which scientists view and understand their own field. In this chapter, I analyze the ontological and methodological assumptions of three traditions in evolutionary biology, namely, Ernst Mayr’s population thinking, the gene-centered view of the Modern Synthesis (MS), and the Extended Evolutionary Synthesis (EES). Each of these frameworks presupposes a different account of “evolutionary causes,” and this discrepancy prevents mutual understanding and objective evaluation in the recent contention surrounding the EES. From this perspective, the chapter characterizes the EES research program as an attempt to introduce causal structures beyond genes as additional units of evolution, and compares its research methodology and objectives with those of the traditional MS framework.
This, then, is the end for which I strive, to attain to such a character myself, and to endeavor that many should attain to it with me. (2) In other words, it is part of my happiness to lend a helping hand, that many others may understand even as I do, so that their understanding and desire may entirely agree with my own. …
There are a number of reasons to think that the electron cannot truly be spinning. Given how small the electron is generally taken to be, it would have to rotate superluminally to have the right angular momentum and magnetic moment. Also, the electron’s gyromagnetic ratio is twice the value one would expect for an ordinary classical rotating charged body. These obstacles can be overcome by examining the flow of mass and charge in the Dirac field (interpreted as giving the classical state of the electron). Superluminal velocities are avoided because the electron’s mass and charge are spread over sufficiently large distances that neither the velocity of mass flow nor the velocity of charge flow need to exceed the speed of light. The electron’s gyromagnetic ratio is twice the expected value because its charge rotates twice as fast as its mass.
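The superluminal-rotation obstacle can be checked with a back-of-the-envelope calculation. The sketch below is my illustrative model, not the paper's Dirac-field analysis: it treats the electron as a classical ring of mass at the classical electron radius, carrying spin angular momentum ħ/2, and asks how fast the rim would have to move.

```python
# Rough check that a small spinning electron would need a superluminal
# surface speed (illustrative classical model; the paper's own analysis
# uses the flow of mass and charge in the Dirac field instead).

hbar = 1.0546e-34   # reduced Planck constant, J*s
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s
r = 2.818e-15       # classical electron radius, m

# Ring of mass at radius r with angular momentum L = hbar/2:
# L = m * v * r, so the rim speed is v = L / (m * r).
L = hbar / 2
v = L / (m_e * r)

print(f"required rim speed: {v:.2e} m/s ({v / c:.0f} times c)")
```

The required rim speed comes out at tens of times the speed of light, which is the standard argument against a literally spinning, point-like electron; the paper's response, as summarized above, is that spreading mass and charge over larger distances keeps both flow velocities subluminal.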
Demarest asserts that we have good evidence for the existence and nature of an initial chance event for the universe. I claim that we have no such evidence and no knowledge of its supposed nature. Against relevant comparison classes, her initial chance account is no better, and in some ways worse, than the alternatives.
Øystein Linnebo’s book Thin Objects: An Abstractionist Account is out from OUP. If you’ve been following his contributions to debates on neo-Fregean philosophy of mathematics and related issues over some fifteen years, you won’t be surprised by the general line; but you will be pleased to have the strands of thought brought together in a shortish and (at least relative to the topic) accessible book. …
There is unanimous agreement that Nāgārjuna (ca
150–250 AD) is the most important Buddhist philosopher after the
historical Buddha himself and one of the most original and influential
thinkers in the history of Indian philosophy. His philosophy of the
“middle way” (madhyamaka) based around the
central notion of “emptiness”
(śūnyatā) influenced the Indian philosophical
debate for a thousand years after his death; with the spread of
Buddhism to Tibet, China, Japan and other Asian countries the writings
of Nāgārjuna became an indispensable point of reference for
their own philosophical inquiries.
By Gordon Hull
We’ve all heard of a version of the experiment: you set a kid down with a marshmallow, and tell him that if he can sit there and not eat it for a while, he can have two. Some kids can do it, and others can’t. …