Chance-talk is ubiquitous in science, both in fundamental science and in the special sciences, and no less common in nonscientific contexts. We talk about the chance of winning the lottery, the chance of a defendant being found guilty, the chance of a politician winning reelection, and so on.
Summer is here and I have finally started on my summer reading list. First up was Michael Gazzaniga’s new book The Consciousness Instinct. Gazzaniga is of course well known for his work on split-brain patients and for helping to found the discipline of cognitive neuroscience. …
The ongoing epistemological debate on scientific thought experiments (TEs) revolves, in part, around Galileo’s now famous falling bodies TE and how it could justify its conclusions. In this paper, I argue that the TE’s function is misrepresented in this ahistorical debate. I retrace the history of this TE and show that it constituted the first step in two general “argumentative strategies”, excogitated by Galileo to defend two different theories of free fall, in the 1590s and then in 1638. I analyse both argumentative strategies and argue that their function was to eliminate potential causal factors: the TE served to eliminate absolute weight as a causal factor, while the subsequent arguments served to explore the effect of specific weight, with conflicting conclusions in 1590 and 1638. I will argue throughout the paper that the TE is best grasped when we analyse Galileo’s restriction, in the TE’s scenario and conclusion, to bodies of the same material or specific weight. Finally, I will draw out two implications for the debate on TEs.
Within feminist theory and a wide range of social sciences, intersectionality has been a relevant focus of research. It has been argued that intersectionality allowed an analytic shift from considering gender, race, class or sexuality as separate and added to each other to considering them as interconnected. This has led most authors to assume mutual constitution as the pertinent model, often without much scrutiny. In this paper we review the main senses of ‘mutual constitution’ in the literature, critically examine them and present what we take to be a problematic assumption: the problem of reification. This is to be understood as the conceptualization of social categories as entities or objects, in a broad sense, and not as properties of them. We then present the property framework, together with the emergent experience view, which conceptualizes categories and social systems in a way that maintains their ontological specificity while allowing for their being deeply affected by each other.
If we accept relativity theory as providing a metaphysically correct theory of time, the folk concept of temporal simultaneity needs revision. The standard way to revise it has been to relativize it to a reference frame. …
“Nemo judex in causa sua,” we are told: no one should be a judge in their own case. But while this may be a good rule to follow in legal proceedings, its epistemic analogue would be harder to uphold. In fact, we’re often put in a position where we have no choice but to judge our own epistemic performance. We’re put in this sort of position when, for example, we form the opinion that a female candidate’s qualifications are slightly less good than her male competitor’s—while aware of strong evidence that we’re likely to undervalue women’s CVs relative to men’s. Or when we form an opinion about the results of some economic policy that’s tightly connected to our passionate political views—while aware of strong evidence that political passions frequently distort people’s reasoning on this type of matter. A small-plane pilot is put in this position when she’s deciding whether she has enough fuel to make it to an airport a bit further away than her original destination, while aware that her altitude makes it likely that she’s affected by hypoxia, which notoriously affects this sort of judgment while leaving its victims feeling totally clear-headed. A medical resident is put in this position when he forms an opinion about the appropriate drug dosage for a patient, while aware of strong evidence that he’s been awake so long that his thinking about appropriate dosages is likely to be degraded. And many of us are put in this position when we form an opinion on some controversial issue while aware that others—who share our evidence and who seem as likely as we are to form accurate beliefs on the basis of such evidence—have reached a contrary opinion.
People often encounter evidence which bears directly on the reliability or expected accuracy of their thinking about some topic. This has come to be called “higher-order evidence”. For example, suppose I have some evidence E, and come to have high confidence in hypothesis H on its basis. But then I get some evidence to the effect that I’m likely to do badly at assessing the way E bears on H. Perhaps E bears on H statistically, and I’m given evidence that I’m bad at statistical thinking. Or perhaps E is a set of CVs of male and female candidates, H is the hypothesis that a certain male candidate is a bit better than a certain female candidate, and I get evidence that I’m likely to overrate the CVs of males relative to those of females. Or perhaps E consists of gauge and dial readings in the small plane I’m flying over Alaska, H is the hypothesis that I have enough fuel to reach Sitka, and I realize that my altitude is over 13,000 feet, which I know means that my reasoning from E to H is likely affected by hypoxia. Or finally, perhaps E is a body of meteorological data that seems to me to support rain tomorrow, H is the hypothesis that it’ll rain tomorrow, and I learn that my friend, another reliable meteorologist with the same data E, has predicted that it won’t rain tomorrow.
You know what I mean by a good man? One who is complete, finished – whom no constraint or need can render bad. I see such a person in you, if only you go steadily on and bend to your task, and see to it that all your actions and words harmonize and correspond with each other and are stamped in the same mould [or form]. …
Machine learning has the computer generate parameters for a neural network on the basis of a lot of data. Suppose that we think that computers can be conscious. I wonder if we are in a position, then, to know that any particular training session won’t be unpleasant for the computer. …
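To fix ideas about what a training session involves, here is a minimal sketch, assuming plain gradient descent on a toy two-parameter model; the data, learning rate, and variable names are illustrative, not from the post:

```python
# Toy illustration (assumed setup, not from the post): "generating
# parameters from data" here means gradient descent repeatedly nudging
# a weight w and bias b so that predictions w * x + b fit the data.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # data generated by y = 2x + 1

w, b = 0.0, 0.0                      # parameters start arbitrary
for step in range(2000):             # one "training session"
    # gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= 0.01 * grad_w               # nudge each parameter downhill
    b -= 0.01 * grad_b

print(round(w, 2), round(b, 2))      # prints 2.0 1.0
```

A real training session differs mainly in scale: vastly more parameters and update steps, but each step is this same kind of arithmetic adjustment.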
In April 2017, Siddhartha Mukherjee wrote an interesting article in the New Yorker. Titled ‘AI versus MD’ the article discussed the future of automated medicine. Automation is already rampant in medicine. …
Comment #1 June 17th, 2018 at 1:40 pm
I’m guessing you’ve been asked this before, but I don’t know your answer so I’ll ask anyway: if someone were to discover a poly-time algorithm for solving NP-complete problems (and say that algorithm is also efficient in practice so that you could solve large SAT instances on a regular computer) what would be the moral thing to do? …
I return to the material in “Paris-Harrington in an NF context”. Various people had commented that the concept of a relatively large set of natural numbers is unstratified, and in that essay I mused about whether or not the extra strength of PH over finite Ramsey had to do with this failure of stratification. In the present—self-contained—note I shall show that—somewhat to my annoyance—it does not: Paris-Harrington has a stratified formulation.
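For orientation, the standard statements at issue can be put as follows; these are textbook formulations, not quoted from the note itself:

```latex
% A finite set X of natural numbers is "relatively large" when its size
% is at least its least element:
X \text{ is relatively large} \;\iff\; |X| \ge \min X .
%
% Paris--Harrington (PH): for all n, k, m there is an N such that every
% k-colouring of the n-element subsets of \{0, 1, \dots, N-1\} has a
% homogeneous set H of size at least m that is also relatively large:
\forall n\,\forall k\,\forall m\;\exists N\;\;
\forall c\colon [N]^{n} \to k\;\;
\exists H \subseteq \{0,\dots,N-1\}\;
\bigl(\, c \text{ is constant on } [H]^{n},\;\; |H| \ge m,\;\; |H| \ge \min H \,\bigr).
```

The final clause, $|H| \ge \min H$, compares a set’s cardinality with its members, which is presumably the type-mixing that led people to flag the concept as unstratified.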
At around 3.5 to 4 years of age, children in Western and other numerate cultures experience a profound shift in their understanding of numbers: they come to understand how counting works. They can use number words to denote the cardinality of collections of items in a precise fashion by placing each item to be counted into a one-to-one correspondence with elements of a counting list and using the last item to denote the cardinality of the set (see e.g., Sarnecka, in press; Le Corre, 2014). Children’s acquisition of number concepts is often conceptualized in terms of individual discovery and personal reconstruction. For example, Carey (2009, 302) writes that children learn to individuate three items “before figuring out how the numeral list represents natural number”. Davidson, Eng, and Barner (2012, 163) put it this way: “Sometime between the ages of 3-and-a-half and 4, children discover that counting can be used to generate sets of the correct size for any word in their count list” (emphasis added in both).
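The counting routine described above (pair each item with successive number words, then report the last word used) can be sketched as follows; the count list and function name are illustrative only:

```python
# Illustrative sketch of the counting routine: place each item in
# one-to-one correspondence with successive number words, and use the
# last word reached to denote the cardinality of the collection.
COUNT_LIST = ["one", "two", "three", "four", "five"]

def count_items(items):
    last_word = None
    for item, word in zip(items, COUNT_LIST):  # one-to-one correspondence
        last_word = word
    return last_word  # the last number word used gives the cardinality

print(count_items(["apple", "pear", "plum"]))  # prints "three"
```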
This paper examines the role of prestige bias in shaping academic philosophy, with a focus on its demographics. I argue that prestige bias exacerbates the structural underrepresentation of minorities in philosophy. It works as a filter against (among others) philosophers of color, women philosophers, and philosophers of low socio-economic status. As a consequence of prestige bias our judgments of philosophical quality become distorted. I outline ways in which prestige bias in philosophy can be mitigated.
The aim of this paper is to cast new light on an important and often overlooked notion of perspectival knowledge arising from Kant. In addition to a traditional notion of perspectival knowledge as “knowledge from a vantage point” (perspectival knowledge1), a second novel notion — “knowledge towards a vantage point” (perspectival knowledge2) — is here introduced. The origin and rationale of perspectival knowledge2 are traced back to Kant’s so-called transcendental illusion (and some of its pre-Critical sources). The legacy of the Kantian notion of perspectival knowledge2 for contemporary discussions on disagreement and the role of metaphysics in scientific knowledge is discussed.
Moral dilemmas, at the very least, involve conflicts between moral requirements. Consider the cases given below.
This paper examines how new evil demon problems could arise for our access to the internal world of our own minds. I start by arguing that the internalist/externalist debate in epistemology has been widely misconstrued---we need to reconfigure the debate in order to see how it can arise about our access to the internal world. I then argue for the coherence of scenarios of radical deception about our own minds, and I use them to defend a properly formulated internalist view about our access to our minds. The overarching lesson is that general epistemology and the specialized epistemology of introspection need to talk---each has much to learn from the other.
The equal political liberties are among the basic first-principle liberties in John Rawls’s theory of Justice as fairness. But Rawls insists, further, that the “fair value” of the political liberties must be guaranteed, and that a market economy must be embedded in an institutional structure that realizes this guarantee. The aim and the supposed capacity to assure fair value are what distinguish property-owning democracy and liberal democratic socialism from other ideal regime-types. Disavowing an interest in fair value is what disqualifies welfare-state capitalism as a possible realization of Justice as fairness.
Robert Nozick (1938–2002) was a renowned American philosopher who first came to be widely known through his book Anarchy, State, and Utopia (1974), which won the National Book Award for Philosophy and Religion in 1975. Pressing further the anti-consequentialist aspects of John Rawls’ A Theory of Justice, Nozick argued that respect for individual rights is the key standard for assessing state action and, hence, that the only legitimate state is a minimal state that restricts its activities to the protection of the rights of life, liberty, property, and contract.
Aristotle conceives of ethical theory as a field distinct from the theoretical sciences. Its methodology must match its subject matter—good action—and must respect the fact that in this field many generalizations hold only for the most part. We study ethics in order to improve our lives, and therefore its principal concern is the nature of human well-being. Aristotle follows Socrates and Plato in taking the virtues to be central to a well-lived life. Like Plato, he regards the ethical virtues (justice, courage, temperance and so on) as complex rational, emotional and social skills. But he rejects Plato's idea that to be completely virtuous one must acquire, through a training in the sciences, mathematics, and philosophy, an understanding of what goodness is.
Lord Kelvin famously stated that ‘when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it ... your knowledge is of a meagre and unsatisfactory kind.’1 Today, in an age when thermometers and ammeters produce stable measurement outcomes on familiar scales, Kelvin’s remark may seem superfluous. How else could one gain reliable knowledge of temperature and electric current if not through measurement? But the quantities called ‘temperature’ and ‘current’ as well as the instruments that measure them have long histories during which it was far from clear what was being measured and how – histories in which Kelvin himself played important roles.2
The focus of this entry is on Schopenhauer’s aesthetic theory, which forms part of his organic philosophical system, but which can be appreciated and assessed to some extent on its own terms (for ways in which his aesthetic insights may be detached from his metaphysics see Shapshay, 2012b). The theory is found predominantly in Book 3 of The World as Will and Representation (WWR I) and in the elaboratory essays concerning Book 3 in the second volume (WWR II), and it is on these texts that I will concentrate here. This entry offers a brief background on Schopenhauer’s metaphysics before addressing Schopenhauer’s methodology in aesthetics, his account of the subjective and objective sides of aesthetic experience (both of the beautiful and the sublime), his hierarchy of the arts and rationale for this hierarchy, his view of artistic genius, the exceptional status of music among the fine arts, and the relationships he theorized between aesthetics and ethics.
In this paper I provide a new account of linguistic presuppositions, on which they are ancillary speech acts defined by constitutive norms. After providing an initial intuitive characterization of the phenomenon, I present a normative speech act account of presupposition in parallel with Williamson’s analogous account of assertion. I explain how it deals well with the problem of informative presuppositions, and how it relates to accounts of the Triggering and Projection Problems for presuppositions. I conclude with a brief discussion of the consequences of the proposal for the adequacy of Williamson’s account of assertion.
I've become increasingly worried about slippery slope arguments concerning the presence or absence of (phenomenal) consciousness. Partly this is in response to Peter Carruthers' new draft article on animal consciousness, partly it's because I'm revisiting some of my thought experiments about group minds, and partly it's just something I've been worrying about for a while. …
Aristotle and Theophrastus have preserved for us what they take to be Democritus’ definitions of colors and flavors, accounts which seem to identify these sensible qualities with micro-physical features of things in the environment.1 For example, we find that being sweet just is being constituted predominantly from round, large atoms (DK 68A129 and 135.65). In this way Democritus apparently makes room for colors and flavors in a world constituted from colorless and flavorless atoms: macroscopic objects possess colors and flavors in virtue of their constitution at the atomic level.2
In his recent book Bananaworld: Quantum Mechanics for Primates, Jeff Bub revives and provides a mature version of his influential information-theoretic interpretation of Quantum Theory (QT). In this paper, I test Bub’s conjecture that QT should be interpreted as a theory about information, by examining whether his information-theoretic interpretation has the resources to explain (or explain away) quantum conundrums. The discussion of Bub’s theses will also serve to investigate, more generally, whether other approaches succeed in defending the claim that QT is about quantum information. First of all, I argue that Bub’s interpretation of QT as a principle theory fails to fully explain quantum non-locality. Secondly, I argue that a constructive interpretation, where the quantum state is interpreted ontically as information, also fails to provide a full explanation of quantum correlations. Finally, while epistemic interpretations might succeed in this respect, I argue that such success comes at the price of rejecting some of the most basic scientific standards of physical theories.
On Aesthetics and The Problem of Tragedy in De Grouchy
The sympathy we feel for physical suffering, and which constitutes part of what we call humanity, would be a sentiment too transient to be often useful, my dear C***, were we not capable of reflection as well as sensation. …
One of the most influential defenses of toleration relies on the invocation of self-skepticism in individual judgment. If we are motivated by the selection of the most morally defensible policy – either in relation to norms governing the society as a whole or a group of individuals within it – accommodating the judgments of those who disagree with us has more to recommend it when there is a significant possibility that they may be right than when such a possibility is minimal or not present at all. Not only would such an accommodation, as Mill famously argued, make it more possible for us to “exchange error for truth,” but it may also prevent the disappearance of true opinions or ways of living that may, in fact, be morally defensible against the present judgment of the majority. Even if we choose to be accommodating of disagreement for other, nonepistemic, reasons, such as the regard for individual autonomy, the presence of self-skepticism has the effect of strengthening those motives, or at least of suppressing others that may be in conflict with them, such as the duty to help others return to “the true church.” When and how much self-skepticism is warranted has, therefore, become a central concern of the epistemic accounts of toleration.
This article develops a theory of rational choice based on the conception of rationality as a normatively justified correspondence between interests and choices. In this conception, rationality is best thought of as a property not of individual actions, but of a complex two-level phenomenon comprised of the social justification of behavioral norms and of the everyday choices made under these norms.
Appellate courts, which have the most control over legal doctrine, tend to operate through collegial (multimember) decision making. How does this collegiality affect their choice of legal doctrine? Can decisions by appellate courts be expected to result in a meaningful collegial rule? How do such collegial rules differ from the rules of individual judges? We explore these questions and show that collegiality has important implications for the structure and content of legal rules, as well as for the coherence, determinacy, and complexity of legal doctrine. We provide conditions for the occurrence of these doctrinal attributes in the output of collegial courts. Finally, we consider the connection between the problems that arise in the collegial aggregation of a set of legal rules and those previously …

For a single judge, choosing legal doctrine would be a task overwhelming in practice, but straightforward in theory—she could simply decide all cases as she saw fit according to whatever rule she thought correct. Judges on a collegial (multimember) court, however, face further challenges that inhere in collegiality itself.