The Rawlsian veil of ignorance should induce agents to behave fairly in a distributive context. This work sought to recreate, through a dictator game with giving and taking options, a version of the original position in which reasoning behind the veil would serve as a moral cue for subjects distributing a common output produced with unequal means of production. However, our experimental context seems to have unwittingly evoked the Hobbesian state of nature rather than the Rawlsian original position, suggesting that heuristic recourse to the Rawlsian idea of a choice behind the veil is ineffective in distributive contexts.
Under conditions of ideology, a standard model of normative political epistemology – relying on a domain-specific reflective equilibrium – risks status-quo bias. Social critique requires a more critical standpoint. What are the aims of social critique? How is such a standpoint achieved, and what grounds its claims? One way of achieving a critical standpoint is through consciousness raising. Consciousness raising offers a paradigm shift in our understanding of the social world; but not all epistemic practices that appear to “raise” consciousness are warranted. Under certain conditions sketched in the paper, however, consciousness raising produces a warranted critical standpoint and a pro tanto claim against others. This is an important epistemic achievement, yet under conditions of collective self-governance there is no guarantee that all warranted claims can be met simultaneously. There will be winners and losers even after legitimate democratic processes have been followed.
Some non-reductionists claim that so-called ‘exclusion arguments’ against their position rely on a notion of causal sufficiency that is particularly problematic. I argue that such concerns about the role of causal sufficiency in exclusion arguments are relatively superficial since exclusionists can address them by reformulating exclusion arguments in terms of physical sufficiency. The resulting exclusion arguments still face familiar problems, but these are not related to the choice between causal sufficiency and physical sufficiency. The upshot is that objections to the notion of causal sufficiency can be answered in a straightforward fashion and that such objections therefore do not pose a serious threat to exclusion arguments.
Conscious experiences are characterized by mental qualities, such as those involved in seeing red, feeling pain, or smelling cinnamon. The standard approach to modeling mental qualities is to develop a quality-space model, where mental qualities are represented by points in multidimensional spaces and where distances between points inversely correspond to degrees of phenomenal similarity. I begin by arguing that the standard framework cannot capture precision structure: for example, consider the phenomenal contrast between seeing an object as crimson in foveal vision versus seeing an object merely as red in peripheral vision. Then I develop a new formal framework that models mental qualities using regions, rather than points. I explain how this new framework not only provides a natural way of modeling precision, but also yields a variety of further theoretical fruits: it enables us to formulate novel hypotheses about the space and structures of mental qualities, formally differentiates two dimensions of phenomenal similarity, generates a quantitative model of the phenomenal sorites, and provides a new theoretical tool for the empirical investigation of conscious experiences. A noteworthy consequence of the framework is that the structure of the mental qualities of conscious experiences is fundamentally different from the structure of the perceptible qualities of external objects.
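To make the contrast concrete, here is a toy sketch of the two modeling choices. All names, coordinates, and the ball representation of regions are illustrative assumptions, not taken from the paper; the point is only that a region-based model has a natural slot (the region's size) for precision, where a point-based model does not:

```python
from math import dist

# Toy point-based quality space: a quality is a point, and phenomenal
# similarity varies inversely with distance (illustrative convention).
def similarity(p, q):
    return 1.0 / (1.0 + dist(p, q))

# Toy region-based model: a quality is a ball (center, radius), with
# the radius encoding precision. A precise foveal "crimson" experience
# is a small region; an imprecise peripheral "red" one is larger.
crimson_foveal = ((0.85, 0.10, 0.10), 0.05)
red_peripheral = ((0.80, 0.15, 0.15), 0.30)

def contained_in(inner, outer):
    """Is ball `inner` wholly contained in ball `outer`?"""
    (ci, ri), (co, ro) = inner, outer
    return dist(ci, co) + ri <= ro

# The point model assigns both experiences bare locations and so cannot
# distinguish their precision; the region model captures it directly:
assert contained_in(crimson_foveal, red_peripheral)
assert not contained_in(red_peripheral, crimson_foveal)
```

The containment relation here is just one illustrative way regions could order precise and imprecise qualities; the paper's own framework may use a different formalism.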
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility (EU) formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity (i.e., being cut off from the methods and results of decision theory). To give the strategy more structure, philosophers have developed "principles of individuation"; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion (i.e., it is duly developed using the available tools of decision theory). We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. 
A more general lesson, simply put, is that there is no easy way out of the paradoxes of decision theory.
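The Allais paradox mentioned above can be made concrete with a short sketch. The payoffs and probabilities below are the textbook version of the paradox (illustrative, not drawn from this paper); the point is that no utility function u can rank gamble 1A over 1B while also ranking 2B over 2A, because the two expected-utility gaps are algebraically identical:

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u(x) for x, p in lottery.items())

# Standard Allais lotteries (payoffs in millions); illustrative numbers.
L1A = {1: 1.00}                        # $1M for sure
L1B = {1: 0.89, 5: 0.10, 0: 0.01}      # mostly $1M, small shot at $5M
L2A = {1: 0.11, 0: 0.89}
L2B = {5: 0.10, 0: 0.90}

def eu_gap_pair(u):
    # Both gaps reduce algebraically to 0.11*u(1) - 0.10*u(5) - 0.01*u(0),
    # so they always share the same sign, whatever u is.
    return (expected_utility(L1A, u) - expected_utility(L1B, u),
            expected_utility(L2A, u) - expected_utility(L2B, u))

# For any utility function, the two gaps coincide, so EU theory cannot
# accommodate the modal preferences 1A > 1B together with 2B > 2A.
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: (1 + x) ** 0.25):
    g1, g2 = eu_gap_pair(u)
    assert abs(g1 - g2) < 1e-12
```

The redescription strategy discussed in the abstract amounts to denying that the $0 outcome in 1B is the same outcome as the $0 outcome in 2B (e.g., one involves regret), which breaks the algebraic identity above.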
The nomic structure of our world spans many levels of description. The explanatory and predictive success of the ‘special sciences’ – biology, psychology, geology, and so on – reveals the existence of robust regularities (sometimes called ‘special science laws’) that knit non-fundamental phenomena into intelligible levels of description. There are two conceptions of how these robust regularities fit into the physical world. On a foundationalist conception, the physical laws (or physical properties) are the source of all other nomic facts, including the robustness of these macro-regularities. On an egalitarian conception, the physical laws are no more fundamental than the laws describing the behavior of genes, ecosystems, or societies.
I survey from a modern perspective what spacetime structure there is according to the general theory of relativity, and what of it determines what else. I describe in some detail both the “standard” and various alternative answers to these questions. Besides bringing many underexplored topics to the attention of philosophers of physics and of science, metaphysicians of science, and foundationally minded physicists, I also aim to cast other, more familiar ones in a new light.
In physics the concept of reduction is often used to describe how features of one theory can be approximated by those of another under specific circumstances. In such circumstances physicists say the former theory reduces to the latter, and often the reduction will induce a simplification of the features in question. (By contrast, the standard terminology in philosophy is to say that the less encompassing, approximating theory reduces to the more encompassing theory being approximated.) Accounts of reductive relationships aspire to generality, as broader accounts provide a more systematic understanding of the relationships between theories and of which of their features are relevant under which circumstances.
Surplus structure arguments famously identify elements of a theory regarded as excess or superfluous. If there is an otherwise analogous theory that does without such elements, a surplus structure argument prompts adopting it over the one with those elements. Despite their prominence, the form, justification, and range of applicability of such arguments are disputed. I provide an account of these, following Dasgupta () for the form, which makes plain the role of observables and observational equivalence. However, I diverge on the justification: instead of demanding that the symmetries of the theory relevant for surplus structure arguments be defined without recourse to any interpretation of those theories, I suggest that the process of identifying what is observable and its consequences for symmetries work in dialog. They settle through a reflective equilibrium that is responsible to new experiments, arguments, and examples. Besides better aligning with paradigmatic uses of the surplus structure argument, this position also has some broader consequences for the scope of these arguments and the relationship between symmetry and interpretation more generally.
Based on three common interpretive commitments in general relativity, I raise a conceptual problem for the usual identification, in that theory, of timelike curves as those that represent the possible histories of (test) particles in spacetime. This problem affords at least three different solutions, depending on different representational and ontological assumptions one makes about the nature of (test) particles, fields, and their modal structure. While I advocate for a cautious pluralism regarding these options, I also suggest that re-interpreting (test) particles as field processes offers the most promising route for natural integration with the physics of material phenomena, including quantum theory.
Christian List  has recently proposed a category-theoretic model of a system of levels, applying it to various pertinent metaphysical questions. We modify and extend this framework to correct some minor defects and better adapt it to application in philosophy of science. This includes a richer use of category theoretic ideas and some illustrations using social choice theory.
Recently, Horsman et al. (2014) have proposed a new framework, Abstraction/Representation (AR) theory, for understanding and evaluating claims about unconventional or non-standard computation. Among its attractive features, the theory in particular implies a novel account of what it means to be a computer. After expounding on this account, I compare it with other accounts of concrete computation, finding that it does not quite fit in the standard categorization: while it is most similar to some semantic accounts, it is not itself a semantic account. Then I evaluate it according to the six desiderata for accounts of concrete computation proposed by Piccinini (2015). Finding that it does not clearly satisfy some of them, I propose a modification, which I call Agential AR theory, that does, yielding an account that could be a serious competitor to other leading accounts of concrete computation.
If one is interested in reasoning counterfactually within a physical theory, one cannot adequately use the standard possible world semantics. As developed by Lewis and others, this semantics depends on entertaining possible worlds with miracles, worlds in which laws of nature, as described by physical theory, are violated. Van Fraassen suggested instead to use the models of a theory as worlds, but gave up on determining the needed comparative similarity relation for the semantics objectively. I present a third way, in which this similarity relation is determined from properties of the models contextually relevant to the truth of the counterfactual under evaluation. After illustrating this with a simple example from thermodynamics, I draw some implications for future work, including a renewed possibility for a viable deflationary account of laws of nature.
A “stopping rule” in a sequential experiment is a rule or procedure for determining when the experiment should end. For example, consider a pair of experiments designed to obtain evidence about the proportion of fruit flies in a given population with red eyes [Savage, 1962, pp. 17–8]. In both experiments, flies are caught, observed, and released sequentially and fairly, reporting in the end the number of red-eyed flies. In the first, the experiment is designed to stop after observing 100 flies, while the second is designed to stop after observing 6 red-eyed flies. In general the data from these experiments could be very different, but it is also possible that they be the same: in this case, 100 total flies would be observed in both experiments, of which 6 (including the last) would have red eyes. Is the evidence that each of the two would then provide for or against a hypothesis about the proportion of red-eyed flies the same? The stopping rule principle (SRP) states that it is:

Stopping Rule Principle: The evidential relationship between the data from a completed sequential experiment and a statistical hypothesis never depends on the experiment’s stopping rule.
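The evidential symmetry the SRP asserts can be illustrated with the likelihood functions of Savage's two designs: a fixed-sample-size (binomial) design and an inverse-sampling (negative binomial) design. The sketch below is illustrative only; it shows that, on the shared data, the two likelihoods differ by a constant factor and so are proportional as functions of the unknown proportion p:

```python
from math import comb

def binomial_likelihood(p, n=100, k=6):
    # Fixed-sample-size design: stop after observing n = 100 flies,
    # of which k = 6 happen to have red eyes.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def neg_binomial_likelihood(p, n=100, k=6):
    # Inverse-sampling design: stop at the k-th red-eyed fly, which
    # here happens to arrive on the n-th observation.
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

# The p-dependent factors are identical; the designs differ only in a
# combinatorial constant. Any inference that depends on the data solely
# through the likelihood function thus treats the two completed
# experiments as evidentially equivalent, as the SRP asserts.
ratio = binomial_likelihood(0.2) / neg_binomial_likelihood(0.2)
# ratio = comb(100, 6) / comb(99, 5) = 100/6, independent of p
```

Frequentist error statistics, by contrast, evaluates the data against each design's full sampling distribution, which the constant factor does not capture; this is where the dispute over the SRP gets its grip.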
Amalgamating evidence from heterogeneous sources and across levels of inquiry is becoming increasingly important in many pure and applied sciences. This special issue provides a forum for researchers from diverse scientific and philosophical perspectives to discuss evidence amalgamation, its methodologies, its history, its pitfalls and its potential. We situate the contributions therein within six themes from the broad literature on this subject: the variety-of-evidence thesis, the philosophy of meta-analysis, the role of robustness/sensitivity analysis for evidence amalgamation, its bearing on questions of extrapolation and external validity of experiments, its connection with theory development, and its interface with causal inference, especially regarding causal theories of cancer.
This review concerns the notions of physical possibility and necessity as they are informed by contemporary physical theories and the reconstructive explications of past physical theories according to present standards. Its primary goal is twofold: first, to motivate and introduce a range of accessible issues of philosophical relevance around these notions; and second, to provide extensive references to the research literature on them. Although I will have occasion to comment on the direction and shape of this literature, pointing out certain lacunae in argument or scholarly attention, I intend to advance no overriding thesis or point of view, aside from the selection of issues I deem most interesting.
I review and amplify on some of the many uses of representing a scientific theory in a particular context as a collection of models endowed with a similarity structure, which encodes the ways in which those models are similar to one another. This structure, which is related to topological structure, proves fruitful in the analysis of a variety of issues central to the philosophy of science. These include intertheoretic reduction, emergent properties, the epistemic connections between modeling and inference, the semantics of counterfactual conditionals, and laws of nature. The morals are twofold: first, the further adoption of formal methods for describing similarity (and related topological) structure has the potential to aid in decisive progress in philosophy of science; and second, the selection and justification of such structure is not a matter of technical convenience, but rather often involves great conceptual and philosophical subtlety. I conclude with various directions for future research.
Recent work on the hole argument in general relativity by Weatherall (2016b) has drawn attention to the neglected concept of (mathematical) models’ representational capacities. I argue for several theses about the structure of these capacities, including that they should be understood not as many-to-one relations from models to the world, but in general as many-to-many relations constrained by the models’ isomorphisms. I then compare these ideas with a recent argument by Belot (2017) for the claim that some isometries “generate new possibilities” in general relativity. Philosophical orthodoxy, by contrast, denies this. Properly understanding the role of representational capacities, I argue, reveals how Belot’s rejection of orthodoxy does not go far enough, and makes better sense of our practices in theorizing about spacetime.
Here is a very plausible thesis:
Exactly one object is a primary bearer of my present mental states.

This is a problem for the conjunction of standard perdurance, physicalism, and special relativity. For according to standard perdurance and physicalism:
The primary bearers of my mental states are time slices. …
In Part III of his Ethics, “On the Origin and Nature of the Affects,” which is the subject of this article, Spinoza addresses two of the most serious challenges facing his thoroughgoing naturalism. First, he attempts to show that human beings follow the order of nature. Human beings, on Spinoza’s view, have causal natures similar in kind to other ordinary objects, other “finite modes” in the technical language of the Ethics, so they ought to be analyzed and understood in the same way as the rest of nature. Second, Spinoza attempts to show that moral concepts, such as the concepts of good and evil, virtue, and perfection, have a basis in
The Collapsing Leviathan
I was seriously depressed for the last week, by noticeably more than my baseline amount for the new pandemic-ravaged world. The depression seems to have been triggered by two pieces of news:
The US Food and Drug Administration—yes, the same FDA whose failure to approve covid tests in February infamously set the stage for the deaths of 100,000 Americans—has now also banned the Gates Foundation’s program for at-home covid testing. …
“real” deadlines, breaks the occasional promise, and bends the rules of games. This shouldn’t surprise anyone with two feet in reality; presumptive normative standards are habitually and unthinkingly violated in entirely unremarkable ways. These acts are surely not evil, but it’s puzzling whether we can treat them as wrong at all. How can it be wrong to do something that’s so commonplace, so venial, that criticizing someone for doing it itself feels wrong? The task of this essay is to attempt to answer this question.
In this chapter we will see how string theory contains some surprising symmetries – ‘dualities’ – which, we will argue, put pressure on the view that the spacetime in which strings are described can be literally identified with classical, physical spacetime – instead it is ‘emergent’ from the theory. While the following stands on the previous chapter, and exemplifies its physics, it can be read on its own to understand the essential conclusions. We focus on one such symmetry, ‘T-duality’, but at the end review others.
It is widely held that the content of perceptual experience is propositional in nature. However, in a well-known article, “Is Perception a Propositional Attitude?” (2009), Crane has argued against this thesis. He assumes therein that experience has intentional content and argues indirectly that this content is non-propositional, by showing that what he takes to be the main reasons in favour of “the propositional-attitude thesis” do not really entail that experience has propositional content. In this paper I discuss Crane’s arguments against the propositional-attitude thesis and try to show, in contrast, that they are unconvincing. My conclusion is that, despite all that Crane claims, perceptual content could after all be propositional in nature.

KEYWORDS: Crane, propositional-attitude thesis, perceptual experience, propositional content, non-propositional content, accuracy conditions.
Suppose that Alice on a street corner sells Bob a “Rolex” for $15. Bob goes home and his wife Carla says: “You got scammed!” Bob takes the “Rolex” to a jeweller and finds that it is indeed a Rolex. He goes back to Carla and says: “No, I got a good deal!” Carla replies: “But if it was a fake, you would have bought it, too.”
Here is Carla’s reasoning behind her counterfactual. …
The consequences of Quine’s criterion of ontological commitment epitomized in his treatment of the term ‘Pegasus’ in “On What There Is” are evaluated in terms of Quine’s own work, in particular in “The Variable” and “Variables Explained Away”. There is a cost to maintaining this criterion with regard to the empirical consequences of some non-existent objects, given considerations prompted by Quine’s holism. This cost can be reduced by adopting a noneist position according to which non-existent objects can be values of bound variables as well.
The boundaries of social categories are frequently altered to serve normative projects, such as social reform. Griffiths and Khalidi argue that the value-driven modification of categories diminishes the epistemic value of social categories. I argue that such concerns about value-modified categories rest on problematic assumptions of the value-free ideal of science. Contrary to those concerns, non-epistemic value considerations can increase the epistemic success of a scientific category. For example, the early history of the category of infantile autism shows how non-epistemic value considerations can contribute to delimiting and establishing infantile autism as a distinct category in mainstream psychiatry. In the case of infantile autism, non-epistemic considerations led to a new interpretation of existing data, the expansion of research to include biology, and the creation of diagnostic criteria that further contribute to collecting relevant data. Given this case study, we see that non-epistemic considerations need not be epistemically detrimental but can be epistemically beneficial in scientific classification.
Archimedes’ statics is considered an example of ancient Greek applied mathematics; it is even seen as the beginning of mechanics. Wilbur Knorr argued that this work, like other works by Archimedes and by other mathematicians of ancient Greece, lacks references to the physical phenomena it is supposed to address. According to Knorr, this is understandable if we consider the propositions of the treatise as purely mathematical elaborations suggested by quantitative aspects of the phenomena. In this paper, we challenge Knorr’s view and address the propositions of Archimedes’ statics in their relation to physical phenomena.
This paper constitutes a radical departure from the existing philosophical literature on models, modeling practices, and model-based science. I argue that the various entities and practices called ‘models’ and ‘modeling-practices’ are too diverse, too context-sensitive, and serve too many scientific purposes and roles to allow for a general philosophical analysis. From this recognition an alternative view emerges, which I dub model anarchism.
The concept of animal culture began to be increasingly used in the context of animal behaviour research around the 1960s. In spite of its success, I shall argue that animal culture as it is currently conceived does not represent a fully articulated “natural kind”. But how does it fail in this regard and what consequences follow? Firstly, an analysis of the epistemological landscape of author keywords related to the concept of animal cultures is presented. I then systematically enumerate the ways in which culture cannot be considered a natural kind in the study of animal behaviour. Finally, a plausible interpretation of the scientific status of the animal culture concept is suggested that is congenial to both its well established use in animal behaviour research and its inferential limitations.