A number of arguments purport to show that quantum field theory cannot be given an interpretation in terms of localizable particles. We show, in light of such arguments, that the classical ℏ → 0 limit can aid our understanding of the particle content of quantum field theories. In particular, we demonstrate that for the massive Klein-Gordon field, the classical limits of number operators can be understood to encode local information about particles in the corresponding classical field theory.
The origin of life occupies an important place in the study of evolution. Its liminal location between life and non-life poses special challenges to researchers who study this subject. Current approaches to studying the origin and evolution of early life are reductive: they either reduce the domain of non-life to the domain of life or vice versa. This contribution seeks to provide a perspective that avoids reductionism of any kind. Its goal is to outline a frame that would include both domains and their respective evolutions as its particular cases. The study examines the main theoretical perspectives on the origin and evolution of early life and provides a constructive critique of these perspectives. An objective view requires viewing an object or a phenomenon from all available points of view. The goal of this contribution is not to prove the current perspectives wrong or to deny their achievements. Rather, it seeks to provide an angle wide enough to allow synthesizing current perspectives into a comprehensive and objective interpretation of the origin and evolution of early life. In other words, it seeks to outline a frame for an objective view that will help understand life’s place within the universe.
To make sense of large data sets, we often look for patterns in how data points are “shaped” in the space of possible measurement outcomes. The emerging field of topological data analysis (TDA) offers a toolkit for formalizing the process of identifying such shapes. This paper aims to discover why and how the resulting analysis should be understood as reflecting significant features of the systems that generated the data. I argue that a particular feature of TDA—its functoriality—is what enables TDA to translate visual intuitions about structure in data into precise, computationally tractable descriptions of real-world systems.
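The functoriality at issue can be illustrated with a toy example (not taken from the paper): as the scale parameter of a Vietoris–Rips construction grows, inclusions of graphs induce maps on connected components, so the component count can only merge downward, never split. A minimal sketch, using a hand-picked point set and a union-find count of components:

```python
# Toy illustration: component counts of Rips graphs are monotone
# non-increasing in the scale parameter eps, because the inclusion
# of a smaller-scale graph into a larger-scale one induces a
# surjection on connected components (functoriality of H0).
from itertools import combinations

def components(points, eps):
    """Number of connected components of the Rips graph at scale eps."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

pts = [(0, 0), (0.5, 0), (5, 0), (5.5, 0)]  # two small clusters
counts = [components(pts, e) for e in (0.1, 1.0, 10.0)]
print(counts)  # [4, 2, 1]: points, then two clusters, then one blob
```

Tracking at which scales components merge is exactly the kind of visual intuition (“two clusters”) that persistence turns into a precise description.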
This paper generalises Enelow (J Polit 43(4):1062–1089, 1981) and Lehtinen’s (Theory Decis 63(1):1–40, 2007b) model of strategic voting under amendment agendas by allowing any number of alternatives and any voting order. The generalisation enables studying utilitarian efficiencies in an incomplete information model with a large number of alternatives. Furthermore, it allows for studying how strategic voting affects path-dependence. Strategic voting increases utilitarian efficiency even when there are more than three alternatives. The existence of a Condorcet winner does not guarantee path-independence if the voters engage in strategic voting under incomplete information. A criterion for evaluating path-dependence, the degree of path-dependence, is proposed, and the generalised model is used to study how strategic voting affects it. When there is a Condorcet winner, strategic voting inevitably increases the degree of path-dependence, but when there is no Condorcet winner, strategic voting may decrease path-dependence. Computer simulations show, however, that on average it increases the degree of path-dependence.
The most common argument against the use of rational choice models outside economics is that they make unrealistic assumptions about individual behavior. We argue that whether the falsity of assumptions matters in a given model depends on which factors are explanatorily relevant. Since the explanatory factors may vary from application to application, effective criticism of economic model building should be based on model-specific arguments showing how the result really depends on the false assumptions. However, some modeling results in imperialistic applications are relatively robust with respect to unrealistic assumptions.
Political science and economic science . . . make use of the same language, the same mode of abstraction, the same instruments of thought and the same method of reasoning. (Black 1998, 354) Proponents as well as opponents of economics imperialism agree that imperialism is a matter of unification: providing a unified framework for social scientific analysis. Uskali Mäki distinguishes between derivational and ontological unification and argues that the latter should serve as a constraint for the former. We explore whether, in the case of rational-choice political science, self-interested behavior can be seen as a common causal element and solution concepts as the common derivational element, and whether the former constrains the use of the latter. We find that this is not the case. Instead, what is common to economics and rational-choice political science is a set of research heuristics and a focus on institutions with similar structures and forms of organization.
This paper examines the welfare consequences of strategic voting under the Borda rule in a comparison of utilitarian efficiencies in simulated voting games under two behavioural assumptions: expected utility-maximising behaviour and sincere behaviour. Utilitarian efficiency is higher in the former than in the latter. Strategic voting increases utilitarian efficiency particularly if the distribution of preference intensities correlates with voter types. The Borda rule is shown to have two advantages: strategic voting is beneficial even if some but not all voter types engage in strategic behaviour, and even if the voters’ information is based on unreliable signals.
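The baseline against which the paper’s strategic behaviour is compared can be sketched in a few lines. The following toy simulation (my own illustration, not the paper’s model) computes the utilitarian efficiency of the sincere Borda winner: the winner’s total utility divided by the maximum achievable total utility.

```python
# Toy sketch of sincere Borda voting and utilitarian efficiency.
# This only covers the sincere benchmark; the paper's strategic
# (expected-utility-maximising) voters are not modelled here.
import random

def borda_winner(profiles):
    """Sincere Borda: each voter gives m-1, m-2, ..., 0 points by rank."""
    m = len(profiles[0])
    scores = [0.0] * m
    for ranking in profiles:
        for rank, alt in enumerate(ranking):
            scores[alt] += m - 1 - rank
    return max(range(m), key=lambda a: scores[a])

def utilitarian_efficiency(utilities):
    """Welfare of the sincere Borda winner relative to the utilitarian optimum."""
    m = len(utilities[0])
    profiles = [sorted(range(m), key=lambda a: -u[a]) for u in utilities]
    winner = borda_winner(profiles)
    totals = [sum(u[a] for u in utilities) for a in range(m)]
    return totals[winner] / max(totals)

random.seed(0)
utils = [[random.random() for _ in range(4)] for _ in range(25)]
eff = utilitarian_efficiency(utils)
print(round(eff, 3))  # 1.0 means the Borda winner is the utilitarian optimum
```

The paper’s claim is then a comparison: replacing sincere rankings with expected-utility-maximising ballots raises this efficiency measure on average.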
The distinguishability between pairs of quantum states, as measured by quantum fidelity, is formulated on phase space. The fidelity is physically interpreted as the probability that the pair are mistaken for each other upon a measurement. The mathematical representation is based on the concept of symplectic capacity in symplectic topology. The fidelity is the absolute square of the complex-valued overlap between the symplectic capacities of the pair of states. The symplectic capacity for a given state, onto any conjugate plane of degrees of freedom, is postulated to be bounded from below by the Gromov width h/2. This generalizes the Gibbs-Liouville theorem in classical mechanics, which states that the volume of a region of phase space is invariant under the Hamiltonian flow of the system, by constraining the shape of the flow. It is shown that for closed Hamiltonian systems, the Schrödinger equation is the mathematical representation for the conservation of fidelity.
The measurement problem is addressed from the viewpoint that it is the distinguishability between the state preparation and its quantum ensemble, i.e. the set of states with which it has a non-zero overlap, that is at the heart of the difference between classical and quantum measurements. The measure for the degree of distinguishability between pairs of quantum states, i.e. the quantum fidelity, is for this purpose generalized, by the application of the superposition principle, to the setting where there exists an arbitrary-dimensional quantum ensemble.
Models of decision-making under uncertainty gain much of their power from the specification of states so as to resolve all uncertainty. However, this specification can undermine the presumed observability of preferences on which axiomatic theories of decision-making are based. We introduce the notion of a contingency. Contingencies need not resolve all uncertainty, but preferences over functions from contingencies to outcomes are (at least in principle) observable. In sufficiently simple situations, states and contingencies coincide. In more challenging situations, the analyst must choose between sacrificing observability in order to harness the power of states that resolve all uncertainty, or preserving observability by working with contingencies.
This paper reports the first empirical investigation of the hypothesis that epistemic appraisals form part of the structure of concepts. To date, studies of concepts have focused on the way concepts encode properties of objects, and the way those features are used in categorisation and in other cognitive tasks. Philosophical considerations show the importance of also considering how a thinker assesses the epistemic value of beliefs and other cognitive resources, and in particular, concepts.
Can future robots and AI-systems have consciousness and genuinely human intelligence – or even better, superhuman intelligence? Is it possible for them to behave ethically? Here we look at these questions from the point of view of philosophy and AI, and argue that they are related: their answer hinges on the fulfillment of the same condition. Starting from an analysis of the concept of consciousness, we argue that the key capacity that computers and robots should possess in order to emulate human cognition and (ethical) consciousness is the capacity to learn and apply ‘coherent webs-of-theories’. We conjecture that where classic AI has been, in essence, ‘data-driven’, the greatest leap forward would be ‘theory-driven’ AI. We review prominent work in deep learning and cognitive neuroscience to back up this claim. This paper is an attempt at synthesis between recent work in philosophy, AI and cognitive science.
According to one narrative about the history of the concept of emergence in metaphysics and philosophy of science, when British emergentists initially appealed to emergence in the early twentieth century, they aimed to lay the groundwork for a philosophy of nature that was supposed to constitute a middle course between two antagonistic worldviews: reductive physicalism and non-physicalist dualism. While reductive physicalism aims to establish that all concrete goings-on, ranging from social phenomena to biological and chemical processes, are reducible to fundamental physical states and processes explicated by, and invoked in, an ideal physics, non-physicalist dualism holds that some phenomena resist any kind of physical reducibility, and are radically autonomous vis-à-vis physical goings-on. The emergentist idea is that a more plausible way of making sense of the natural world is through accepting that some phenomena resist physical reduction, but that is not to say that such phenomena “float free” of the physical. Such phenomena are taken to be “emergent”, suggesting that there is an emergence relation between the emergent entities and their so-called physical “emergence bases”.
In a recent paper, Justin D’Ambrosio (2020) has offered an empirical argument in support of a negative solution to the puzzle of Macbeth’s dagger—namely, the question of whether, in the famous scene from Shakespeare’s play, Macbeth sees a dagger in front of him. D’Ambrosio’s strategy consists in showing that “seeing” is not an existence-neutral verb; that is, that the way it is used in ordinary language is not neutral with respect to whether its complement exists. In this paper, we offer an empirical argument in favor of an existence-neutral reading of “seeing”. In particular, we argue that existence-neutral readings are readily available to language users. We thus call into question D’Ambrosio’s argument for the claim that Macbeth does not see a dagger. According to our positive solution, Macbeth sees a dagger, even though there is not a dagger in front of him.
Cheap talk has often been thought incapable of supporting the emergence of cooperation because costless signals, easily faked, are unlikely to be reliable (Zahavi and Zahavi, 1997). I show how, in a social network model of cheap talk with reinforcement learning, cheap talk does enable the emergence of cooperation, provided that individuals also temporally discount the past. This establishes one mechanism that suffices for moving a population of initially uncooperative individuals to a state of mutually beneficial cooperation even in the absence of formal institutions.
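The two ingredients named here, reinforcement learning plus temporal discounting of the past, can be sketched in isolation. The following single-agent toy (my own illustration, not the paper’s social network model) uses a Roth–Erev style update in which accumulated propensities are multiplied by a discount factor each round, so recent payoffs dominate and an early uncooperative start need not lock in.

```python
# Toy Roth-Erev reinforcement with discounting of past propensities.
# The paper's model is a multi-agent social network with cheap talk;
# this sketch only shows the learning-plus-discounting mechanism.
import random

def reinforce(rounds=2000, gamma=0.95, seed=1):
    """Return the learner's final probability of cooperating.
    'cooperate' pays 2, 'defect' pays 1; gamma < 1 discounts
    the accumulated propensities before each reinforcement."""
    random.seed(seed)
    prop = {"cooperate": 1.0, "defect": 1.0}
    for _ in range(rounds):
        total = sum(prop.values())
        act = "cooperate" if random.random() < prop["cooperate"] / total else "defect"
        payoff = 2.0 if act == "cooperate" else 1.0
        for a in prop:           # discount the past...
            prop[a] *= gamma
        prop[act] += payoff      # ...then reinforce the chosen action
    total = sum(prop.values())
    return prop["cooperate"] / total

print(round(reinforce(), 2))  # share of propensity on cooperating
```

In the paper’s setting the payoff to cooperating is itself conditional on signals received from neighbours, which is where cheap talk enters.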
This paper examines two questions about scientists’ search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue pace Weisberg and Muldoon (2009) that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue that they have not shown that there are epistemic reasons for the division of cognitive labor, and we identify the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.
On the basis of a coherently applied physicalist ontology, I will argue that there is nothing conceptual in logic and mathematics. What we usually call “mathematical concepts”—from the most exotic ones to the most “evident” ones—are just names tagged to various elements of mathematical formalism. In fact they have nothing to do with concepts, as they have nothing to do with the actual things; they can be completely ignored by both philosophy and physics.
In this note we provide a concise report on the complexity of the causal ordering problem, originally introduced by Simon to reason about causal dependencies implicit in systems of mathematical equations. We show that Simon’s classical algorithm to infer causal ordering is NP-hard, an intractability previously conjectured but never proven. We then present a detailed account based on Nayak’s suggested algorithmic solution (the best available), which is dominated by computing transitive closure, bounded in time by O(|V|·|S|), where S(E, V) is the input system structure composed of a set E of equations over a set V of variables and |S| is the number of variable appearances (density). We also comment on the potential of causal ordering for emerging applications in large-scale hypothesis management and analytics. Keywords: Causal ordering, Causal reasoning, Structural equations, Hypothesis management.
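The transitive-closure step that dominates the stated bound can be illustrated directly: a depth-first search from every vertex of the dependency graph touches each edge at most once per start vertex, giving the O(|V|·|S|) time behaviour. A minimal sketch (my own toy code, not the note’s implementation):

```python
# Transitive closure by DFS from every vertex: O(|V| * |S|) time,
# where |S| is the number of edges (the structure's density).
from collections import defaultdict

def transitive_closure(edges):
    """Map each vertex to the set of vertices reachable from it."""
    graph = defaultdict(list)
    vertices = set()
    for u, v in edges:
        graph[u].append(v)
        vertices.update((u, v))
    closure = {}
    for start in vertices:
        seen, stack = set(), [start]
        while stack:
            u = stack.pop()
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        closure[start] = seen
    return closure

# Causal chain x -> y -> z: z depends transitively on x.
c = transitive_closure([("x", "y"), ("y", "z")])
print(sorted(c["x"]))  # ['y', 'z']
```

In the causal ordering application, an edge u → v records that equation-cluster v can only be solved once u has been, and the closure gives the full set of upstream causes of each variable.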
Where do journal editors look to find someone to referee your manuscript (in the typical “double blind” review system in academic journals)? One obvious place to look is the reference list in your paper. …
Quantum entanglement poses a challenge to the traditional metaphysical view that an extrinsic property of an object is determined by its intrinsic properties. So structural realists might be tempted to cite quantum entanglement as evidence for structural realism. I argue, however, that quantum entanglement undermines structural realism. If we classify two entangled electrons as a single system, we can say that their spin properties are intrinsic properties of the system, and that we can have knowledge about these intrinsic properties. Specifically, we can know that the parts of the system are entangled and spatially separated from each other. In addition, the concept of supervenience neither illuminates quantum entanglement nor helps structural realism.
We look to mitonuclear ecology and the phenomenon of Mother’s Curse to argue that the sex of parents and offspring among populations of eukaryotic organisms, as well as the mitochondrial genome, ought to be taken into account in the conceptualization of evolutionary fitness. Subsequently, we show how characterizations of fitness considered by philosophers that do not take sex and the mitochondrial genome into account may suffer. Last, we reflect on the debate regarding the fundamentality of trait versus organism fitness and gesture at the idea that the former lies at the conceptual basis of evolutionary theory.
Should a scientist rely on methodological triangulation? Heesen et al. (Synthese 196(8):3067–3081, 2019) recently provided a convincing affirmative answer. However, their approach requires belief gambles if the evidence is discordant. We instead propose epistemically modest triangulation (EMT), according to which one should withhold judgement in such cases. We show that for a scientist in a methodologically diffident situation the expected utility of EMT is greater than that of Heesen et al.’s (2019) triangulation or that of using a single method. We also show that EMT is more appropriate for increasing epistemic trust in science. In short: triangulate, but do not gamble with evidence.
Early modern philosophy in Europe and Great Britain is awash with
discussions of the emotions: they figure not only in philosophical
psychology and related fields, but also in theories of epistemic
method, metaphysics, ethics, political theory and practical reasoning
in general. Moreover, interest in the emotions links philosophy with
work in other, sometimes unexpected areas, such as medicine, art,
literature, and practical guides on everything from child-rearing to
the treatment of subordinates. Because of the breadth of the topic,
this article can offer only an overview, but perhaps it will be enough
to give some idea how philosophically rich and challenging the
conception of the emotions was in this period.
I investigate the extent to which perspectival realism (PR) agrees with frequentist statistical methodology and philosophy, with an emphasis on J. Neyman’s views. Based on the example of the stopping rule problem, I argue that PR can naturally be associated with frequentist statistics. I then analyze Neyman’s conception of statistical inference and conclude that PR and Neyman’s conception are incongruent. Additionally, I show that Neyman’s philosophy is internally inconsistent. I conclude that Neyman’s frequentism weakens the philosophical validity and universality of PR as analyzed from the point of view of statistical methodology.
Penelope Maddy’s Second Philosophy is one of the most well-known approaches in recent philosophy of mathematics. She applies her second-philosophical method to analyze mathematical methodology by reconstructing historical cases in a setting of means-ends relations. However, outside of Maddy’s own work, this kind of methodological analysis has not yet been extensively used and analyzed. In the present work, we will make a first step in this direction. We develop a general framework that allows us to clarify the procedure and aims of the Second Philosopher’s investigation into set-theoretic methodology; provides a platform to analyze the Second Philosopher’s methods themselves; and can be applied to further questions in the philosophy of set theory.
To this end, I maintain that this property is individuated by its phenomenal roles, which can be internal – individuating the property per se – and external – determining further phenomenal or physical properties or states. I then argue that this individuation allows phenomenal roles to be organized in a necessarily asymmetrical net, thereby overcoming the circularity objection to dispositionalism. Finally, I provide reasons to argue that these roles satisfy modal fixity, as posited by Bird, and are not fundamental properties, contra Chalmers’ panpsychism. Thus, bodily pain can be considered a substantial dispositional property entrenched in non-fundamental laws of nature.
Joseph Henrich's ambitious tome, The WEIRDest People in the World, is driving me nuts. It's good enough and interesting enough that I want to read it. Henrich's general idea is that people in Western, Educated, Industrial, Rich, Democratic (WEIRD) societies differ psychologically from people in more traditionally structured societies, and that the family policies of the Catholic Church in medieval Europe lie at the historical root of this difference. …
Traditionally, the mechanism design literature has been primarily focused on settings where the bidders’ valuations are independent. However, in settings where valuations are correlated, much stronger results are possible. For example, the entire surplus of efficient allocations can be extracted as revenue. These stronger results are true, in theory, under generic conditions on parameter values. But in practice, they are rarely, if ever, implementable due to the stringent requirement that the mechanism designer knows the distribution of the bidders’ types exactly. In this work, we provide a computationally efficient and sample-efficient method for designing mechanisms that can robustly handle imprecise estimates of the distribution over bidder valuations. This method guarantees that the selected mechanism will perform at least as well as any ex-post mechanism with high probability. The mechanism also performs nearly optimally with sufficient information and correlation. Further, we show that when the distribution is not known, and must be estimated from samples from the true distribution, a sufficiently high degree of correlation is essential to implement optimal mechanisms. Finally, we demonstrate through simulations that this new mechanism design paradigm generates mechanisms that perform significantly better than traditional mechanism design techniques given sufficient samples.
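The idea of designing against an empirical estimate of the valuation distribution can be illustrated in a setting far simpler than the paper’s correlated one: a single bidder and a posted price chosen to maximize empirical revenue on observed samples. This sketch is my own toy illustration, not the paper’s mechanism.

```python
# Toy sample-based design: pick the posted price maximizing
# empirical revenue price * P(valuation >= price) on the samples.
import random

def best_posted_price(samples, grid):
    """Return the grid price with highest empirical revenue."""
    def revenue(p):
        return p * sum(v >= p for v in samples) / len(samples)
    return max(grid, key=revenue)

random.seed(0)
samples = [random.uniform(0, 1) for _ in range(10000)]  # observed valuations
grid = [i / 100 for i in range(1, 100)]
p = best_posted_price(samples, grid)
print(round(p, 2))  # for U[0,1] valuations the optimum is near 0.5
```

With enough samples the empirical optimum concentrates around the true one; the paper’s point is that in correlated multi-bidder settings this kind of robustness to estimation error requires both care and a sufficiently high degree of correlation.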
This article is concerned with one of the notable but forgotten research strands that developed out of French nineteenth-century positivism, a strand that turned attention to the study of scientific discovery and was actively pursued by French epistemologists around the turn of the twentieth century. I first sketch the context in which this research program emerged. I show that the program was a natural offshoot of French neopositivism; the latter was a current of twentieth-century thought that, even if implicitly, challenged the positivism of first-generation positivists such as Comte. I then survey what French epistemologists—including Ernest Naville, Élie Rabier, Pierre Duhem, Édouard Le Roy, Abel Rey, André Lalande, Théodule-Armand Ribot, Edmond Goblot, and Jacques Picard, among others—had to say about the logic, psychology, and sociology of discovery. My story demonstrates the inaccuracy of existing historical accounts of the philosophical study of scientific discovery.
It has been claimed that a unique feature of human culture is that it accumulates beneficial modifications over time. On the basis of a couple of methodological considerations, we here argue that, perhaps surprisingly, there is insufficient evidence for a proper test of this claim. And we indicate what further research would be needed to firmly establish the cumulativeness of human culture.