Humans and nonhuman great apes share a sense for intuitive statistics, making intuitive probability judgments based on proportional information. This ability is of tremendous importance, in particular for predicting the outcome of events using prior information and for inferring general regularities from limited numbers of observations. Already in infancy, humans functionally integrate intuitive statistics with other cognitive domains, rendering this type of reasoning a powerful tool for making rational decisions in a variety of contexts. Recent research suggests that chimpanzees are capable of one type of such cross-domain integration: the integration of statistical and social information. Here, we investigated whether apes can also integrate physical information into their statistical inferences. We tested 14 sanctuary-living chimpanzees in a new task setup consisting of two “gumball machine” apparatuses that were filled with different combinations of preferred and non-preferred food items. In four test conditions, subjects decided which of the two apparatuses they wanted to operate to receive a random sample, while we varied both the proportional composition of the food items and their spatial configuration above and below a barrier. To receive the more favorable sample, apes needed to integrate proportional and spatial information.
Formulations of Anselm’s ontological argument have been the subject of a number of recent studies. We examine these studies in light of Anselm’s text and (a) respond to criticisms that have surfaced in reaction to our earlier representations of the argument, (b) identify and defend a more refined representation of Anselm’s argument on the basis of new research, and (c) compare our representation of the argument, which analyzes that than which none greater can be conceived as a definite description, to a representation that analyzes it as an arbitrary name.
David Lewis’s 1983 identity theory of mind holds that:
For each mental state type M there is a causal role RM such that to be a state of type M is to fulfill RM. For each actually occurring mental state type M, the causal role RM is fulfilled by physical states and only physical states. …
Gauge symmetries provide one of the most puzzling examples of the applicability of mathematics in physics. This work focuses on the role of analogical reasoning in the gauge argument, motivated by Mark Steiner’s claim that the application of the gauge principle relies on a Pythagorean analogy whose success undermines naturalist philosophy. In this paper, we present two different views concerning the analogy between gravity, electromagnetism, and nuclear interactions, each providing a different philosophical response to the problem of the applicability of mathematics in the natural sciences. The first is based on an account of Weyl’s original work, which first gave rise to the gauge principle. Drawing on his later philosophical writings, we develop an idealist reading of the mathematical analogies in the gauge argument. On this view, mathematical analogies serve to ensure a conceptual harmony in our scientific account of nature. We further discuss the construction of Yang and Mills’s gauge theory in light of this idealist reading.
This paper examines some neglected Chrysippean fragments on insecure apprehension (κατάληψις). First, I present Chrysippus’ account of how non-Sages can begin to fortify their insecure apprehension and upgrade it into knowledge (ἐπιστήμη). Next, I reconstruct Chrysippus’ explanation of how sophisms and counter-arguments lead one to abandon one’s insecure apprehension. One such counter-argument originates in the sceptical Academy and targets the Stoic claim that insecure apprehension can be acquired on the basis of custom (συνήθεια). I show how Chrysippus could defend the possibility of custom-based apprehension, while also denying that there is custom-based knowledge.
Historically, the empirical study of phenotypic diversification has fallen into two rough camps: (1) "structuralist approaches" focusing on developmental constraint, bias, and innovation (with evo-devo at the core); and (2) "adaptationist approaches" focusing on adaptation and natural selection. Whilst debates, such as that surrounding the proposed "Extended" Evolutionary Synthesis, often juxtapose these two positions, this review focuses on the grey space in between.
We are hovering over an abyss. We don't notice it most of the time because for the most part we look out from the abyss rather than down into it. But an abyss it surely is – an abyss that has vast implications for the way we think, the way we live, and the way we interrelate. Socrates was aware of this abyss way back – some 2500 years ago. He noted that the technicians of his time thought they had a great deal of wisdom and knowledge because they had a great deal of technical expertise. But technical expertise is neither wisdom nor knowledge. It is technique. The technician knows what to do in order to make something work, but this in itself does not require a real understanding of how or why it works, nor of whether its workings are good. All it requires is familiarity with regular patterns of activity.
We present an algorithm for concept combination inspired and informed by research in cognitive and experimental psychology. From a symbolic AI perspective, dealing with concept combination requires coping with two competing needs: the need for compositionality and the need to account for typicality effects. Building on our previous work on weighted logic, the proposed algorithm can be seen as a step towards the management of both these needs. More precisely, following a proposal of Hampton, it combines two weighted Description Logic formulas, each defining a concept, using the following general strategy. First, it selects all the features needed for the combination, based on the logical distinction between necessary and impossible features. Second, it determines the threshold and assigns new weights to the features of the combined concept, trying to preserve the relevance and the necessity of the features. We illustrate how the algorithm works by exploiting some paradigmatic examples discussed in the cognitive literature.
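The two-step strategy described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' actual algorithm: the feature names, weights, the -1.0 "impossible" cutoff, and the dominance rule are all my assumptions, applied to the classic "pet fish" example from the cognitive literature.

```python
# Hypothetical sketch of a two-step concept-combination strategy:
# (1) feature selection via necessary vs. impossible features,
# (2) thresholding with re-weighting. All numbers are illustrative.

def combine(head, modifier, threshold=0.0):
    """Combine two concepts, each given as a dict feature -> weight.

    Positive weights mark more-or-less necessary features; weights at
    or below -1.0 mark impossible features, which block inheritance.
    """
    combined = {}
    for feature in set(head) | set(modifier):
        h, m = head.get(feature), modifier.get(feature)
        # Step 1: feature selection -- drop features that are
        # impossible for either concept.
        if (h is not None and h <= -1.0) or (m is not None and m <= -1.0):
            continue
        # The modifier's weight dominates where both concepts weigh in;
        # otherwise the single available weight is inherited.
        combined[feature] = m if m is not None else h
    # Step 2: thresholding -- keep only features whose new weight
    # clears the threshold, preserving relevant/necessary features.
    return {f: w for f, w in combined.items() if w > threshold}

fish = {"aquatic": 1.0, "has_scales": 0.7, "domestic": -0.3}
pet = {"domestic": 1.0, "affectionate": 0.6, "wild": -1.0}
pet_fish = combine(fish, pet)  # head "fish" modified by "pet"
# pet_fish keeps "aquatic", "has_scales", "affectionate", and the
# modifier's "domestic"; "wild" is blocked as an impossible feature.
```

The dominance rule crudely mimics the typicality effect that motivates the paper: a pet fish inherits "domestic" from "pet" even though it is atypical of fish in general.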
Comparisons of economic systems have changed over time. They began with idealized formulations of possible economic systems, as when Marx and Engels critiqued the proposals posed by the utopian socialists in the 19th century (while also critiquing the actually existing market capitalist system of the time), and continued through theoretical debates about the possible functioning and efficiency of alternative systems during the socialist calculation debate of the early to mid-20th century. Following World War II and the emergence of the Cold War, the focus moved toward comparing growth and other economic variables, along with political factors, in the two leading economies: the market capitalist United States and the command socialist Soviet Union. Following the breakup of the latter in 1991, some said that comparative economics had died, as the emphasis moved to the dynamics of formerly command socialist economies as they transitioned towards market capitalism. However, as this process largely ended after 2000, broader approaches developed that consider a wider array of institutional and cultural variables and structures and their multiple combinations, as empirical analysis expanded and deeper, more complicated varieties of economic systems came to be studied. These forms include not only further development of varieties of capitalism and new comparative economics, but also such new forms as the new traditional economy.
It seems to me appropriate to begin this discussion with a short résumé of Saul Kripke's contributions to the massive Wittgensteinian library focused on rule following. Wittgenstein states in PI 201: “This was our paradox: no course of action could be determined by a rule, because every course of action can be made out to accord with the rule. The answer was: if everything can be made out to accord with the rule, then it can also be made out to conflict with it. And so there would be neither accord nor conflict here.”
A theorem on partitioning a randomly selected large population into stationary and non-stationary components by using a property of the stationary population identity is stated and proved. The methods of partitioning demonstrated are original and are helpful in real-world situations where age-specific data are available. Applications of this theorem for practical purposes are summarized at the end.
In this paper, I distinguish three general approaches to public trust in science, which I call the individual approach, the semi-social approach, and the social approach, and critically examine their proposed solutions to what I call the problem of harmful distrust. I argue that, despite their differences, the individual and the semi-social approaches see the solution to the problem of harmful distrust as consisting primarily in trying to persuade individual citizens to trust science, and that both approaches face two general problems, which I call the problem of overidealizing science and the problem of overburdening citizens. I then argue that, in order to avoid these problems, we need to embrace a (thoroughly) social approach to public trust in science, which emphasizes the social dimensions of the reception, transmission, and uptake of scientific knowledge in society and the ways in which social forces influence, both positively and negatively, the trustworthiness of science.
the nature and value of experimental philosophy; my empirical work on the not-especially-ethical behavior of ethics professors; how there's a type of intellectual integrity in embracing ideals that you don't quite live up to; the nature of belief and how to think about cases where your sincere judgments don't align well with your everyday behavior; the nature of consciousness and why something that seems "crazy" must be true about consciousness; consciousness in non-human animals; the value of philosophy. It's a pretty good introduction, I think, to some of my central philosophical ideas and how they hang together. …
Here’s an amusing question. Let’s say that I took all the atoms in the observable universe and shuffled their positions by independently choosing positions for them uniformly at random throughout the volume of the observable universe. …
According to classical theism, the universe depends on God in a way that goes beyond mere (efficient) causation. I have previously argued that this ‘deep dependence’ of the universe on God is best understood as a type of grounding. In a recent paper in this journal, Aaron Segal argues that this doctrine of deep dependence causes problems for creaturely free will: if our choices are grounded in facts about God, and we have no control over these facts, then we do not control our choices and are therefore not free. This amounts to a grounding analogue of the Consequence Argument for the incompatibility of free will and determinism. If successful, it would have application beyond classical theism: similar concerns would apply to any view that takes our choices to be grounded in a deeper reality which is beyond our control. However, I show that the argument is not successful. Segal’s Grounding Consequence Argument is so closely analogous to the Causal Consequence Argument that any response to the one provides a response to the other. As a result, if you don’t think that prior causes (whether deterministic or indeterministic) undermine free will, you shouldn’t think that prior grounds undermine free will.
Studying consciousness requires contrasting conscious and unconscious perception. While many studies have reported unconscious perceptual effects, recent work has questioned whether such effects are genuinely unconscious, or whether they are due to weak conscious perception. Some philosophers and psychologists have reacted by denying that there is such a thing as unconscious perception, or by holding that unconscious perception has been previously overestimated. This article has two parts. In the first part, I argue that the most significant attack on unconscious perception commits the criterion content fallacy: the fallacy of interpreting evidence that observers were conscious of something as evidence that they were conscious of the task-relevant features of the stimuli. In the second part, I contend that the criterion content fallacy is prevalent in consciousness research. For this reason, I hold that if unconscious perception exists, scientists studying consciousness could routinely underestimate it. I conclude with methodological recommendations for moving the debate forward.
Against the orthodox view of the Nash equilibrium as “the embodiment of the idea that economic agents are rational” (Aumann, 1985, p. 43), some theorists have proposed ‘non-classical’ concepts of rationality in games, arguing that rational agents should be capable of improving upon inefficient equilibrium outcomes. This paper considers some implications of these proposals for economic theory by focusing on institutional design. I argue that revisionist concepts of rationality conflict with the constraint that institutions should be designed to be incentive-compatible, that is, that they should implement social goals in equilibrium. To resolve this conflict, proponents of revisionist concepts face a choice between three options: (1) reject incentive compatibility as a general constraint, (2) deny that individuals interacting through the designed institutions are rational, or (3) accept that their concepts do not cover institutional design. I critically discuss these options and argue that a more inclusive concept of rationality, e.g. the one provided by Robert Sugden’s version of team reasoning, holds the most promise for the non-classical project, yielding a novel argument for incentive compatibility as a general constraint.
The aim of this article is to present a variant of epistemic relativism that is compatible with a language practice especially popular among scientists. We argue that in science, but also in philosophy, propositions are naturally ‘relativized’ to sets of hypotheses or theories, and that a similar language practice allows one to interpret canonical problems of epistemology. We apply the model to Gettier’s problem, and derive a condition under which counterexamples à la Gettier to Plato’s account of knowledge do not arise. We argue that these findings give further content to a well-known result by Zagzebski (1994). Our interpretation points to a type of epistemic relativism having links with contextualism in epistemology, and perspectivism in philosophy of science.
From the Hilbert space formalism we note that five simple conditions are satisfied by the orthogonality relation between the (pure) states of a quantum system. We argue, by proving a mathematical theorem, that they capture the essentials of this relation. Based on this, we investigate the rationale behind these conditions in the form of six physical hypotheses. Along the way, we reveal an implicit theoretical assumption in theories of physics and prove a theorem which formalizes the idea that the Superposition Principle makes quantum physics different from classical physics. The work follows the paradigm of the mathematical foundations of quantum theory, which, I argue by methodological reflection, exemplifies a formal approach to analysing concepts in theories.
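As a purely illustrative aside (the paper's five conditions are not spelled out in this abstract), the relation in question is orthogonality of unit vectors under the Hermitian inner product, and two properties it certainly satisfies are symmetry and irreflexivity:

```python
# Pure states as unit vectors in C^n (lists of complex numbers);
# two states are orthogonal iff their inner product vanishes.

def inner(u, v):
    """Hermitian inner product on C^n."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def orthogonal(u, v):
    return abs(inner(u, v)) < 1e-12

zero = [1 + 0j, 0j]                    # |0>
one = [0j, 1 + 0j]                     # |1>
plus = [2**-0.5 + 0j, 2**-0.5 + 0j]    # (|0> + |1>)/sqrt(2), a superposition

assert orthogonal(zero, one) and orthogonal(one, zero)  # symmetric
assert not orthogonal(zero, zero)                       # irreflexive
assert not orthogonal(plus, zero) and not orthogonal(plus, one)
```

The last line hints at the role of superposition: `plus` is orthogonal to neither basis state, a situation with no classical analogue among deterministic pure states.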
This paper defends the view that we have special relationship duties that do not derive from our moral duties. Our special relationship duties, I argue, are grounded in what I call close relationships. Sharing a close relationship with another person, I suggest, requires that both people conceive of themselves as being motivated to promote the other’s interests. So, staying true to oneself demands being committed to promoting the interests of those with whom we share a close relationship. Finally, I show that the proposed account of special relationship duties circumvents two problems facing self-conception accounts of special relationship duties.
According to Augustine, abstract objects are ideas in the Mind of God. Because numbers are a type of abstract object, it would follow that numbers are ideas in the Mind of God. Let us call such a view the Augustinian View of Numbers (AVN). In this paper, I present a formal theory for AVN. The theory stems from the symmetry conception of God as it appears in Studtmann (2021). I show that Robinson’s Arithmetic is a conservative extension of the axioms in Studtmann’s original paper. The extension is made possible by identifying the set of natural numbers with God, 0 with Being, and the successor function with the essence function. The resulting theory can then be augmented to include Peano Arithmetic by adding a set-theoretic version of induction and a comprehension schema restricted to arithmetically definable properties. In addition to these formal matters, the paper provides a characterization of the mind of God. According to the characterization, the Being essences that constitute God’s mind act as both numbers and representations – each (except for Being itself) has all the properties of some number and encodes all the properties of that number’s predecessor. The conception of God that emerges by the end of the discussion is a conception of an infinite, ineffable, axiologically and metaphysically ultimate entity that contains objects that not only serve as numbers but also encode information about each other.
Epistemologists spend a great deal of time thinking about how we should respond to our evidence. They spend far less time thinking about the ways that evidence can be acquired in the first place. This is an oversight. Some ways of acquiring evidence are better than others. Many normative epistemologies struggle to accommodate this fact. In this article I develop one that can and does. I identify a phenomenon – epistemic feedback loops – in which evidence acquisition has gone awry, with the result that even beliefs based on the evidence are irrational. Examples include evidence acquired under the influence of confirmation bias and evidence acquired under the influence of cognitively penetrated experiences caused by implicit bias. I then develop a theoretical framework which enables us to understand why beliefs that are the outputs of epistemic feedback loops are irrational. Finally, I argue that many popular approaches to epistemic normativity may need to be abandoned on the grounds that they cannot comfortably explain feedback loops. The scope of this last claim is broad: it includes almost all contemporary theories of justified/rational belief and of the epistemology of cognitive penetration.
Counterexamples to Good's Theorem
Posted on Tuesday, 19 Oct 2021. Good (1967) famously "proved" that the expected utility of an informed decision is always at least as great as the expected utility of an uninformed decision. …
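In a toy decision problem (the numbers are mine, not from the post), Good's inequality is easy to check: with a cost-free, perfectly reliable signal, deciding after looking does at least as well in expectation as deciding blind.

```python
# Toy check of Good's claim: expected utility of an informed decision
# is at least that of an uninformed decision. States, acts, utilities,
# and priors below are illustrative assumptions.

priors = {"s1": 0.5, "s2": 0.5}
utility = {("a", "s1"): 1.0, ("a", "s2"): 0.0,
           ("b", "s1"): 0.0, ("b", "s2"): 1.0}

def eu(act, beliefs):
    """Expected utility of an act given a probability distribution over states."""
    return sum(p * utility[(act, s)] for s, p in beliefs.items())

# Uninformed: pick the act maximizing expected utility under the prior.
uninformed = max(eu(a, priors) for a in ("a", "b"))

# Informed: a perfectly reliable signal reveals the state; in each state
# we pick the best act, then average over how likely each state is.
informed = sum(p * max(utility[(a, s)] for a in ("a", "b"))
               for s, p in priors.items())

assert informed >= uninformed  # here 1.0 >= 0.5, as Good predicts
```

The counterexamples the post goes on to discuss presumably relax one of the theorem's background assumptions (e.g. free evidence or standard expected-utility maximization); this snippet only illustrates the baseline result.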
For a long time I’ve been inclining towards relationalism about space (or more generally spacetime), but lately my intuitions have been shifting. And here is an argument that seems to move me pretty far from it. …
So, you want to start a revolution. There is something significant in the world around you that is wrong: unjust, oppressive, unfair, unequal. Half measures won’t suffice. Something dramatic, revolutionary, is required. You have ideas. You might have a plan. But although you are certain of the wrong around you, you are not certain of the path forward. You have some doubt about the plan, whether it will work, its moral costs, and whether there are problems you cannot yet see. You have revolutionary doubt. That is good. We need revolutions. But revolutions should not be only (or ever?) conducted by the certain. This article will help you to nourish that doubt, to see why it is almost always epistemically appropriate if also almost always difficult to maintain, to learn how to live and act with it, and to give it its due without it leading to paralysis and inaction.
Given Tarski’s version of Euclidean straightedge and compass geometry, it is shown how to express construction theorems, and shown that for any purely existential theorem there is a construction theorem implying it. Three questions about possible extensions of this result are then listed.
A standard puzzle for the opponent of the Principle of Sufficient Reason (PSR) is to explain why we don’t observe objects coming into existence ex nihilo. Here is a thought that I think hasn’t been explored enough. …
A mathematical problem is computable if it can be solved in principle by a computing device. Some common synonyms for “computable” are “solvable”, “decidable”, and “recursive”. Hilbert believed that all mathematical problems were solvable, but in the 1930s Gödel, Turing, and Church showed that this is not the case. There is an extensive study and classification of which mathematical problems are computable and which are not. In addition, there is an extensive classification of computable problems into computational complexity classes according to how much computation—as a function of the size of the problem instance—is needed to answer that instance.
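The Gödel–Turing–Church result can be illustrated with the standard diagonal construction. Here is a minimal, runnable sketch (the names `halts`, `make_diagonal` are illustrative): for any candidate halting decider, we can build a program it misclassifies.

```python
# Turing's diagonal argument in miniature: no total function halts(f)
# can correctly decide, for every zero-argument callable f, whether
# f() eventually halts.

def make_diagonal(halts):
    """Given a candidate halting decider, build a program it must get wrong."""
    def g():
        if halts(g):      # if the decider says g halts...
            while True:   # ...g loops forever;
                pass
        # otherwise g halts immediately.
    return g

def is_wrong_about(halts):
    """Demonstrate that `halts` misclassifies its own diagonal program."""
    g = make_diagonal(halts)
    claim = halts(g)
    # If the decider claims g halts, g actually loops (so we must not
    # run it); if it claims g loops, running g() returns immediately.
    if not claim:
        g()  # halts at once, contradicting the decider's claim
    return True  # in both branches the decider was wrong about g

# Two naive candidate deciders; each fails on its own diagonal program.
assert is_wrong_about(lambda f: True)
assert is_wrong_about(lambda f: False)
```

The same self-reference trick works against any candidate decider, however sophisticated, which is why the halting problem is undecidable rather than merely hard.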
Conceptual engineering is thought to face an ‘implementation challenge’: the challenge of securing uptake of engineered concepts. But is the fact that implementation is challenging really a defect to be overcome? What kind of picture of political life would be implied by making conceptual engineering easy to implement? We contend that the ambition to obviate the implementation challenge goes against the very idea of liberal democratic politics. On the picture we draw, the implementation challenge can be overcome by institutionalizing control over conceptual uptake, and there are contexts—such as professions that depend on coordinated conceptual innovation—in which there are good reasons to institutionalize control in this fashion. But the liberal fear of this power to control conceptual uptake ending up in the wrong hands, combined with the democratic demand for freedom of thought as a precondition of genuine consent, yields a liberal democratic rationale for keeping implementation challenging.
Do aesthetic reasons have normative authority over us? Could there be anything like an aesthetic ‘ought’ or an aesthetic obligation? I argue that there are no aesthetic obligations. We have reasons to act certain ways regarding various aesthetic objects – most notably, reasons to attend to and appreciate those objects. But, I argue, these reasons never amount to duties. This is because aesthetic reasons are merely evaluative, not deontic. They can only entice us or invite us – they can never compel us. Beauty gives us goods without shoulds.