In The Divine Fractal, Studtmann (2021) introduced a novel conception of God, what he calls the symmetry conception, and showed that such a conception not only can be formalized within extensional non-well-founded set theory but also entails the Thomistic view that God is identical to her essence. In this paper, I show that Studtmann’s symmetry conception of God can be integrated into a recent approach to quantum gravity, namely causal set theory. The theory that results has two significant consequences. First, God is the necessarily existing set of spacetime events. Second, the square root of the probability that a spacetime event randomly chosen from …
We all enjoy rolling our eyes at bad maps; there are entire Facebook groups and Twitter accounts devoted to them, with hundreds of thousands of followers. Bad maps misrepresent and mislead. They skew and hide important truths and misdirect our attention. Often, they are self-serving, promoting the values of their makers. The problem is, it is by no means easy to delineate standards for what counts as a good map, or to explain how they contrast with bad maps. A seemingly simple answer is that a bad map is one that misrepresents, while a good map represents accurately. But what counts as misrepresentation? Roads are not literally black lines; the earth is not literally marked by political borders; all maps use nonliteral representational conventions. Maps are never exact copies of what they map. It is a substantial epistemological problem to demarcate the difference between nonliteral, distorted, and partial maps that serve legitimate epistemic ends and those that irresponsibly mislead. Merely insisting that a map must be accurate will not help us much in distinguishing good from bad maps.
I don’t mean anything fancy or technical by ‘reference’, just the concrete phenomenon of picking out a certain object, or certain objects, with – paradigmatically – words. If I ask, ‘Who are you referring to (by that phrase)?’, then what I want to know is your reference (with that phrase) on that occasion. I could also ask, ‘When are you referring to?’, hoping for clarification on the temporal reference of an uttered sentence or clause. Or if I say, ‘Which time are you referring to?’, then I’m asking instead for the particular occasion you are talking about (when the sentence you uttered could describe multiple occasions equally well). Reference to hypothetical, unlikely, or counterfactual scenarios does not, perhaps, have a corresponding idiom in English, but can be established by clear analogies with reference to individuals and times (Stone 1997).
Critics of John Norton’s Material Theory of Induction (MTI) have mostly focused on its relation to the Humean Problem of Induction (Okasha, 2005, p. 250). However, Hume’s challenge is just one of many philosophical issues about induction. Thomas Kelly (2010, pp. 757-758) plausibly argues that the natural place to challenge Norton’s theory is where apparently rational inductions occur without background knowledge of local facts, because Norton claims that such knowledge is essential for reasonable inductive inference.
An experiment by Proietti et al. purporting to instantiate the ‘Wigner’s Friend’ thought experiment is discussed. It is pointed out that the stated implications of the experiment regarding the alleged irreconcilability of facts attributed to different observers warrant critical review. In particular, violation of a Clauser-Horne-Shimony-Holt inequality by the experimental data actually shows that the attribution of measurement outcomes to the “Friends” (modeled by internal photons undergoing unitary interactions) is erroneous. An elementary but often overlooked result regarding improper mixtures is adduced in support of this assessment, and a basic logical error in the analysis leading to the authors’ ontological claims that different observers are subject to irreconcilable ‘facts’ is identified. A counterexample is provided which refutes the popular notion that quantum theory leads to ‘relative facts’ that never manifest as empirical inconsistencies. It is further noted that under an assumption of unbroken unitarity, no measurement correlation can ever yield an outcome, since all systems remain in improper mixtures, and attributing a definite but unknown outcome contradicts their composite pure state. It is pointed out that there already exists a solution to this conundrum in the form of an alternative formulation of quantum theory, which accounts for the data showing that no outcomes occurred at the interior entangled photon level and also predicts that outcomes can and do occur at the exterior “super-observer” level in this type of experiment.
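The point about improper mixtures can be illustrated numerically: tracing out half of an entangled pure state yields a maximally mixed reduced state, and reading that mixture as ignorance of a definite but unknown outcome contradicts the purity of the composite state. A minimal NumPy sketch (the Bell state stands in for the entangled photon pair; this is an illustration, not the experiment's actual state):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): a pure composite state.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())          # 4x4 density matrix

# Partial trace over the second qubit gives the reduced state of the first.
rho_A = np.zeros((2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        for k in range(2):
            rho_A[i, j] += rho[2 * i + k, 2 * j + k]

purity_total = np.trace(rho @ rho).real   # 1.0: the composite state is pure
purity_A = np.trace(rho_A @ rho_A).real   # 0.5: maximally mixed (improper)

print(purity_total, purity_A)
```

The reduced state equals I/2, but because it arises by tracing out part of a pure state it is an improper mixture: there is no fact about which outcome "really" occurred for the subsystem.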
We suggest that four of the deepest problems in science are closely related and may share a common resolution. These are 1) the foundational problems in quantum theory, 2) the problem of quantum gravity, 3) the role of qualia and conscious awareness in nature, 4) the nature of time. We begin by proposing an answer to the question of what a quantum event is: an event is a process in which an aspect of the world which has been indefinite becomes definite. We build from this an architecture of the world in which qualia are real and consequential and time is active, fundamental and irreversible.
In this article, we describe what cryptocurrency is, how it works, and how it relates to familiar conceptions of and questions about money. We then show how normative questions about monetary policy find new expression in Bitcoin and other cryptocurrencies. These questions can play a role in addressing not just what money is, but what it should be. A guiding theme in our discussion is that progress here requires a mixed approach that integrates philosophical tools with the purely technical results of disciplines like computer science and economics.
As epistemically limited agents, we are prone to mistakes. Sometimes these mistakes are about what we are morally required or permitted to do. Such mistakes about moral matters can come about in two ways. They can result from ignorance about nonmoral features of the world—features on which the moral status of our action supervenes. I may be oblivious that the cake I am offering you contains poison and that’s why I believe it’s permissible for … But even knowing all relevant nonmoral facts does not eliminate the possibility of error. Many moral questions are hard and rife with opportunities for mistake. We may fail to recognize facts as morally relevant, misjudge their significance, fail to reason properly about how various competing morally relevant factors weigh up, or be guided in our deliberations by … Let’s reserve the term moral ignorance to refer to this second kind of moral error—moral error that does not derive from ignorance about nonmoral facts. I am construing ignorance and error broadly, to cover both false belief and absence of true belief, though my focus will be on the former. It’s uncontroversial that nonmoral ignorance can function as an excuse. When accused of poisoning my friend, I can appeal to the fact that I didn’t know that the cake contained poison to defend myself. And insofar as I really did not know and my ignorance reflects neither recklessness nor negligence on my part, that defense is a good one.
For natural selection to progress, there must be a sufficiently large evolutionary space to explore. In systems with template-based replication, this space is combinatorially large in the length of the information-carrying molecules. Previous work has shown that it is also possible for heredity to occur in much less structured chemistries; this opens the question of how the structure of a reaction network relates to the number of heritable states it can support, and in particular, how the number of heritable states scales with system size for a given network topology. Answering this question would allow us to map out the space of possible chemical mechanisms for heredity, and to identify where they might occur within the space of organic chemistries that could have existed on the early Earth. We show that by linearising around a fixed point in a chemical reaction network and solving the corresponding eigenvalue problem, it is possible to detect the set of independent autocatalytic subnetworks that can operate in the vicinity of that point. We investigate an upper bound on the scaling of the number of such “autocatalytic cores” with the number of distinct chemical species, and show that the number of cores scales at best as log N in the case of unstructured networks, but that adding a strong energy constraint on the network topology allows it to scale linearly, which is the best possible case.
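The linearisation procedure can be sketched on a toy network: compute the Jacobian of the rate equations at a fixed point and count the positive eigenvalues, each of which flags an independent self-amplifying (autocatalytic) mode. The network, rate constants, and fixed point below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Toy chemistry with a chemostatted food species F and two candidate
# replicators A and B (an assumed example network):
#   A + F -> 2A (rate k1),  A -> waste (rate d1)
#   B + F -> 2B (rate k2),  B -> waste (rate d2)
F, k1, d1, k2, d2 = 1.0, 2.0, 0.5, 1.5, 0.7

def rates(x):
    """Net production rates d[A]/dt, d[B]/dt at concentrations x = (a, b)."""
    a, b = x
    return np.array([k1 * F * a - d1 * a,
                     k2 * F * b - d2 * b])

# Linearise around the empty fixed point x* = 0 by central differences.
x_star, eps = np.zeros(2), 1e-6
J = np.zeros((2, 2))
for j in range(2):
    dx = np.zeros(2)
    dx[j] = eps
    J[:, j] = (rates(x_star + dx) - rates(x_star - dx)) / (2 * eps)

eigvals = np.linalg.eigvals(J).real
n_cores = int((eigvals > 0).sum())  # positive eigenvalues: growing modes
print(n_cores)
```

Here the two eigenvalues are k1*F - d1 = 1.5 and k2*F - d2 = 0.8, so the method detects two independent autocatalytic modes, matching the two decoupled replicators.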
The world is wealthier than it has ever been but billions of people continue to live in great poverty. Two questions follow from this: Do richer countries have a moral responsibility to do something about this? …
Since the pandemic began, I've been meeting people, apart from my family, mainly through Zoom. I see their faces on a screen. I hear their voices through headphones. This is what it has become to interact with someone. …
Humeanism – the idea that there are no necessary connections between distinct existences – and Nomic Essentialism – the idea that properties essentially play the nomic roles that they do – are two of the most important and influential positions in the metaphysics of science. Traditionally, it has been thought that these positions were incompatible competitors.
We propose six axioms concerning when one candidate should defeat another in a democratic election involving two or more candidates. Five of the axioms are widely satisfied by known voting procedures. The sixth axiom is a weakening of Kenneth Arrow’s famous condition of the Independence of Irrelevant Alternatives (IIA). We call this weakening Coherent IIA. We prove that the five axioms plus Coherent IIA single out a voting procedure studied in our recent work: Split Cycle. In particular, Split Cycle is the most resolute voting procedure satisfying the six axioms for democratic defeat. In addition, we analyze how Split Cycle escapes Arrow’s Impossibility Theorem and related impossibility results.
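Split Cycle admits a compact operational reading: a defeats b just in case the margin of a over b is positive and exceeds the strength (the minimum margin) of every chain of positive margins leading from b back to a, so that the weakest edge of each majority cycle never counts as a defeat. A minimal sketch of this formulation, with an invented three-candidate margin matrix (not an example from the paper):

```python
# Split Cycle defeat: i defeats j iff margin(i, j) > 0 and margin(i, j)
# exceeds the max-min strength of every positive-margin path from j back
# to i, i.e. the edge i->j is not a weakest edge of any majority cycle.
# M[i][j] = margin of i over j (skew-symmetric); an invented example.
M = [[0, 1, -5],
     [-1, 0, 3],
     [5, -3, 0]]
n = len(M)

# Widest-path (max-min) strengths over positive-margin edges,
# computed by a Floyd-Warshall-style update.
S = [[M[i][j] if M[i][j] > 0 else 0 for j in range(n)] for i in range(n)]
for k in range(n):
    for i in range(n):
        for j in range(n):
            if i != j:
                S[i][j] = max(S[i][j], min(S[i][k], S[k][j]))

defeats = {(i, j) for i in range(n) for j in range(n)
           if M[i][j] > 0 and M[i][j] > S[j][i]}
winners = [i for i in range(n) if not any(d[1] == i for d in defeats)]
print(winners)
```

In this example the majority cycle 0 > 1 > 2 > 0 has weakest margin 1 on the edge 0 -> 1, so that edge is split (discarded as a defeat) and candidate 1 emerges undefeated.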
Decorated cospans are a framework for studying open systems invented by Brendan Fong. Since I’m now visiting the institute he and David Spivak set up—the Topos Institute—it was a great time to give a talk explaining the history of decorated cospans, their problems, and how those problems have been solved:
Structured vs Decorated Cospans
Recent research on bacteria and other microorganisms has provided interesting insights into the nature of life, cooperation, evolution, individuality, or species. In this paper, I focus on the capacity of bacteria to produce molecules that are usually classified as 'signals', and I defend two claims. First, I argue that certain interactions between bacteria should actually qualify as genuine forms of communication. Second, I use this case study to revise our general theories of signaling. Among other things, I argue that a plausible requirement for a state to qualify as a signal is that it is a minimal cause.
The thesis of this paper is that, if it is construed individualistically, epistemic justification does not capture the conditions that philosophers of science would impose on justified belief in a scientific hypothesis. The difficulty arises from beliefs acquired through testimony. From this I derive a lesson that epistemologists generally, and epistemologists of testimony in particular, should learn from philosophers of science: we ought to repudiate epistemic individualism and move towards a more fully social epistemology.
Rhinos are one of the largest and most charismatic land animals in existence. Second in size among land mammals only to elephants, all five species of the family Rhinocerotidae are in grave danger primarily due to poaching. As such they are the subject of intense international attention in conservation science. In what follows I’ll focus on the African white rhino, which is comprised of two subspecies, the southern white rhino (SWR, Ceratotherium simum simum) and the northern white rhino (NWR, Ceratotherium simum cottoni). The SWR faced a tight population bottleneck roughly a century ago, but due to conservation efforts it has rebounded and currently numbers ∼20,000 individuals, most residing in South Africa. The NWR, by contrast, has vanished from the wild and is arguably the most endangered mammal in the world. Two females, 20-year-old Fatu and her 30-year-old mother, Najin, are the sole surviving NWRs, both living in the Ol Pejeta Conservancy in Kenya. The last male, Sudan, died in 2018. Neither surviving female is a viable mother. As a result, the NWR is functionally extinct.
The causal efficacy of a material system is usually thought to be produced by the law-like actions and interactions of its constituents. Here, a specific system is constructed and explained that produces a cause that cannot be understood in this way, but instead has novel and autonomous efficacy. The construction establishes a proof-of-feasibility of strong emergence. The system works by utilizing randomness in a targeted and cyclical way, and by relying on sustained evolution by natural selection. It is not vulnerable to standard arguments against strong emergence, in particular ones that assume that the physical realm is causally closed. Moreover, it does not suffer from epiphenomenalism or causal overdetermination. The system uses only standard material components and processes, and is fully consistent with naturalism. It is discussed whether the emergent cause can still be viewed as ‘material’ in the way that term is commonly understood.
Striking new Beeping Busy Beaver champion
For the past few days, I was bummed about the sooner-than-expected loss of Steven Weinberg. Even after putting up my post, I spent hours just watching old interviews with Steve on YouTube and reading his old essays for gems of insight that I’d missed. …
Humeans identify the laws of nature with universal generalizations that systematize rather than govern the particular matters of fact. Humeanism is frequently accused of circularity: laws explain their instances, but Humean laws are, in turn, grounded by those instances. Unfortunately, this argument trades on controversial assumptions about grounding and explanation that Humeans routinely reject. However, recently an ostensibly semantic circularity objection has been offered, which seeks to avoid reading such assumptions into the Humean view. This paper argues that the new semantic version tacitly relies on the familiar metaphysical one and, therefore, it ultimately brings nothing new to the table.
One of the few points of agreement between most theists and non-theists working on the problem of evil is that the existence of a perfect God is incompatible with the existence of pointless evil. In a series of influential papers, however, Peter van Inwagen has argued that careful attention to the reasoning behind this claim reveals fatal difficulties related to the Sorites Paradox. In this paper, I explain van Inwagen’s appeal to sorites reasoning, distinguish between two different arguments in his work, and argue that they both commit the same so-far-unnoticed mistake.
A cause is regularly followed by its effect. This idea is at the core of regularity theories of causation. The most influential regularity theory can be found in Hume (1739). The theory was refined by Mill (1843), who insisted that the relevant regularities are laws of nature. Further refinements enjoyed popularity until David Lewis (1973) criticized the regularity theory and proposed an analysis of causation in terms of counterfactuals (see the entry on counterfactual theories of causation). Since then, counterfactual theories of causation have risen in popularity, while regularity theories have increasingly fallen into disuse.
This post is aimed primarily at Catholic readers, and especially Catholic readers given to a certain mode of scrupulosity (a disorder in which one is unduly and irrationally worried about one’s sinfulness) that I will describe further down. …
I think the hardest problem for divine simplicity is the problem of God’s contingent beliefs. In our world, God believes there are horses. In a horseless world, God doesn’t believe there are horses. Yet according to divine simplicity, God has the same intrinsic features in both the horsey and the horseless worlds. …
Much of the philosophical literature on the relations between thermodynamics and statistical mechanics has to do with the process of relaxation to equilibrium. There has been comparatively little discussion of how to obtain what have traditionally been recognized as laws of thermodynamics, the zeroth, first, and second laws, from statistical mechanics. This note is about how to obtain analogues of those laws as theorems of statistical mechanics. The difference between the zeroth and second laws of thermodynamics and their statistical mechanical analogues is that the statistical mechanical laws are probabilistically qualified; what the thermodynamical laws say will happen, their statistical mechanical analogues say will probably happen. For this reason, it is entirely appropriate — indeed, virtually inevitable — for the quantities that are statistical mechanical analogues of temperature and entropy to be attributes of probability distributions. I close with some remarks about the relations between so-called “Gibbsian” and “Boltzmannian” methods in statistical mechanics.
The Everett interpretation of quantum mechanics is, inter alia, an interpretation of objective probability: an account of what probability really is. In this respect, it is unlike other realist interpretations of quantum theory or indeed any proposed modification to quantum mechanics (like pilot-wave theory and dynamical collapse theories); in none of these is probability itself the locus of inquiry. As for why the Everett interpretation is so engaged with the question of probability, it is in its nature: its starting point is the unitary, deterministic equations of quantum mechanics, and it introduces no hidden variables with values unknown.
In The Morality of Defensive Force, Quong defends a powerful account of the grounds and conditions under which an agent may justifiably inflict serious harm on another person. In this paper, I examine Quong’s account of the necessity constraint on permissible harming—the RESCUE account. I argue that RESCUE does not succeed. Section 2 describes RESCUE. Section 3 raises some worries about Quong’s conceptual construal of the right to be rescued and its attendant duties. Section 4 argues that RESCUE does not deliver the verdicts that Quong wants. In those sections, I assume that the attacker is culpable for the threat he poses. Section 5 considers cases where the attacker, though responsible for the wrongful threat he poses and therefore liable to defensive force, has an epistemic justification for acting as he does and thus is not morally culpable. In his discussion of necessity, Quong does not explicitly deal with such cases. I suggest that RESCUE does not operate in the same way when attackers are mistaken as when they are morally culpable.
A large body of work has demonstrated the utility of the Bayesian framework for capturing inference in both specialist and everyday contexts. However, the central tool of the framework, conditionalization via Bayes’ rule, does not apply directly to a common type of learning: the acquisition of conditional information. How should an agent change her beliefs on learning that “If A, then C”? This issue, which is central to both reasoning and argumentation, has recently prompted considerable research interest. In this paper, we critique a prominent proposal and provide a new, alternative, answer.
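One way to see why learning a conditional resists direct treatment: conditionalization applies to events, and conditionalizing on the obvious candidate event, the material conditional, can have odd effects such as lowering the probability of the antecedent. A minimal sketch in Python (the prior is an invented illustration, not the paper's proposal):

```python
# Prior over the four combinations of A and C (invented numbers).
# Bayesian conditionalization handles learning an *event*; the obvious
# event for "If A, then C" is the material conditional, ruling out A & ~C.
prior = {('A', 'C'): 0.3, ('A', '~C'): 0.3,
         ('~A', 'C'): 0.2, ('~A', '~C'): 0.2}

evidence = [w for w in prior if w != ('A', '~C')]   # material conditional
z = sum(prior[w] for w in evidence)
posterior = {w: prior[w] / z for w in evidence}

p_A_before = prior[('A', 'C')] + prior[('A', '~C')]   # P(A) = 0.6
p_A_after = posterior[('A', 'C')]                     # P(A | A -> C) = 3/7
print(p_A_before, p_A_after)
```

Here learning "If A, then C" drives P(A) down from 0.6 to about 0.43, which many take to be the wrong result for ordinary conditional learning; this mismatch is part of what motivates alternative proposals.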
In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that can apply to them regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
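The notion of a guaranteed sure loss from a Dutch strategy can be illustrated with the simplest incoherent credence function: credences over a two-cell partition that sum to more than one. A minimal sketch with invented numbers (not an example from the paper):

```python
# Incoherent credences on the partition {A, ~A}; invented illustration.
# A bookie sells the agent a $1 bet on each cell at the agent's own
# fair prices; exactly one bet pays out, so the agent's net is the
# same in every state of the world.
c_A, c_notA = 0.7, 0.5

total_price = c_A + c_notA        # agent pays 1.2 for the two bets
net_if_A = 1 - total_price        # exactly one bet returns $1
net_if_notA = 1 - total_price
sure_loss = -net_if_A             # 0.2 lost whether A holds or not
print(sure_loss)
```

The guaranteed loss here (0.2) is exactly how far the credences overshoot coherence, which is the intuition behind measuring incoherence by distance to the nearest coherent credence function and behind requiring updating plans not to increase that possible sure loss.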
Steven Weinberg (1933-2021): a personal view
Steven Weinberg was, perhaps, the last truly towering figure of 20th-century physics. In 1967, he wrote a 3-page paper saying in effect that as far as he could see, two of the four fundamental forces of the universe—namely, electromagnetism and the weak nuclear force—had actually been the same force until a tiny fraction of a second after the Big Bang, when a broken symmetry caused them to decouple. …