-
12604.907235
The AdS/CFT correspondence posits a holographic equivalence between a gravitational theory in Anti-de Sitter (AdS) spacetime and a conformal field theory (CFT) on its boundary, linked by gauge-invariant quantities like field strengths Fµν and fluxes Φ. This paper examines that link, drawing on my prior analysis of the Aharonov-Bohm (AB) effect, where such quantities exhibit nonlocality, discontinuity, and incompleteness. I demonstrate that gauge potentials Aµ in the Lorenz gauge—not their invariant derivatives—mediate the AB effect’s local, continuous dynamics, a reality extending to gravitational fields gµν as substantival entities. In AdS/CFT, the CFT’s reduction of bulk Aµ and gµν to gauge-invariant imprints fails to reflect this ontology, a flaw so fundamental that it excludes exact gauge/gravity duality—neither standard mappings nor reformulations suffice. A new mathematical proof formalizes this: the bulk’s diffeomorphism freedom cannot correspond to the boundary’s gauge freedoms, Abelian or non-Abelian, under this reality. This critique spans the gauge/gravity paradigm broadly, from AdS/CFT to holographic QCD, where symmetry invisibility obscures bulk physics. While duality’s successes in black hole thermodynamics and strongly coupled systems highlight its utility, I suggest these reflect approximations within specific regimes, not a full equivalence. I propose a shift toward a framework prioritizing the roles of Aµ and gµν, with gravitational AB effects in AdS as a testing ground. This work seeks to enrich holography’s dialogue, advancing a potential-centric view for quantum gravity.
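For context, the standard textbook relation behind the abstract's claim (not taken from the paper itself): the AB phase acquired by a charge q transported around a closed loop C enclosing a flux Φ depends only on the loop integral of the potential,

```latex
\varphi_{\mathrm{AB}} \;=\; \frac{q}{\hbar}\oint_{\mathcal{C}} A_\mu\, dx^\mu \;=\; \frac{q\,\Phi}{\hbar},
```

where the field strength Fµν vanishes everywhere along the path—this is the sense in which the potential Aµ, rather than its gauge-invariant derivative, is said to mediate the local dynamics.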
-
12647.90734
The article sets out to clarify a number of confusions that surround the Born-Oppenheimer approximation (BOA). It is generally claimed that chemistry cannot be reduced to quantum mechanics because of the nature of this commonly used approximation in quantum chemistry, which is popularly believed to require a ‘clamping’ of the nuclei. It is also claimed that the notion of molecular structure, which is so central to chemistry, cannot be recovered from the quantum mechanical description of molecules and must instead be imposed by hand through the BOA. Such an alleged failure of reduction is then taken to open the door to concepts such as emergence and downward causation.
-
12667.907359
It has been argued that, in scientific observations, the theory of the observed source should not be involved in the observation process, in order to avoid circular reasoning and ensure reliable inferences. However, the issue of underdetermination of the source has been largely overlooked. I argue that concerns about circularity in inferring the source stem from the hypothetico-deductive (H-D) method. The epistemic threat, if any, arises not from the theory-laden nature of observation but from the underdetermination of the source by the data, since the data could be explained by proposing incompatible sources for it. Overcoming this underdetermination is key to reliably inferring the source. I propose a bidirectional version of inference to the only explanation as a methodological framework that addresses this challenge while circumventing concerns about theory-ladenness. Nevertheless, fully justifying the viability of the background theoretical framework and its accurate description of the source requires a broader conception of evidence. To this end, I argue that integrating meta-empirical assessment into inference to the only explanation offers a promising strategy, extending the concept of evidence in a justifiable manner.
-
12703.907369
The quantum measurement problem is one of the most profound challenges in modern physics, questioning how and why the wavefunction collapses during measurement to produce a single observable outcome. In this paper, we propose a novel solution through a logical framework called Aethic reasoning, which reinterprets the ontology of time and information in quantum mechanics. Central to this approach is the Aethic principle of extrusion, which models wavefunction collapse as progression along a Markov chain of block universes, effectively decoupling the Einsteinian flow of time from quantum collapse events. This principle introduces an additional degree of freedom to time, enabling the first Aethic postulate: that informational reality is reference-dependent, akin to the relativity of simultaneity in special relativity. This reference point, or Aethus, is rigorously defined within a mathematical structure. Building on this foundation, the second postulate resolves the distinction between quantum superpositions and logical contradictions by encoding superpositions in a “backend” Aethic framework before rendering observable states. The third postulate further distinguishes quantum coherence from decoherence using a two-generational model of state inheritance, potentially advancing beyond simpler interpretations of information leakage. Together, these postulates yield a direct theoretical derivation of the collapse postulate, fully consistent with empirical results such as the outcome of the double-slit experiment. By addressing foundational aspects of quantum mechanics through a logically robust and philosophically grounded lens, this framework sheds new light on the measurement problem and offers a solid foundation for future exploration.
-
12800.90738
This article describes confirmation of the proposition that numbers can be identified with operators, in the following three steps. 1. The set of operators used to construct the finite cardinals satisfies the Peano axioms. 2. Accordingly, the natural numbers can be identified with these operators. 3. From these operators, five further kinds of operators are derived, and on the basis of step 2, the integers, the fractions, the real numbers, the complex numbers, and the quaternions are identified with these five kinds of operators respectively. These operators stand in a sequential inclusion relationship, in contrast to the embedding relationship between the corresponding kinds of numbers when defined as sets.
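As a loose illustration of the "numbers as operators" idea (a Church-style encoding sketch of my own, not the author's construction): a natural number can be realized as the operator that iterates a function, and arithmetic becomes operator composition.

```python
# A numeral n is the operator that applies a function f to an argument n times.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))  # successor: one more application

def add(m, n):
    """Operator addition: apply f n times, then m more times."""
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode an operator-numeral by counting applications of an increment."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two, three)))  # → 5
```

The point of the sketch is only that the Peano structure (zero, successor, induction-by-iteration) lives entirely at the level of operators, with no prior set-theoretic numbers assumed.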
-
12824.907389
Inconsistencies! What do they mean? Can we support them? With this paper, we hope to contribute to the claim that we can tolerate inconsistencies in certain situations even without considering any logic that may enable us to do that, say some paraconsistent logic. We argue that in many cases where we apply reason, we work in domains where inconsistencies appear, and even so, we neither get them out (but ‘support’ them) nor modify the underlying logic (such as classical logic) to avoid logical troubles. To make things more precise, we distinguish between inconsistency, anomaly, and contradiction. Our thesis is that we can reason sensibly with classical logic even in the presence of inconsistencies once (as we explain) we either ‘do not go there’ or make things so that the inconsistent sentences cannot be joined to arrive at a contradiction. Some sample cases are given to motivate the discussion.
-
128110.907397
Empiricists following Poincaré have argued that spacetime geometry can be freely chosen by convention, while adjusting unobservable structure so as to maintain empirical adequacy. In this article, I first strengthen a no-go result of Weatherall and Manchak against the conventionality of geometry, and then argue that any remaining conventionality arises from scientific incompleteness. To illustrate, I discuss a new kind of conventionality that is available in the presence of higher spatial dimensions, and illustrate how the incompleteness in such models can be resolved by introducing new physical theories like Kaluza-Klein theory. Conventional choices of this kind may provide a fruitful starting point in the search for new science, but if successful would eliminate the conventionalist alternatives.
-
182941.907406
Free choice sequences play a key role in the Brouwerian continuum. Using recent modal analysis of potential infinity, we can make sense of free choice sequences as potentially infinite sequences of natural numbers without adopting Brouwer’s distinctive idealistic metaphysics. This provides classicists with a means to make sense of intuitionistic ideas from their own classical perspective. I develop a modal-potentialist theory of real numbers that suffices to capture the most distinctive features of intuitionistic analysis, such as Brouwer’s continuity theorem, the existence of a sequence that is monotone, bounded, and non-convergent, and the inability to decompose the continuum non-trivially.
-
213214.90742
We draw on value theory in social psychology to conceptualize the range of motives that can influence researchers’ attitudes, decisions, and actions. To conceptualize academic research values, we integrate theoretical insights from the literature on personal, work, and scientific work values, as well as the outcome of interviews and a survey among 255 participants about values relating to academic research. Finally, we propose a total of 246 academic research value items spread over 11 dimensions and 34 sub-themes. We relate our conceptualization and proposed items to existing work and provide recommendations for future scale development. Gaining a better understanding of researchers’ different values can improve careers in science, attract a more diverse range of people to enter science, and elucidate some of the mechanisms that lead to both exemplary and questionable scientific practices.
-
433308.90743
Sunwin chính chủ sở hữu bộ core game cùng hệ thống chăm sóc khách hàng vô địch. Sunwin hiện nay giả mạo rất nhiều anh em chú ý check kĩ uy tín đường link để đảm bảo an toàn và trải nghiệm game đỉnh cao duy nhất. …
-
433308.907438
Sunwin chính chủ sở hữu bộ core game cùng hệ thống chăm sóc khách hàng vô địch. Sunwin hiện nay giả mạo rất nhiều anh em chú ý check kĩ uy tín đường link để đảm bảo an toàn và trải nghiệm game đỉnh cao duy nhất. …
-
474100.907447
Hannah Rubin, Mike D. Schneider, Remco Heesen, Alejandro Bortolus, Emelda E. Chukwu, Chad L. Hewitt, Ricardo Kaufer, Veli Mitova, Anne Schwenkenbecher, Evangelina Schwindt, Temitope O. Sogbanmu, Helena Slanickova, Katie Woolaston
Knowledge brokers, usually conceptualized as passive intermediaries between scientists and policymakers in evidence-based policymaking, are understudied in philosophy of science. Here, we challenge that usual conceptualization. As agents in their own right, knowledge brokers have their own goals and incentives, which complicate the effects of their presence at the science-policy interface. We illustrate this in an agent-based model and suggest several avenues for further exploration of the role of knowledge brokers in evidence-based policy.
-
531785.907457
One Approach to the Necessary Conditions of Free Will: Logical Paradox and the Essential Unpredictability of Physical Agents
Even today, there is no precise definition of free will – only hypotheses and intuitions. This paper therefore approaches the question of free will from a negative perspective, depicting a scenario in which free will seemingly exists and then attempting to refute this scenario (as a necessary condition for free will). The absence of free will might seem absolute if scientific determinism holds true. The goal of the study is thus to present a logical argument (paradox) that demonstrates the impossibility of an omniscient predictor (P), as posited by scientific determinism, by highlighting its inherent self-contradiction. The paradox reveals that a prediction (P = C) by a physical agent (P) of itself is objectively impossible. In other words, even a fully deterministic agent in a deterministic universe cannot predict its own future state, not even in a Platonic sense.
-
531802.907467
A nested interferometer experiment by Danan et al. (2013) is discussed and some ontological implications explored, primarily in the context of time-symmetric interpretations of quantum theory. It is pointed out that photons are supported by all components of their wavefunctions, not selectively truncated "first order" portions of them, and that figures representing both a gap in the photon's path and signals from the cut-off path are incorrect. It is also noted that the Transactional Formulation (traditionally known as the Transactional Interpretation) readily accounts for the observed phenomena.
-
531894.907476
Quantum mechanics with a fundamental density matrix has been proposed and discussed recently. Moreover, it has been conjectured that in this theory the universe is not in a pure state but in a mixed state. In this paper, I argue that this mixed-state conjecture faces two main problems: the redundancy problem and the underdetermination problem, neither of which arises in quantum mechanics with a definite initial wave function of the universe.
-
599553.907484
This is a bit of a shaggy dog story, but I think it’s fun, and there’s a moral about the nature of mathematical research.
Act 1
Once I was interested in the McGee graph, nicely animated here by Mamouka Jibladze:
This is the unique (3,7)-cage, meaning a graph such that each vertex has 3 neighbors and the shortest cycle has length 7. …
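The cage condition is easy to check by machine. A self-contained sketch (mine, not from the post), assuming the standard LCF notation [12, 7, −7]^8 for the McGee graph: build the graph and verify that every vertex has 3 neighbors and the shortest cycle has length 7.

```python
from collections import deque

def lcf_graph(n, jumps, reps):
    """Build a graph from LCF notation: a Hamiltonian n-cycle plus chords."""
    seq = jumps * reps
    assert len(seq) == n
    edges = set()
    for i in range(n):
        edges.add(frozenset((i, (i + 1) % n)))       # cycle edge
        edges.add(frozenset((i, (i + seq[i]) % n)))  # LCF chord
    adj = {v: set() for v in range(n)}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    return adj

def girth(adj):
    """Shortest cycle length, by BFS from every vertex (exact for the minimum)."""
    best = float("inf")
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w and parent[w] != u:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

adj = lcf_graph(24, [12, 7, -7], 8)  # the McGee graph: 24 vertices, 36 edges
print(all(len(nbrs) == 3 for nbrs in adj.values()), girth(adj))  # → True 7
```

Each BFS only ever overestimates a shortest cycle, and a root lying on a shortest cycle attains it exactly, so the minimum over all roots is the girth.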
-
627676.907492
These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. …
-
703804.9075
We develop a theory of policy advice that focuses on the relationship between the competence of the advisor (e.g., an expert bureaucracy) and the quality of advice that the leader may expect. We describe important tensions between these features present in a wide class of substantively important circumstances. These tensions point to the presence of a trade-off between receiving advice more often and receiving more informative advice. The optimal realization of this trade-off for the leader sometimes induces her to prefer advisors of limited competence – a preference that, we show, is robust under different informational assumptions. We consider how institutional tools available to leaders affect preferences for advisor competence and the quality of advice they may expect to receive in equilibrium.
-
704891.907511
There are two main strands of arguments regarding the value-free ideal (VFI): desirability and achievability (Reiss and Sprenger 2020). In this essay, I will argue for what I will call a compatibilist account of upholding the VFI focusing on its desirability even if the VFI is unachievable. First, I will explain what the VFI is. Second, I will show that striving to uphold the VFI (desirability) is compatible with the rejection of its achievability. Third, I will demonstrate that the main arguments against the VFI do not refute its desirability. Finally, I will provide arguments on why it is desirable to strive to uphold the VFI even if the VFI is unachievable and show what role it can play in scientific inquiry. There is no single definition of the VFI, yet the most common way to interpret it is that non-epistemic values ought not to influence scientific reasoning (Brown 2024, 2). Non-epistemic values are understood as certain ethical, social, cultural or political considerations. Therefore, it is the role of epistemic values, such as accuracy, consistency, empirical adequacy and simplicity, to be part of and to ensure proper scientific reasoning.
-
704908.907519
There is an overwhelming abundance of works in AI Ethics. This growth is chaotic because of its suddenness, its volume, and its multidisciplinary nature. This makes it difficult to keep track of debates and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between ‘AI’ and ‘ethics’ can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; ethics of AI. I elucidate the features of these three kinds of AI Ethics, characterize their research questions, and identify the kind of expertise that each kind needs. I also show how certain criticisms of AI ethics are misplaced, as they are directed from the point of view of one kind of AI ethics at another kind with different goals. All in all, this work sheds light on the nature of AI ethics and sets the grounds for more informed discussions about the scope, methods, and training of AI ethicists.
-
790973.90753
Prioritarianism is generally understood as a kind of moral axiology. An axiology provides an account of what makes items, in this case outcomes, good or bad, better or worse. A moral axiology focuses on moral value: on what makes outcomes morally good or bad, morally better or worse. Prioritarianism, specifically, posits that the moral-betterness ranking of outcomes gives extra weight (“priority”) to well-being gains and losses affecting those at lower levels of well-being. It differs from utilitarianism, which is indifferent to the well-being levels of those affected by gains and losses.[1] Although it is possible to construe prioritarianism as a non-axiological moral view, this entry follows the prevailing approach and trains its attention on axiological prioritarianism.
-
806489.907538
Statistics play an essential role in an extremely wide range of human reasoning. From theorizing in the physical and social sciences to determining evidential standards in legal contexts, statistical methods are ubiquitous, and thus various questions about their application inevitably arise. As tools for making inferences that go beyond a given set of data, they are inherently a means of reasoning ampliatively, and so it is unsurprising that philosophers interested in the notions of evidence and inductive inference have been concerned to utilize statistical frameworks to further our understanding of these topics. However, the field of statistics has long been the subject of heated philosophical controversy. Given that a central goal for philosophers of science is to help resolve problems about evidence and inference in scientific practice, it is important that they be involved in current debates in statistics and data science. The purpose of this topical collection is to promote such philosophical interaction. We present a cross-section of these subjects, written by scholars from a variety of fields in order to explore issues in philosophy of statistics from different perspectives.
-
820276.907547
I take a pragmatist perspective on quantum theory. This is not a view of the world described by quantum theory. In this view quantum theory itself does not describe the physical world (nor our observations, experiences or opinions of it). Instead, the theory offers reliable advice—on when to expect an event of one kind or another, and on how strongly to expect each possible outcome of that event. The event’s actual outcome is a perspectival fact—a fact relative to a physical context of assessment. Measurement outcomes and quantum states are both perspectival. By noticing that each must be relativized to an appropriate physical context one can resolve the measurement problem and the problem of nonlocal action. But if the outcome of a quantum measurement is not an absolute fact, then why should the statistics of such outcomes give us any objective reason to accept quantum theory? One can describe extensions of the scenario of Wigner’s friend in which a statement expressing the outcome of a quantum measurement would be true relative to one such context but not relative to another. However, physical conditions in our world prevent us from realizing such scenarios. Since the outcome of every actual quantum measurement is certified at what is essentially a single context of assessment, the outcome relative to that context is an objective fact in the only sense that matters for science. We should accept quantum theory because the statistics these outcomes display are just those it leads us to expect.
-
820294.907557
Extrapolating causal effects is becoming an increasingly important kind of inference in Evidence-Based Policy, development economics, and microeconometrics more generally. While several strategies have been proposed to aid with extrapolation, the existing methodological literature has left our understanding of what extrapolation consists of and what constitutes successful extrapolation underdeveloped. This paper addresses this lack in understanding by offering a novel account of successful extrapolation. Building on existing contributions pertaining to the challenges involved in extrapolation, this more nuanced and comprehensive account seeks to provide tools that facilitate the scrutiny of specific extrapolative inferences and general strategies for extrapolation. Offering such resources is important especially in view of the increasing amounts of real-world decision-making in policy, development, and beyond that involve extrapolation.
-
820355.907567
I dispute the conventional claim that the second law of thermodynamics is saved from a "Maxwell's Demon" by the entropy cost of information erasure, and show that instead it is measurement that incurs the entropy cost. Thus Brillouin, who identified measurement as savior of the second law, was essentially correct, and putative refutations of his view, such as Bennett's claim to measure without entropy cost, are seen to fail when the applicable physics is taken into account. I argue that the tradition of attributing the defeat of Maxwell's Demon to erasure rather than to measurement arose from unphysical classical idealizations that do not hold for real gas molecules, as well as a physically ungrounded recasting of physical thermodynamical processes into computational and information-theoretic conceptualizations. I argue that the fundamental principle that saves the second law is the quantum uncertainty principle applying to the need to localize physical states to precise values of observables in order to effect the desired disequilibria aimed at violating the second law. I obtain the specific entropy cost for localizing a molecule in the Szilard engine, which coincides with the quantity attributed to Landauer's principle. I also note that an experiment characterized as upholding an entropy cost of erasure in a "quantum Maxwell's Demon" actually demonstrates an entropy cost of measurement.
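For orientation, the familiar one-molecule bookkeeping behind this debate (a standard relation, not the paper's derivation): localizing the Szilard molecule to one half of the cylinder halves its accessible volume, so the gas entropy drops by

```latex
\Delta S \;=\; k_B \ln\!\frac{V/2}{V} \;=\; -\,k_B \ln 2 ,
```

and the compensating cost—charged to measurement on the paper's account, to erasure on Landauer's—must therefore be at least k_B ln 2, i.e. k_B T ln 2 of dissipated work per cycle.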
-
1051032.907576
We draw a distinction between the traditional reference class problem which describes an obstruction to estimating a single individual probability—which we re-term the individual reference class problem—and what we call the reference class problem at scale, which can result when using tools from statistics and machine learning to systematically make predictions about many individual probabilities simultaneously. We argue that scale actually helps to mitigate the reference class problem, and purely statistical tools can be used to efficiently minimize the reference class problem at scale, even though they cannot be used to solve the individual reference class problem.
-
1051048.907585
Modal Empiricism in philosophy of science proposes to understand the possibility of modal knowledge from experience by replacing talk of possible worlds with talk of possible situations, which are coarse-grained, bounded and relative to background conditions. This allows for an induction towards objective necessity, assuming that actual situations are representative of possible ones. The main limitation of this epistemology is that it does not account for probabilistic knowledge. In this paper, we propose to extend Modal Empiricism to the probabilistic case, thus providing an inductivist epistemology for probabilistic knowledge. The key idea is that extreme probabilities, close to 1 and 0, serve as proxies for testing mild probabilities, using a principle of model combination.
-
1051068.907594
In operational quantum mechanics, two measurements are called operationally equivalent if they yield the same distribution of outcomes in every quantum state and hence are represented by the same operator. In this paper, I will show that the ontological models for quantum mechanics and, more generally, for any operational theory sensitively depend on which measurement we choose from the class of operationally equivalent measurements, or more precisely, which of the chosen measurements can be performed simultaneously. To this end, I will first take three examples—a classical theory, the EPR-Bell scenario, and the Popescu-Rohrlich box; then realize each example by two operationally equivalent but different operational theories—one with a trivial and another with a non-trivial compatibility structure; and finally show that the ontological models for the different theories will differ with respect to their causal structure, contextuality, and fine-tuning.
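As a side note on one of the abstract's examples (sketch and function names are mine): the Popescu-Rohrlich box is the no-signalling correlation p(a, b | x, y) = 1/2 whenever a ⊕ b = x ∧ y, and it attains the algebraic maximum S = 4 of the CHSH expression, beyond the quantum bound 2√2.

```python
from itertools import product

def pr_box(a, b, x, y):
    """PR-box behavior: p(a, b | x, y) = 1/2 if a XOR b == x AND y, else 0."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def chsh(p):
    """CHSH value S = E00 + E01 + E10 - E11, with E_xy = sum_ab (-1)^(a+b) p(ab|xy)."""
    S = 0.0
    for x, y in product((0, 1), repeat=2):
        E = sum((-1) ** (a ^ b) * p(a, b, x, y)
                for a, b in product((0, 1), repeat=2))
        S += (-1) ** (x & y) * E
    return S

print(chsh(pr_box))  # → 4.0
```

Any behavior written this way (a function of a, b, x, y returning probabilities) can be plugged into `chsh`, which makes it easy to compare the classical bound 2, the quantum bound, and the PR box's 4.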
-
1166375.907603
QBism explicitly takes the subjective view: probabilities of events are defined solely by past experiences, i.e. the record of observations. As shown by the authors (Fuchs et al., 2013), this: “... removes the paradoxes, conundra, and pseudo-problems that have plagued quantum foundations for the past nine decades”. It is criticised for its lack of ontology and its anthropocentric nature. However, if Everett's (1957) formulation is taken at face value, exactly the features of QBism result, and the ontology is inherent. The anthropocentric nature of the solution is simply an indication that the quantum state is relative, as is central to Everett. Problems of measurement and locality do not arise.
-
1166428.907612
The localization problem in relativistic quantum theory has persisted for more than seven decades, yet it is largely unknown and continues to perplex even those well-versed in the subject. At the heart of this problem lies a fundamental conflict between localizability and relativistic causality, which can also be construed as part of the broader dichotomy between measurement and unitary dynamics. This article provides a historical review of the localization problem in one-particle relativistic quantum mechanics, clarifying some persistent misconceptions in the literature, and underscoring the antinomy between causal dynamics and localized observables.