
Recent research has identified a tension between the Safety principle that knowledge is belief without risk of error, and the Closure principle that knowledge is preserved by competent deduction. Timothy Williamson reconciles Safety and Closure by proposing that when an agent deduces a conclusion from some premises, the agent’s method for believing the conclusion includes their method for believing each premise. We argue that this theory is untenable because it implies problematically easy epistemic access to one’s methods. Several possible solutions are explored and rejected.

Reality contains multiple standpoints and encompasses any fact that obtains from any such standpoint. Any fact that obtains at all, obtains relative to some standpoint. Any true representation cannot but adopt some standpoint and, because there are multiple standpoints relative to which different facts obtain, no single representation can be a truly complete representation of all the facts.

The subjective Bayesian answer to the problem of induction
Posted on Wednesday, 28 Sep 2022. Some people – important people, like Richard Jeffrey or Brian Skyrms – seem to believe that Laplace and de Finetti have solved the problem of induction, assuming nothing more than probabilism. …
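The Laplacean result alluded to here is the rule of succession: given s successes in n independent trials and a uniform prior over the underlying chance, the posterior predictive probability of success on the next trial is (s+1)/(n+2). A minimal sketch (the framing in terms of generic "trials" is illustrative):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior predictive probability of success on the next trial,
    given a uniform prior over the unknown chance (Laplace)."""
    return Fraction(successes + 1, trials + 2)

# Having observed 9 successes in 9 trials, expect another success with
# probability 10/11 -- inductive learning from probabilism plus a
# uniform prior alone.
print(rule_of_succession(9, 9))  # -> 10/11
```

Note that with no data at all the rule returns 1/2, the prior expectation, and that it converges to the observed frequency as trials accumulate.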

Social environments often impose tradeoffs between pursuing personal goals and maintaining a favorable reputation. We studied how individuals navigate these tradeoffs using Reinforcement Learning (RL), paying particular attention to the role of social value orientation (SVO). We had human participants play an iterated Trust Game against various software opponents and analyzed the behaviors. We then incorporated RL into two cognitive models, trained these RL agents against the same software opponents, and performed similar analyses. Our results show that the RL agents reproduce many interesting features in the human data, such as the dynamics of convergence during learning and the tendency to defect once reciprocation becomes impossible. We also endowed some of our agents with SVO by incorporating terms for altruism and inequality aversion into their reward functions. These prosocial agents differed from proself agents in ways that resembled the differences between prosocial and proself participants. This suggests that RL is a useful framework for understanding how people use feedback to make social decisions.
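The SVO endowment described above can be sketched as reward shaping. The functional form below (an altruism term plus a Fehr-Schmidt-style inequality-aversion term) and its coefficients are illustrative assumptions, not the paper's fitted model:

```python
def svo_reward(own: float, other: float,
               altruism: float = 0.3, ineq_aversion: float = 0.2) -> float:
    """Reward shaped by social value orientation: weight the opponent's
    payoff positively (altruism) and payoff gaps negatively (inequality
    aversion). Coefficients are illustrative."""
    return own + altruism * other - ineq_aversion * abs(own - other)

# A proself agent (both weights 0) prefers keeping 10 and giving 0
# over an equal 7/7 split; the prosocial agent above prefers the split.
proself_keep = svo_reward(10, 0, altruism=0.0, ineq_aversion=0.0)  # 10.0
prosocial_keep = svo_reward(10, 0)                                 # 8.0
prosocial_split = svo_reward(7, 7)                                 # 9.1
```

Feeding such shaped rewards to an otherwise unchanged RL learner is one standard way to make prosocial and proself agents diverge in behavior.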

An action is unratifiable when, on the assumption that one performs it, another option has higher expected utility. Unratifiable actions are often claimed to be somehow rationally defective. But in some cases where multiple options are unratifiable, one unratifiable option can still seem preferable to another. We should respond, I argue, by invoking a graded notion of ratifiability.
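A toy numerical rendering of the setup (the payoffs and the deficit measure are hypothetical illustrations of a graded notion, not the paper's official proposal):

```python
# Expected utility of each option, conditional on the supposition that a
# given option is performed. Numbers are hypothetical.
cond_eu = {
    "A": {"A": 1.0, "B": 2.0},  # supposing A is done, B looks better
    "B": {"A": 3.0, "B": 1.0},  # supposing B is done, A looks better
}

def is_ratifiable(option: str) -> bool:
    """Ratifiable iff no alternative has higher expected utility on the
    supposition that the option is performed."""
    row = cond_eu[option]
    return all(row[option] >= v for v in row.values())

def deficit(option: str) -> float:
    """One way to grade ratifiability: how far the option falls short of
    the best alternative, on the supposition that it is performed."""
    row = cond_eu[option]
    return max(row.values()) - row[option]

# Both options are unratifiable, yet A (deficit 1.0) can still seem
# preferable to B (deficit 2.0).
```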

Quantum states containing records of incompatible outcomes of quantum measurements are valid states in the tensor-product Hilbert space. Since they contain false records, they conflict with the Born rule and with our observations. I show that excluding them requires a fine-tuning to a zero-measure subspace of the Hilbert space that seems "conspiratorial", in the sense that it depends on future events, in particular on future choices of the measurement settings; it depends on the evolution law (normally thought to be independent of the initial conditions); and it violates statistical independence (even in interpretations that satisfy it in the context of Bell's theorem, like standard quantum mechanics, pilot-wave theories, collapse theories, many-worlds, etc.). Even the innocent assumption that there are measuring devices requires this kind of fine-tuning. These results are independent of the interpretation of quantum mechanics. To explain away this apparent fine-tuning, I propose that a yet-unknown law or superselection rule may restrict the full tensor-product Hilbert space to this very special subspace.

In a series of recent papers, I presented a puzzle and theory of definition. I did not, however, indicate how the theory resolves the puzzle. This was an oversight on my part, and one I hope to correct. My aim here is to provide that resolution: to demonstrate that my theory can consistently embrace the principles I prove to be inconsistent. To the best of my knowledge, this theory is the only one capable of this embrace—which marks yet another advantage it has over competitors.

A metainference is usually understood as a pair consisting of a collection of inferences, called premises, and a single inference, called the conclusion. In the last few years, much attention has been paid to the study of metainferences—and, in particular, to the question of which metainferences of a given logic are valid. So far, however, this study has been done in quite a poor language. Our usual sequent calculi have no way to represent, e.g., negations, disjunctions or conjunctions of inferences. In this paper we tackle this expressive issue. We assume some background sentential language as given and define what we call an inferential language, that is, a language whose atomic formulas are inferences. We provide a model-theoretic characterization of validity for this language—relative to some given characterization of validity for the background sentential language—and provide a proof-theoretic analysis of validity. We argue that our novel language has fruitful philosophical applications. Lastly, we generalize some of our definitions and results to arbitrary metainferential levels.

How could the initial, drastic decisions to implement "lockdowns" to control the spread of Covid-19 infections be justifiable, when they were made on the basis of such uncertain evidence? We defend the imposition of lockdowns in some countries by, first, looking at the evidence that undergirded the decision (focusing particularly on the decision-making process in the United Kingdom); second, arguing that this provided sufficient grounds to restrict liberty, given the circumstances; and third, defending the use of poorly empirically constrained epidemiological models as tools that can legitimately guide public policy.

What is the ontology of a realist quantum theory such as Bohmian mechanics? This has been an important but debated issue in the foundations of quantum mechanics. In this paper, I present a new result which may help examine the ontology of a realist physical theory and make it more complete. It is that when different values of a physical quantity lead to different evolution of the assumed ontic state of an isolated system in a theory, this physical quantity also represents something in the ontology of the theory. Moreover, I use this result to analyze the ontologies of several realist quantum theories. It is argued that in Bohmian mechanics and collapse theories such as GRWm and GRWf, the wave function should be included in the ontology of the theory. In addition, when admitting the reality of the wave function, mass, charge and spin should also be taken as the properties of a quantum system.

According to Mercier and Sperber (2009, 2011, 2017), people have an immediate and intuitive feeling about the strength of an argument. These intuitive evaluations are not captured by current evaluation methods of argument strength, yet they could be important to predict the extent to which people accept the claim supported by the argument. In an exploratory study, therefore, a newly developed intuitive evaluation method to assess argument strength was compared to an explicit argument strength evaluation method (the PAS scale; Zhao et al., 2011), on their ability to predict claim acceptance (predictive validity) and on their sensitivity to differences in the manipulated quality of arguments (construct validity). An experimental study showed that the explicit argument strength evaluation performed well on the two validity measures. The intuitive evaluation measure, on the other hand, was not found to be valid. Suggestions for other ways of constructing and testing intuitive evaluation measures are presented.

Peirce’s diagrammatic system of Existential Graphs (EGα) is a logical proof system corresponding to the Propositional Calculus (PL). Most known proofs of soundness and completeness for EGα depend upon a translation of Peirce’s diagrammatic syntax into that of a suitable Frege-style system. In this paper, drawing upon standard results but using the native diagrammatic notational framework of the graphs, we present a purely syntactic proof of soundness, and hence consistency, for EGα, along with two separate completeness proofs that are constructive in the sense that we provide an algorithm in each case to construct an EGα formal proof starting from the empty Sheet of Assertion, given any expression that is in fact a tautology according to the standard semantics of the system.

The paper investigates from a proof-theoretic perspective various non-contractive logical systems circumventing logical and semantic paradoxes. Until recently, such systems only displayed additive quantifiers (Grišin, Cantini). Systems with multiplicative quantifiers have also been proposed in the 2010s (Zardini), but they turned out to be inconsistent with the naive rules for truth or comprehension. We start by presenting a first-order system for disquotational truth with additive quantifiers and we compare it with Grišin set theory. We then analyze the reasons behind the inconsistency phenomenon affecting multiplicative quantifiers: after interpreting the exponentials in affine logic as vacuous quantifiers, we show how such a logic can be simulated within a truth-free fragment of a system with multiplicative quantifiers. Finally, we prove that the logic of these multiplicative quantifiers (but without disquotational truth) is consistent, by showing that an infinitary version of the cut rule can be eliminated. This paves the way to a syntactic approach to the proof theory of infinitary logic with infinite sequents.

We present completeness results for inference in Bayesian networks with respect to two different parameterizations, namely the number of variables and the topological vertex separation number. For this we introduce the parameterized complexity classes W[1]PP and XLPP, which relate to W[1] and XNLP respectively as PP does to NP. The second parameter is intended as a natural translation of the notion of pathwidth to the case of directed acyclic graphs, and as such it is a stronger parameter than the more commonly considered treewidth. Based on a recent conjecture, the completeness results for this parameter suggest that deterministic algorithms for inference require exponential space in terms of pathwidth and by extension treewidth. These results are intended to contribute towards a more precise understanding of the parameterized complexity of Bayesian inference and thus of its required computational resources in terms of both time and space. Keywords: Bayesian networks; inference; parameterized complexity theory.
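For readers outside complexity theory, the inference problem at issue is computing posterior marginals; brute-force enumeration over all joint assignments, which costs time exponential in the number of variables, is the baseline that parameterized bounds like those above refine. A toy sprinkler-style network with made-up probabilities:

```python
from itertools import product

# Toy network: Rain -> Sprinkler, (Rain, Sprinkler) -> GrassWet.
# All conditional probabilities are invented for illustration.
def p_rain(r):         return 0.2 if r else 0.8
def p_sprinkler(s, r): return (0.01 if s else 0.99) if r else (0.4 if s else 0.6)
def p_wet(w, r, s):
    p = 0.99 if (r and s) else 0.9 if r else 0.8 if s else 0.0
    return p if w else 1 - p

def joint(r, s, w):
    return p_rain(r) * p_sprinkler(s, r) * p_wet(w, r, s)

# P(Rain | GrassWet) by summing over all joint assignments -- the
# exponential-time enumeration that cleverer algorithms try to beat.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
posterior = num / den  # roughly 0.41
```

With n binary variables the enumeration visits 2^n assignments, which is why bounds stated in terms of structural parameters such as pathwidth or treewidth matter.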

When it comes to determining whether a firm has violated antitrust law, economists are often called upon as expert witnesses by the parties involved in litigation. This paper focuses on a challenge that economists may face when appearing as expert witnesses in US federal courts, namely complying with the so-called Daubert standard of admissibility of expert testimony. I propose a new framework for analysing the interplay between model applicability and admissibility standards in courtrooms. The framework distinguishes between weak applicability claims, stating that a model’s critical assumptions are shared by the target, and strong applicability claims, connecting empirical models to specific market features. I use this distinction to examine a recent antitrust case in which expert testimony based on economic models was assessed under the Daubert standard.

A de minimis risk is defined as a risk that is so small that it may be legitimately ignored when making a decision. While ignoring small risks is common in our day-to-day decision making, attempts to introduce the notion of a de minimis risk into the framework of decision theory have run up against a series of well-known difficulties. In this paper, I will develop an enriched decision-theoretic framework that is capable of overcoming two major obstacles to the modelling of de minimis risk. The key move is to introduce, into decision theory, a non-probabilistic conception of risk known as normic risk.

According to the standard analysis of degree questions (see, among others, Rullmann 1995 and Beck and Rullmann 1997), a degree question’s LF contains a variable that ranges over individual degrees and is bound by the degree-question operator how. In contrast with this, we claim that the variable bound by the degree-question operator how does not range over individual degrees but over intervals of degrees, by analogy with Schwarzschild and Wilkinson’s (2002) proposal regarding the semantics of comparative clauses. Not only does the interval-based semantics predict the existence of certain readings that are not predicted under the standard view, it is also able, together with other natural assumptions, to account for the sensitivity of degree questions to negative islands, as well as for the fact, uncovered by Fox and Hackl (2007), that negative islands can be obviated by some properly placed modals. Like Fox and Hackl (2007), we characterize negative island effects as arising from the fact that the relevant question, due to its meaning alone, can never have a maximally informative answer. Contrary to Fox and Hackl (2007), however, we do not need to assume that scales are universally dense, nor that the notion of maximal informativity responsible for negative islands is blind to contextual parameters.

It is sometimes said there are two ways of formulating Newtonian gravitation theory. On the first, matter gives rise to a gravitational field deflecting bodies from inertial motion within flat spacetime. On the second, matter’s accelerative effects are encoded in dynamical spacetime structure exhibiting curvature and the field is ‘geometrized away’. Are these two accounts of Newtonian gravitation theoretically equivalent? Conventional wisdom within the philosophy of physics is that they are, and recently several philosophers have made this claim explicit. In this paper I develop an alternative approach to Newtonian gravitation on which the equivalence claim fails, and in the process identify an important but largely overlooked consideration for interpreting physical theories. I then apply this analysis to (a) put limits on the uses of Newtonian gravitation within the methodology of science, and (b) defend the interpretive approach to theoretical equivalence against formal approaches, including the recently popular criterion of categorical equivalence.

We argue that the epistemic functions of replication in science are best understood by their role in assessing kinds of experimental error. Direct replications serve to assess the reliability of an experiment through its precision: the presence and degree of random error. Conceptual replications serve to assess the validity of an experiment through its accuracy: the presence and degree of systematic errors. To illustrate the aptness of this view, we examine the Hubble constant controversy in astronomy, showing how astronomers have responded to the concordances and discordances in their results by carrying out the different kinds of replication that we identify, with the aim of establishing a precise, accurate value for the Hubble constant. We contrast our view with Machery’s “resampling” account of replicability, which maintains that replications only assess reliability.
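The distinction between the two kinds of error can be made concrete with a small simulation (all numbers, including the biases and the noise level, are invented for illustration): direct replications share a method's systematic bias, so their spread reveals only random error; comparing conceptually distinct methods reveals the discordance that signals systematic error.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 70.0  # stand-in for the measured quantity; illustrative

def measure(bias: float, sigma: float) -> float:
    """One measurement with a method-specific systematic bias and
    random (statistical) error."""
    return TRUE_VALUE + bias + random.gauss(0.0, sigma)

# Direct replications: same method, same bias -> assess precision.
direct = [measure(bias=2.0, sigma=1.5) for _ in range(500)]
# Conceptual replications: independent method, different bias -> assess accuracy.
conceptual = [measure(bias=-0.5, sigma=1.5) for _ in range(500)]

precision = statistics.stdev(direct)                # random error, about 1.5
discordance = (statistics.mean(direct)
               - statistics.mean(conceptual))       # systematic gap, about 2.5
```

No amount of direct replication shrinks the discordance here; only an independent method exposes it, which is the epistemic division of labor the abstract describes.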

I survey, for a general scientific audience, three decades of research into which sorts of problems admit exponential speedups via quantum computers—from the classics (like the algorithms of Simon and Shor), to the breakthrough of Yamakawa and Zhandry from April 2022. I discuss both the quantum circuit model, which is what we ultimately care about in practice but where our knowledge is radically incomplete, and the so-called oracle or black-box or query complexity model, where we’ve managed to achieve a much more thorough understanding that then informs our conjectures about the circuit model. I discuss the strengths and weaknesses of switching attention to sampling tasks, as was done in the recent quantum supremacy experiments. I make some skeptical remarks about widely repeated claims of exponential quantum speedups for practical machine learning and optimization problems. Through many examples, I try to convey the “law of conservation of weirdness,” according to which every problem admitting an exponential quantum speedup must have some unusual property to allow the amplitude to be concentrated on the unknown right answer(s).

More precisely, I show that in a continuous basis, the contributing basis vectors are present in a state vector with real and equal coefficients, but they are distributed with variable density among the eigenspaces of the observable. Counting the contributing basis vectors while taking their density into account gives the Born rule without making other assumptions. State counting yields the Born rule only if the basis is continuous, but all known physically realistic observables admit such bases.

John Broome and Duncan Foley’s paper discusses several important and interesting questions regarding how we can handle the climate crisis. It is also innovative on the institutional level with its proposal of a World Climate Bank. This is indeed valuable; we need much more creative institutional thinking about the challenge of the climate crisis. All too much thinking has been focused on individual behaviour instead of collective solutions and institutional change.

Marton (2019) argues that it follows from the standard antirealist theory of truth, which states that truth and possible knowledge are equivalent, that knowing possibilities is equivalent to the possibility of knowing, whereas these notions should be distinct. Moreover, he argues that the usual strategies of dealing with the Church-Fitch paradox of knowability are either not able to deal with his modal-epistemic collapse result or they only do so at a high price. Against this, I argue that Marton’s paper does not present any seriously novel challenge to antirealism not already found in the Church-Fitch result. Furthermore, Edgington’s (1985) reformulated antirealist theory of truth can deal with his modal-epistemic collapse argument at no cost.

Scoring rules measure the accuracy or epistemic utility of a credence assignment. A significant literature uses plausible conditions on scoring rules on finite sample spaces to argue both for probabilism (the doctrine that credences ought to satisfy the axioms of probability) and for the optimality of Bayesian update as a response to evidence. I prove a number of formal results regarding scoring rules on infinite sample spaces that impact the extension of these arguments to infinite sample spaces. A common condition in the arguments for probabilism and Bayesian update is strict propriety: that according to each probabilistic credence, the expected accuracy of any other credence is worse. Much of the discussion needs to divide depending on whether we require finite or countable additivity of our probabilities. I show that in a number of natural infinite finitely additive cases, there simply do not exist strictly proper scoring rules, and the prospects for arguments for probabilism and Bayesian update are limited. In many natural infinite countably additive cases, on the other hand, there do exist strictly proper scoring rules that are continuous on the probabilities, and which support arguments for Bayesian update, but which do not support arguments for probabilism. There may be more hope for accuracy-based arguments if we drop the assumption that scores are extended-real-valued. I sketch a framework for scoring rules whose values are nets of extended reals, and show the existence of strictly proper net-valued scoring rules in all infinite cases, both for finitely and countably additive probabilities. These can be used in an argument for Bayesian update, but it is not at present known what is to be said about probabilism in this case.
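In the finite case the central condition is easy to exhibit: the Brier score is strictly proper, since by the lights of credence p every alternative report q has strictly worse expected inaccuracy. A minimal single-proposition check (the abstract's point is that such rules can fail to exist in infinite, finitely additive settings):

```python
def brier(q: float, truth: int) -> float:
    """Brier inaccuracy of credence q when the proposition's truth value
    is 1 (true) or 0 (false); lower is better."""
    return (q - truth) ** 2

def expected_inaccuracy(p: float, q: float) -> float:
    """Expected Brier inaccuracy of reporting q, by the lights of
    probabilistic credence p: p*(q-1)^2 + (1-p)*q^2 = (q-p)^2 + p(1-p)."""
    return p * brier(q, 1) + (1 - p) * brier(q, 0)

# Strict propriety: p uniquely minimizes its own expected inaccuracy.
p = 0.7
for q in (0.0, 0.3, 0.5, 0.69, 0.71, 1.0):
    assert expected_inaccuracy(p, p) < expected_inaccuracy(p, q)
```

Since the expected inaccuracy equals (q - p)^2 + p(1 - p), the minimizer is q = p and it is unique, which is what "strictly" proper requires.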

Antirealists who hold the knowability thesis, namely that all truths are knowable, have been put on the defensive by the Church-Fitch paradox of knowability. Rejecting the non-factivity of the concept of knowability used in that paradox, Edgington has adopted a factive notion of knowability, according to which only actual truths are knowable. She has used this new notion to reformulate the knowability thesis. The result has been argued to be immune against the Church-Fitch paradox, but it has encountered several other triviality objections. Schlöder in a forthcoming paper defends the general approach taken by Edgington, but amends it to save it in turn from the triviality objections. In this paper I will argue, first, that Schlöder’s justification for the factivity of his version of the concept of knowability is vulnerable to criticism, but I will also offer an improved justification that is in the same spirit as his. To the extent that some philosophers are right about our intuitive concept of knowability being a factive one, it is important to explore factive concepts of knowability that are made formally precise. I will subsequently argue that Schlöder’s version of the knowability thesis overgenerates knowledge or, in other words, it leads to attributions of knowledge where there is ignorance. This fits a general pattern for the research programme initiated by Edgington. This paper also contains preliminary investigations into the internal and logical structure of lines of inquiry, which raise interesting research questions.

Absolute and relative outcome measures measure a treatment’s effect size, purporting to inform treatment choices. I argue that absolute measures are at least as good as, if not better than, relative ones for informing rational decisions across choice scenarios. Specifically, this dominance of absolute measures holds for choices between a treatment and a control group treatment from a trial and for ones between treatments tested in different trials. This distinction has hitherto been neglected, just like the role of absolute and baseline risks in decisionmaking that my analysis reveals. Recognizing both aspects advances the discussion on reporting outcome measures.

The Repugnant Conclusion: For any population consisting of people with very high positive welfare, there is a better population in which everyone has very low positive welfare, other things being equal. In Figure 16.1, the width of each block represents the number of people whereas the height represents their lifetime welfare. Dashes indicate that the block in question should be much wider than shown, that is, the population size is much larger than shown.
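On a total view of welfare the conclusion is simple arithmetic; the population sizes and welfare levels below are illustrative:

```python
def total_welfare(size: int, per_person: float) -> float:
    """Total-utilitarian value of a perfectly equal population."""
    return size * per_person

A = total_welfare(10_000_000, 100.0)   # everyone at very high welfare
Z = total_welfare(2_000_000_000, 1.0)  # everyone at very low positive welfare

# Z's total (2e9) exceeds A's (1e9), so a totalist must rank Z above A:
# the Repugnant Conclusion.
assert Z > A > 0
```

For any fixed positive welfare level, a large enough population at that level beats any finite population at a higher level, which is why the conclusion is unavoidable on the total view.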

According to Positive Egalitarianism, not only do relations of inequality have negative value, as Negative Egalitarians claim, but relations of equality also have positive value. The egalitarian value of a population is a function of both pairwise relations of inequality (negative) and pairwise relations of equality (positive). Positive and Negative Egalitarianism diverge, especially in different number cases. Hence, an investigation of Positive Egalitarianism might shed new light on the vexed topic of population ethics and our duties to future generations. We shall here, in light of some recent criticism, further develop the idea of giving positive value to equal relations.
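One way to make the pairwise picture concrete; the additive functional form and the weights below are illustrative assumptions, not the authors' official measure:

```python
from itertools import combinations

def egalitarian_value(welfares, pos=1.0, neg=1.0):
    """Sum over unordered pairs: each equal pair contributes +pos (the
    Positive Egalitarian addition); each unequal pair contributes -neg
    times the welfare gap (the Negative Egalitarian component)."""
    total = 0.0
    for a, b in combinations(welfares, 2):
        total += pos if a == b else -neg * abs(a - b)
    return total

# Different-number case: a perfectly equal population gains egalitarian
# value as it grows (more equal pairs), whereas Negative Egalitarianism
# scores every perfectly equal population the same (zero).
assert egalitarian_value([5] * 10) > egalitarian_value([5] * 3) > 0
assert egalitarian_value([5, 9]) < 0
```

This is exactly where the two views diverge: adding equally well-off people raises the value on the positive view while leaving it unchanged on the purely negative one.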

Schlenker 2009, 2010a,b provides an algorithm for deriving the presupposition projection properties of an expression from that expression’s classical semantics. In this paper, we consider the predictions of Schlenker’s algorithm as applied to attitude verbs. More specifically, we compare Schlenker’s theory with a prominent view which maintains that attitudes exhibit belief projection, so that presupposition triggers in their scope imply that the attitude holder believes the presupposition (Karttunen, 1974; Heim, 1992; Sudo, 2014). We show that Schlenker’s theory does not predict belief projection, and discuss several consequences of this result.

It’s generally taken to be established that no local hidden-variable theory is possible. That conclusion applies if our world is a thread, where a thread is a world where particles follow trajectories, as in Pilot-Wave theory. But if our world is taken to be a set of threads, locality can be recovered. Our world can be described by a many-threads theory, as defined by Jeffrey Barrett in the opening quote. Particles don’t follow trajectories because a particle in our world is a set of elemental particles following different trajectories, each in a thread. The “elements” of a superposition are construed as subsets in such a way that a particle in our world only has definite position if all its set-theoretic elements are at corresponding positions in each thread. The wavefunction becomes a 3D density distribution of particles’ subset measures, the stuff of an electron’s “probability cloud”. Current Pilot-Wave theory provides a nonrelativistic dynamics for the elemental particles (approximated by Many Interacting Worlds theory). EPR-Bell nonlocality doesn’t apply because the relevant measurement outcomes in the absolute elsewhere of an observer are always in superposition.