
In this series of posts, I will raise some issues for the logical pluralism of Beall & Restall (hereafter 'B&R'), a much-discussed, topic-revivifying view in the philosophy of logic. My study of their view was prompted by Mark Colyvan, whose course on Philosophy of Logic at Sydney Uni I'm helping to teach this year. …

[Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada on the 24th May 2019. …

How to serve two epistemic masters
Posted on Thursday, 23 May 2019
In this
2018 paper, J. Dmitri Gallow shows that it is difficult to combine
multiple deference principles. The argument is a little complicated,
but the basic idea is surprisingly simple. …

One of the central philosophical debates prompted by general relativity concerns the status of the metric field. A number of philosophers have argued that the metric field should no longer be regarded as part of the background arena in which physical fields evolve; it should be regarded as a physical field itself. Earman and Norton write, for example, that the metric tensor in general relativity ‘incorporates the gravitational field and thus, like other physical fields, carries energy and momentum’. Indeed, they baldly claim that according to general relativity ‘geometric structures, such as the metric tensor, are clearly physical fields in spacetime’. On such a view, spacetime itself—considered independently of matter—has no metrical properties, and the mathematical object that best represents spacetime is a bare topological manifold. As Rovelli puts the idea: ‘the metric/gravitational field has acquired most, if not all, the attributes that have characterized matter (as opposed to spacetime) from Descartes to Feynman...

Within the context of the Quine–Putnam indispensability argument, one discussion about the status of mathematics is concerned with the ‘Enhanced Indispensability Argument’, which makes explicit in what way mathematics is supposed to be indispensable in science, namely explanatory. If there are genuine mathematical explanations of empirical phenomena, an argument for mathematical platonism could be extracted by using inference to the best explanation. The best explanation of the primeness of the life cycles of periodical cicadas is genuinely mathematical, according to Baker (2005, 2009). Furthermore, the result is then also used to strengthen the platonist position (e.g. Baker 2017a). We pick up the circularity problem brought up by Leng (2005) and Bangu (2008). We will argue that Baker’s attempt to solve this problem fails if Hume’s Principle is analytic. We will also provide the opponent of the Enhanced Indispensability Argument with the so-called ‘interpretability strategy’, which can be used to come up with alternative explanations in case Hume’s Principle is non-analytic.

Suppose that you have been invited to attend an ex-partner’s
wedding and that the best thing you can do is accept the invitation
and be pleasant at the wedding. But, suppose furthermore that if you
do accept the invitation, you’ll freely decide to get inebriated
at the wedding and ruin it for everyone, which would be the worst
outcome. The second best thing to do would be to simply decline the
invitation. In light of these facts, should you accept or decline the
invitation? (Zimmerman 2006: 153). The answer to this question hinges
on the actualism/possibilism debate in ethics, which concerns the
relationship between an agent’s free actions and her moral
obligations.

The three central tenets of traditional Bayesian epistemology are these:
Precision: Your doxastic state at a given time is represented by a credence function, $c$, which takes each proposition $X$ about which you have an opinion and returns a single numerical value, $c(X)$, that measures the strength of your belief in $X$. …
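The Precision tenet can be made concrete with a toy model. A minimal sketch, in which the worlds, the weights, and the proposition names are all illustrative choices rather than anything from the text:

```python
# Toy model of the Precision tenet: a credence function c maps each
# proposition (here modelled as a set of possible worlds) to a single
# number measuring strength of belief. All names/values are illustrative.

worlds = {"w1", "w2", "w3", "w4"}
weights = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}  # sums to 1

def c(X):
    """Credence in proposition X, modelled as a set of worlds."""
    return sum(weights[w] for w in X)

rain = frozenset({"w1", "w2"})   # e.g. the proposition that it rains
print(c(rain))                   # a single numerical value, here ~0.7
```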

Need considerations play an important role in empirically informed theories of distributive justice. We propose a concept of need-based justice that is related to social participation and provide an ethical measurement of need-based justice. The βε-index satisfies the need principle, monotonicity, sensitivity, transfer and several ‘technical’ axioms. A numerical example is given.

Curiously, people assign less punishment to a person who attempts and fails to harm somebody if their intended victim happens to suffer the harm for coincidental reasons. This “blame blocking” effect provides important evidence in support of the two-process model of moral judgment (Cushman, 2008). Yet, recent proposals suggest that it might be due to an unintended interpretation of the dependent measure in cases of coincidental harm (Prochownik, 2017; also Malle, Guglielmo, & Monroe, 2014). If so, this would deprive the two-process model of an important source of empirical support. We report and discuss results that speak against this alternative account.

Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
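For concreteness, here is a minimal sketch of the norm in its simplest, deterministic formulation, on a toy world model (all names and numbers are illustrative):

```python
# Deterministic conditionalization: on learning evidence E (and nothing
# stronger), your new credence in X is your old conditional credence
# c(X | E) = c(X & E) / c(E). Toy model; names and numbers illustrative.

weights = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def c(X):
    """Prior credence in proposition X (a set of worlds)."""
    return sum(weights[w] for w in X)

def update(E):
    """The credence function mandated after conditionalizing on E."""
    pE = c(E)
    def c_new(X):
        return c(X & E) / pE
    return c_new

E = frozenset({"w1", "w3"})              # evidence ruling out w2 and w4
c_after = update(E)
print(c_after(frozenset({"w1", "w2"})))  # 0.4 / 0.6, i.e. ~0.667
```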

My grad student Christian Williams and I finished this paper just in time for him to talk about it at SYCO:
• John Baez and Christian Williams, Enriched Lawvere theories for operational semantics. Abstract. …

We demonstrate how deep and shallow embeddings of functional programs can coexist in the Coq proof assistant using metaprogramming facilities of MetaCoq. While deep embeddings are useful for proving metatheoretical properties of a language, shallow embeddings allow for reasoning about the functional correctness of programs.

Paul Busch has emphasized on various occasions the importance for physics of going beyond a merely instrumentalist view of quantum mechanics. Even if we cannot be sure that any particular realist interpretation describes the world as it actually is, the investigation of possible realist interpretations helps us to develop new physical ideas and better intuitions about the nature of physical objects at the micro level. In this spirit, Paul Busch himself pioneered the concept of “unsharp quantum reality”, according to which there is an objective nonclassical indeterminacy—a lack of sharpness—in the properties of individual quantum systems. We concur with Busch’s motivation for investigating realist interpretations of quantum mechanics and with his willingness to move away from classical intuitions. In this article we try to take some further steps on this road. In particular, we pay attention to a number of prima facie implausible and counterintuitive aspects of realist interpretations of unitary quantum mechanics. We shall argue that from a realist viewpoint, quantum contextuality naturally leads to “perspectivalism” with respect to properties of spatially extended quantum systems, and that this perspectivalism is important for making relativistic covariance possible.

Is it possible to introduce a small number of agents into an environment, in such a way that an equilibrium results in which almost everyone (including the original agents) cooperates almost all the time? This is a compelling question for those interested in the design of beneficial gametheoretic AI, and it may also provide insights into how to get human societies to function better. We investigate this broad question in the specific context of finitely repeated games, and obtain a mostly positive answer. Our main novel technical tool is the use of limited altruism (LA) types, which behave altruistically towards other LA agents but not towards selfish agents. The uncertainty about which type of agent one is facing turns out to be essential in establishing cooperation. We provide characterizations in several families of games of which LA types are effective for our purposes.

John Stuart Mill famously wrote:
We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience. …

According to a conventional view, there exists no common cause model of quantum correlations satisfying locality requirements. Indeed, Bell’s inequality is derived from some locality requirements and the assumption that the common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argued that in the derivation of the inequality, the existence of a common common cause for multiple correlations is implicitly assumed and that this assumption is unreasonably strong. According to their idea, what is necessary for explaining the quantum correlation is a common cause for each correlation. However, Graßhoff et al. showed that when there are three pairs of perfectly correlated events and a common cause of each correlation exists, we cannot construct a common cause model that is consistent with quantum mechanical prediction and also meets several locality requirements. In this paper, first, as a consequence of the fact shown by Graßhoff et al., we will confirm that there exists no local common cause model when a two-particle system is in any maximally entangled state. After that, based on Hardy’s famous argument, we will prove that there exists no local common cause model when a two-particle system is in any non-maximally entangled state. Therefore, it will be concluded that for any entangled state, there exists no local common cause model. It will be revealed that the non-existence of a common cause model satisfying locality is not limited to a particular state like the singlet state.
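The experimentally verified inequality violation that the argument relies on can be checked numerically. A quick sketch for the singlet state, using the CHSH form of Bell's inequality, |S| ≤ 2, with the textbook measurement angles (the angles and function names are illustrative):

```python
# The singlet-state correlation for spin measurements at angles a and b is
# E(a, b) = -cos(a - b). Any local common cause (local hidden variable)
# model must satisfy the CHSH inequality |S| <= 2; quantum mechanics
# predicts |S| = 2*sqrt(2) at suitably chosen angles.
import math

def E(a, b):
    """Quantum correlation for the singlet state at analyzer angles a, b."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2          # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2: no local common cause model fits
```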

In this article, it is argued that, for a closed classical Hamiltonian system, the ergodic theorem emerges from the Gibbs-Liouville theorem in the limit that the system has evolved for an infinitely long period of time. In this limit, from the perspective of an ignorant observer, who does not have perfect knowledge about the complete set of degrees of freedom for the system, the distinctions between the possible states of the system, i.e. the information content, are lost, leading to the notion of statistical equilibrium, where states are assigned equal probabilities. Finally, by linking the concept of entropy, which gives a measure for the amount of uncertainty, with the concept of information, the second law of thermodynamics is expressed in terms of the tendency of an observer to lose information over time.

In this article, it is argued that the Gibbs-Liouville theorem is a mathematical representation of the statement that closed classical systems evolve deterministically. From the perspective of an observer of the system, whose knowledge about the degrees of freedom of the system is complete, the statement of deterministic evolution is equivalent to the notion that the physical distinctions between the possible states of the system, or, in other words, the information possessed by the observer about the system, is never lost. Thus, it is proposed that the Gibbs-Liouville theorem is a statement about the dynamical evolution of a closed classical system valid in such situations where information about the system is conserved in time. Furthermore, in this article it is shown that the Hamilton equations and the Hamilton principle on phase space follow directly from the differential representation of the Gibbs-Liouville theorem, i.e. that the divergence of the Hamiltonian phase flow velocity vanishes. Thus, considering that the Lagrangian and Hamiltonian formulations of classical mechanics are related via the Legendre transformation, it follows that these two standard formulations are both logical consequences of the statement of deterministic evolution, or, equivalently, information conservation.
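The differential statement invoked here, that the divergence of the Hamiltonian phase flow velocity vanishes, is easy to illustrate numerically: the flow is (q̇, ṗ) = (∂H/∂p, −∂H/∂q), so its divergence is a difference of mixed partials of H, which cancel for any smooth H. A minimal sketch with an arbitrarily chosen Hamiltonian (not from the article):

```python
# Numerical illustration of the differential form of the Gibbs-Liouville
# theorem: the phase flow velocity (qdot, pdot) = (dH/dp, -dH/dq) is
# divergence-free. The Hamiltonian H = p^2/2 + q^4/4 + q^2 p^2 is an
# arbitrary illustrative choice; its derivatives are computed by hand.

def qdot(q, p):
    """dH/dp for H = p^2/2 + q^4/4 + q^2 p^2."""
    return p + 2 * q**2 * p

def pdot(q, p):
    """-dH/dq for the same H."""
    return -(q**3 + 2 * q * p**2)

def divergence(q, p, h=1e-5):
    """Central-difference estimate of d(qdot)/dq + d(pdot)/dp."""
    return ((qdot(q + h, p) - qdot(q - h, p)) / (2 * h)
            + (pdot(q, p + h) - pdot(q, p - h)) / (2 * h))

print(abs(divergence(0.7, -1.3)))   # ~0: the phase flow is incompressible
```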

Decision theory and philosophy of action both attempt to explain what it is for an ideally rational agent to answer the question “What to do?” From the agent’s point of view, the answer to that question is settled in practical deliberation and motivates her to act. The mental states that determine her answer are the sources of rationalizing explanations of the agent’s behavior. They explain why she performed a given action in terms of why it made sense, from her point of view, to so act. Rationalizing explanations should be contrastive, of the form “Agent S performed action A, rather than actions B, C, or D, because P, Q, and R” where B, C, and D are whatever S takes to be the possible alternatives to A, and P, Q, and R are whichever of S’s deliberative considerations and other factors yield a good explanation.


Agents make predictions based on similar past cases, while also learning the relative importance of various attributes in judging similarity. We ask whether the resulting "empirically optimal similarity function" (EOSF) is unique, and how easy it is to find it. We show that with many observations and few relevant variables, uniqueness holds. By contrast, when there are many variables relative to observations, non-uniqueness is the rule, and finding the EOSF is computationally hard. The results are interpreted as providing conditions under which rational agents who have access to the same observations are likely to converge on the same predictions, and conditions under which they may entertain different probabilistic beliefs.

We finish this chapter by turning from illustrations of strategies for proof-construction to consider a basic issue of principle, one which we have so far passed quietly by. (a) Start from an example. Tachyons are, by definition, elementary particles which are superluminal, i.e. which travel faster than the speed of light. So, adopting a QL language quantifying over elementary particles, and with the obvious predicates, the following is true: (1) ∀x(Tx → Sx).
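Sentence (1) can also be rendered in a proof assistant. A minimal sketch in Lean, where `Particle`, `T`, and `S` are illustrative stand-ins for the text's domain of elementary particles and its two predicates:

```lean
-- (1) says: every tachyon is superluminal, ∀x (Tx → Sx).
-- From (1) and the premise that a is a tachyon, universal instantiation
-- plus modus ponens yields that a is superluminal.
example {Particle : Type} (T S : Particle → Prop)
    (h1 : ∀ x, T x → S x) (a : Particle) (ha : T a) : S a :=
  h1 a ha
```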

There is conflicting experimental evidence about whether the “stakes” or importance of being wrong affect judgments about whether a subject knows a proposition. To date, judgments about stakes effects on knowledge have been investigated using binary paradigms: responses to “low stakes” cases are compared with responses to “high stakes” cases. However, stakes or importance are not binary properties—they are scalar: whether a situation is “high” or “low” stakes is a matter of degree. So far, no experimental work has investigated the scalar nature of stakes effects on knowledge: do stakes effects increase as the stakes get higher? Do stakes effects only appear once a certain threshold of stakes has been crossed? Does the effect plateau at a certain point? To address these questions, we conducted experiments that probe for the scalarity of stakes effects using several experimental approaches. We found evidence of scalar stakes effects using an “evidence-seeking” experimental design, but no evidence of scalar effects using a traditional “evidence-fixed” experimental design. In addition, using the evidence-seeking design, we uncovered a large, but previously unnoticed framing effect on whether participants are skeptical about whether someone can know something, no matter how much evidence they have. The rate of skeptical responses and the rate at which participants were willing to attribute “lazy knowledge”—that someone can know something without having to check—were themselves subject to a stakes effect: participants were more skeptical when the stakes were higher, and more prone to attribute lazy knowledge when the stakes were lower. We argue that the novel skeptical stakes effect provides resources to respond to criticisms of the evidence-seeking approach that argue that it does not target knowledge.

This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the first-order theories of Gödel’s results necessarily leave the mathematical structures ill-defined, e.g., there are necessarily models with infinite integers.

We’ve reached our last Tour (of SIST)*: Pragmatic and Error Statistical Bayesians (Excursion 6), marking the end of our reading with Souvenir Z, the final Souvenir, as well as the Farewell Keepsake in 6.7. …

We address problems (that have since been addressed) in a proofs version of a paper by Eva, Hartmann and Rad, who were attempting to justify the Kullback-Leibler divergence minimization solution to van Fraassen’s Judy Benjamin problem.
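The update rule at issue can be sketched concretely. In the standard presentation of the Judy Benjamin problem, the prior over the three cells {Blue, Red-HQ, Red-2nd} is (1/2, 1/4, 1/4) and Judy learns the constraint P(Red-HQ | Red) = 3/4; the minimization below is a simple grid search, used purely for illustration:

```python
# Kullback-Leibler minimization update: choose the posterior q that
# minimizes KL(q || prior) subject to the learned constraint. With
# q = (blue, red_hq, red_2nd), the constraint P(Red-HQ | Red) = 3/4
# means red_hq = 3 * red_2nd, leaving one free parameter r = red_2nd.
import math

prior = (0.5, 0.25, 0.25)

def kl(q, p):
    """KL divergence of q from p (natural log)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Grid-search the constrained posterior q = (1 - 4r, 3r, r), 0 < r < 1/4.
best = min(
    ((1 - 4 * r, 3 * r, r) for r in (i / 100000 for i in range(1, 25000))),
    key=lambda q: kl(q, prior),
)
print([round(x, 3) for x in best])  # q(Blue) ~ 0.533, above the prior 1/2
```

That q(Blue) rises above 1/2 on learning something only about the Red cells is precisely the counterintuitive feature of this solution that the literature debates.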

In this paper, I analyze the extent to which classical phase transitions, both first-order and continuous, pose a challenge for intertheoretic reduction. My main contention is that phase transitions are compatible with reduction, at least with a notion of intertheoretic reduction that combines Nagelian reduction and what Nickles (1973) called reduction_2. I also argue that, even if the same approach to reduction applies to both types of phase transitions, there is a crucial difference in their physical treatment. In fact, in addition to the thermodynamic limit, in the case of continuous phase transitions there is a second infinite limit involved that is related to the number of iterations in the renormalization group transformation. I contend that the existence of this second limit, which has been largely underappreciated in the philosophical debate, marks an important difference in the reduction of first-order and continuous phase transitions and also in the justification of the idealizations involved in these two cases.

Several treatments of the Shooting Room Paradox have failed to recognize the crucial role played by its involving a number of players unbounded in expectation. We indicate Reflection violations and other vulnerabilities in extant proposals, then show that the paradox does not arise when the expected number of participants is finite; the Shooting Room thus takes its place in the growing list of puzzles that have been shown to require infinite expectation. Recognizing this fact, we conclude that prospects for a “straight solution” are dim.
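The unbounded expectation can be exhibited directly. In the usual setup (used here only for illustration) each round admits ten times as many new players as the last and the game ends with probability 1/36, so the expected-entrants series has term ratio 10 × (35/36) > 1 and its partial sums diverge:

```python
# Partial sums of the expected number of Shooting Room participants.
# Round n occurs with probability (35/36)**(n-1) and admits 10**(n-1)
# new entrants (illustrative convention for the round sizes), so
# E[players] = sum over n of (35/36)**(n-1) * 10**(n-1), which diverges.
p_end = 1 / 36

def expected_players(rounds):
    """Partial sum of E[number of players] over the first `rounds` rounds."""
    total, entrants, p_reach = 0.0, 1, 1.0
    for _ in range(rounds):
        total += p_reach * entrants   # entrants counted if the round occurs
        entrants *= 10
        p_reach *= 1 - p_end          # game survives to the next round
    return total

for n in (10, 20, 40):
    print(n, expected_players(n))     # partial sums grow without bound
```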

The analysis of theory-confirmation generally takes the form: show that a theory in conjunction with physical data and auxiliary hypotheses yields a prediction about phenomena; verify the prediction; provide a quantitative measure of the degree of theory-confirmation this yields. The issue of confirmation for an entire framework (e.g., Newtonian mechanics en bloc, as opposed, say, to Newton’s theory of gravitation) either does not arise, or is dismissed in so far as frameworks are thought not to be the kind of thing that admits scientific confirmation. I argue that there is another form of scientific reasoning that has not received philosophical attention, what I call Newtonian abduction, that does provide confirmation for frameworks as a whole, and does so in two novel ways. (In particular, Newtonian abduction is not inference to the best explanation, but rather is closer to Peirce’s original idea of abduction.) I further argue that Newtonian abduction is at least as important a form of reasoning in science as the deductive form sketched above. The form is beautifully summed up by Maxwell (1876): “The true method of physical reasoning is to begin with the phenomena and to deduce the forces from them by a direct application of the equations of motion.”

Kurt Gödel (1931) showed that the formalist program of David Hilbert was doomed by demonstrating that any consistent formal system within which Peano arithmetic can be carried out is incomplete: it will be able to generate statements that can neither be proved nor disproved. Likewise, for such a system its consistency cannot be proved within the system itself. Central to the incompleteness theorem was the use of a paradox based on self-reference: “This statement is unprovable.” A great irony of the theorem is that Gödel used the system developed by Whitehead and Russell (1910–13) to avoid various logical paradoxes to generate his own paradox, thus hammering home the failure of the formalist program.