1.
    In the posthumously published ‘Truth and Probability’ (1926), Ramsey sets out an influential account of the nature, measurement, and norms of partial belief. The essay is a foundational work on subjectivist interpretations of probability, according to which probabilities can be interpreted as rational degrees of belief (see entry on Interpretations of Probability). Many of its key ideas and arguments have since featured in other foundational works within the subjectivist tradition (e.g., Savage 1954, Jeffrey 1965). Ramsey’s central claim in ‘Truth and Probability’ is that the laws of probability supply us with a ‘logic of partial belief’. That is, the laws specify what would need to be true of any consistent set of partial beliefs, in a manner analogous to how the laws of classical logic might be taken to generate necessary conditions on any consistent set of full beliefs. His case for this is based on a novel account of what partial beliefs are and how they can be measured.
    Found 1 day, 14 hours ago on Edward Elliott's site
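As a toy illustration of the 'logic of partial belief' described above, the sketch below (my own example, in the spirit of the Dutch-book idea rather than taken from the essay) checks a small credence assignment against the probability axioms; an assignment that violates them is exactly one that can be Dutch-booked:

```python
# A minimal coherence check for credences in A, B, A&B, and AvB.
# Violating these constraints is what exposes a believer to a Dutch
# book: a set of bets, each fair by the agent's lights, that together
# guarantee a sure loss.

def coherence_violations(p_a, p_b, p_and, p_or, tol=1e-9):
    """Return a list of probability-axiom constraints the credences violate."""
    violations = []
    for name, p in [("P(A)", p_a), ("P(B)", p_b),
                    ("P(A&B)", p_and), ("P(AvB)", p_or)]:
        if not (-tol <= p <= 1 + tol):
            violations.append(f"{name}={p} outside [0,1]")
    # Finite additivity: P(AvB) = P(A) + P(B) - P(A&B)
    if abs(p_or - (p_a + p_b - p_and)) > tol:
        violations.append("additivity fails")
    if p_and > min(p_a, p_b) + tol:
        violations.append("P(A&B) exceeds P(A) or P(B)")
    return violations

print(coherence_violations(0.6, 0.5, 0.2, 0.9))  # [] -> coherent
print(coherence_violations(0.6, 0.5, 0.2, 0.8))  # additivity fails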
2.
    Recent work in the physics literature demonstrates that, in particular classes of rotating spacetimes, physical light rays in general do not traverse null geodesics. Having presented this result, we discuss its philosophical significance, both for the clock hypothesis (and, in particular, a recent purported proof thereof for light clocks), and for the operational meaning of the metric field.
    Found 2 days, 19 hours ago on PhilSci Archive
3.
    Automated geometry theorem provers start with logic-based formulations of Euclid’s axioms and postulates, and often assume the Cartesian coordinate representation of geometry. That is not how the ancient mathematicians started: for them the axioms and postulates were deep discoveries, not arbitrary postulates. What sorts of reasoning machinery could the ancient mathematicians, and other intelligent species (e.g. crows and squirrels), have used for spatial reasoning? “Diagrams in minds” perhaps? How did natural selection produce such machinery?
    Found 4 days, 13 hours ago on Aaron Sloman's site
4.
    George Boole (1815–1864) was an English mathematician and a founder of the algebraic tradition in logic. He worked as a schoolmaster in England and from 1849 until his death as professor of mathematics at Queen’s University, Cork, Ireland. He revolutionized logic by applying methods from the then-emerging field of symbolic algebra to logic. Where traditional (Aristotelian) logic relied on cataloging the valid syllogisms of various simple forms, Boole’s method provided general algorithms in an algebraic language which applied to an infinite variety of arguments of arbitrary complexity. These results appeared in two major works, The Mathematical Analysis of Logic (1847) and The Laws of Thought (1854).
    Found 4 days, 16 hours ago on Stanford Encyclopedia of Philosophy
5.
Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.
    Found 5 days, 8 hours ago on Rodney Brooks's site
6.
    Famously, Pascal’s Wager purports to show that a prudentially rational person should aim to believe in God’s existence, even when sufficient epistemic reason to believe in God is lacking. Perhaps the most common view of Pascal’s Wager, though, holds it to be subject to a decisive objection, the so-called Many Gods Objection, according to which Pascal’s Wager is incomplete since it only considers the possibility of a Christian God. I will argue, however, that the ambitious version of this objection most frequently encountered in the literature on Pascal’s Wager fails. In the wake of this failure I will describe a more modest version of the Many Gods Objection and argue that this version still has strength enough to defeat the canonical Wager. The essence of my argument will be this: the Wager aims to justify belief in a context of uncertainty about God’s existence, but this same uncertainty extends to the question of God’s requirements for salvation. Just as we lack sufficient epistemic reason to believe in God, so too do we lack sufficient epistemic reason to judge that believing in God increases our chance of salvation. Instead, it is possible to imagine diverse gods with diverse requirements for salvation, not all of which require theistic belief. The context of uncertainty in which the Wager takes place renders us unable to single out one sort of salvation requirement as more probable than all others, thereby infecting the Wager with a fatal indeterminacy.
    Found 6 days, 8 hours ago on Craig Duncan's site
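To see the modest Many Gods point in miniature, here is a toy decision matrix (all utilities and credences invented for illustration, not taken from the paper), in which a possible god who rewards non-belief keeps the Wager from singling out belief:

```python
# Toy expected-utility matrix for the (modest) Many Gods point.
# All utilities and credences are invented for illustration.

states = ["no_god", "rewards_belief", "rewards_nonbelief"]

INF = 10**9   # crude finite stand-in for an infinite salvation payoff

payoff = {
    ("believe",      "no_god"):            0,
    ("believe",      "rewards_belief"):    INF,
    ("believe",      "rewards_nonbelief"): 0,
    ("dont_believe", "no_god"):            1,   # modest worldly goods
    ("dont_believe", "rewards_belief"):    0,
    ("dont_believe", "rewards_nonbelief"): INF,
}

def expected_utility(act, probs):
    return sum(probs[s] * payoff[(act, s)] for s in states)

# With nothing to favour one salvation requirement over the other,
# the huge terms match and the Wager no longer singles out belief.
probs = {"no_god": 0.9, "rewards_belief": 0.05, "rewards_nonbelief": 0.05}
for act in ("believe", "dont_believe"):
    print(act, expected_utility(act, probs))
```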
7.
    A simple argument proposes a direct link between realism about quantum mechanics and one kind of metaphysical holism: if elementary quantum theory is at least approximately true, then there are entangled systems with intrinsic whole states for which the intrinsic properties and spatiotemporal arrangements of salient subsystem parts do not suffice.
    Found 6 days, 9 hours ago on Elizabeth Miller's site
8.
    It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
    Found 1 week ago on Kevin Kelly's site
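The following sketch illustrates the retraction-counting idea on the simple "counting" problem (an illustration only, not the paper's general worst-case theorem): the Ockham method retracts once per observed event, while a method that leaps past the simplest hypothesis and later falls back pays an extra reversal:

```python
# Illustrative sketch: the truth is "exactly k events occur"; evidence
# arrives as a 0/1 stream; a method conjectures a count after each datum.
# We count reversals of opinion (retractions).

def retractions(conjectures):
    return sum(1 for a, b in zip(conjectures, conjectures[1:]) if a != b)

def run(method, stream):
    seen, quiet, out = 0, 0, [method(0, 0)]
    for datum in stream:
        seen += datum
        quiet = 0 if datum else quiet + 1
        out.append(method(seen, quiet))
    return out

# Ockham method: always the simplest hypothesis compatible with the data.
ockham = lambda seen, quiet: seen

# A method that leaps past the simplest hypothesis, then falls back
# after a quiet stretch -- it pays for the leap with an extra reversal.
leaper = lambda seen, quiet: seen if quiet >= 5 else seen + 1

stream = [1, 0, 0, 0, 0, 0, 0]            # one event, then silence
print(retractions(run(ockham, stream)))   # 1
print(retractions(run(leaper, stream)))   # 2
```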
9.
In the framework of Brans–Dicke theory, a cosmological model regarding the expanding universe has been formulated by considering an inter-conversion of matter and dark energy. A function of time has been incorporated into the expression of the density of matter to account for the non-conservation of the matter content of the universe. This function is proportional to the matter content of the universe. Its functional form is determined by using empirical expressions of the scale factor and the scalar field in field equations. This scale factor has been chosen to generate a signature flip of the deceleration parameter with time. The matter content is found to decrease with time monotonically, indicating a conversion of matter into dark energy. This study leads us to the expressions of the proportions of matter and dark energy of the universe. Dependence of various cosmological parameters upon the matter content has been explored.
    Found 1 week, 1 day ago on PhilSci Archive
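For a concrete picture of a "signature flip" of the deceleration parameter q = -a·ä/ȧ², here is a sketch using an assumed toy scale factor a(t) = sinh(t)^(2/3), not the paper's empirical expressions:

```python
# Hedged illustration of a deceleration-parameter signature flip.
# The scale factor below is a standard toy choice, assumed here for
# demonstration only.

import sympy as sp

t = sp.symbols("t", positive=True)
a = sp.sinh(t) ** sp.Rational(2, 3)

# Deceleration parameter q = -a * a'' / a'^2
q = sp.simplify(-a * sp.diff(a, t, 2) / sp.diff(a, t) ** 2)
print(q)  # reduces to 1/2 - (3/2)*tanh(t)**2, up to simplification

for ti in [0.3, 0.66, 2.0]:
    print(ti, float(q.subs(t, ti)))
# q > 0 (deceleration) at early times, q < 0 (acceleration) later;
# the flip occurs where tanh(t)**2 = 1/3, i.e. near t ~ 0.658.
```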
10.
    How is it possible that models from game theory, which are typically highly idealised, can be harnessed for designing institutions through which we interact? I argue that game theory assumes that social interactions have a specific structure, which is uncovered with the help of directed graphs. The graphs make explicit how game theory encodes counterfactual information in natural collections of its models and can therefore be used to track how model-interventions change model-outcomes. For model-interventions to inform real-world design requires the truth of a causal hypothesis, namely that structural relations specified in a model approximate causal relations in the target interaction; or in other words, that the directed graph can be interpreted causally. In order to increase their confidence in this hypothesis, market designers complement their models with natural and laboratory experiments, and computational methods. Throughout the paper, the reform of a matching market for medical residents provides a case study for my proposed view, which hasn’t been previously considered in the philosophy of science.
    Found 1 week, 1 day ago on PhilSci Archive
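A minimal sketch of the directed-graph picture the abstract describes (variable names and rules invented for illustration): structural equations as functions, and a model-intervention as overriding one equation and re-solving, loosely echoing the medical-residents matching case:

```python
# Structural equations over a directed graph, with interventions.
# All names and rules here are illustrative assumptions, not the
# paper's model.

def solve(equations, interventions=None):
    """Evaluate variables in dependency order; an intervention
    (variable -> forced value) replaces that variable's equation."""
    values, interventions = {}, (interventions or {})
    for var, f in equations:            # assumed topologically ordered
        values[var] = interventions.get(var, f(values))
    return values

# Toy model: the institution's rule shapes strategies, which shape
# the outcome.
model = [
    ("rule",     lambda v: "deferred_acceptance"),
    ("strategy", lambda v: "truthful"
                 if v["rule"] == "deferred_acceptance" else "strategic"),
    ("outcome",  lambda v: "stable"
                 if v["strategy"] == "truthful" else "unravelling"),
]

print(solve(model)["outcome"])                                 # stable
print(solve(model, {"rule": "priority_matching"})["outcome"])  # unravelling
```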
11.
Thermodynamics makes definite predictions about the thermal behavior of macroscopic systems in and out of equilibrium. Statistical mechanics aims to derive this behavior from the dynamics and statistics of the atoms and molecules making up these systems. A key element in this derivation is the large number of microscopic degrees of freedom of macroscopic systems. Therefore, the extension of thermodynamic concepts, such as entropy, to small (nano) systems raises many questions. Here we shall reexamine various definitions of entropy for nonequilibrium systems, large and small. These include thermodynamic (hydrodynamic), Boltzmann, and Gibbs-Shannon entropies. We shall argue that, despite its common use, the last is not an appropriate physical entropy for such systems, either isolated or in contact with thermal reservoirs: physical entropies should depend on the microstate of the system, not on a subjective probability distribution. To square this point of view with experimental results of Bechhoefer we shall argue that the Gibbs-Shannon entropy of a nanoparticle in a thermal fluid should be interpreted as the Boltzmann entropy of a dilute gas of Brownian particles in the fluid.
    Found 1 week, 1 day ago on Sheldon Goldstein's site
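The contrast the abstract draws can be seen numerically: the Gibbs-Shannon entropy moves with the probability distribution one assigns over microstates, while a Boltzmann-style entropy depends only on the size of the macrostate's region of microstates (illustrative toy numbers, with k_B = 1):

```python
# Gibbs-Shannon entropy vs a Boltzmann-style count, in units of k_B = 1.
# The microstate space and probabilities are invented for illustration.

import math

def gibbs_shannon(p):
    """S_GS = -sum p_i ln p_i, a functional of the distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def boltzmann(n_microstates):
    """S_B = ln |Gamma(M)|, fixed by the macrostate's microstate count."""
    return math.log(n_microstates)

# Two probability assignments over the same 4 microstates:
uniform = [0.25] * 4
peaked  = [0.97, 0.01, 0.01, 0.01]

print(gibbs_shannon(uniform))  # ~1.386, equal to ln(4)
print(gibbs_shannon(peaked))   # ~0.168: shifts with the (subjective) p
print(boltzmann(4))            # ln(4): unchanged by anyone's credences
```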
12.
    We discuss an article by Steven Weinberg [38] expressing his discontent with the usual ways to understand quantum mechanics. We examine the two solutions that he considers and criticizes and propose another one, which he does not discuss, the pilot wave theory or Bohmian mechanics, for which his criticisms do not apply.
    Found 1 week, 1 day ago on Sheldon Goldstein's site
13.
    This paper explores some of the ways in which agentive, deontic, and epistemic concepts combine to yield ought statements—or simply, oughts—of different characters. Consider an example. Suppose I place a coin on the table, either heads up or tails up, though the coin is covered and you do not know which. And suppose you are then asked to bet whether the coin is heads up or tails up, with $10 to win if you bet correctly. If the coin is heads up but you bet tails, there is a sense in which we would naturally say that you ought to have made the other choice—at least, things would have turned out better for you if you had. But an ought statement like this does not involve any suggestion that you should be criticized for your actual choice. Nobody could blame you, in this situation, for betting incorrectly. By contrast, imagine that the coin is placed in such a way that you can see that it is heads up, but you bet tails anyway. Again we would say that you ought to have done otherwise, but this time it seems that you could legitimately be criticized for your choice.
    Found 1 week, 1 day ago on John Horty's site
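The abstract's coin example as a toy expected-value calculation (the $10 stake is from the example; the credences and code are my own illustration):

```python
# Two information states for the coin bet: covered (credence 1/2)
# and visible heads-up (credence 1). Stakes as in the example.

def expected_payoff(bet, credence_heads, prize=10):
    p = credence_heads if bet == "heads" else 1 - credence_heads
    return p * prize

# Covered coin: both bets look equally good by the bettor's lights,
# so losing is no ground for criticism, even though there is a bet
# the bettor objectively "ought" to have made.
for bet in ("heads", "tails"):
    print(bet, expected_payoff(bet, 0.5))   # 5.0 and 5.0

# Visible coin (heads up): now betting tails is the criticizable choice.
for bet in ("heads", "tails"):
    print(bet, expected_payoff(bet, 1.0))   # 10.0 and 0.0
```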
14.
    An approach to frame semantics is built on a conception of frames as finite automata, observed through the strings they accept. An institution (in the sense of Goguen and Burstall) is formed where these strings can be refined or coarsened to picture processes at various bounded granularities, with transitions given by Brzozowski derivatives.
    Found 1 week, 1 day ago on Tim Fernando's site
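For readers unfamiliar with Brzozowski derivatives, here is a compact, self-contained sketch in a standard regular-expression setting (the representation is mine, not the paper's institution-theoretic formalism):

```python
# Brzozowski derivatives: D_a(L) = { w : a·w in L }. Repeatedly taking
# derivatives gives the transitions of an automaton observed through
# the strings it accepts.

from dataclasses import dataclass
from functools import reduce

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass          # rejects everything

@dataclass(frozen=True)
class Eps(Re): pass            # accepts only the empty string

@dataclass(frozen=True)
class Chr(Re):
    c: str

@dataclass(frozen=True)
class Alt(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Cat(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Star(Re):
    r: Re

def nullable(e: Re) -> bool:
    """Does e accept the empty string?"""
    if isinstance(e, (Eps, Star)): return True
    if isinstance(e, Alt): return nullable(e.l) or nullable(e.r)
    if isinstance(e, Cat): return nullable(e.l) and nullable(e.r)
    return False               # Empty, Chr

def deriv(e: Re, a: str) -> Re:
    """The derivative of e with respect to the symbol a."""
    if isinstance(e, Chr): return Eps() if e.c == a else Empty()
    if isinstance(e, Alt): return Alt(deriv(e.l, a), deriv(e.r, a))
    if isinstance(e, Star): return Cat(deriv(e.r, a), e)
    if isinstance(e, Cat):
        first = Cat(deriv(e.l, a), e.r)
        return Alt(first, deriv(e.r, a)) if nullable(e.l) else first
    return Empty()             # Empty, Eps

def accepts(e: Re, s: str) -> bool:
    return nullable(reduce(deriv, s, e))

# (ab)* accepts "abab" but not "aba":
e = Star(Cat(Chr("a"), Chr("b")))
print(accepts(e, "abab"), accepts(e, "aba"))   # True False
```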
15.
Beall and Murzi (J Philos 110(3):143–165, 2013) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules (over sufficiently expressive base theories). As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (Notre Dame J Form Log 58(1):1–19, 2017) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi (J Philos Log. doi:10.1007/s10992-017-9438-x, 2017), is a generalisation of Kripke’s notion of grounded truth (J Philos 72:690–716, 1975), and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic (nor, more generally, of a revisionary approach to semantic paradox), we take the notion of naïve …
    Found 1 week, 1 day ago on Julien Murzi's site
16.
Scientific research is almost always conducted by communities of scientists of varying size and complexity. Such communities are effective, in part, because they divide their cognitive labor: not every scientist works on the same project. Scientists manage to do this without a central authority allocating them to different projects. Thanks largely to the pioneering studies of Philip Kitcher and Michael Strevens, understanding this self-organization has become an important area of research in the philosophy of science.
    Found 1 week, 1 day ago on Michael Weisberg's site
17.
Recently the first protective measurement has been realized in experiment [Nature Phys. 13, 1191 (2017)], which can measure the expectation value of an observable from a single quantum system. This raises an important and pressing issue of whether protective measurement implies the reality of the wave function. If the answer is yes, this will improve the influential PBR theorem [Nature Phys. 8, 475 (2012)] by removing auxiliary assumptions, and help settle the issue about the nature of the wave function. In this paper, we demonstrate that this is indeed the case. It is shown that a ψ-epistemic model and quantum mechanics have different predictions about the variance of the result of a Zeno-type protective measurement with finite N.
    Found 1 week, 1 day ago on PhilSci Archive
18.
    The thesis I am called upon to defend is this: given any collection of objects, no matter how disparate or widely scattered, there is a further object composed of them all. For example, there is an object composed of my left tennis shoe and the lace that is threaded through its eyelets—so far, perhaps, no surprise. But there are all of the following objects as well: the object composed of the lace threaded through my left shoe and the lace threaded through my right shoe; the object composed of the Eiffel Tower and the tip of my nose; the object composed of the moon and the six pennies scattered across my desktop. For any objects a through z, whatever and wherever they may be, there is an object having those objects as its parts. This thesis goes by several names: conjunctivism (Chisholm), unrestricted composition (Lewis), and mereological universalism (van Inwagen). It is often thought to fly in the face of common sense, but it has won the allegiance of several philosophers, and it is a standard element in the formal theory of part and whole as it was developed in the twentieth century. In what follows I shall explain why I believe it to be true.
    Found 1 week, 5 days ago on James Van Cleve's site
19.
    You may very well know the Five Books website, where a wide-ranging cast of contributors are asked “to make book recommendations in their area of work and explain their choices in an interview”. The recommendations are often quirky, sometimes even slightly bizarre, but rarely without interest. …
    Found 2 weeks, 3 days ago on Peter Smith's blog
20.
It is now commonly acknowledged that much early theorising concerning modal notions suffered from various confusions and conflations. A major advance, at least in twentieth century philosophy, was Kripke’s work, which brought great clarity to the nature of modality and its varieties (e.g. …
    Found 2 weeks, 3 days ago on PhilPapers
21.
    There are four notions in this thesis that deserve close examination: epistemic status, opinion, dependence, and moral features. The first four sections of this paper examine each of these notions in turn. Along the way, I raise some objections to existing accounts of moral encroachment. For instance, many theories fail to give sufficient attention to moral encroachment on credences. Also, many theories focus on moral features that do not have the correct structure to support standard analogies between pragmatic and moral encroachment. The fifth and final section of the paper addresses several objections and frequently asked questions.
    Found 2 weeks, 5 days ago on Sarah Moss's site
22.
Change and local spatial variation are missing in Hamiltonian General Relativity according to the most common definition of observables as having 0 Poisson bracket with all first-class constraints. But other definitions of observables have been proposed. In pursuit of Hamiltonian-Lagrangian equivalence, Pons, Salisbury and Sundermeyer use the Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints. Kuchař waived the 0 Poisson bracket condition for the Hamiltonian constraint to achieve changing observables. A systematic combination of the two reforms might use the gauge generator but permit non-zero Lie derivative Poisson brackets for the external gauge symmetry of General Relativity.
    Found 2 weeks, 5 days ago on PhilSci Archive
23.
    It’s no secret that there are many competing views on the semantics of conditionals. One of the tools of the trade is that of any experimental scientist: put the object of study in various environments and see what happens.
    Found 3 weeks ago on Kai von Fintel's site
24.
I predicted that the degree of agreement behind the ASA’s “6 principles” on p-values, partial as it was, was unlikely to be replicated when it came to most of the “other approaches” with which some would supplement or replace significance tests, notably Bayesian updating, Bayes factors, or likelihood ratios (confidence intervals are dual to hypothesis tests). …
    Found 3 weeks, 1 day ago on D. G. Mayo's blog
25.
What does ‘might’ mean? One hypothesis is that ‘It might be raining’ is essentially an avowal of ignorance like ‘For all I know, it’s raining’. But it turns out these two constructions embed in different ways—in particular as parts of larger constructions like Wittgenstein (1953)’s ‘It might be raining and it’s not’ and Moore (1942)’s ‘It’s raining and I don’t know it’, respectively. A variety of approaches have been developed to account for those differences. All approaches agree that both Moore sentences and Wittgenstein sentences are classically consistent. In this paper I argue against this consensus. I adduce a variety of new data which I argue can best be accounted for if we treat Wittgenstein sentences as being classically inconsistent. This creates a puzzle, since there is decisive reason to think that ⌜Might p⌝ is consistent with ⌜Not p⌝. How can it also be that ⌜Might p and not p⌝ and ⌜Not p and might p⌝ are inconsistent? To make sense of this situation, I propose a new theory of epistemic modals and their interaction with embedding operators. This account makes sense of the subtle embedding behavior of epistemic modals, shedding new light on their meaning and, more broadly, the dynamics of information in natural language.
    Found 3 weeks, 2 days ago on PhilPapers
26.
    Cognition in living entities – and their social groupings or institutional artifacts – is necessarily as complicated as their embedding environments, which, for humans, includes a particularly rich cultural milieu. The asymptotic limit theorems of information and control theories permit construction of a new class of empirical ‘regression-like’ statistical models for cognitive developmental processes, their dynamics, and modes of dysfunction. Such models may, as have their simpler analogs, prove useful in the study and remediation of cognitive failure at and across the scales and levels of organization that constitute and drive the phenomena of life. These new models particularly focus on the roles of sociocultural environment and stress, in a large sense, as both trigger for the failure of the regulation of biocognition and as ‘riverbanks’ determining the channels of pathology, with implications across life-course developmental trajectories. We examine the effects of an embedding cultural milieu and its socioeconomic implementations using the ‘lenses’ of metabolic optimization, control system theory, and an extension of symmetry-breaking appropriate to information systems. A central implication is that most, if not all, human developmental disorders are fundamentally culture-bound syndromes. This has deep implications for both individual treatment and public health policy.
    Found 3 weeks, 2 days ago on PhilSci Archive
27.
    The United Nations Population Division’s latest report predicts a global population of over 11 billion by 2100. That is the ‘medium’ projection, based on standard demographic transition theory. There is also a ‘low’ projection, in which the total fertility rate is lower by half a child per woman; here, population peaks at 8.7 billion mid-century, returning to 7.3 billion by 2100.
    Found 3 weeks, 2 days ago on Hilary Greaves's site
28.
    This chapter focusses on the question of optimal human population size: how many people it is best to have alive on Earth at a given time. The exercise is one of optimisation subject to constraints. Population axiology is one highly relevant input to the exercise, as it supplies the objective: it tells us which logically possible states of affairs – in the sense of assignments of well-being levels to persons – are better than which others. But not all logically possible states of affairs are achievable: we cannot in practice have (say) a population of a quadrillion humans, all living lives of untold bliss, on Earth simultaneously. The real world supplies constraints.
    Found 3 weeks, 2 days ago on Hilary Greaves's site
29.
The logical systems within which Frege, Schröder, Russell, Zermelo and other early mathematical logicians worked were all higher-order. It was not until the 1910s that first-order logic was even distinguished as a subsystem of higher-order logic. As late as the 1920s, higher-order quantification was still quite generally allowed: in fact, it does not seem as if any major logician, among non-intuitionists, except Thoralf Skolem restricted himself to first-order logic. Proofs were sometimes allowed to be infinite and infinitely long expressions were allowed in the languages that were used.
    Found 3 weeks, 3 days ago on PhilPapers
30.
    I've written quite a lot on this blog recently about how we should aggregate the credences or subjective probabilities of a group of individuals to give their collective credences (here, here, here). In some of those posts, this one in particular, I asked how we should combine credences if we wish to use them to make a group decision. …
    Found 3 weeks, 5 days ago on M-Phi
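One standard aggregation rule discussed in this literature is linear pooling, where the group credence in each proposition is a weighted average of the members' credences. A minimal sketch (weights and credences invented for illustration; this is one rule among several the posts compare):

```python
# Linear pooling of individual credence functions.
# All names and numbers below are illustrative assumptions.

def linear_pool(credences, weights):
    """credences: list of dicts mapping proposition -> probability;
    weights: one nonnegative weight per member, summing to 1."""
    assert abs(sum(weights) - 1) < 1e-9
    props = credences[0].keys()
    return {p: sum(w * c[p] for c, w in zip(credences, weights))
            for p in props}

alice = {"rain": 0.8, "no_rain": 0.2}
bob   = {"rain": 0.2, "no_rain": 0.8}

print(linear_pool([alice, bob], [0.5, 0.5]))   # equal-weight pool: 0.5
print(linear_pool([alice, bob], [0.9, 0.1]))   # deferring mostly to alice
```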