1.
    The New England Journal of Medicine (NEJM) announced new guidelines for authors on statistical reporting yesterday*. The ASA describes the change as “in response to the ASA Statement on P-values and Statistical Significance and subsequent The American Statistician special issue on statistical inference” (ASA I and II, in my abbreviation). …
    Found 6 hours, 42 minutes ago on D. G. Mayo's blog
  2.
    According to a nowadays widely discussed analysis by Itamar Pitowsky, the theoretical problems of QT originate from two ‘dogmas’: the first forbidding the use of the notion of measurement in the fundamental axioms of the theory; the second imposing an interpretation of the quantum state as representing a system’s objectively possessed properties and evolution. In this paper I argue that, contrary to Pitowsky’s analysis, depriving the quantum state of its ontological commitment is not sufficient to solve the conceptual issues that affect the foundations of QT.
    Found 1 day, 1 hour ago on PhilSci Archive
  3.
    Much has been said about Moore’s proof of the external world, but the notion of proof that Moore employs has been largely overlooked. I suspect that most have either found nothing wrong with it, or they have thought it somehow irrelevant to whether the proof serves its anti-skeptical purpose. I show, however, that Moore’s notion of proof is highly problematic. For instance, it trivializes in the sense that any known proposition is provable. This undermines Moore’s proof as he conceives it since it introduces a skeptical regress that he goes to great lengths to resist. I go on to consider various revisions of Moore’s notion of proof and finally settle on one that I think is adequate for Moore’s purposes and faithful to what he says concerning immediate knowledge.
    Found 1 day, 4 hours ago on Michael De's site
  4.
    The paper discusses two contemporary views about the foundation of statistical mechanics and deterministic probabilities in physics: one that regards a measure on the initial macro-region of the universe as a probability measure that is part of the Humean best system of laws (Mentaculus) and another that relates it to the concept of typicality. The first view is tied to Lewis’ Principal Principle, the second to a version of Cournot’s principle. We will defend the typicality view and address open questions about typicality and the status of typicality measures.
    Found 1 day, 10 hours ago on Dustin Lazarovici's site
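    For readers unfamiliar with the two principles named in the abstract above, here is a rough gloss in standard textbook notation (not the paper's own formulation; Cr, ch, and E are my labels for credence, chance, and admissible evidence):

        % Lewis's Principal Principle (schematic form): an agent's initial
        % credence in A, conditional on the chance of A being x together with
        % admissible evidence E, should equal x.
        \[
          \mathrm{Cr}\bigl(A \mid \mathrm{ch}(A) = x \wedge E\bigr) = x .
        \]
        % A version of Cournot's principle, by contrast, says only that events
        % of sufficiently small probability do not occur; typicality accounts
        % build on this weaker bridge between the measure and what happens.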
  5.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
    Found 1 day, 17 hours ago on PhilPapers
  6.
    In his 1961 paper, “Irreversibility and Heat Generation in the Computing Process,” Rolf Landauer speculated that there exists a fundamental link between heat generation in computing devices and the computational logic in use. According to Landauer, this heating effect is the result of a connection between the logic of computation and the fundamental laws of thermodynamics. The minimum heat generated by computation, he argued, is fixed by rules independent of its physical implementation. The limits are fixed by the logic and are the same no matter the hardware, or the way in which the logic is implemented. His analysis became the foundation for both a new literature, termed “the thermodynamics of computation” by Charles Bennett, and a new physical law, Landauer’s principle.
    Found 2 days, 1 hour ago on John Norton's site
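    For reference, the quantitative content of Landauer's principle in its standard form (a textbook statement, not specific to Norton's discussion; the numerical value assumes room temperature, T = 300 K):

        % Minimum heat dissipated per bit of information erased:
        \[
          Q_{\min} = k_B T \ln 2
          \approx 1.38\times 10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693
          \approx 2.9\times 10^{-21}\,\mathrm{J},
        \]
        % where k_B is Boltzmann's constant. The bound depends only on
        % temperature and the logic of the operation, not on the hardware.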
  7.
    We reexamine some of the classic problems connected with the use of cardinal utility functions in decision theory, and discuss Patrick Suppes’ contributions to this field in light of a reinterpretation we propose for these problems. We analytically decompose the doctrine of ordinalism, which only accepts ordinal utility functions, and distinguish between several doctrines of cardinalism, depending on what components of ordinalism they specifically reject. We identify Suppes’ doctrine with the major deviation from ordinalism that conceives of utility functions as representing preference differences, while being nonetheless empirically related to choices. We highlight the originality, promises and limits of this choice-based cardinalism.
    Found 2 days, 13 hours ago on Jean Baccelli's site
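    A compact way to state the contrast the abstract draws, in textbook notation (the symbols u, ≿ and ≿* are mine, not Baccelli's):

        % Ordinal representation: u represents the preference relation iff
        \[
          a \succsim b \iff u(a) \ge u(b),
        \]
        % and any strictly increasing transform of u works equally well.
        % The difference-based cardinalism attributed to Suppes adds that
        % utility differences track preference differences:
        \[
          (a,b) \succsim^{*} (c,d) \iff u(a) - u(b) \ge u(c) - u(d),
        \]
        % which pins u down up to positive affine transformations.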
  8.
    In this paper, I examine the decision-theoretic status of risk attitudes. I start by providing evidence showing that the risk attitude concepts do not play a major role in the axiomatic analysis of the classic models of decision-making under risk. This can be interpreted as reflecting the neutrality of these models between the possible risk attitudes. My central claim, however, is that such neutrality needs to be qualified and the axiomatic relevance of risk attitudes needs to be re-evaluated accordingly. Specifically, I highlight the importance of the conditional variation and the strengthening of risk attitudes, and I explain why they establish the axiomatic significance of the risk attitude concepts. I also present several questions for future research regarding the strengthening of risk attitudes.
    Found 2 days, 13 hours ago on Jean Baccelli's site
  9.
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We argue that this type of autonomous social machine has provided a new paradigm for the design of intelligent systems, marking a new phase in the field of AI. The consequences of this observation range from the methodological and philosophical to the ethical. On one side, it emphasises the role of Human-Computer Interaction in the design of intelligent systems; on the other, it draws attention to the risks both for individuals and for societies that rely on mechanisms that are not necessarily controllable. The difficulty companies face in regulating the spread of misinformation, and the difficulty authorities face in protecting task-workers managed by a software infrastructure, may be just some of the effects of this technological paradigm.
    Found 2 days, 13 hours ago on PhilPapers
  10.
    Children acquire complex concepts like DOG earlier than simple concepts like BROWN, even though our best neuroscientific theories suggest that learning the former is harder than learning the latter and, thus, should take more time (Werning 2010). This is the Complex-First Paradox. We present a novel solution to the Complex-First Paradox. Our solution builds on a generalization of Xu and Tenenbaum’s (2007) Bayesian model of word learning. By focusing on a rational theory of concept learning, we show that it is easier to infer the meaning of complex concepts than that of simple concepts.
    Found 3 days, 2 hours ago on PhilSci Archive
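    A minimal sketch of the kind of Bayesian reasoning the abstract appeals to, using the size principle from Xu and Tenenbaum's model (the extension sizes and priors below are toy numbers I have invented; the paper's own model is more general):

        # Toy Bayesian word learning with the size principle: the likelihood
        # of n examples drawn at random from a hypothesis's extension is
        # (1 / extension_size) ** n, so a narrow ("complex") hypothesis like
        # DOG gains evidence faster than a broad one like BROWN_THING.
        hypotheses = {
            "DOG":         {"size": 500,   "prior": 0.05},   # toy numbers
            "BROWN_THING": {"size": 5_000, "prior": 0.95},
        }

        def posterior(n):
            """Posterior after n examples lying in both extensions (e.g. brown dogs)."""
            unnorm = {h: v["prior"] * (1.0 / v["size"]) ** n
                      for h, v in hypotheses.items()}
            z = sum(unnorm.values())
            return {h: round(p / z, 3) for h, p in unnorm.items()}

        for n in (1, 2, 3):
            print(n, posterior(n))
        # With these toy numbers the broad hypothesis still leads after one
        # example, but the narrower concept DOG dominates after two or three.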
  11.
    It is intuitively plausible to assume that if it is asserted that ‘a is overall better than b (all things considered)’, such a verdict is often based on multiple evaluations of the items a and b under consideration, which are sometimes also called ‘criteria’, ‘features’, or ‘attributes’. Usually, an item a is better than an item b in some aspects, but not in others, and there is a weighing or outranking of these aspects to determine which item is better.
    Found 4 days, 11 hours ago on Erich Rast's site
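    One standard way to make the 'weighing' of aspects precise is a simple additive model (offered only as an illustration; the weights w_i and aspect scores v_i are my notation, and outranking methods work differently):

        % Overall betterness via weighted aggregation of aspect scores,
        % where v_i(a) is a's score on aspect i and w_i >= 0 its weight:
        \[
          a \succ b \quad\text{iff}\quad \sum_i w_i\, v_i(a) \;>\; \sum_i w_i\, v_i(b).
        \]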
  12.
    Despite initial appearances, paradoxes in classical logic, when comprehension is unrestricted, do not go away even if the law of excluded middle is dropped, unless the law of noncontradiction is eliminated as well, which makes logic much less powerful. Is there an alternative way to preserve unrestricted comprehension in common language while retaining the power of classical logic? The answer is yes, when provability modal logic is utilized. Modal logic NL is constructed for this purpose. Unless a paradox is provable, the usual rules of classical logic follow. The main point of modal logic NL is to tune the law of excluded middle so that φ and its negation ¬φ are allowed to both be false in case a paradox provably arises. Curry's paradox is resolved differently from other paradoxes, but it too is resolved in modal logic NL. These changes allow for unrestricted comprehension and naïve set theory, and allow us to justify the use of common language in a formal sense.
    Found 5 days, 2 hours ago on PhilSci Archive
  13.
    In this chapter I urge a fresh look at the problem of explaining equilibration. The process of equilibration, I argue, is best seen, not as part of the subject matter of thermodynamics, but as a presupposition of thermodynamics. Further, the relevant tension between the macroscopic phenomena of equilibration and the underlying microdynamics lies not in a tension between time-reversal invariance of the microdynamics and the temporal asymmetry of equilibration, but in a tension between preservation of distinguishability of states at the level of microphysics and the continual effacing of the past at the macroscopic level. This suggests an open systems approach, where the puzzling question is not the erasure of the past, but the question of how reliable prediction, given only macroscopic data, is ever possible at all. I suggest that the answer lies in an approach that has not been afforded sufficient attention in the philosophical literature, namely, one based on the temporal asymmetry of causal explanation.
    Found 6 days, 10 hours ago on PhilSci Archive
  14.
    Work on chance has, for some time, focused on the normative nature of chance: the way in which objective chances constrain what partial beliefs, or credences, we ought to have. According to me, an agent is an expert if and only if their credences are maximally accurate; they are an analyst expert with respect to a body of evidence if and only if their credences are maximally accurate conditional on that body of evidence. I argue that the chances are maximally accurate conditional on local, intrinsic information. This matches nicely with a requirement that Schaffer (2003, 2007) places on chances, called at different times (and in different forms) the Stable Chance Principle and the Intrinsicness Requirement. I call my account the Accuracy-Stability account. I then show how the Accuracy-Stability account underlies some arguments for the New Principle, and show how it revives a version of Van Fraassen’s calibrationist approach. But two new problems arise: first, the Accuracy-Stability account risks collapsing into simple frequentism. But simple frequentism is a bad view. I argue that the same reasoning which motivates the Stability requirement motivates a continuity requirement, which avoids at least some of the problems of frequentism. I conclude by considering an argument from Briggs (2009) that Humean chances aren’t fit to be analyst experts; I argue that the Accuracy-Stability account overcomes Briggs’ difficulties.
    Found 6 days, 23 hours ago on Michael Townsen Hicks's site
  15.
    Logical pluralism is the view that there is more than one correct logic. Most logical pluralists think that logic is normative in the sense that you make a mistake if you accept the premisses of a valid argument but reject its conclusion. Some authors have argued that this combination is self-undermining: Suppose that L1 and L2 are correct logics that coincide except for the argument from Γ to φ, which is valid in L1 but invalid in L2. If you accept all sentences in Γ, then, by normativity, you make a mistake if you reject φ. In order to avoid mistakes, you should accept φ or suspend judgment about φ. Both options are problematic for pluralism. Can pluralists avoid this worry by rejecting the normativity of logic? I argue that they cannot. All else being equal, the argument goes through even if logic is not normative.
    Found 1 week ago on PhilPapers
  16.
    We propose a new account of calibration according to which calibrating a technique shows that the technique does what it is supposed to do. To motivate our account, we examine an early 20th century debate about chlorophyll chemistry and Mikhail Tswett’s use of chromatographic adsorption analysis to study it. We argue that Tswett’s experiments established that his technique was reliable in the special case of chlorophyll without relying on either a theory or a standard calibration experiment. We suggest that Tswett broke the Experimenters’ Regress by appealing to material facts in the common ground for chemists at the time.
    Found 1 week ago on PhilSci Archive
  17.
    In 2012, CERN scientists announced the discovery of the Higgs boson, claiming their experimental results finally achieved the 5σ criterion for statistical significance. Although particle physicists apply especially stringent standards for statistical significance, their use of “classical” (rather than Bayesian) statistics is not unusual at all. Classical hypothesis testing—a hybrid of techniques developed by Fisher, Neyman and Pearson—remains the dominant form of statistical analysis, and p-values and statistical power are often used to quantify evidential strength.
    Found 1 week ago on PhilSci Archive
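    To put the 5σ convention next to more familiar thresholds, here is a quick calculation of standard normal tail areas (it assumes scipy is available and is not specific to the CERN analysis):

        from scipy.stats import norm

        # One-sided tail probability corresponding to a 5-sigma excess,
        # the particle-physics discovery convention.
        p_5sigma = norm.sf(5.0)
        print(f"5 sigma one-sided p-value: {p_5sigma:.2e}")   # ~2.87e-07

        # For comparison, the z-scores that p = 0.05 corresponds to.
        print(f"p = 0.05 one-sided:  z = {norm.isf(0.05):.2f}")   # ~1.64
        print(f"p = 0.05 two-sided:  z = {norm.isf(0.025):.2f}")  # ~1.96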
  18.
    Traditionally, epistemologists have distinguished between epistemic and pragmatic goals. In so doing, they presume that much of game theory is irrelevant to epistemic enterprises. I will show that this is a mistake. Even if we restrict attention to purely epistemic motivations, members of epistemic groups will face a multitude of strategic choices. I illustrate several contexts where individuals who are concerned solely with the discovery of truth will nonetheless face difficult game theoretic problems. Examples of purely epistemic coordination problems and social dilemmas will be presented. These show that there is a far deeper connection between economics and epistemology than previously appreciated.
    Found 1 week ago on PhilSci Archive
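    A minimal illustration of the kind of strategic situation the abstract has in mind (the two-convention setup and the payoff numbers are invented for illustration; payoffs stand for expected accuracy, not money):

        import numpy as np

        # Two purely truth-seeking researchers each choose a measurement
        # convention, A or B. Findings can only be pooled and cross-checked
        # if they choose the same convention, so expected accuracy is higher
        # on the diagonal: a pure coordination problem among epistemic agents.
        payoff = np.array([[1.0, 0.2],    # row player's expected accuracy
                           [0.2, 1.0]])   # (symmetric game)

        def best_response(col_strategy):
            """Row player's best reply to the column player's pure strategy."""
            return int(np.argmax(payoff[:, col_strategy]))

        for col in (0, 1):
            print(f"If the other adopts {'AB'[col]}, the best reply is {'AB'[best_response(col)]}")
        # Both (A, A) and (B, B) are Nash equilibria; which one the community
        # lands on is a coordination problem that truth-seeking alone does not settle.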
  19.
    It is well known that there is a freedom-of-choice loophole or superdeterminism loophole in Bell’s theorem. Since no experiment can completely rule out the possibility of superdeterminism, it seems that a local hidden variable theory consistent with relativity can never be excluded. In this paper, we present a new analysis of local hidden variable theories. The key is to notice that a local hidden variable theory assumes the universality of the Schrödinger equation, and it permits that a measurement can in principle be undone in the sense that the wave function of the composite system after the measurement can be restored to the initial state. We propose a variant of the EPR-Bohm experiment with reset operations that can undo measurements. We find that according to quantum mechanics, when Alice’s measurement is undone after she obtained her result, the correlation between the results of Alice’s and Bob’s measurements depends on the time order of these measurements, which may be spacelike separated. Since a local hidden variable theory consistent with relativity requires that relativistically non-invariant relations such as the time order of space-like separated events have no physical significance, this result means that a local hidden variable theory cannot explain the correlation and reproduce all predictions of quantum mechanics even when assuming superdeterminism. This closes the major superdeterminism loophole in Bell’s theorem.
    Found 1 week ago on PhilSci Archive
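    For orientation, here is the standard quantum prediction for the (unmodified) EPR-Bohm correlations that the paper's reset variant builds on; this sketch does not attempt to model the reset operations themselves:

        import numpy as np

        # Pauli matrices and the spin-singlet state (|01> - |10>)/sqrt(2);
        # measurement directions are taken in the x-z plane.
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

        def correlation(theta_a, theta_b):
            """E(a,b) for spin measurements along angles theta_a (Alice), theta_b (Bob)."""
            def spin(theta):  # sigma . n  with  n = (sin(theta), 0, cos(theta))
                return np.sin(theta) * sx + np.cos(theta) * sz
            obs = np.kron(spin(theta_a), spin(theta_b))
            return np.real(singlet.conj() @ obs @ singlet)

        for da, db in [(0, 0), (0, np.pi / 3), (0, np.pi / 2)]:
            print(f"E = {correlation(da, db):+.3f}   (QM: -cos(angle) = {-np.cos(db - da):+.3f})")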
  20.
    We compare and contrast two distinct approaches to understanding the Born rule in de Broglie-Bohm pilot-wave theory, one based on dynamical relaxation over time (advocated by this author and collaborators) and the other based on typicality of initial conditions (advocated by the ‘Bohmian mechanics’ school). It is argued that the latter approach is inherently circular and physically misguided. The typicality approach has engendered a deep-seated confusion between contingent and law-like features, leading to misleading claims not only about the Born rule but also about the nature of the wave function. By artificially restricting the theory to equilibrium, the typicality approach has led to further misunderstandings concerning the status of the uncertainty principle, the role of quantum measurement theory, and the kinematics of the theory (including the status of Galilean and Lorentz invariance). The restriction to equilibrium has also made an erroneously-constructed stochastic model of particle creation appear more plausible than it actually is. To avoid needless controversy, we advocate a modest ‘empirical approach’ to the foundations of statistical mechanics. We argue that the existence or otherwise of quantum nonequilibrium in our world is an empirical question to be settled by experiment.
    Found 1 week ago on PhilSci Archive
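    The dispute concerns how the following relation earns its status in pilot-wave theory; the formula below is the standard statement of the Born rule (equivalently, of 'quantum equilibrium'), included only as background:

        % Born rule / quantum equilibrium: the distribution of particle
        % configurations matches the square of the wave function,
        \[
          \rho(q, t) = |\psi(q, t)|^{2} .
        \]
        % The dynamical-relaxation approach treats this as a contingent
        % condition that generic nonequilibrium distributions approach over
        % time; the typicality approach treats it as holding for typical
        % initial conditions of the universe.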
  21.
    Philosophy is the highest court of the sciences, and philosophy, since Socrates, has pursued analysis. Since then, too, a certain assumption about analysis has functioned as orthodoxy. When we analyse a term, such as knowledge, we break it down into components that are individually necessary, and together sufficient, for the analysandum to apply. In the middle of the 20th century, Friedrich Waismann, a former member of the Vienna Circle, challenged that orthodoxy. The quote below comes in the specific context of how to analyse a “material object statement” in terms of the sense experiences that would either verify or refute it:
    Found 1 week ago on Sam Cumming's site
  22.
    I argue that a general logic of definitions must tolerate ω-inconsistency. I present a semantical scheme, S, under which some definitions imply ω-inconsistent sets of sentences. I draw attention to attractive features of this scheme, and I argue that S yields the minimal general logic of definitions. I conclude that any acceptable general logic should permit definitions that generate ω-inconsistency. This conclusion gains support from the application of S to the theory of truth. Keywords: circular definitions, revision theory, truth, paradox, McGee’s Theorem, omega-inconsistent theories.
    Found 1 week ago on Anil Gupta's site
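    For readers who want the notion pinned down: a set of sentences is ω-inconsistent when it entails each numerical instance of a property while also entailing that something lacks it. The standard toy example (not Gupta's own), in LaTeX:

        % An omega-inconsistent but consistent set of arithmetic sentences:
        \[
          \{\, \exists x\, \neg F(x) \,\} \;\cup\; \{\, F(\bar{n}) : n \in \mathbb{N} \,\}
        \]
        % Every numeral instance F(0), F(1), F(2), ... is included, yet so is
        % the claim that some x fails to satisfy F. No finite subset is
        % inconsistent, and the whole set has nonstandard models, but it
        % rules out the standard natural numbers as a model.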
  23.
    Brian Haig, Professor Emeritus, Department of Psychology, University of Canterbury, Christchurch, New Zealand. The American Statistical Association’s (ASA) recent effort to advise the statistical and scientific communities on how they should think about statistics in research is ambitious in scope. …
    Found 1 week ago on D. G. Mayo's blog
  24.
    Predicate Containment. For true singular propositions, the predicate’s semantic value contains the subject’s semantic value. I’ll call this standard type of semantics logical extensionalism since it treats the truth (or falsity) of a singular proposition as if it depends on whether the predicate’s extension contains the subject’s semantic value. For ease of expression, I’ll call any variant of logical extensionalism an extensional approach.
    Found 1 week, 2 days ago on PhilPapers
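    The Predicate Containment idea can be put in one line of standard notation (a textbook formulation, not necessarily the paper's own; 'val' and 'ext' are my labels for semantic value and extension):

        % Logical extensionalism about a singular proposition Fa:
        \[
          Fa \text{ is true} \iff \mathrm{val}(a) \in \mathrm{ext}(F),
        \]
        % i.e. the subject term's semantic value falls inside the
        % predicate's extension.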
  25.
    I argue that you can be permitted to discount the interests of your adversaries even though doing so would be impartially suboptimal. This means that, in addition to the kinds of moral options that the literature traditionally recognises, there exist what I call other-sacrificing options. I explore the idea that you cannot discount the interests of your adversaries as much as you can favour the interests of your intimates; if this is correct, then there is an asymmetry between negative partiality toward your adversaries and positive partiality toward your intimates.
    Found 1 week, 3 days ago on PhilPapers
  26.
    While scientific inquiry crucially relies on the extraction of patterns from data, we still have a very imperfect understanding of the metaphysics of patterns—and, in particular, of what it is that makes a pattern real. In this paper we derive a criterion of real-patternhood from the notion of conditional Kolmogorov complexity. The resulting account belongs in the philosophical tradition, initiated by Dennett (1991), that links real-patternhood to data compressibility, but is simpler and formally more perspicuous than other proposals defended heretofore in the literature. It also successfully enforces a non-redundancy principle, suggested by Ladyman and Ross (2007), that aims at excluding as real those patterns that can be ignored without loss of information about the target dataset, and which their own account fails to enforce.
    Found 1 week, 6 days ago on PhilSci Archive
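    A rough illustration of the compressibility idea, using zlib compression as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable); this only gestures at the conditional-complexity criterion the paper actually defends:

        import os
        import zlib

        def c(data: bytes) -> int:
            """Compressed size in bytes: a crude proxy for Kolmogorov complexity."""
            return len(zlib.compress(data, 9))

        patterned = b"0110" * 2500       # 10,000 bytes with an obvious regularity
        noise = os.urandom(10_000)       # 10,000 incompressible-looking bytes

        print("patterned:", c(patterned))   # compresses to a few dozen bytes
        print("noise:    ", c(noise))       # stays close to 10,000 bytes

        # Rough conditional analogue C(x | y) ~ C(y + x) - C(y): knowing y
        # makes the patterned data nearly free to describe a second time.
        print("C(patterned | patterned) ~", c(patterned + patterned) - c(patterned))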
  27.
    To understand something involves some sort of commitment to a set of propositions comprising an account of the understood phenomenon. Some take this commitment to be a species of belief; others, such as Elgin and I, take it to be a kind of cognitive policy. This paper takes a step back from debates about the nature of understanding and asks when this commitment involved in understanding is epistemically appropriate, or ‘acceptable’ in Elgin’s terminology. In particular, appealing to lessons from the lottery and preface paradoxes, it is argued that this type of commitment is sometimes acceptable even when it would be rational to assign arbitrarily low probabilities to the relevant propositions. This strongly suggests that the relevant type of commitment is sometimes acceptable in the absence of epistemic justification for belief, which in turn implies that understanding does not require justification in the traditional sense. The paper goes on to develop a new probabilistic model of acceptability, based on the idea that the maximally informative accounts of the understood phenomenon should be optimally probable. Interestingly, this probabilistic model ends up being similar in important ways to Elgin’s proposal to analyze the acceptability of such commitments in terms of ‘reflective equilibrium’.
    Found 1 week, 6 days ago on PhilSci Archive
  28.
    Let me introduce to you the topic of modal model theory, injecting some ideas from modal logic into the traditional subject of model theory in mathematical logic. For example, we may consider the class of all models of some first-order theory, such as the class of all graphs, or the class of all groups, or all fields or what have you. …
    Found 1 week, 6 days ago on Joel David Hamkins's blog
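    The basic semantic clause of the post, as I understand it, can be put as follows (my paraphrase in standard notation; see the post for the official definitions):

        % Possibility and necessity over the class of extensions of a model:
        \[
          M \models \Diamond\varphi \iff \text{some extension } N \supseteq M
          \text{ in the class satisfies } \varphi,
        \]
        \[
          M \models \Box\varphi \iff \text{every such extension satisfies } \varphi.
        \]
        % Example: in the class of graphs, every graph satisfies
        % Diamond("there is a triangle"), since any graph can be extended
        % to one containing a triangle.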
  29.
    Many physical theories characterize their observables with unlimited precision. Non-fundamental theories do so needlessly: they are more precise than they need to be to capture the matters of fact about their observables. A natural expectation is that a truly fundamental theory would require unlimited precision in order to exhaustively capture all of the fundamental physical matters of fact. I argue against this expectation and I show that there could be a fundamental theory with limited precision.
    Found 2 weeks ago on PhilSci Archive
  30.
    For computer simulation models to usefully inform climate risk management decisions, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many times over, and because computing resources are finite, uncertainty assessment is more feasible using models that need less computer processor time. Such models are generally simpler in the sense of being more idealized, or less realistic. So modelers face a trade-off between realism and extent of uncertainty quantification. Seeing this trade-off for the important epistemic issue that it is requires a shift in perspective from the established simplicity literature in philosophy of science.
    Found 2 weeks, 2 days ago on PhilPapers
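    The resource arithmetic behind the trade-off is easy to make concrete (the compute budget and per-run costs below are invented for illustration, not taken from the paper):

        # Fixed compute budget: how many ensemble members can each model afford?
        BUDGET_CPU_HOURS = 100_000

        models = {
            "high-resolution model": 5_000,   # hypothetical CPU-hours per run
            "intermediate model":      200,
            "simple idealized model":    2,
        }

        for name, cost in models.items():
            members = BUDGET_CPU_HOURS // cost
            print(f"{name:24s}: {members:6d} runs available for uncertainty sampling")
        # More idealized models buy far more samples of the uncertain parameter
        # space, at the price of realism: the trade-off the abstract describes.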