1.
    Sentences about logic are often used to show that certain embedding expressions (attitude verbs, conditionals, etc.) are hyperintensional. Yet it is not clear how to regiment “logic talk” in the object language so that it can be compositionally embedded under such expressions. In this paper, I develop a formal system called hyperlogic that is designed to do just that. I provide a hyperintensional semantics for hyperlogic that doesn’t appeal to logically impossible worlds, as traditionally understood, but instead uses a shiftable parameter that determines the interpretation of the logical connectives. I argue this semantics compares favorably to the more common impossible worlds semantics, which faces difficulties interpreting propositionally quantified logic talk.
    Found 13 hours, 30 minutes ago on Alexander W. Kocurek's site
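A minimal sketch of the core idea as I read it: truth is evaluated relative to a shiftable parameter interpreting the connectives, so classically equivalent formulas can come apart without appeal to impossible worlds. The encoding and the deviant interpretation below are my own illustration, not the paper's formal semantics.

```python
# Toy evaluator: truth at a world, relative to a shiftable interpretation
# of the connectives (illustrative only; not the paper's definitions).

CLASSICAL = {
    'neg': lambda a: not a,
    'and': lambda a, b: a and b,
    'or':  lambda a, b: a or b,
}

# A deviant interpretation that reads 'or' exclusively.
DEVIANT = dict(CLASSICAL, **{'or': lambda a, b: a != b})

def evaluate(formula, world, interp):
    """formula: ('atom', name) or (connective, subformula, ...)."""
    if formula[0] == 'atom':
        return world[formula[1]]
    args = [evaluate(sub, world, interp) for sub in formula[1:]]
    return interp[formula[0]](*args)

# 'p or p' and 'p' are classically equivalent, yet they come apart once the
# interpretation parameter is shifted -- a hyperintensional difference
# obtained without logically impossible worlds.
w = {'p': True}
p = ('atom', 'p')
p_or_p = ('or', p, p)
print(evaluate(p_or_p, w, CLASSICAL), evaluate(p, w, CLASSICAL))  # True True
print(evaluate(p_or_p, w, DEVIANT), evaluate(p, w, DEVIANT))      # False True
```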
  2.
    Dynamic Causal Decision Theory (EDC, chs. 7 and 8) Posted on Thursday, 23 Sep 2021. Pages 201–211 and 226–233 of Evidence, Decision and Causality present two great puzzles showing that CDT appears to invalidate some attractive principles of dynamic rationality. …
    Found 16 hours, 39 minutes ago on wo's weblog
  3.
    Yet we know from syntax and crosslinguistic work that conditionals can also be formed with ‘if’-clauses that modify the verb (‘V if S’), as in (2), or a noun (‘N if S’), as in (4). Tests such as the VP-ellipsis and Condition C data in (3), and the coordination and island data in (5), confirm that the ‘if’-clause is a constituent of the verb phrase and noun phrase, respectively, rather than scoping over the rest of the sentence (e.g., Lasersohn 1996, Bhatt & Pancheva 2006).
    Found 17 hours, 44 minutes ago on Alex Silk's site
  4.
    The desirable gambles framework offers the most comprehensive foundations for the theory of lower previsions, which in turn affords the most general account of imprecise probabilities. Nevertheless, for all its generality, the theory of lower previsions rests on the notion of linear utility. This commitment to linearity is clearest in the coherence axioms for sets of desirable gambles. This paper considers two routes to relaxing this commitment. The first preserves the additive structure of the desirable gambles framework and the machinery for coherent inference but detaches the interpretation of desirability from the multiplicative scale invariance axiom. The second strays from the additive combination axiom to accommodate repeated gambles that return rewards by a non-stationary process that is not necessarily additive. Unlike the first approach, which is a conservative amendment to the desirable gambles framework, the second is a radical departure. Yet, common to both is a method for describing rewards called discounted utility.
    Found 22 hours, 59 minutes ago on PhilSci Archive
  5.
    When is it legitimate for a government to ‘nudge’ its citizens, in the sense described by Richard Thaler and Cass Sunstein (2008)? In their original work on the topic, Thaler and Sunstein developed the ‘as judged by themselves’ (or AJBT) test to answer this question (Thaler & Sunstein, 2008, 5). In a recent paper, L. A. Paul and Sunstein (ms) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a personally transformative experience, that is, one that results in our values changing (Paul, 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my aggregate utility account of how to choose in the face of what Edna Ullmann-Margalit (2006) calls big decisions, that is, decisions that lead to these personally transformative experiences (Pettigrew, 2019, Chapters 6 and 7).
    Found 1 day, 3 hours ago on PhilPapers
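To make the contrast concrete, here is a toy numerical rendering; the weights, utilities, and function names are my illustrative assumptions, not Pettigrew's account.

```python
# Toy contrast between the AJBT test and an aggregate-utility style test
# for a transformative nudge (all numbers and weights are illustrative).

def ajbt_legitimate(post_self_utility, outcome):
    """AJBT: ask only the post-nudge self whether the outcome is good."""
    return post_self_utility(outcome) > 0

def aggregate_utility(selves, outcome):
    """Weight the outcome's utility across the agent's successive selves,
    pre- and post-transformation, instead of asking only the later self."""
    total = sum(w for w, _ in selves)
    return sum(w * u(outcome) for w, u in selves) / total

# The nudge changes the agent's values: the earlier self disvalues the
# nudged life; the transformed later self endorses it.
pre_self  = lambda o: -10 if o == 'nudged_life' else 0
post_self = lambda o: +10 if o == 'nudged_life' else 0

print(ajbt_legitimate(post_self, 'nudged_life'))                         # True
print(aggregate_utility([(1, pre_self), (1, post_self)], 'nudged_life'))  # 0.0
# AJBT rubber-stamps the nudge because the values it consults were produced
# by the nudge itself; the aggregate test is not hostage to the later self.
```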
  6.
    The debate between ΛCDM and MOND is often cast in terms of competing gravitational theories. However, recent philosophical discussion suggests that the ΛCDM–MOND debate demonstrates the challenges of multiscale modeling in the context of cosmological scales. I extend this discussion and explore what happens when the debate is thought to be about modeling rather than about theory, offering a model-focused interpretation of the ΛCDM–MOND debate. This analysis shows how a model-focused interpretation of the debate provides a better understanding of challenges associated with extension to a different scale or domain, which are tied to commitments about explanatory fit.
    Found 1 day, 19 hours ago on PhilSci Archive
  7.
    When people combine concepts, the results are often characterised as “hybrid”, “impossible”, or “humorous”. However, when considered simply in terms of extensional logic, a novel concept understood as a conjunctive concept will often lack meaning, having an empty extension (consider “a tooth that is a chair”, “a pet flower”, etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior ‘asymmetric’ control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
    Found 2 days, 16 hours ago on Nicolas Troquard's site
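As a cartoon of the asymmetry only: in the sketch below, simple attribute override stands in for the paper's axiom-weakening machinery, and the attribute lists are invented for illustration.

```python
# Cartoon of asymmetric head-modifier combination: the Head concept keeps
# control of conflicting attributes; compatible Modifier attributes are
# retained. (Attribute override is a stand-in for axiom weakening.)

def combine(head, modifier):
    hybrid = dict(modifier)   # start from the modifier's attributes...
    hybrid.update(head)       # ...and let the head win every conflict
    return hybrid

pet    = {'animate': True, 'kept_at_home': True, 'cared_for': True}
flower = {'animate': False, 'kind': 'plant'}

# 'pet flower' (head: flower): the combination is no longer empty, because
# the conflicting 'animate' attribute is resolved in the head's favour.
print(combine(flower, pet))
# {'animate': False, 'kept_at_home': True, 'cared_for': True, 'kind': 'plant'}
```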
  8.
    This paper aims to shed light on the relation between Boltzmannian statistical mechanics and Gibbsian statistical mechanics by studying the Mechanical Averaging Principle, which says that, under certain conditions, Boltzmannian equilibrium values and Gibbsian phase averages are approximately equal. What are these conditions? We identify three conditions each of which is individually sufficient (but not necessary) for Boltzmannian equilibrium values to be approximately equal to Gibbsian phase averages: the Khinchin condition, and two conditions that result from two new theorems, the Average Equivalence Theorem and the Cancelling Out Theorem. These conditions are not trivially satisfied, and there are core models of statistical mechanics, the six-vertex model and the Ising model, in which they fail.
    Found 2 days, 23 hours ago on PhilSci Archive
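For a feel of what the principle asserts, here is a toy check on N non-interacting spins; this is my example, not one of the paper's models, and the abstract's point is precisely that interacting models such as the Ising model can behave differently.

```python
# Boltzmannian equilibrium value vs. Gibbsian phase average for N
# non-interacting spins; macrovariable: fraction of up-spins.

from math import comb

N = 100

# Gibbsian value: phase average of the up-fraction over the uniform
# distribution on all 2^N microstates.
gibbs = sum((k / N) * comb(N, k) for k in range(N + 1)) / 2 ** N

# Boltzmannian value: the up-fraction of the macrostate occupying the
# largest phase-space volume (the k maximizing the binomial coefficient).
k_eq = max(range(N + 1), key=lambda k: comb(N, k))
boltzmann = k_eq / N

print(gibbs, boltzmann)  # 0.5 0.5 -- here the two values coincide exactly
```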
  9.
    The relational interpretation (or RQM, for Relational Quantum Mechanics) solves the measurement problem by considering an ontology of sparse relative facts. Facts are realized in interactions between any two physical systems and are relative to these systems. RQM’s technical core is the realisation that quantum transition amplitudes determine physical probabilities only when their arguments are facts relative to the same system. The relativity of facts can be neglected in the approximation where decoherence hides interference, thus making facts approximately stable.
    Found 2 days, 23 hours ago on PhilSci Archive
  10.
    The Precautionary Principle is typically construed as a conservative decision rule aimed at preventing harm. But Martin Peterson (JME 33: 5–10, 2007; The ethics of technology: A geometric analysis of five moral principles, Oxford University Press, Oxford, 2017) has argued that the principle is better understood as an epistemic rule, guiding decision-makers in forming beliefs rather than choosing among possible acts. On the epistemic view, he claims, there is a principle concerning expert disagreement underlying precautionary reasoning, called the ecumenical principle: all expert views should be considered in a precautionary appraisal, not just those that are the most prominent or influential. In articulating the doxastic commitments of decision-makers under this constraint, Peterson precludes any probabilistic rule that might result in combining expert opinions. For combined or consensus probabilities are likely to provide decision-makers with information that is more precise than warranted. Contra Peterson, I argue that upon adopting a broader conception of probability, there is a probabilistic rule, under which expert opinions are combined, that is immune to his criticism and better represents the ecumenical principle.
    Found 2 days, 23 hours ago on PhilSci Archive
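A minimal sketch of the contrast at issue, assuming linear pooling for the precise rule and a set-valued (lower/upper envelope) pool for the broader conception of probability; the numbers are illustrative.

```python
# Precise vs. imprecise pooling of expert opinion (illustrative numbers).

expert_probs = [0.2, 0.5, 0.9]          # experts' probabilities for a harm H
weights      = [1 / 3, 1 / 3, 1 / 3]

# Linear opinion pool: a single precise number -- arguably more precise
# than the divided evidence warrants.
linear_pool = sum(w * p for w, p in zip(weights, expert_probs))

# Set-valued pool: keep the whole spread of expert opinion, summarised by
# its lower and upper envelopes; no expert view is discarded.
lower, upper = min(expert_probs), max(expert_probs)

print(round(linear_pool, 3))   # 0.533
print((lower, upper))          # (0.2, 0.9): the ecumenical, set-valued verdict
```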
  11.
    I remember the night I first discovered the meaning of the word worship. That morning I had been to church and had gotten into a brief discussion with the Pastor about what he kept calling the 'worthiness of God.' I remember thinking that this phrase seemed odd to me and I wasn’t sure what to make of it. Oh, I had heard it used before. It was the sort of thing one nodded one's head to and then went on one's way. Like talk about the 'glory' of God. I was never sure what that meant either, and given all the violent things God was sometimes said to do for the sake of his 'glory' I wasn't sure I cared to know. But now I began wondering about this phrase. Worthy? Was God worthy? Worthy of what?
    Found 3 days, 7 hours ago on PhilPapers
  12.
    Decision theory requires agents to assign probabilities to states of the world and utilities to the possible outcomes of different actions. When agents commit to having the probabilities and/or utilities in a decision problem defined by objective features of the world, they may find themselves unable to decide which actions maximize expected utility. Decision theory has long recognized that work-around strategies are available in special cases; this is where dominance reasoning, minimax, and maximin play a role. Here we describe a different work-around, wherein a rational decision about one decision problem can be reached by “interpolating” information from another problem that the agent believes has already been rationally solved.
    Found 3 days, 15 hours ago on Jonathan Cohen's site
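As a reminder of how the classical work-arounds mentioned above operate when probabilities are unavailable, here is maximin in a few lines; the payoffs are my toy numbers, and the paper's interpolation strategy itself is not reconstructed here.

```python
# Maximin: with no probabilities over states, choose the act whose worst
# case is best (payoffs illustrative).

utilities = {
    'act_A': {'state_1': 5, 'state_2': 1},
    'act_B': {'state_1': 3, 'state_2': 3},
}

best = max(utilities, key=lambda act: min(utilities[act].values()))
print(best)  # 'act_B': its worst case (3) beats act_A's worst case (1)
```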
  13.
    Preference Reflection (EDC, ch.7, part 2) Posted on Monday, 20 Sep 2021. Why should you take both boxes in Newcomb's Problem? The simplest argument is that you are then guaranteed to get $1000 more than what you would get if you took one box. …
    Found 3 days, 17 hours ago on wo's weblog
  14.
    Evidence E is misleading with regard to a hypothesis H provided that Bayesian update on E changes one’s credence in H in the direction opposed to truth. It is known that pretty much any evidence is misleading with regard to some hypothesis or other. …
    Found 3 days, 22 hours ago on Alexander Pruss's Blog
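A minimal Bayesian illustration with my own numbers: one toss of heads is perfectly ordinary evidence, yet it is misleading with regard to the true hypothesis that the coin is fair.

```python
# Bayesian update showing evidence misleading w.r.t. a true hypothesis.
# Truth: the coin is fair. Evidence E: heads on a single toss.

priors      = {'fair': 0.5, 'heads_biased': 0.5}
likelihoods = {'fair': 0.5, 'heads_biased': 0.9}   # P(E | H)

marginal   = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / marginal for h in priors}

print(posteriors)  # {'fair': 0.357..., 'heads_biased': 0.643...}
# Credence in the true hypothesis 'fair' went down: update on E moved us
# away from the truth about this H, as the abstract's observation predicts.
```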
  15.
    We consider a learning agent in a partially observable environment, with which the agent has never interacted before, and about which it learns both what it can observe and how its actions affect the environment. The agent can learn about this domain from experience gathered by taking actions in the domain and observing their results. We present learning algorithms capable of learning as much as possible (in a well-defined sense) both about what is directly observable and about what actions do in the domain, given the learner’s observational constraints. We differentiate the level of domain knowledge attained by each algorithm, and characterize the type of observations required to reach it. The algorithms use dynamic epistemic logic (DEL) to represent the learned domain information symbolically. Our work continues that of Bolander and Gierasimczuk (2015), which developed DEL-based learning algorithms to learn domain information in fully observable domains.
    Found 4 days, 15 hours ago on Thomas Bolander's site
  16.
    Many physicists have thought that absolute time became otiose with the introduction of Special Relativity. William Lane Craig disagrees. Craig argues that although relativity is empirically adequate within a domain of application, relativity is literally false and should be supplanted by a Neo-Lorentzian alternative that allows for absolute time. Meanwhile, Craig and co-author James Sinclair have argued that physical cosmology supports the conclusion that physical reality began to exist at a finite time in the past. However, on their view, the beginning of physical reality requires the objective passage of absolute time, so that the beginning of physical reality stands or falls with Craig’s Neo-Lorentzian metaphysics. Here, I raise doubts about whether, given Craig’s Neo-Lorentzian metaphysics, physical cosmology could adequately support a beginning of physical reality within the finite past. Craig and Sinclair’s conception of the beginning of the universe requires a past boundary to the universe. A past boundary to the universe cannot be directly observed and so must be inferred from the observed matter-energy distribution in conjunction with auxiliary hypotheses drawn from a substantive physical theory. Craig’s brand of Neo-Lorentzianism has not been sufficiently well specified so as to infer either that there is a past boundary or that the boundary is located in the finite past. Consequently, Neo-Lorentzianism implicitly introduces a form of skepticism that removes the ability that we might have otherwise had to infer a beginning of the universe. Furthermore, in analyzing traditional big bang models, I develop criteria that Neo-Lorentzians should deploy in thinking about the direction and duration of time in cosmological models generally. For my last task, I apply the same criteria to bounce cosmologies and show that Craig and Sinclair have been wrong to interpret bounce cosmologies as including a beginning of physical reality.
    Found 6 days, 7 hours ago on PhilSci Archive
  17.
    Why ain'cha rich? (EDC, ch.7, part 1) Posted on Friday, 17 Sep 2021. Topic: decision theory Chapter 7 of Evidence, Decision and Causality looks at arguments for one-boxing or two-boxing in Newcomb's Problem. …
    Found 6 days, 17 hours ago on wo's weblog
  18.
    Traditionally, logic has been the dominant formal method within philosophy. Are logical methods still dominant today, or have the types of formal methods used in philosophy changed in recent times? To address this question, we coded a sample of philosophy papers from the late 2000s and from the late 2010s for the formal methods they used. The results indicate that (a) the proportion of papers using logical methods remained more or less constant over that time period but (b) the proportion of papers using probabilistic methods was approximately three times higher in the late 2010s than it was in the late 2000s. Further analyses explored this change by looking more closely at specific methods, specific levels of technical engagement, and specific subdisciplines within philosophy. These analyses indicate that the increasing proportion of papers using probabilistic methods was pervasive, not confined to particular probabilistic methods, levels of sophistication, or subdisciplines.
    Found 1 week ago on PhilSci Archive
  19.
    We present nine questions related to the concept of negation and, in passing, we refer to connections with the essays in this special issue. The questions were submitted to one of the most eminent logicians who contributed to the theory of negation, Prof. (Jon) Michael Dunn, but, unfortunately, Prof. Dunn was no longer able to answer them. Michael Dunn passed away on 5 April 2021, and the present special issue of Logical Investigations is dedicated to his memory. The questions concern (i) negation-related topics that have particularly interested Michael Dunn or to which he has made important contributions, (ii) some controversial aspects of the logical analysis of the concept of negation, or (iii) simply properties of negation in which we are especially interested. Though sadly and regrettably unanswered by the distinguished scholar who intended to reply, the questions remain and might stimulate answers by other logicians and further research.
    Found 1 week ago on Hitoshi Omori's site
  20.
    In the long run, the development of artificial intelligence (AI) is likely to be one of the biggest technological revolutions in human history. Even in the short run it will present tremendous challenges as well as tremendous opportunities. The more we do now to think through these complex challenges and opportunities, the better the prospects for the kind of outcomes we all hope for, for ourselves, our children, and our planet.
    Found 1 week ago on Stephan Hartmann's site
  21.
    A binary predicate R is standardly called symmetric if for every x and y, the statement R(x, y) is logically equivalent to R(y, x). Examples of symmetric predicates in English include relational adjectives, nouns and verbs, as in the following equivalent sentences.
    Found 1 week ago on Yoad Winter's site
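The definition translates directly into a check on a predicate's extension; a small sketch of mine:

```python
# R is symmetric iff R(x, y) and R(y, x) hold of exactly the same pairs.

def is_symmetric(extension):
    """extension: a set of (x, y) pairs, i.e. the predicate's extension."""
    return all((y, x) in extension for (x, y) in extension)

sibling_of = {('ann', 'bo'), ('bo', 'ann')}   # 'x is a sibling of y'
parent_of  = {('ann', 'bo')}                  # 'x is a parent of y'
print(is_symmetric(sibling_of), is_symmetric(parent_of))  # True False
```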
  22.
    This paper defends the view, put roughly, that to think that p is to guess that p is the answer to the question at hand, and that to think that p rationally is for one’s guess to that question to be in a certain sense non-arbitrary. Some theses that will be argued for along the way include: that thinking is question-sensitive and, correspondingly, that ‘thinks’ is context-sensitive; that it can be rational to think that p while having arbitrarily low credence that p; that, nonetheless, rational thinking is closed under entailment; that thinking does not supervene on credence; and that in many cases what one thinks on certain matters is, in a very literal sense, a choice. Finally, since there are strong reasons to believe that thinking just is believing, there are strong reasons to think that all this goes for belief as well.
    Found 1 week, 1 day ago on PhilPapers
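One of the listed theses, that it can be rational to think that p with arbitrarily low credence that p, is easy to render numerically if thinking is guessing the best answer to the question at hand; the numbers below are my toy example.

```python
# Question: 'Who will win the race?' -- 98 long shots plus a slight favorite.

credences = {f'runner_{i}': 0.98 / 98 for i in range(98)}   # 0.01 each
credences['favorite'] = 0.02

# The non-arbitrary guess is the most probable complete answer...
best_guess = max(credences, key=credences.get)
print(best_guess, credences[best_guess])  # favorite 0.02

# ...so one rationally thinks the favorite will win while one's credence
# that she will is only 0.02; adding more long shots drives it lower still.
```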
  23.
    Betting on collapse (EDC, ch.6) Posted on Wednesday, 15 Sep 2021. Topic: decision theory Chapter 6 of Evidence, Decision and Causality presents another alleged counterexample to CDT, involving a bet on the measurement of entangled particles. …
    Found 1 week, 1 day ago on wo's weblog
  24.
    This paper is about two requirements on wish reports whose interaction motivates a novel semantics for these ascriptions. The first requirement concerns the ambiguities that arise when determiner phrases, e.g. definite descriptions, interact with ‘wish’. More specifically, several theorists have recently argued that attitude ascriptions featuring counterfactual attitude verbs license interpretations on which the determiner phrase is interpreted relative to the subject’s beliefs. The second requirement involves the fact that desire reports in general require decision-theoretic notions for their analysis. The current study is motivated by the fact that no existing account captures both of these aspects of wishing. I develop a semantics for wish reports that makes available belief-relative readings but also allows decision-theoretic notions to play a role in shaping the truth conditions of these ascriptions. The general idea is that we can analyze wishing in terms of a two-dimensional notion of expected utility.
    Found 1 week, 2 days ago on PhilPapers
  25.
    We implement a recent characterization of metaphysical indeterminacy in the context of orthodox quantum theory, developing the syntax and semantics of two propositional logics equipped with determinacy and indeterminacy operators. These logics, which extend a novel semantics for standard quantum logic that accounts for Hilbert spaces with superselection sectors, preserve different desirable features of quantum logic and logics of indeterminacy. In addition to comparing the relative advantages of the two, we also explain how each logic answers Williamson’s challenge to any substantive account of (in)determinacy: For any proposition p, what could the difference between “p” and “it’s determinate that p” ever amount to?
    Found 1 week, 2 days ago on Samuel C. Fletcher's site
  26.
    I offer a case that quantum query complexity still has loads of enticing and fundamental open problems—from relativized QMA versus QCMA and BQP versus IP, to time/space tradeoffs for collision and element distinctness, to polynomial degree versus quantum query complexity for partial functions, to the Unitary Synthesis Problem and more.
    Found 1 week, 2 days ago on Scott Aaronson's site
  27.
    The pattern of implicatures of modified numeral ‘more than n’ depends on the roundness of n. Cummins, Sauerland, and Solt (2012) present experimental evidence for the relation between roundness and implicature patterns, and propose a pragmatic account of the phenomenon. More recently, Hesse and Benz (2020) present more extensive evidence showing that implicatures also depend on the magnitude of n and propose a novel explanation based on the Approximate Number System (Dehaene, 1999). Despite the wealth of experimental data, no formal account has yet been proposed to characterize the full posterior distribution over numbers of a listener after hearing ‘more than n’. We develop one such account within the Rational Speech Act framework, quantitatively reconstructing the pragmatic reasoning of a rational listener. We show that our pragmatic account correctly predicts various features of the experimental data.
    Found 1 week, 2 days ago on Jakub Szymanik's site
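A bare-bones Rational Speech Act sketch of such a listener posterior. To keep it self-contained, a cost advantage for round numbers stands in for the roundness effects; that simplification, and all the parameter choices, are mine rather than the authors'.

```python
# RSA sketch: posterior over numbers after hearing 'more than n'.

import math

states = range(0, 101)                             # candidate true numbers
utterances = ['more than 60', 'more than 65', 'more than 70']

def threshold(u):
    return int(u.split()[-1])

def cost(u):
    return 0.0 if threshold(u) % 10 == 0 else 1.0  # round numbers are cheaper

def literal_listener(u, s):
    """Uniform over the states compatible with the literal meaning."""
    n_compatible = 100 - threshold(u)
    return 1 / n_compatible if s > threshold(u) else 0.0

def speaker(s, alpha=4.0):
    """Soft-max rational speaker trading informativity against cost."""
    scores = []
    for u in utterances:
        l0 = literal_listener(u, s)
        scores.append(math.exp(alpha * (math.log(l0) - cost(u))) if l0 else 0.0)
    z = sum(scores)
    return [sc / z if z else 0.0 for sc in scores]

def pragmatic_listener(u):
    """Posterior over states after hearing u (uniform prior over states)."""
    i = utterances.index(u)
    probs = [speaker(s)[i] for s in states]
    z = sum(probs)
    return [p / z for p in probs]

posterior = pragmatic_listener('more than 60')
print(sum(s * p for s, p in zip(states, posterior)))  # expected true number
```

With these toy settings, the pragmatic listener puts disproportionate mass on the decade just above 60 (the literal listener would spread it uniformly over 61–100), qualitatively matching the implicature pattern for round n.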
  28.
    While evenness is understood to be maximal if all types (species, genotypes, alleles, etc.) are represented equally (via abundance, biomass, area, etc.), its opposite, maximal unevenness, either remains conceptually in the dark or is conceived as the type distribution that minimizes the applied evenness index. The latter approach, however, frequently leads to conceptual inconsistency due to the fact that the minimizing distribution is not specifiable or is monomorphic. The state of monomorphism, however, is indeterminate in terms of its evenness/unevenness characteristics. Indeed, the semantic indeterminacy also shows up in the observation that monomorphism represents a state of pronounced discontinuity for the established evenness indices. This serious conceptual inconsistency is latent in the widely held idea that evenness is an independent component of diversity. As a consequence, the established evenness indices largely appear as indicators of relative polymorphism rather than as indicators of evenness.
    Found 1 week, 3 days ago on PhilSci Archive
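The indeterminacy at monomorphism is visible in any standard index; here is a small computation of mine with Pielou's evenness J = H / ln(S).

```python
# Pielou's evenness and its breakdown at monomorphism.

from math import log

def pielou_J(abundances):
    """Pielou's evenness: Shannon entropy over its maximum, H / ln(S)."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    S = len(ps)
    if S == 1:
        raise ValueError('undefined for a monomorphic community: ln(1) = 0')
    H = -sum(p * log(p) for p in ps)
    return H / log(S)

print(pielou_J([50, 50]))   # 1.0   -- two equally abundant types, maximal evenness
print(pielou_J([99, 1]))    # ~0.08 -- very uneven, but still defined
# pielou_J([100]) raises: the monomorphic case has no evenness value,
# which is the indeterminacy the abstract presses against the idea that
# minimizing an index picks out maximal unevenness.
```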
  29.
    Fixing the Past and the Laws (EDC, ch.5) Posted on Monday, 13 Sep 2021. Chapter 5 of Evidence, Decision and Causality presents a powerful challenge to CDT (drawing on Ahmed (2013) and Ahmed (2014)). Imagine you have strong evidence that a certain deterministic system S is the true system of laws in our universe. …
    Found 1 week, 3 days ago on wo's weblog
  30.
    I explore, from a proof-theoretic perspective, the hierarchy of classical and paraconsistent logics introduced by Barrio, Pailos and Szmuc in (Journal of Philosophical Logic, 49, 93–120, 2021). First, I provide sequent rules and axioms for all the logics in the hierarchy, for all inferential levels, and establish soundness and completeness results. Second, I show how to extend those systems with a corresponding hierarchy of validity predicates, each one of which is meant to capture “validity” at a different inferential level. Then, I point out two potential philosophical implications of these results. (i) Since the logics in the hierarchy differ from one another on the rules, I argue that each such logic maintains its own distinct identity (contrary to arguments like the one given by Dicher and Paoli in 2019). (ii) Each validity predicate need not capture “validity” at more than one metainferential level. Hence, there are reasons to deny the thesis (put forward in Barrio, E., Rosenblatt, L. & Tajer, D. (Synthese, 2016)) that the validity predicate introduced by Beall and Murzi in (Journal of Philosophy, 110(3), 143–165, 2013) has to express facts not only about what follows from what, but also about the metarules, etc.
    Found 1 week, 4 days ago on PhilPapers