1.
    I’ve been thinking about Petri nets a lot. Around 2010, I got excited about using them to describe chemical reactions, population dynamics and more, using ideas taken from quantum physics. Then I started working with my student Blake Pollard on ‘open’ Petri nets, which you can glue together to form larger Petri nets. …
    Found 28 minutes ago on Azimuth
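    A minimal Python sketch of the "gluing" of open Petri nets mentioned above. The representation (places, transitions with input/output multiplicities, lists of boundary places) and the composition convention are illustrative assumptions, not the categorical formalism developed in the post.

        # Illustrative sketch: open Petri nets glued along matching boundary places.
        from dataclasses import dataclass

        @dataclass
        class OpenPetriNet:
            places: set
            transitions: list   # (name, {input place: multiplicity}, {output place: multiplicity})
            inputs: list        # boundary places exposed as inputs
            outputs: list       # boundary places exposed as outputs

        def glue(left: OpenPetriNet, right: OpenPetriNet) -> OpenPetriNet:
            """Compose two open nets by identifying left.outputs with right.inputs."""
            if len(left.outputs) != len(right.inputs):
                raise ValueError("boundaries do not match")
            rename = dict(zip(right.inputs, left.outputs))   # identify boundary places
            rn = lambda p: rename.get(p, p)
            places = left.places | {rn(p) for p in right.places}
            transitions = left.transitions + [
                (name, {rn(p): k for p, k in ins.items()}, {rn(p): k for p, k in outs.items()})
                for name, ins, outs in right.transitions]
            return OpenPetriNet(places, transitions, left.inputs, right.outputs)

        # A --f--> B composed with B --g--> C yields an open net from A to C.
        n1 = OpenPetriNet({"A", "B"}, [("f", {"A": 1}, {"B": 1})], ["A"], ["B"])
        n2 = OpenPetriNet({"B'", "C"}, [("g", {"B'": 1}, {"C": 1})], ["B'"], ["C"])
        print(glue(n1, n2))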
  2.
    Computer simulations serve myriad purposes in science: from experimental design in high-energy physics, to predicting tomorrow’s weather in meteorology, to exploring and evaluating candidate molecules in drug research. But is simulation also a tool for observing the world? Can we measure the world via computer simulation? It might seem not. Yet, in the geosciences, there are now ‘observational’ datasets composed entirely of simulation output. And in various fields, especially chemistry and engineering, one finds software designed to enable ‘virtual measurements’ of quantities of interest.
    Found 2 days, 15 hours ago on Wendy Parker's site
  3.
    While ‘most’ and ‘more than half’ are generally assumed to be truth-conditionally equivalent, the former is usually interpreted as conveying greater proportions than the latter. Previous work has attempted to explain this difference in terms of pragmatic strengthening or variation in meanings. In this paper, we propose a novel explanation that keeps the truth-conditions equivalence. We argue that the difference in typical sets between the two expressions emerges as a result of two previously independently motivated mechanisms. First, the two expressions have different sets of pragmatic alternatives. Second, listeners tend to minimize the expected distance between their representation of the world and the speaker’s observation. We support this explanation with a computational model of usage in the Rational Speech Act framework. Moreover, we report the results of a quantifier production experiment. We find that the difference in typical proportions associated with the two expressions can be explained by our account.
    Found 2 days, 15 hours ago on Jakub Szymanik's site
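    A toy Rational Speech Act sketch in Python, to make the first of the two mechanisms above (different alternative sets) concrete. The alternative sets, the flat prior over proportions, the softmax rationality parameter and the "null" utterance are all assumptions for illustration; the paper's actual model is richer and also includes the distance-minimizing listener.

        # Toy RSA model for 'most' vs 'more than half'. Both quantifiers get the same
        # truth conditions (proportion > 1/2); only their alternative sets differ.
        import numpy as np

        states = np.linspace(0.05, 0.95, 19)            # possible proportions
        prior = np.ones_like(states) / len(states)      # flat prior (an assumption)

        def meaning(u, s):
            return {"most": s > 0.5,
                    "more_than_half": s > 0.5,
                    "more_than_two_thirds": s > 2 / 3,
                    "all": s > 0.9,
                    "null": True}[u]                    # 'null' = saying nothing

        # assumed alternative sets: the crucial difference between the two quantifiers
        alts = {"most": ["most", "all", "null"],
                "more_than_half": ["more_than_half", "more_than_two_thirds", "null"]}

        def L0(u):                                      # literal listener
            p = prior * np.array([meaning(u, s) for s in states], dtype=float)
            return p / p.sum()

        def S1(i, utterances, alpha=4.0):               # pragmatic speaker at state i
            scores = np.array([np.log(L0(u)[i] + 1e-12) for u in utterances])
            p = np.exp(alpha * scores)
            return p / p.sum()

        def L1(u):                                      # pragmatic listener
            us = alts[u]
            p = np.array([prior[i] * S1(i, us)[us.index(u)] for i in range(len(states))])
            return p / p.sum()

        for u in ["most", "more_than_half"]:
            print(u, round(float(L1(u) @ states), 3))   # 'most' ends up with the higher expected proportion in this toy setup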
  4.
    Can a group be an orthodox rational agent? This requires the group’s aggregate preferences to follow expected utility (static rationality) and to evolve by Bayesian updating (dynamic rationality). Group rationality is possible, but the only preference aggregation rules which achieve it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual values linearly and individual beliefs geometrically. Linear-geometric preference aggregation contrasts with classic linear-linear preference aggregation, which combines both values and beliefs linearly, and achieves only static rationality. Our characterisation of linear-geometric preference aggregation implies as corollaries a characterisation of linear value aggregation (Harsanyi’s Theorem) and a characterisation of geometric belief aggregation.
    Found 2 days, 16 hours ago on PhilPapers
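    A small numerical check, in Python, of the property driving the contrast above: geometric belief pooling commutes with Bayesian updating on shared evidence, while linear pooling generally does not. The three-state example, weights and likelihood are made-up numbers for illustration, not taken from the paper.

        import numpy as np

        def normalize(p):
            return p / p.sum()

        def linear_pool(ps, w):
            return normalize(sum(wi * pi for wi, pi in zip(w, ps)))

        def geometric_pool(ps, w):                     # weights assumed to sum to 1
            return normalize(np.prod([pi ** wi for wi, pi in zip(w, ps)], axis=0))

        def update(p, likelihood):                     # Bayesian conditionalization
            return normalize(p * likelihood)

        p1 = np.array([0.7, 0.2, 0.1])                 # agent 1's prior over three states
        p2 = np.array([0.2, 0.3, 0.5])                 # agent 2's prior
        w = [0.5, 0.5]                                 # equal weights
        likelihood = np.array([0.9, 0.5, 0.1])         # shared likelihood of the evidence

        for pool in (linear_pool, geometric_pool):
            pool_then_update = update(pool([p1, p2], w), likelihood)
            update_then_pool = pool([update(p1, likelihood), update(p2, likelihood)], w)
            print(pool.__name__, np.allclose(pool_then_update, update_then_pool))
        # prints: linear_pool False, geometric_pool True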
  5.
    A disjunctive Gettier case looks like this. You have a justified belief in p, you have no reason to believe q, and you justifiedly believe the disjunction p or q. But it turns out that p is false and q is true. …
    Found 3 days, 6 hours ago on Alexander Pruss's Blog
  6.
    I argue that we should solve the Lottery Paradox by denying that rational belief is closed under classical logic. To reach this conclusion, I build on my previous result that (a slight variant of) McGee’s election scenario is a lottery scenario (see blinded paper currently under review). Indeed, this result implies that the sensible ways to deal with McGee’s scenario are the same as the sensible ways to deal with the lottery scenario: we should either reject the Lockean Thesis or Belief Closure. After recalling my argument to this conclusion, I demonstrate that a McGee-like example (which is just, in fact, Carroll’s barbershop paradox) can be provided in which the Lockean Thesis plays no role: this proves that denying Belief Closure is the right way to deal with both McGee’s scenario and the Lottery Paradox. A straightforward consequence of my approach is that Carroll’s puzzle is solved, too.
    Found 6 days, 11 hours ago on PhilPapers
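    A worked toy version of the lottery scenario behind the abstract, in Python. The lottery size and Lockean threshold are illustrative choices, not the paper's.

        n = 100                                         # fair lottery, exactly one winner
        t = 0.95                                        # Lockean threshold: believe p iff P(p) >= t

        p_ticket_i_loses = (n - 1) / n                  # 0.99 for each ticket i
        p_every_ticket_loses = 0.0                      # some ticket must win

        believes_each_loses = p_ticket_i_loses >= t     # True for every ticket
        believes_conjunction = p_every_ticket_loses >= t  # False

        print(believes_each_loses, believes_conjunction)
        # Belief Closure (here, closure under conjunction) would force belief in
        # 'every ticket loses' given belief in each conjunct; the Lockean Thesis
        # forbids it. The abstract argues that Belief Closure is what should go.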
  7.
    Problems about the existence of converses for non-symmetric relations go back to Russell 1903. These resurfaced in Fine 2000 and were recently rehearsed in MacBride 2014. In this paper, I focus on one problem that is described in all three works. I show how object theory (Zalta 1983, 1993; Bueno, Menzel, & Zalta 2014; Menzel & Zalta 2014) provides a solution to those problems.
    Found 6 days, 22 hours ago on Ed Zalta's site
  8.
    Dynamic Belief Update (DBU) is a model checking problem in Dynamic Epistemic Logic (DEL) concerning the effect of applying a number of epistemic actions on an initial epistemic model. It can also be considered as a plan verification problem in epistemic planning. The problem is known to be PSPACE-hard. To better understand the source of complexity of the problem, previous research has investigated the complexity of 128 parameterized versions of the problem with parameters such as number of agents and size of actions. The complexity of many parameter combinations has been determined, but previous research left a few combinations as open problems. In this paper, we solve most of the remaining open problems by proving all of them to be fixed-parameter intractable. Only two parameter combinations are still left as open problems for future research.
    Found 1 week ago on Thomas Bolander's site
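    For readers unfamiliar with DBU's basic operation, here is a minimal Python sketch of the product update by which an epistemic action is applied to an epistemic model. The encodings of models and actions are simplifying assumptions; the paper itself concerns the problem's parameterized complexity, not this code.

        from itertools import product

        def product_update(model, action):
            """model = (worlds, rel, val); action = (events, arel, pre).
            rel/arel map each agent to a set of pairs; val maps worlds to sets
            of atoms; pre maps events to predicates on valuations."""
            worlds, rel, val = model
            events, arel, pre = action
            new_worlds = [(w, e) for w, e in product(worlds, events) if pre[e](val[w])]
            new_rel = {a: {((w, e), (w2, e2))
                           for (w, e) in new_worlds for (w2, e2) in new_worlds
                           if (w, w2) in rel[a] and (e, e2) in arel[a]}
                       for a in rel}
            new_val = {(w, e): val[w] for (w, e) in new_worlds}
            return new_worlds, new_rel, new_val

        # Agent 'a' cannot tell whether p holds; then p is publicly announced.
        model = (["w1", "w2"],
                 {"a": {("w1", "w1"), ("w2", "w2"), ("w1", "w2"), ("w2", "w1")}},
                 {"w1": {"p"}, "w2": set()})
        announce_p = (["e"], {"a": {("e", "e")}}, {"e": lambda v: "p" in v})
        print(product_update(model, announce_p))   # only (w1, e) survives: 'a' now knows p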
  9.
    We claim that the various sharpenings in a supervaluationist analysis are best understood as possible worlds in a Kripke structure. It’s not just that supervaluationism wishes to assert ¬(∀n)(if a man with n hairs on his head is bald then so is a man with n + 1 hairs on his head) while refusing to assert (∃n)(a man with n hairs on his head is bald but a man with n + 1 hairs on his head is not) and that this refusal can be accomplished by a constructive logic (tho’ it can)—the point is that the obvious Kripke semantics for this endeavour has as its possible worlds precisely the sharpenings that supervaluationism postulates. Indeed the sharpenings do nothing else. The fit is too exact to be coincidence.
    Found 1 week ago on Thomas Forster's site
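    A toy Python computation of the supervaluational pattern mentioned above, with supertruth as truth at every sharpening. The cutoff-style sharpenings are an assumption chosen for illustration; this is not Forster's Kripke-structure construction itself.

        sharpenings = range(1000, 5001, 500)        # each sharpening fixes a cutoff N
        bald = lambda n, N: n <= N                  # 'a man with n hairs is bald' at sharpening N
        ns = range(0, 10001)

        def supertrue(stmt):
            return all(stmt(N) for N in sharpenings)

        # The negated universal is supertrue: every sharpening has a failing instance.
        neg_universal = lambda N: not all(bald(n + 1, N) for n in ns if bald(n, N))
        print(supertrue(neg_universal))             # True

        # But no particular instance 'bald(n) and not bald(n+1)' is supertrue.
        print(any(supertrue(lambda N, n=n: bald(n, N) and not bald(n + 1, N))
                  for n in ns))                     # False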
  10.
    Were governments justified in imposing lockdowns to contain the spread of the COVID-19 pandemic? We argue that a convincing answer to this question is to date wanting, by critically analyzing the factual basis of a recent paper, “How Government Leaders Violated Their Epistemic Duties During the SARS-CoV-2 Crisis” (Winsberg et al. 2020). In their paper, Winsberg et al. argue that government leaders did not, at the beginning of the pandemic, meet the epistemic requirements necessitated to impose lockdowns. We focus on Winsberg et al.’s contentions that knowledge about COVID-19 and the resultant projections was inadequate; that epidemiologists were biased in their estimates of relevant figures; that there was insufficient evidence supporting the efficacy of lockdowns; and that lockdowns cause more harm than good. We argue that none of these claims are sufficiently supported by evidence, thus impairing their case against lockdowns, and leaving open the question of whether lockdowns were justified.
    Found 1 week ago on PhilPapers
  11.
    Traditionally, it had been assumed that meta-representational Theory of Mind (ToM) emerges around the age of 4 when children come to master standard false belief (FB) tasks. More recent research with various implicit measures, though, has documented much earlier competence and thus challenged the traditional picture. In interactive FB tasks, for instance, infants have been shown to track an interlocutor’s false or true belief when interpreting her ambiguous communicative acts (Southgate et al. 2010 Dev. Sci. 13, 907–912. (doi:10.1111/j.1467-7687.2009.00946.x)). However, several replication attempts so far have produced mixed findings (e.g. Dörrenberg et al. 2018 Cogn. Dev. 46, 12–30. (doi:10.
    Found 1 week ago on Hannes Rakoczy's site
  12.
    It is commonly accepted that what we ought to do collectively does not imply anything about what each of us ought to do individually. According to this line of reasoning, if cooperating will make no difference to an outcome, then you are not morally required to do it. And if cooperating will be personally costly to you as well, this is an even stronger reason to not do it. However, this reasoning results in a self-defeating, yet entirely predictable outcome. If everyone is rational, they will not cooperate, resulting in an aggregate outcome that is devastating for everyone. This dismal analysis explains why climate change and other collective action problems are so difficult to ameliorate. The goal of this paper is to provide a different, exploratory framework for thinking about individual reasons for action in collective action problems. I argue that the concept of commitment gives us a new perspective on collective action problems. Once we take the structure of commitment into account, this activates requirements of diachronic rationality that give individuals instrumental reasons to cooperate in collective action problems.
    Found 1 week ago on PhilPapers
  13.
    When we make decisions we are invariably comparing outcomes that happen at different times. How much should you sacrifice now to get a better job later? Should you switch to solar? Purchase a gym membership? Studies of intertemporal decision-making suggest that we often exhibit two types of time preferences: future discounting (all else being equal, we prefer that future pleasures happen sooner rather than later, and vice versa for pains) and past discounting (all else being equal, we prefer that pleasures happen in the present or future rather than in the past, and again vice versa for pains). Are these time preferences rational? It’s important that we make progress on this question, for assumptions about what discounting is normatively optimal inform public policy decisions throughout the world. Both social science and philosophy discuss the normative standing of discounting, philosophy focusing mostly on past discounting and social science mostly on future discounting. To a very rough first approximation, the two fields appear to disagree on when or if temporal discounting is rational. Future discounting is judged irrational by philosophers but as often rational by social scientists. Past discounting, by contrast, is viewed as rational by some philosophers but as (probably) irrational by social scientists.
    Found 1 week ago on PhilSci Archive
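    To fix ideas, a small Python sketch of the two preference patterns described above, using simple exponential weights. The functional form and the numbers are assumptions for illustration, not part of the paper's analysis.

        def discounted_value(utilities, now, delta_future=0.9, delta_past=0.9):
            """utilities[t] is the momentary utility at time t; `now` is the present index.
            Future discounting shrinks weights as events recede into the future;
            past discounting shrinks weights for events already in the past."""
            total = 0.0
            for t, u in enumerate(utilities):
                if t >= now:
                    total += (delta_future ** (t - now)) * u   # future discounting
                else:
                    total += (delta_past ** (now - t)) * u     # past discounting
            return total

        stream = [0, 0, 10, 0, 0]   # a single pleasure of size 10 at time t = 2
        print(round(discounted_value(stream, now=0), 2))   # 8.1: the pleasure lies two steps ahead
        print(round(discounted_value(stream, now=2), 2))   # 10.0: the pleasure is present
        print(round(discounted_value(stream, now=4), 2))   # 8.1: the pleasure lies two steps in the past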
  14.
    Programs in quantum gravity often claim that time emerges from fundamentally timeless physics. In the semiclassical time program, time arises only after approximations are taken. Here we ask what justifies taking these approximations and show that time seems to sneak in when answering this question. This raises the worry that the approach is either unjustified or circular in deriving time from no-time.
    Found 1 week ago on PhilSci Archive
  15.
    Something’s not revealed. A little over a year ago, the board of the American Statistical Association (ASA) appointed a new Task Force on Statistical Significance and Replicability (under then-president Karen Kafadar) to provide it with recommendations. …
    Found 1 week ago on D. G. Mayo's blog
  16.
    Democracies around the world are suffering paroxysms of populist rage. Obviously this has many contributing causes and individuals, from rising inequality to social media to political entrepreneurs like Trump. …
    Found 1 week, 1 day ago on The Philosopher's Beard
  17.
    Dynamical models of cognition have played a central role in recent cognitive science. In this paper, we consider a common strategy by which dynamical models describe their target systems neither as purely static nor as purely dynamic, but rather using a hybrid approach. This hybridity reveals why dynamical models should not be understood as providing unstructured descriptions of a system’s dynamics, and is important for understanding the relationship between dynamical and non-dynamical representations of a system.
    Found 1 week, 2 days ago on PhilSci Archive
  18.
    In their 2018 paper, 'Living on the Edge', Ginger Schultheis issues a powerful challenge to epistemic permissivism about credences, the view that there are bodies of evidence in response to which there are a number of different credence functions it would be rational to adopt. …
    Found 1 week, 4 days ago on M-Phi
  19.
    I develop a definition of mereological endurantism which overcomes objections that have been proposed in the literature and thereby avoids the charge of obscurity put forward by Sider against the view.
    Found 1 week, 4 days ago on Damiano Costa's site
  20.
    The COVID-19 pandemic presents us with the question of how healthcare systems can be prevented from being overwhelmed while avoiding general lockdowns. We focus on two strategies that show promise in achieving this, by targeting certain segments of the population, while allowing others to go about their lives unhindered. The first would selectively isolate those who most likely suffer severe adverse effects if infected – in particular the elderly. The second would identify and quarantine those who are likely to be infected through a contact tracing app that would centrally store users’ information. We evaluate the ethical permissibility of these strategies, by comparing, first, the ways in which they target segments of the population for isolation. We argue that the way in which selective isolation targets salient groups discriminates against these groups. While the contact tracing strategy cannot plausibly be objected to in terms of discrimination, its individualized targeting raises privacy concerns, which we argue can be overcome. Second, we compare the ethical implications of their respective aims. Here, we argue that a prominent justification of selective isolation policies – that it is in the best interests of the individuals affected – fails to support this strategy, but rather exacerbates its discriminatory nature.
    Found 1 week, 5 days ago on PhilPapers
  21.
    Thesis presented to the Graduate Program in Philosophy of the Department of Philosophy, Faculdade de Filosofia, Letras e Ciências Humanas, Universidade de São Paulo.
    Found 1 week, 5 days ago on PhilSci Archive
  22.
    Philosophers, psychologists, economists and other social scientists continue to debate the nature of human well-being. We argue that this debate centers around five main conceptualizations of well-being: hedonic well-being, life satisfaction, desire fulfillment, eudaimonia, and noneudaimonic objective-list well-being. Each type of well-being is conceptually different, but are they empirically distinguishable? To address this question, we first developed and validated a measure of desire fulfillment, as no measure existed, and then examined associations between this new measure and several other well-being measures. In addition, we explored associations among all five types of well-being. We found high correlations among all measures of well-being, but generally correlations did not approach unity, even when correcting for unreliability.
    Found 1 week, 5 days ago on Eric Schwitzgebel's site
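    The "correcting for unreliability" step mentioned above is standardly done with Spearman's correction for attenuation; here is a one-function Python sketch. The example numbers are invented for illustration, not results from the paper.

        def disattenuate(r_observed, reliability_x, reliability_y):
            """Estimated correlation between true scores, given the observed correlation
            and each measure's reliability (e.g. Cronbach's alpha)."""
            return r_observed / (reliability_x * reliability_y) ** 0.5

        # An observed r of .70 between two scales that each have alpha = .85:
        print(round(disattenuate(0.70, 0.85, 0.85), 2))   # 0.82 -- high, but short of unity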
  23.
    This article introduces a Probabilistic Logic of Communication and Change, which captures in a unified framework subjective probability, arbitrary levels of mutual knowledge and a mechanism for multi-agent Bayesian updates that can model complex social-epistemic scenarios, such as informational cascades. We show soundness, completeness and decidability of our logic, and apply it to a concrete example of a cascade.
    Found 1 week, 5 days ago on Alexandru Baltag's site
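    A minimal Python simulation of the kind of informational cascade such a logic is meant to capture, using the classic sequential-signal toy model. The setup (binary state, signal accuracy q, sequential public actions) is an illustrative assumption, not the paper's own example.

        import random

        def run_cascade(n_agents=12, q=0.7, true_state="H", seed=1):
            random.seed(seed)
            inferred = 0          # (# inferred H signals) - (# inferred L signals) from public actions
            actions = []
            for _ in range(n_agents):
                # private signal matches the true state with probability q
                signal = true_state if random.random() < q else ("L" if true_state == "H" else "H")
                s = 1 if signal == "H" else -1
                if inferred >= 2:                       # public evidence outweighs any single signal
                    action = "H"                        # cascade on H: own signal is ignored
                elif inferred <= -2:
                    action = "L"                        # cascade on L
                else:                                   # posterior odds track the signal count
                    total = inferred + s
                    action = "H" if total > 0 else ("L" if total < 0 else signal)
                actions.append(action)
                if -2 < inferred < 2:                   # outside a cascade the action reveals the signal
                    inferred += 1 if action == "H" else -1
            return actions

        print(run_cascade())   # once the inferred count reaches +/-2, later agents ignore their own signals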
  24.
    Over the summer, I got interested in the problem of the priors again. Which credence functions is it rational to adopt at the beginning of your epistemic life? Which credence functions is it rational to have before you gather any evidence? …
    Found 1 week, 6 days ago on M-Phi
  25.
    In a recent paper, Pettigrew (Philos Stud, 2019, https://doi.org/10.1007/s11098-019-01377-y) argues that the pragmatic and epistemic arguments for Bayesian updating are based on an unwarranted assumption, which he calls deterministic updating, and which says that your updating plan should be deterministic.
    Found 1 week, 6 days ago on PhilPapers
  26.
    This paper explores the options available to the anti-realist to defend a Quinean empirical under-determination thesis using examples of dualities. I first explicate a version of the empirical under-determination thesis that can be brought to bear on theories of contemporary physics. Then I identify a class of examples of dualities that lead to empirical under-determination. But I argue that the resulting under-determination is benign, and is not a threat to a cautious scientific realism. Thus dualities are not new ammunition for the anti-realist. The paper also shows how the number of possible interpretative options about dualities that have been considered in the literature can be reduced, and suggests a general approach to scientific realism that one may take dualities to favour.
    Found 2 weeks ago on PhilSci Archive
  27.
    Writing in 1948, Turing felt compelled to confront a “religious belief” that “any attempt” to construct intelligent machines was seen as “a sort of Promethean irreverence.” And yet he has been associated by his own biographer Andrew Hodges with the image of “a Frankenstein — the proud irresponsibility of pure science, concentrated in a single person.” A reader of an 1865 version of Samuel Butler’s Darwin among the machines, Turing challenged the conventional wisdom of what machines really were or could be, and prophesied a future pervaded by intelligent machines which may be seen as a dystopia or as a utopia. The question is thus posed: what future did Turing actually envision and propose for machines? I will formulate and study the problem of identifying Turing’s specific Promethean ambition for intelligent machines. I shall suggest that Turing’s primary aim was the development of mechanistic explanations of the human mind-brain. But his secondary aim, implied in irony and wit, was the delivery of a social criticism about gender, race, nation and species chauvinisms. Turing’s association with Mary Shelley’s Frankenstein will be discouraged. Rather, his third aim was to send a precautionary message about the possibility of machines outstripping us in intellectual power in the future.
    Found 2 weeks ago on PhilSci Archive
  28.
    This paper presents key aspects of the quantum relativistic direct-action theory that underlies the Relativistic Transactional Interpretation. It notes some crucial ways in which traditional interpretations of the direct-action theory have impeded progress in developing its quantum counterpart. Specifically, (1) the so-called ‘light tight box’ condition is re-examined and it is shown that the quantum version of this condition is much less restrictive than has long been assumed; and (2) the notion of a ‘real photon’ is disambiguated and revised to take into account that real (on-shell) photons are indeed both emitted and absorbed and therefore have finite lifetimes. Also discussed is the manner in which real, physical non-unitarity naturally arises in the quantum direct-action theory of fields, such that the measurement transition can be clearly defined from within the theory, without reference to external observers and without any need to modify quantum theory itself. It is shown that field quantization arises from the non-unitary interaction.
    Found 2 weeks ago on PhilSci Archive
  29.
    Bayesian epistemology has struggled with the problem of regularity: how to deal with events that in classical probability have zero probability. While the cases most discussed in the literature, such as infinite sequences of coin tosses or continuous spinners, do not actually come up in scientific practice, there are cases that do come up in science. I shall argue that these cases can be resolved without leaving the realm of classical probability, by choosing a probability measure that preserves “enough” regularity. This approach also provides a resolution to the McGrew, McGrew and Vestrup normalization problem for the fine-tuning argument.
    Found 2 weeks ago on PhilSci Archive
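    As a concrete instance of the zero-probability events at issue (the standard coin-toss example, not one taken from the paper): for a fair coin tossed infinitely often,

        P(\text{all heads}) \;\le\; P(\text{first } n \text{ tosses are heads}) \;=\; 2^{-n} \quad \text{for every } n,
        \qquad\text{hence}\qquad
        P(\text{all heads}) \;=\; \lim_{n \to \infty} 2^{-n} \;=\; 0.

    The all-heads sequence is nonetheless a genuine possibility, so regularity would demand that it receive positive probability, which a real-valued, countably additive measure cannot provide.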
  30.
    The universality assumption (“U”) that quantum wave states only evolve by linear or unitary dynamics has led to a variety of paradoxes in the foundations of physics. U is not directly supported by empirical evidence but is rather an inference from data obtained from microscopic systems. The inference of U conflicts with empirical observations of macroscopic systems, giving rise to the century-old measurement problem and subjecting the inference of U to a higher standard of proof, the burden of which lies with its proponents. This burden remains unmet because the intentional choice by scientists to perform interference experiments that only probe the microscopic realm disqualifies the resulting data from supporting an inference that wave states always evolve linearly in the macroscopic realm. Further, the nature of the physical world creates an asymptotic size limit above which interference experiments, and verification of U in the realm in which it causes the measurement problem, seem impossible for all practical purposes if nevertheless possible in principle. This apparent natural limit serves as evidence against an inference of U, providing a further hurdle to the proponent’s currently unmet burden of proof. The measurement problem should never have arisen because the inference of U is entirely unfounded, logically and empirically.
    Found 2 weeks, 1 day ago on PhilPapers