Philosophical Progress: articles and blog posts found on 10 August 2020<br /> <b>Eliot Michaelson, Andreas Stokke: <a href="">Lying, Deception, and Epistemic Advantage</a></b> (pdf, 8852 words)<br /> <div>In this situation, uttering (1a) is to lie while uttering (1b) is not. Crucially, (1a) is something the speaker believes (indeed knows) to be false, whereas (1b) is something she believes to be true. Yet both utterances are aimed at the same thing: deceiving the hearer into believing that the speaker has not been opening the mail.</div><br /> <b>Malte Willer, Christopher Kennedy: <a href="">Assertion, Expression, Experience</a></b> (pdf, 14727 words)<br /> <div>It has been frequently observed in the literature that assertions of plain sentences containing predicates like fun and frightening give rise to an acquaintance inference: they imply that the speaker has first-hand knowledge of the item under consideration. The goal of this paper is to develop and defend a broadly expressivist explanation of this phenomenon: acquaintance inferences arise because plain sentences containing subjective predicates are designed to express distinguished kinds of attitudes that differ from beliefs in that they can only be acquired by undergoing certain experiences. Its guiding hypothesis is that natural language predicate expressions lexically specify what it takes for their use to be properly “grounded” in a speaker’s state of mind: what state of mind a speaker must be in for a predication to be in accordance with the norms governing assertion. The resulting framework accounts for a range of data surrounding the acquaintance inference as well as for striking parallels between the evidential requirements on subjective predicate uses and the kind of considerations that fuel motivational internalism about the language of morals.
A discussion of how the story can be implemented compositionally and of how it compares with other proposals currently on the market is provided.</div><br /> <b>Timothy O'Connor: <a href="">Emergent Properties</a></b> (html, 16337 words)<br /> <div>[<i>Editor's Note: The following new entry by Timothy O’Connor replaces the former entry on this topic by the previous authors.</i>] The world appears to contain diverse kinds of objects and systems—planets, tornadoes, trees, ant colonies, and human persons, to name but a few—characterized by distinctive features and behaviors. This casual impression is deepened by the success of the special sciences, with their distinctive taxonomies and laws characterizing astronomical, meteorological, chemical, botanical, biological, and psychological processes, among others. But there’s a twist, for part of the success of the special sciences reflects an effective consensus that the features of the composed entities they treat do not “float free” of features and configurations of their components, but are rather in some way(s) dependent on them.</div><br /> <b>Uriah Kriegel: <a href="">Sketch for a Theory of the History of Philosophy</a></b> (pdf, 8680 words)<br /> <div>My aims in this essay are two. First (§§1-4), I want to get clear on the very idea of a theory of the history of philosophy, the idea of an overarching account of the evolution of philosophical reflection since the inception of written philosophy. 
And secondly (§§5-8), I want to actually sketch such a global theory of the history of philosophy, which I call the <i>two-streams theory</i>.</div><br /> <b>M-Phi: <a href="">The Accuracy Dominance Argument for Conditionalization without the Additivity assumption</a></b> (html, 1250 words)<br /> <div>Last week, I explained how you can give an accuracy dominance argument for Probabilism without assuming that your inaccuracy measures are additive -- that is, without assuming that the inaccuracy of a whole credence function is obtained by adding up the inaccuracy of all the individual credences that it assigns. &hellip;</div><br /> <b>Alexander Pruss's Blog: <a href="">Do "one thought too many" objections work?</a></b> (html, 512 words)<br /> <div>Consider “one thought too many” objections in ethics, on which certain considerations that objectively favor an action are nonetheless a “thought too many”, and it is better to act without them. Examples given in the literature involve using consequentialist reasoning when saving one’s spouse, or visiting a sick friend because of duty. &hellip;</div><br /> Articles and blog posts found on 08 August 2020<br /> <b>Andreas Stöckel, Terrence C. Stewart, Chris Eliasmith: <a href="">A Biologically Plausible Spiking Neural Model of Eyeblink Conditioning in the Cerebellum</a></b> (pdf, 4824 words)<br /> <div>The cerebellum is classically described in terms of its role in motor control. Recent evidence suggests that the cerebellum supports a wide variety of functions, including timing-related cognitive tasks and perceptual prediction. Correspondingly, deciphering cerebellar function may be important to advance our understanding of cognitive processes. In this paper, we build a model of eyeblink conditioning, an extensively studied low-level function of the cerebellum.
Building such a model is of particular interest, since, as of now, it remains unclear how exactly the cerebellum manages to learn and reproduce the precise timings observed in eyeblink conditioning that are potentially exploited by cognitive processes as well. We employ recent advances in large-scale neural network modeling to build a biologically plausible spiking neural network based on the cerebellar microcircuitry. We compare our simulation results to neurophysiological data and demonstrate how the recurrent Granule-Golgi subnetwork could generate the dynamics representations required for triggering motor trajectories in the Purkinje cell layer. Our model is capable of reproducing key properties of eyeblink conditioning, while generating neurophysiological data that could be experimentally verified.</div><br /> <b>Benjamin Hale: <a href="">The Principle of Common But Differentiated Responsibilities: Origins and Scope</a></b> (pdf, 1773 words)<br /> <div>The principle of ‘common but differentiated responsibility’ evolved from the notion of the ‘common heritage of mankind’ and is a manifestation of general principles of equity in international law. The principle recognises historical differences in the contributions of developed and developing States to global environmental problems, and differences in their respective economic and technical capacity to tackle these problems. Despite their common responsibilities, important differences exist between the stated responsibilities of developed and developing countries. The <i>Rio Declaration</i> states: “In view of the different contributions to global environmental degradation, States have common but differentiated responsibilities. 
The developed countries acknowledge the responsibility that they bear in the international pursuit of sustainable development in view of the pressures their societies place on the global environment and of the technologies and financial resources they command.” Similar language exists in the <i>Framework Convention on Climate Change</i>; parties should act to protect the climate system “on the basis of equity and in accordance with their common but differentiated responsibilities and respective capabilities.” The principle of common but differentiated responsibility includes two fundamental elements. The first concerns the common responsibility of States for the protection of the environment, or parts of it, at the national, regional and global levels. The second concerns the need to take into account the different circumstances, particularly each State’s contribution to the evolution of a particular problem and its ability to prevent, reduce and control the threat.</div><br /> <b>David Ripley: <a href="">Strong normalization in core type theory</a></b> (pdf, 6856 words)<br /> <div>This paper presents a novel typed term calculus and reduction relation for it, and proves that the reduction relation is strongly normalizing—that there are no infinite reduction sequences. The calculus bears a close relation to the →, ¬ fragment of core logic, and so is called ‘core type theory’. The calculus is similar to the simply-typed lambda calculus with an empty type, but with a twist.
The simply-typed lambda calculus with an empty type bears a close relation to the →, ⊥ fragment of intuitionistic logic ([Howard, 1980; Scherer, 2017; Sørensen and Urzyczyn, 2006]); the calculus to be presented here bears a similar relation to the →, ¬ fragment of a logic known as <i>core logic</i>. Because of this connection, I’ll call the calculus <i>core type theory</i>.</div><br /> <b>Fabian Schlotterbeck: <a href="">Representational complexity and pragmatics cause the monotonicity effect</a></b> (pdf, 5293 words)<br /> <div>Psycholinguistic studies have repeatedly demonstrated that downward entailing (DE) quantifiers are more difficult to process than upward entailing (UE) ones. We contribute to the current debate on cognitive processes causing the monotonicity effect by testing predictions about the underlying processes derived from two competing theoretical proposals: two-step and pragmatic processing models. We model reaction times and accuracy from two verification experiments (a sentence-picture and a purely linguistic verification task), using the diffusion decision model (DDM). In both experiments, verification of the UE quantifier <i>more than half</i> was compared to verification of the DE quantifier <i>fewer than half</i>. Our analyses revealed the same pattern of results across tasks: Both non-decision times and drift rates, two of the free model parameters of the DDM, were affected by the monotonicity manipulation. Thus, our modeling results support both two-step (prediction: non-decision time is affected) and pragmatic processing models (prediction: drift rate is affected).</div><br /> <b>Franz Huber: <a href="">Belief and Counterfactuals. A Study in Means—End Philosophy</a></b> (pdf, 11097 words)<br /> <div>[177] Shenoy, Prakash P. (1991), <i>On Spohn’s Rule for Revision of Beliefs</i>. <i>International Journal of Approximate Reasoning</i> 5, 149-181. 
[178] Spirtes, Peter &amp; Glymour, Clark &amp; Scheines, Richard (2000), <i>Causation,</i></div><br /> <b>Grace Paterson, David Ripley, Andrew Tedder: <a href="">Qua Solution, 0-Qua Has Problems: A Response to Beall and Henderson</a></b> (pdf, 2578 words)<br /> <div>We present an objection to Beall and Henderson’s recent paper defending a solution to the fundamental problem of conciliar Christology using qua or secundum clauses. We argue that certain claims the acceptance/rejection of which distinguish the Conciliar Christian from others fail to so distinguish on Beall and Henderson’s 0-Qua view. This is because on their 0-Qua account, these claims are either acceptable both to Conciliar Christians and to those who are not Conciliar Christians, or else acceptable to neither.</div><br /> <b>J. Barkley Rosser: <a href="">Observations on Computability, Uncertainty, and Technology</a></b> (doc, 5495 words)<br /> <div>Inspired by work of Stefano Zambelli on these topics, this paper examines the complex nature of the relation between technology and computability. This involves reconsidering the role of computational complexity in economics and then applying this to a particular formulation of the nature of technology as conceived within the Sraffian framework. A crucial element of this is to expand the concept of technique clusters. This allows for understanding that the set of possible techniques is of a higher cardinality of infinity than that of the points on a wage-profit frontier.
This is associated with potentially deep discontinuities in production functions and a higher form of uncertainty involved in technological change and growth.</div><br /> <b>Pablo Cobreros, David Ripley, Robert van Rooij: <a href="">Inferences and Metainferences in ST</a></b> (pdf, 8990 words)<br /> <div>In a recent paper, Barrio, Tajer and Rosenblatt establish a correspondence between metainferences holding in the strict-tolerant logic of transparent truth ST and inferences holding in the logic of paradox LP. They argue that LP is ST’s external logic and they question whether ST’s solution to the semantic paradoxes is fundamentally different from LP’s. Here we establish that by parity of reasoning, ST can be related to LP’s dual logic K3. We clarify the distinction between internal and external logic and argue that while ST’s nonclassicality can be granted, its self-dual character does not tie it to LP more closely than to K3.</div><br /> Articles and blog posts found on 07 August 2020<br /> <b>M-Phi: <a href="">Accuracy without Additivity</a></b> (html, 1101 words)<br /> <div>For a PDF of this post, see here. One of the central arguments in accuracy-first epistemology -- the one that gets the project off the ground, I think -- is the accuracy-dominance argument for Probabilism. &hellip;</div><br /> <b>Alexander Pruss's Blog: <a href="">Virtues as pocket oracles</a></b> (html, 554 words)<br /> <div>Consider three claims: (1) Virtues when fully developed make it possible to see what is the right thing to do without conscious deliberation. (2) Acting on fully developed virtues is the best way to act. (3) Acting on a pocket oracle, which simply tells you in each case what is to be done, misses out on something important in our action. 
&hellip;</div><br /> <b>Alexander Pruss's Blog: <a href="">"For the common good"</a></b> (html, 646 words)<br /> <div>Aquinas thinks that for something to be a law, it must be “for the common good” (in addition to satisfying other conditions). Otherwise, the legislation (as we might still call it) is not really a law, and does not morally require obedience except to avoid chaos. &hellip;</div><br /> <b>Alexander Pruss's Blog: <a href="">A value asymmetry in double effect reasoning</a></b> (html, 702 words)<br /> <div>The Knobe effect is that people judge cases of good and bad foreseen effects differently with respect to intention: in cases of bad effects, they tend to attribute intention, but not so in cases of good effects. &hellip;</div><br /> Articles and blog posts found on 06 August 2020<br /> <b>John L. Bell: <a href="">The bibliography paradox revisited</a></b> (pdf, 394 words)<br /> <div>There is a well-known version of Russell's paradox concerning the <i>bibliography of all bibliographies which fail to list themselves</i>. The usual analysis of this paradox leads to the conclusion that such a bibliography is self-contradictory and so therefore cannot exist. However, as we show, a more searching analysis leads to a rather different conclusion.</div><br /> <b>Nicole Sandra-Yaffa Dumont, Chris Eliasmith: <a href="">Accurate representation for spatial cognition using grid cells</a></b> (pdf, 5005 words)<br /> <div>Spatial cognition relies on an internal map-like representation of space provided by hippocampal place cells, which in turn are thought to rely on grid cells as a basis. Spatial Semantic Pointers (SSP) have been introduced as a way to represent continuous spaces and positions via the activity of a spiking neural network. In this work, we further develop SSP representation to replicate the firing patterns of grid cells.
This adds biological realism to the SSP representation and links biological findings with a larger theoretical framework for representing concepts. Furthermore, replicating grid cell activity with SSPs results in greater accuracy when constructing place cells. Improved accuracy is a result of grid cells forming the optimal basis for decoding positions and place cell output. Our results have implications for modelling spatial cognition and more general cognitive representations over continuous variables.</div><br /> <b>Peter Duggins, Dominik Krzemi, Chris Eliasmith: <a href="">A spiking neuron model of inferential decision making: Urgency, uncertainty, and the speed-accuracy tradeoff</a></b> (pdf, 5546 words)<br /> <div>Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky-integrate-and-fire neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task.
These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.</div><br /> <b>Alexander Pruss's Blog: <a href="">Humdrum cases of double effect reasoning</a></b> (html, 250 words)<br /> <div>While the Principle of Double Effect is mostly discussed in the literature in connection with very bad effects, typically death, that trigger deontic concerns, lately philosophers (e.g., Masek) have been noting that double effect reasoning can be important in much more humdrum situations. &hellip;</div><br /> <b>The Splintered Mind: <a href="">It's Not Hard to Be Morally Excellent; You Just Choose Not To Be So</a></b> (html, 1131 words)<br /> <div>I find it surprising that so many people seem to disagree. Maybe we're primed to disagree because it's a convenient excuse for our moral mediocrity. "Gosh," you say, "I do sure wish I could be morally excellent. &hellip;</div><br /> <b>wo's weblog: <a href="">Dispositions, intrinsicality, and the problem of fit</a></b> (html, 652 words)<br /> <div>Posted on Thursday, 06 Aug 2020. In chapter 3 of <i>The Powers Metaphysic</i>, Neil Williams presents a nice problem for dispositionalists: the "problem of fit". &hellip;</div><br /> Articles and blog posts found on 05 August 2020<br /> <b>Andy Egan: <a href="">Fragmented models of belief</a></b> (pdf, 12197 words)<br /> <div>This paper is primarily an advertisement for a research program, and for some particular, so far under-explored research questions within that research program. It’s an advertisement for the program of constructing fragmented models of subjects’ propositional attitudes, and theorizing about and by means of such models. 
I’ll aim to do two things: First, motivate a fragmentationist research program by identifying a cluster of problems that such a research program is well-positioned to address or resolve. Second, identify what I take to be some of the challenges and research questions that the fragmentationist program will need to address, and where the space of possible answers is not yet well-charted.</div><br /> <b>J. Dmitri Gallow: <a href="">Two-Dimensional Deference</a></b> (pdf, 25289 words)<br /> <div>Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, your future, better informed self, or the objective chances. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn’t defer to your doctor by aligning your credence in the <i>de se</i> thought ‘I am sick’ with the doctor’s credence in that same <i>de se</i> thought. Nor should you defer to the objective chances by setting your credence in the thought ‘The actual winner wins’ equal to the objective chance that the actual winner wins. Here, I generalize principles of expert deference to handle these kinds of problem cases.</div><br /> <b>Jonathan Jenkins Ichikawa: <a href="">Presupposition and Consent</a></b> (pdf, 13343 words)<br /> <div>I argue that ‘consent’ language presupposes that the contemplated action is or would be at someone else’s behest. When one does something for another reason — for example, when one elects independently to do something, or when one accepts an invitation to do something — it is linguistically inappropriate to describe the actor as ‘consenting’ to it; but it is also inappropriate to describe them as ‘not consenting’ to it. 
A consequence of this idea is that ‘consent’ is poorly suited to play its canonical central role in contemporary sexual ethics. But this does not mean that nonconsensual sex can be morally permissible. Consent language, I’ll suggest, carries the conventional presupposition that that which is or might be consented to is at someone else’s behest. One implication will be a new kind of support for feminist critiques of consent theory in sexual ethics.</div><br /> <b>Michal Masny: <a href="">Friedman on suspended judgment</a></b> (pdf, 8738 words)<br /> <div>In a recent series of papers, Jane Friedman argues that suspended judgment is a sui generis first-order attitude, with a question (rather than a proposition) as its content. In this paper, I offer a critique of Friedman’s project. I begin by responding to her arguments against reductive higher-order propositional accounts of suspended judgment, and thus undercut the negative case for her own view. Further, I raise worries about the details of her positive account, and in particular about her claim that one suspends judgment about some matter if and only if one inquires into this matter. Subsequently, I use conclusions drawn from the preceding discussion to offer a tentative account: S suspends judgment about p iff (i) S believes that she neither believes nor disbelieves that p, (ii) S neither believes nor disbelieves that p, and (iii) S intends to judge that p or not-p.</div><br /> <b>Alexander Pruss's Blog: <a href="">Label independence and lotteries</a></b> (html, 595 words)<br /> <div>Suppose we have a countably infinite fair lottery, in John Norton’s sense of label independence: in other words, probabilities are not changed by any relabeling—i.e., any permutation—of tickets. 
In classical probability, it’s easy to generate a contradiction from the above assumptions, given the simple assumption that there is at least one set A of tickets that has a well-defined probability (i.e., that the probability that the winning ticket is from A is well-defined) and that has the property that both A and its complement are infinite. &hellip;</div><br /> <b>Azimuth: <a href="">Open Systems in Classical Mechanics</a></b> (html, 835 words)<br /> <div>Here’s a paper on categories where the morphisms are open physical systems, and composing them describes gluing these systems together: • John C. Baez, David Weisbart and Adam Yassine, Open systems in classical mechanics. &hellip;</div><br /> Articles and blog posts found on 04 August 2020<br /> <b>: <a href="">The Standard of Correctness and the Ontology of Depiction</a></b> (pdf, 8228 words)<br /> <div>This paper develops Richard Wollheim’s claim that the proper appreciation of a picture involves not only enjoying a seeing-in experience but also abiding by a standard of correctness. While scholars have so far focused on what fixes the standard, thereby discussing the alternative between intentions and causal mechanisms, the paper focuses on what the standard does, that is, establishing which kinds, individuals, features and standpoints are relevant to the understanding of pictures. It is argued that, while standards concerning kinds, individuals and features can be relevant also to ordinary perception, standards concerning standpoints are specific to pictorial experience. 
Drawing on all this, the paper proposes an ontology of depiction according to which a picture is constituted by both its visual appearance and its standard of correctness.</div><br /> <b>: <a href="">The Fate of Explanatory Reasoning in the Age of Big Data</a></b> (pdf, 10924 words)<br /> <div>In this paper, I critically evaluate several related, provocative claims made by proponents of data-intensive science and “Big Data” which bear on scientific methodology, especially the claim that scientists will soon no longer have any use for familiar concepts like causation and explanation. After introducing the issue, in section 2, I elaborate on the alleged changes to scientific method that feature prominently in discussions of Big Data. In section 3, I argue that these methodological claims are in tension with a prominent account of scientific method, often called “Inference to the Best Explanation” (IBE). Later on, in section 3, I consider an argument against IBE that will be congenial to proponents of Big Data, namely the argument due to Roche and Sober (2013) that “explanatoriness is evidentially irrelevant”. This argument is based on Bayesianism, one of the most prominent general accounts of theory-confirmation. In section 4, I consider some extant responses to this argument, especially that of Climenhaga (2017). In section 5, I argue that Roche and Sober’s argument does not show that explanatory reasoning is dispensable. In section 6, I argue that there is good reason to think explanatory reasoning will continue to prove indispensable in scientific practice. Drawing on Cicero’s oft-neglected <i>De Divinatione</i>, I formulate what I call the “Ciceronian Causal-nomological Requirement”, (CCR), which states roughly that causal-nomological knowledge is essential for relying on correlations in predictive inference. 
I defend a version of the CCR by appealing to the challenge of “spurious correlations”, chance correlations which we should not rely upon for predictive inference. In section 7, I offer some concluding remarks.</div><br /> <b>C. Thi Nguyen: <a href="">Précis of <i>Games: Agency as Art</i></a></b> (pdf, 2253 words)<br /> <div>Games are a distinctive form of art — and very different from many traditional arts. Games work in the medium of agency. Game designers don’t just tell stories or create environments. They tell us what our abilities will be in the game. They set our motivations, by setting the scoring system and specifying the win-conditions. Game designers sculpt temporary agencies for us to occupy. And when we play games, we adopt these designed agencies, submerging ourselves in them, and taking on their specified ends for a while.</div><br /> <b>Carl Brusse: <a href="">Manipulation and Dishonest Signals</a></b> (pdf, 2021 words)<br /> <div>Dishonest signals are displays, calls, or performances that would ordinarily convey certain information about some state of the world, but where the signal being sent does not correspond to the true state. Manipulation is the sending of signals in a way that takes advantage of default receiver responses to such signals, to influence their behavior in ways favorable to the sender. Manipulative signals are often dishonest, and dishonest signals are often manipulative, though this need not be the case. Some theorists have defined signaling in such a way that evolutionarily reinforced signals are essentially manipulative.</div><br /> <b>Christoph Helmig, Carlos Steel: <a href="">Proclus</a></b> (html, 19494 words)<br /> <div>Proclus of Athens (*412–485 C.E.) was the most authoritative philosopher of late antiquity and played a crucial role in the transmission of Platonic philosophy from antiquity to the Middle Ages. For almost fifty years, he was head or ‘successor’ (<i>diadochos</i>, sc. 
of Plato) of the Platonic ‘Academy’ in Athens. Being an exceptionally productive writer, he composed commentaries on Aristotle, Euclid and Plato, systematic treatises in all disciplines of philosophy as it was at that time (metaphysics and theology, physics, astronomy, mathematics, ethics) and exegetical works on traditions of religious wisdom (Orphism and Chaldaean Oracles).</div><br /> <b>Derek Baker: <a href="">If You’re Quasi-Explaining, You’re Quasi-Losing</a></b> (pdf, 9856 words)<br /> <div>The giving and requesting of explanations is central to normative practice. When we tell children that they must act in certain ways, they often ask why, and often we are able to answer them. Sentences like ‘Kicking dogs is wrong because it hurts them’, and ‘You should eat your vegetables because they’re healthy’, are meaningful and ubiquitous.</div><br /> <b>G. S. Ciepielewski, E. Okon, D. Sudarsky: <a href="">On Superdeterministic Rejections of Settings Independence</a></b> (pdf, 15728 words)<br /> <div>Relying on some auxiliary assumptions, usually considered mild, Bell’s theorem proves that no local theory can reproduce all the predictions of quantum mechanics. In this work, we introduce a fully local, <i>superdeterministic</i> model that, by explicitly violating <i>settings independence</i>—one of these auxiliary assumptions, requiring statistical independence between measurement settings and systems to be measured—is able to reproduce all the predictions of quantum mechanics. Moreover, we show that, contrary to widespread expectations, our model can break settings independence without an initial state that is too complex to handle, without visibly losing all explanatory power and without outright nullifying all of experimental science. Still, we argue that our model is unnecessarily complicated and does not offer true advantages over its non-local competitors. 
We conclude that, while our model does not appear to be a viable contender to its non-local counterparts, it provides the ideal framework to advance the debate over violations of statistical independence via the superdeterministic route.</div><br /> <b>Gregory Currie: <a href="">Style and the agency in art</a></b> (pdf, 8492 words)<br /> <div>Art works are artefacts and, like all artefacts, are the product of agency. How important is that for our engagement with them? For many artefacts, agency hardly matters. The paperclips on my desk perform their function without me having to think of them as the outputs of agency, though I might on occasion admire their design. But for those artefacts we categorise as works of art, the connection is important: if I treat something as art I need to see how it manifests the choices, preferences, actions and sensibilities of the maker. I am not asked to see it simply as a record of those things. The work is not valuable merely as a conduit to the qualities of the maker; it has final value and not merely instrumental value. Its value depends on its relation to the maker; in Korsgaard’s terms it is value that is final and extrinsic.</div><br /> <b>Itzhak Gilboa, Stefania Minardi, Fan Wang: <a href="">Consumption of Values</a></b> (pdf, 21333 words)<br /> <div>Consumption decisions are partly influenced by values and ideologies. Consumers care about global warming as well as about child labor and fair trade. Incorporating values into the consumer’s utility function will often violate monotonicity, in case consumption hurts cherished values in a way that isn’t offset by the hedonic benefits of material consumption. We distinguish between intrinsic and instrumental values, and argue that the former tend to introduce discontinuities near zero. For example, a vegetarian’s preferences would be discontinuous near zero amount of animal meat. 
We axiomatize a utility representation that captures such preferences and discuss the measurability of the degree to which consumers care about such values.</div><br /> <b>James Woodward: <a href="">Sketch of some themes for a pragmatist philosophy of science</a></b> (doc, 22220 words)<br /> <div>This paper sketches, in a very partial and preliminary way, an approach to philosophy of science that I believe has some important affinities with philosophical positions that are often regarded as versions of “pragmatism”. However, pragmatism in both its classical and more modern forms has taken on many different commitments. I will be endorsing some of these and rejecting others—in fact, I will suggest that some elements prominent in some recent formulations of pragmatism are quite contrary in spirit to a genuine pragmatism. Among the elements I will retain from many if not all varieties of pragmatism are an emphasis on what is useful, where this is understood in a means/ends framework, a rejection of spectator theories of knowledge, skepticism about certain ways of thinking about “representation” in science, and skepticism about ambitious forms of metaphysics. Elements found in some previous versions of pragmatism that I will reject include proposals to understand (or replace) truth with some notion of community assent and skepticism about causal and physical modality. It is</div><br /> <b>Jie Gao: <a href="">Credal sensitivism: threshold vs. credence-one</a></b> (pdf, 8833 words)<br /> <div>According to an increasingly popular view in epistemology and philosophy of mind, beliefs are sensitive to contextual factors such as practical factors and salient error possibilities. A prominent version of this view, called credal sensitivism, holds that the context-sensitivity of belief is due to the context-sensitivity of degrees of belief or credence. 
Credal sensitivism comes in two variants: while credence-one sensitivism (COS) holds that maximal confidence (credence one) is necessary for belief, threshold credal sensitivism (TCS) holds that belief consists in having credence above some threshold, where this threshold doesn’t require maximal confidence. In this paper, I argue that COS has difficulties in accounting for three important features about belief: i) the compatibility between believing <i>p</i> and assigning non-zero credence to certain error possibilities that one takes to entail not-<i>p</i>, ii) the fact that outright beliefs can occur in different strengths, and iii) beliefs held by unconscious subjects. I also argue that TCS can easily avoid these problems. Finally, I consider an alleged advantage of COS over TCS in terms of explaining beliefs about lotteries. I argue that lottery cases are rather more problematic for COS than TCS. In conclusion, TCS is the most plausible version of credal sensitivism.</div><br /> <b>Katie Steele, H. Orri Stefánsson: <a href="">Belief Revision for Growing Awareness</a></b> (pdf, 10105 words)<br /> <div>The Bayesian maxim for rational learning could be described as <i>conservative change</i> from one probabilistic belief or <i>credence</i> function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of <i>conditionalisation</i>). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or concepts of which one was previously unaware? The economists Karni and Vierø (2013, 2015) make a proposal in this spirit. 
Philosophers have adopted effectively the same rule: revision in response to growing awareness should not affect the relative probabilities of propositions in one’s ‘old’ epistemic state. The rule is compelling, but only under the assumptions that its advocates introduce. It is not a general requirement of rationality, or so we argue. We provide informal counterexamples. And we show that, when awareness grows, the boundary between one’s ‘old’ and ‘new’ epistemic commitments is blurred. Accordingly, there is no general notion of conservative change in this setting.</div><br /> <b>Kevin D. Hoover: <a href="">The Discovery of Long-Run Causal Order: A Preliminary Investigation</a></b> (pdf, 14357 words)<br /> <div>The relation between causal structure and cointegration and long-run weak exogeneity is explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.</div><br /> <b>Salvatore Florio, David Nicolas: <a href="">Plurals and Mereology</a></b> (pdf, 14892 words)<br /> <div>In linguistics, the dominant approach to the semantics of plurals appeals to mereology. However, this approach has received strong criticisms from philosophical logicians who subscribe to an alternative framework based on plural logic. In the first part of the article, we offer a precise characterization of the mereological approach and the semantic background in which the debate can be meaningfully reconstructed. 
In the second part, we deal with the criticisms and assess their logical, linguistic, and philosophical significance. We identify four main objections and show how each can be addressed. Finally, we compare the strengths and shortcomings of the mereological approach and plural logic. Our conclusion is that the former remains a viable and well-motivated framework for the analysis of plurals.</div><br /> <b>Samuele Iaquinto, Claudio Calosi: <a href="">Is the World a Heap of Quantum Fragments?</a></b> (pdf, 4959 words)<br /> <div>Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: (i) the quantum state is a complete description of a physical system; (ii) the quantum (superposition) state is grounded in its terms, and (iii) the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realistic account of the quantum state. This raises the question about whether there are some other viable forms of quantum fragmentalism.</div><br />