The New England Journal of Medicine (NEJM) yesterday announced new guidelines for statistical reporting by authors*. The ASA describes the change as “in response to the ASA Statement on P-values and Statistical Significance and subsequent The American Statistician special issue on statistical inference” (ASA I and II, in my abbreviation). …
The dispute between defenders and opponents of extended cognition (EC) has come to a dead end, as no agreement on what the mark of the cognitive is could be found. Recently, many authors have therefore pursued a different strategy: they focus on the notion of constitution rather than the notion of cognition to determine whether constituents of cognitive phenomena can be external to the brain. One common strategy is to apply the new mechanists’ mutual manipulability account (MM). In this paper, I will analyze whether this strategy can be successful. In doing so, I will focus on David Kaplan’s (2012) version of this strategy. It will turn out that MM alone is insufficient for answering the question of whether EC is true. What I call the Challenge of Trivial Extendedness arises because mechanisms for cognitive behaviors are extended in ways that nobody would want to count as cases of EC. I will argue that this challenge can be met by adding a further necessary condition: cognitive constituents of mechanisms satisfy MM and they are what I call behavior unspecific.
According to Danièle Moyal-Sharrock, Wittgenstein’s On Certainty presents a theory of hinges, and hinges have a role to play in a foundationalist epistemology (2013, this journal). Michael Williams (2005) and Annalisa Coliva (2013, this journal) have claimed that the hinges are not suitable to play such a role, as they are not shared universally. Moyal-Sharrock has replied that a subset of the hinges is suitable to play such a role: the “universal” hinges, an account of which she developed in her 2004 book Understanding Wittgenstein’s On Certainty (2013, this journal). I argue that for Moyal-Sharrock’s reply to be sustained, she must construe the set of universal hinges much more narrowly than she currently does. For instance, Moyal-Sharrock claims that “I have a brain” is a universal hinge, which places people who know nothing about brains outside the bounds of sense. I also provide a novel way of thinking about the universal hinges, which I argue is better textually motivated than Moyal-Sharrock’s own way, and which provides a set of hinges more suitable to play a role in foundationalist epistemology.
Lewis on magnetism: Reply to Janssen-Lauret and MacBride
Posted on Friday, 19 Jul 2019
In my 2014 paper "Against Magnetism", I
argued (a) that the meta-semantics Lewis defended in "Putnam's Paradox" and
pp. 45–49 of "New Work" is unattractive, (b) that it does not fit what Lewis
wrote about meta-semantics elsewhere, and (c) that it was never Lewis's
considered view. …
On a Humean metaphysics, energy conservation implies a vast conspiracy in the arrangement of things throughout spacetime, somewhat like this:
Wherever there is a change in energy in one region there is a corresponding balancing change in another region. …
There is long-standing agreement among both philosophers and linguists that the term ‘counterfactual conditional’ is misleading, if not a misnomer. Speakers of both non-past subjunctive (or ‘would’) conditionals and past subjunctive (or ‘would have’) conditionals need not convey counterfactuality. The relationship between the conditionals in question and the counterfactuality of their antecedents is thus not one of presupposition but one of conversational implicature. This paper provides a thorough examination of the arguments against the presupposition view as applied to past subjunctive conditionals and finds none of them conclusive. All the relevant linguistic data, it is shown, are compatible with the assumption that past subjunctive conditionals presuppose the falsity of their antecedents. This finding is not only interesting in its own right. It is of vital importance both to whether we should consider antecedent counterfactuality to be part of the conventional meaning of the conditionals in question and to whether there is a deep difference between indicative and subjunctive conditionals.
It has seemed, to many, that there is an important connection between the ways in which some theoretical posits explain our observations, and our reasons for being ontologically committed to those posits. One way to spell out this connection is in terms of what has become known as the explanatory criterion of ontological commitment. This is, roughly, the view that we ought to posit only those entities that are indispensable to our best explanations. The motivation for a criterion such as this is clear: it aims to rule out commitment to ‘ontologically dubious’ entities—entities such as undetectable fairies at the bottom of one’s garden. The explanatory criterion is sometimes framed as a fairly strong thesis: that we ought (epistemically) to posit all and only those entities that are indispensable to the best available explanations of our observations.
According to a now widely discussed analysis by Itamar Pitowsky, the theoretical problems of QT originate from two ‘dogmas’: the first forbids the use of the notion of measurement in the fundamental axioms of the theory; the second imposes an interpretation of the quantum state as representing a system’s objectively possessed properties and evolution. In this paper I argue that, contrary to Pitowsky’s analysis, depriving the quantum state of its ontological commitment is not sufficient to solve the conceptual issues that affect the foundations of QT.
Convergent and divergent thought are promoted as key constructs of creativity. Convergent thought is defined and measured in terms of the ability to perform on tasks where there is one correct solution, and divergent thought is defined and measured in terms of the ability to generate multiple solutions. However, these characterizations of convergent and divergent thought present inconsistencies and do not capture the reiterative processing, or ‘honing’, of an idea that characterizes creative cognition. Research on formal models of concepts and their interactions suggests that different creative outputs may be projections of the same underlying idea at different phases of a honing process. This leads us to redefine convergent thought as thought in which the relevant concepts are considered from conventional contexts, and divergent thought as thought in which they are considered from unconventional contexts. Implications for the assessment of creativity are discussed.
Although Darwinian models are rampant in the social sciences, social scientists do not face the problem that motivated Darwin’s theory of natural selection: the problem of explaining how lineages evolve even though any traits they acquire are regularly discarded at the end of the lifetimes of the individuals that acquired them. While the rationale for framing culture as an evolutionary process is correct, it does not follow that culture is a Darwinian or selectionist process, or that population genetics provides viable starting points for modeling cultural change. This paper lays out step-by-step arguments as to why a selectionist approach to cultural evolution is inappropriate, focusing on the lack of randomness and the lack of a self-assembly code. It summarizes an alternative evolutionary approach to culture: self-other reorganization via context-driven actualization of potential.
Much has been said about Moore’s proof of the external world, but the notion of proof that Moore employs has been largely overlooked. I suspect that most have either found nothing wrong with it or have thought it somehow irrelevant to whether the proof serves its anti-skeptical purpose. I show, however, that Moore’s notion of proof is highly problematic. For instance, it trivializes in the sense that any known proposition is provable. This undermines Moore’s proof as he conceives it, since it introduces a skeptical regress that he goes to great lengths to resist. I go on to consider various revisions of Moore’s notion of proof and finally settle on one that I think is adequate for Moore’s purposes and faithful to what he says concerning immediate knowledge.
The paper discusses two contemporary views about the foundation of statistical mechanics and deterministic probabilities in physics: one that regards a measure on the initial macro-region of the universe as a probability measure that is part of the Humean best system of laws (Mentaculus) and another that relates it to the concept of typicality. The first view is tied to Lewis’ Principal Principle, the second to a version of Cournot’s principle. We will defend the typicality view and address open questions about typicality and the status of typicality measures.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Recent work on the epistemology of moral deference suggests that moral knowledge must derive from a knower’s own ability in a way that knowledge acquired easily through testimony need not. This paper transposes this idea to the collective level, and in doing so, shows how two leading accounts of collective knowledge, the joint acceptance account and the distributed account, would be best positioned to countenance group-level moral knowledge as knowledge creditable to group-level ability. The upshot is that we uncover some hitherto unnoticed puzzles to do with defeat in collective moral epistemology, puzzles which reveal collective moral knowledge to be surprisingly fragile vis-à-vis higher-order defeat compared to individual-level moral knowledge. A consequence of this disanalogy is that more work needs to be done if non-skeptical collective moral epistemology is to hold water.
In his 1961 paper, “Irreversibility and Heat Generation in the Computing Process,” Rolf Landauer speculated that there exists a fundamental link between heat generation in computing devices and the computational logic in use. According to Landauer, this heating effect is the result of a connection between the logic of computation and the fundamental laws of thermodynamics. The minimum heat generated by computation, he argued, is fixed by rules independent of its physical implementation. The limits are fixed by the logic and are the same no matter the hardware, or the way in which the logic is implemented. His analysis became the foundation for both a new literature, termed “the thermodynamics of computation” by Charles Bennett, and a new physical law, Landauer’s principle.
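As a numerical companion to the bound the passage describes: Landauer's principle is standardly stated as a minimum dissipation of k_B·T·ln 2 of heat per erased bit (the explicit formula is standard physics rather than something spelled out in the abstract above); a minimal sketch:

```python
import math

# Boltzmann constant in joules per kelvin (2019 SI exact value).
K_B = 1.380649e-23

def landauer_bound(temperature_k):
    """Minimum heat (in joules) dissipated by erasing one bit at the
    given temperature, per Landauer's principle: k_B * T * ln 2."""
    return K_B * temperature_k * math.log(2)

# The bound depends only on temperature, not on the hardware:
print(landauer_bound(300))  # ~2.87e-21 J per erased bit at room temperature
```

Whatever the physical implementation, erasing a bit at 300 K must dissipate at least about 2.87 zeptojoules; only the temperature enters the bound, which echoes Landauer's claim that the limit is fixed by the logic rather than the hardware.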
Suppose I promise my class to grade all the weekly homework within three days. In week four, I fail and am late with grading. If the content of my promise was simply the proposition
(1) that I grade all the homework within three days,
then after week four, no matter how speedy I am with grading the homework, proposition (1) is just plain false. …
Imagine two objects, M and H, where M has the intrinsic causal power of emitting some sort of a pulse once per minute and H has the intrinsic causal power of pulsing once per hour, and suppose M and H are causally separated from the rest of the universe. …
Radical Embodied Cognitive Science (REC) tries to understand as much cognition as it can without positing contentful mental entities. Thus, in one prominent formulation, REC claims that content is involved neither in visual perception nor in any more elementary form of cognition. Arguments for REC tend to rely heavily on considerations of ontological parsimony, with authors frequently pointing to the difficulty of explaining content in naturalistically acceptable terms. However, many classic concerns about the difficulty of naturalizing content likewise threaten the credentials of intentionality, which even advocates of REC take to be a fundamental feature of cognition. In particular, concerns about the explanatory role of content and about indeterminacy can be run on accounts of intentionality as well. Issues about explanation can be avoided, intriguingly if uncomfortably, by dramatically reconceptualizing or even renouncing the idea that intentionality can explain. As for indeterminacy, Daniel Hutto and Erik Myin point the way toward a response, appropriating an idea from Ruth Millikan. I take it a step further, arguing that attention to the ways that beliefs’ effects on behavior are modulated by background beliefs can help illuminate the facts that underlie their intentionality and content.
Let me begin with an admission: I am neither a Neo-Kantian myself nor a historian of philosophy. I became aware of Cassirer’s work through my search for precedents for the kind of structural realism that Ladyman was developing, as captured in the slogan ‘The world is structure’. As is now well-known, this differs from Worrall’s form of structural realism in that the latter maintains ‘All that we know is structure’ and in his early writings, Worrall followed Poincaré in his insistence that the nature of the world, beyond this structure, was unknown to us. Subsequently he adopted a kind of agnosticism with regard to this ‘nature’ but in that earlier form we find certain Kantian resonances, which is not surprising of course, given its ancestry in Poincaré’s work. One might initially think that the neo-Kantian would find Ladyman’s collapse of ‘nature’ into ‘structure’ to be unfortunate but, of course, if one takes ‘the world’ of the realist to be the phenomenal world, with the noumena taken negatively and not regarded as the world of determinate but unknowable objects, in the way that Worrall conceives of it (and here I recognise that I am stepping into a minefield!) then there may not be such a chasm between these two views as might at first appear.
Many people are drawn to the Prioritarian view that "Benefiting people matters more the worse off these people are." (Parfit 1997, 213) Importantly, this is not just the (utilitarian-compatible) idea that many goods have diminishing marginal value, so that better-off people are likely to benefit less than worse-off people from a certain amount of material goods. …
We reexamine some of the classic problems connected with the use of cardinal utility functions in decision theory, and discuss Patrick Suppes’ contributions to this field in light of a reinterpretation we propose for these problems. We analytically decompose the doctrine of ordinalism, which only accepts ordinal utility functions, and distinguish between several doctrines of cardinalism, depending on what components of ordinalism they specifically reject. We identify Suppes’ doctrine with the major deviation from ordinalism that conceives of utility functions as representing preference differences, while being nonetheless empirically related to choices. We highlight the originality, promises and limits of this choice-based cardinalism.
In this paper, I examine the decision-theoretic status of risk attitudes. I start by providing evidence showing that the risk attitude concepts do not play a major role in the axiomatic analysis of the classic models of decision-making under risk. This can be interpreted as reflecting the neutrality of these models between the possible risk attitudes. My central claim, however, is that such neutrality needs to be qualified and the axiomatic relevance of risk attitudes needs to be re-evaluated accordingly. Specifically, I highlight the importance of the conditional variation and the strengthening of risk attitudes, and I explain why they establish the axiomatic significance of the risk attitude concepts. I also present several questions for future research regarding the strengthening of risk attitudes.
Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We argue that this type of autonomous social machine has provided a new paradigm for the design of intelligent systems, marking a new phase in the field of AI. The consequences of this observation range from the methodological and philosophical to the ethical. On the one hand, it emphasises the role of Human-Computer Interaction in the design of intelligent systems; on the other, it draws attention to the risks both for individual human beings and for a society relying on mechanisms that are not necessarily controllable. The difficulties companies face in regulating the spread of misinformation, as well as those authorities face in protecting task-workers managed by a software infrastructure, may be just some of the effects of this technological paradigm.
Children acquire complex concepts like DOG earlier than simple concepts like BROWN, even though our best neuroscientific theories suggest that learning the former is harder than learning the latter and, thus, should take more time (Werning 2010). This is the Complex-First Paradox. We present a novel solution to the Complex-First Paradox. Our solution builds on a generalization of Xu and Tenenbaum’s (2007) Bayesian model of word learning. By focusing on a rational theory of concept learning, we show that it is easier to infer the meaning of complex concepts than that of simple concepts.
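The inference pattern behind this claim can be illustrated with a toy version of the Bayesian "size principle" employed in Xu and Tenenbaum's model: a hypothesis with a smaller extension assigns higher likelihood to the examples it covers, so a narrow ("complex") concept overtakes a broad ("simple") one after only a few consistent examples. The hypothesis names and extension sizes below are invented for illustration and are not taken from the paper:

```python
def posterior(sizes, prior, n):
    """Posterior over hypotheses after n examples consistent with all of them.
    Size principle: likelihood of n consistent examples under h is (1/|h|)**n."""
    unnorm = {h: prior[h] * (1.0 / s) ** n for h, s in sizes.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# A narrow "complex" concept (DOG) vs a broad "simple" one (BROWN-THING),
# both consistent with the observed examples; sizes are made up.
sizes = {"DOG": 10, "BROWN-THING": 100}
prior = {"DOG": 0.5, "BROWN-THING": 0.5}

for n in (1, 3):
    print(n, posterior(sizes, prior, n))
```

After three consistent examples the narrow hypothesis already dominates the posterior, which is the sense in which the meaning of a complex concept is easier to infer than that of a simple one.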
Although interest in emergence has grown in recent years, there does not seem to be a consensus on whether it is a non-trivial, interesting notion or on whether the concept of reduction is relevant to its characterization. Another key issue is whether emergence should be understood as an epistemic notion or whether there is a plausible ontological concept of emergence. The aim of this work is to propose an epistemic notion of contextual emergence on the basis of which one may tackle those issues.
From the point of view of cognitive development, the present paper by Bart Geurts is highly relevant, welcome, and timely. It speaks to a fundamental puzzle in developmental pragmatics: one that was once recognized as a puzzle, was later considered resolved by many researchers, but may nowadays return with its full puzzling force.
At around their third birthday, children begin to enforce social norms on others impersonally, often using generic normative language, but little is known about the developmental building blocks of this abstract norm understanding. Here, we investigate whether even toddlers show signs of enforcing on others interpersonally how “we” do things. In an initial dyad, 18-month-old infants learnt a simple game-like action from an adult. In two experiments, the adult either engaged infants in a normative interactive activity (stressing that this is the way “we” do it) or, as a non-normative control, marked the same action as idiosyncratic, based on individual preference. In a test dyad, infants had the opportunity to spontaneously intervene when a puppet partner performed an alternative action. Infants intervened, corrected, and directed the puppet more in the normative than in the non-normative conditions. These findings suggest that, during the second year of life, infants develop second-personal normative expectations about their partner’s behavior (“You should do X!”) in social interactions, thus making an important step toward
I argue that our best science supports the rationalist idea that, independent of reasoning, emotions aren’t integral to moral judgment. There’s ample evidence that ordinary moral cognition often involves conscious and unconscious reasoning about an action’s outcomes and the agent’s role in bringing them about. Emotions can aid in moral reasoning by, for example, drawing one’s attention to such information. However, there is no compelling evidence for the decidedly sentimentalist claim that mere feelings are causally necessary or sufficient for making a moral judgment or for treating norms as distinctively moral. I conclude that, even if moral cognition is largely driven by automatic intuitions, these shouldn’t be mistaken for emotions or their non-cognitive components. Non-cognitive elements in our psychology may be required for normal moral development and motivation but not necessarily for mature moral judgment.
Consider an item x with a half-life of one hour. Then over the period of an hour it has a 50% chance of decaying, while over the period of a second it has only a 0.02% chance of decaying. Imagine that x has no way of changing except by decaying, and that x is causally isolated from all outside influences. …
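The 0.02% figure can be checked directly from the exponential-decay model implicit in the half-life talk (a minimal sketch; the function name is mine):

```python
def decay_probability(duration_s, half_life_s):
    """Probability that the item decays within duration_s seconds,
    given exponential decay with the stated half-life."""
    return 1 - 2 ** (-duration_s / half_life_s)

HALF_LIFE = 3600  # one hour, in seconds

print(decay_probability(3600, HALF_LIFE))  # 0.5 over one hour
print(decay_probability(1, HALF_LIFE))     # ~0.000193, i.e. roughly 0.02% over one second
```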
Socialism is a rich tradition of political thought and practice, the
history of which contains a vast number of views and theories, often
differing in many of their conceptual, empirical, and normative
commitments. In his 1924 Dictionary of Socialism, Angelo
Rappoport canvassed no fewer than forty definitions of socialism,
telling his readers in the book’s preface that “there are
many mansions in the House of Socialism” (Rappoport 1924: v,
34–41). To take even a relatively restricted subset of socialist
thought, Leszek Kołakowski could fill over 1,300 pages in his
magisterial survey of Main Currents of Marxism
(Kołakowski 1978).