Causality plays an important role in medieval philosophical writing:
the dominant genre of medieval academic writing was the commentary on
an authoritative work, very often a work of Aristotle. Of the works of
Aristotle thus commented on, the Physics plays a central
role. Other of Aristotle’s scientific works – On the
Heavens and the Earth, On Generation and Corruption,
and, of course, the Metaphysics – are also significant
for the study of causation: so there is a rather daunting body of work
to survey. One might, though, be tempted to argue that this concentration on
causality is simply an effect of reading Aristotle, but this would be
Symposium on Del Pinal and Spaulding, “Conceptual Centrality and Implicit Bias” (Robert Briscoe, April 23, 2018). I’m very glad to announce our latest Mind & Language symposium on Guillermo Del Pinal and Shannon Spaulding’s “Conceptual Centrality and Implicit Bias” from the journal’s February 2018 issue. …
Recent work in the physics literature demonstrates that, in particular classes of rotating spacetimes, physical light rays in general do not traverse null geodesics. Having presented this result, we discuss its philosophical significance, both for the clock hypothesis (and, in particular, a recent purported proof thereof for light clocks), and for the operational meaning of the metric field.
In the obituary of her mentor Bill Hamilton, the American entomologist and evolutionary biologist Marlene Zuk wrote that the difference between Hamilton and everyone else was “not the quality of his ideas, but their sheer abundance” (Zuk 2000). The proportion of his ideas that were actually good was about the same as anyone else’s: “the difference between Bill and most other people was that he had a total of over one hundred ideas, with the result that at least ten of them were brilliant, whereas the rest of us have only four or five ideas as long as we live, with the result that none of them are”. Hamilton indeed had many good ideas. Over the years he made substantial contributions to the study of the origin of sex, genetic conflicts, and the evolution of senescence (Ågren 2013). His best idea, and the one that bears his name, is about the evolution of social behaviour, especially altruism. Hamilton’s Rule, and the related concepts of inclusive fitness and kin selection, have been the bedrock of the study of social evolution for the past half century (Figure 1).
This article uses psychological and neural theories to illuminate the use of analogies in literary allegories. It shows how new theories of neural representation, encompassing both cognitive and emotional aspects, have the potential to make sense of many kinds of literary comparisons including allegories. The main text analyzed is George Orwell’s Animal Farm, whose effectiveness is discussed using the multiconstraint theory of analogy supplemented with observations about neural functioning.
A popular account of luck, with a firm basis in common sense, holds that a necessary condition for an event to be lucky is that it was suitably improbable. It has recently been proposed that this improbability condition is best understood in epistemic terms. Two different versions of this proposal have been advanced.
Automated geometry theorem provers start with logic-based formulations of Euclid’s axioms and postulates, and often assume the Cartesian coordinate representation of geometry. That is not how the ancient mathematicians started: for them the axioms and postulates were deep discoveries, not arbitrary postulates. What sorts of reasoning machinery could the ancient mathematicians, and other intelligent species (e.g. crows and squirrels), have used for spatial reasoning? “Diagrams in minds” perhaps? How did natural selection produce such machinery?
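The Cartesian style of automated geometry that the abstract contrasts with ancient diagrammatic reasoning can be illustrated with a small sketch (my example, not from the paper): assign rational coordinates to a figure and verify a Euclidean claim by exact algebra rather than by diagram.

```python
# A minimal sketch of coordinate-based geometry checking (my illustration):
# verify over random rational instances that the diagonals of a
# parallelogram bisect each other.
from fractions import Fraction
from random import randint

def rand_point():
    return (Fraction(randint(-9, 9)), Fraction(randint(-9, 9)))

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

for _ in range(100):
    a, b, c = rand_point(), rand_point(), rand_point()
    # D chosen so that ABCD is a parallelogram: D = A + C - B
    d = (a[0] + c[0] - b[0], a[1] + c[1] - b[1])
    # The diagonals AC and BD share a midpoint -- exactly, in rationals.
    assert midpoint(a, c) == midpoint(b, d)
print("diagonals bisect each other in all 100 random instances")
```

Exact rational arithmetic (rather than floating point) is what lets a coordinate check like this stand in for a symbolic proof; it is just this algebraic machinery that the ancient mathematicians, and presumably crows and squirrels, did without.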
George Boole (1815–1864) was an English mathematician and a
founder of the algebraic tradition in logic. He worked as a
schoolmaster in England and from 1849 until his death as professor of
mathematics at Queen’s University, Cork, Ireland. He revolutionized
logic by applying methods from the then-emerging field of symbolic
algebra to logic. Where traditional (Aristotelian) logic relied on
cataloging the valid syllogisms of various simple forms, Boole’s
method provided general algorithms in an algebraic language which
applied to an infinite variety of arguments of arbitrary
complexity. These results appeared in two major works,
The Mathematical Analysis of Logic (1847) and
The Laws of Thought (1854).
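Boole’s equational method can be sketched in a few lines (a modern reconstruction, not taken from his texts): encode “All X are Y” as the algebraic equation x(1 − y) = 0 over {0, 1}, then check that every assignment satisfying the premises also satisfies the conclusion.

```python
# A minimal sketch of Boole's algebraic treatment of the syllogism:
# "All X are Y" becomes the equation x(1 - y) = 0 over the values {0, 1}.
from itertools import product

def all_are(x, y):
    """Boole's encoding of 'All X are Y': x(1 - y) = 0."""
    return x * (1 - y) == 0

def follows(premises, conclusion):
    """Valid iff no 0/1 assignment satisfies every premise while
    violating the conclusion."""
    return all(conclusion(*v) for v in product((0, 1), repeat=3)
               if all(p(*v) for p in premises))

# Barbara: All M are P, All S are M, therefore All S are P.
barbara = follows(
    [lambda s, m, p: all_are(m, p), lambda s, m, p: all_are(s, m)],
    lambda s, m, p: all_are(s, p))
print(barbara)  # True: the syllogism is valid
```

Where traditional logic catalogued each valid mood separately, the algebraic encoding reduces validity to a uniform calculation, which is the generality the entry describes.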
Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong influence on our models of thought. The von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-von Neumann computational models they use share many characteristics with biological computation.
A simple argument proposes a direct link between realism about quantum mechanics and one kind of metaphysical holism: if elementary quantum theory is at least approximately true, then there are entangled systems with intrinsic whole states for which the intrinsic properties and spatiotemporal arrangements of salient subsystem parts do not suffice.
Think of a pointillist painting: hundreds of tiny pixels depicting a leafy scene. Each leaf is a constellation of primary colors, some expertly proportioned and arranged dots of red, yellow, and blue paint. These pixels are mutually independent: the color at one does not depend on or constrain the decoration anywhere else. Collectively, though, they determine all the contents of our painting: duplicate the geometry of the canvas and the pointillist distribution of pigments and we thereby duplicate the whole integrated scene.
It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
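The reversal-minimization claim can be made concrete with a toy simulation (my construction, not the paper’s formal framework): nature presents some number of “effects” over time, every learner must converge to the true count, and we tally reversals of opinion along the way.

```python
# Toy model: the Ockham learner conjectures exactly the effects seen so
# far; a violator leaps ahead and, to remain convergent, must retreat
# when no new effect appears -- incurring extra reversals.
def reversals(conjectures):
    """Count changes of opinion along a sequence of conjectures."""
    return sum(a != b for a, b in zip(conjectures, conjectures[1:]))

def run(policy, stream):
    seen, quiet, conj, out = 0, 0, 0, []
    for obs in stream:
        seen += obs
        quiet = 0 if obs else quiet + 1
        conj = policy(seen, quiet, conj)
        out.append(conj)
    return out

stream = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # two effects, far apart

# Ockham: always the simplest hypothesis consistent with the data.
ockham = run(lambda seen, quiet, conj: seen, stream)

# Violator: leap to seen + 1 on a fresh effect, retreat after 3 quiet steps.
def leap(seen, quiet, conj):
    if quiet == 0:
        return seen + 1
    return seen if quiet >= 3 else conj

violator = run(leap, stream)
print(reversals(ockham), reversals(violator))  # Ockham reverses less: 1 3
```

Both learners end up at the truth, but the leaping learner’s path to it has extra twists, which is the worst-case pattern the paper’s theorem generalizes.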
In the framework of Brans–Dicke theory, a cosmological model regarding the expanding universe has been formulated by considering an inter-conversion of matter and dark energy. A function of time has been incorporated into the expression of the density of matter to account for the non-conservation of the matter content of the universe. This function is proportional to the matter content of the universe. Its functional form is determined by using empirical expressions of the scale factor and the scalar field in field equations. This scale factor has been chosen to generate a signature flip of the deceleration parameter with time. The matter content is found to decrease with time monotonically, indicating a conversion of matter into dark energy. This study leads us to the expressions of the proportions of matter and dark energy of the universe. Dependence of various cosmological parameters upon the matter content has been explored.
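The signature flip of the deceleration parameter can be illustrated with an empirical ansatz of the kind commonly used in such models (the exponential form below is my assumption; the paper’s exact expressions are not reproduced here):

```latex
% Assumed ansatz: a(t) = a_0 \exp(\alpha t^f), with 0 < f < 1 and \alpha > 0.
\[
  H \equiv \frac{\dot a}{a} = \alpha f\, t^{\,f-1},
  \qquad
  q \equiv -\frac{\ddot a\, a}{\dot a^{2}} = -1 - \frac{\dot H}{H^{2}}
    = -1 + \frac{1-f}{\alpha f\, t^{\,f}} .
\]
% q > 0 (deceleration) at early times, q \to -1 (acceleration) as t \to \infty;
% the sign flips at t_{\mathrm{flip}} = \left[(1-f)/(\alpha f)\right]^{1/f}.
```

Any scale factor with this qualitative behaviour, positive q early and negative q late, produces the flip the abstract describes.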
How is it possible that models from game theory, which are typically highly idealised, can be harnessed for designing institutions through which we interact? I argue that game theory assumes that social interactions have a specific structure, which is uncovered with the help of directed graphs. The graphs make explicit how game theory encodes counterfactual information in natural collections of its models and can therefore be used to track how model-interventions change model-outcomes. For model-interventions to inform real-world design requires the truth of a causal hypothesis, namely that structural relations specified in a model approximate causal relations in the target interaction; or in other words, that the directed graph can be interpreted causally. In order to increase their confidence in this hypothesis, market designers complement their models with natural and laboratory experiments, and computational methods. Throughout the paper, the reform of a matching market for medical residents provides a case study for my proposed view, which hasn’t been previously considered in the philosophy of science.
Thermodynamics makes definite predictions about the thermal behavior of macroscopic systems in and out of equilibrium. Statistical mechanics aims to derive this behavior from the dynamics and statistics of the atoms and molecules making up these systems. A key element in this derivation is the large number of microscopic degrees of freedom of macroscopic systems. Therefore, the extension of thermodynamic concepts, such as entropy, to small (nano) systems raises many questions. Here we shall reexamine various definitions of entropy for nonequilibrium systems, large and small. These include thermodynamic (hydrodynamic), Boltzmann, and Gibbs-Shannon entropies. We shall argue that, despite its common use, the last is not an appropriate physical entropy for such systems, either isolated or in contact with thermal reservoirs: physical entropies should depend on the microstate of the system, not on a subjective probability distribution. To square this point of view with experimental results of Bechhoefer we shall argue that the Gibbs-Shannon entropy of a nanoparticle in a thermal fluid should be interpreted as the Boltzmann entropy of a dilute gas of Brownian particles in the fluid.
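For reference, the two entropies at issue can be written in their standard textbook forms (not quoted from the paper): the Boltzmann entropy depends on the system’s actual microstate X through its macrostate M(X), while the Gibbs-Shannon entropy is a functional of an ensemble density ρ.

```latex
\[
  S_B(X) = k_B \log \left| \Gamma_{M(X)} \right| ,
  \qquad
  S_{GS}(\rho) = -k_B \int \rho(X) \log \rho(X)\, dX ,
\]
% where \Gamma_{M(X)} is the phase-space region of microstates sharing the
% macrostate of X.  For an isolated Hamiltonian system, Liouville's theorem
% gives dS_{GS}/dt = 0, so S_{GS} cannot track the entropy increase that
% S_B exhibits on approach to equilibrium.
```

The Liouville-invariance of the Gibbs-Shannon entropy is the standard formal ground for the abstract’s complaint that it fails as a physical entropy for nonequilibrium systems.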
We discuss an article by Steven Weinberg expressing his discontent with the usual ways to understand quantum mechanics. We examine the two solutions that he considers and criticizes, and we propose another one, which he does not discuss: the pilot wave theory, or Bohmian mechanics, to which his criticisms do not apply.
Scientific research is almost always conducted by communities of scientists of varying size and complexity. Such communities are effective, in part, because they divide their cognitive labor: not every scientist works on the same project. Scientists manage to do this without a central authority allocating them to different projects. Thanks largely to the pioneering studies of Philip Kitcher and Michael Strevens, understanding this self-organization has become an important area of research in the philosophy of science.
Recently the first protective measurement has been realized in experiment [Nature Phys. 13, 1191 (2017)], which can measure the expectation value of an observable from a single quantum system. This raises an important and pressing issue of whether protective measurement implies the reality of the wave function. If the answer is yes, this will improve the influential PBR theorem [Nature Phys. 8, 475 (2012)] by removing auxiliary assumptions, and help settle the issue about the nature of the wave function. In this paper, we demonstrate that this is indeed the case. It is shown that a ψ-epistemic model and quantum mechanics have different predictions about the variance of the result of a Zeno-type protective measurement with finite N.
It is difficult for the metaphysician to not be fascinated by Stephen Hawking’s question, ‘What is it that breathes fire into the equations and makes a universe for them to govern?’ (Hawking, 1988, p. 174). Like a Tuscan countryside in the eyes of a painter, this statement inspires quite the stream of consciousness, at least in my idiosyncratic mind. For one thing, Hawking’s wording sounds as if abstract entities provide push and pull to the universe. Why would the equations govern anything, rather than merely describing how events tend to unfold? Objections aside though, I like Hawking’s question because it makes me wonder, given the mathematical nature of fundamental physical theories, what, in the realm of concreta, the lofty equations are describing. And, in another blip of consciousness, I am reminded of my Russellian monist friends, who would perhaps see, in Hawking’s question, the related question: how do we know what is ontologically fundamental, if science just details the nomological-causal structure of the world, and remains silent about its underlying categorical properties? Not quite like the rich hues of Tuscany at sunset, but alas, the mathematical nature of physics intrigues me.
This study investigated the development of intuitions about which properties are associated with the brain and which are associated with the body. A sample of 60 children aged 6, 8, and 10 years, as well as a sample of 20 adults, were told about a brain transplant between two individuals and were asked about where certain properties resided after the transplant. Adults and older children construed the characteristics associated with fine-motor behaviour, culpability, social contract and best friendships as transferring with the brain. Characteristics associated with gross-motor behaviour, physical/biological properties, ownership and familial relationships were more likely to be seen as remaining with the body. Domain-based explanations for this pattern of results are discussed.
Ongoing empirical discoveries in molecular biology have generated novel conceptual challenges and perspectives. Philosophers of biology have reacted to these trends when investigating the practice of molecular biology and contributed to scientific debates on methodological and conceptual matters. This article reviews some major philosophical issues in molecular biology. First, philosophical accounts of mechanistic explanation yield a notion of explanation in the context of molecular biology that does not have to rely on laws of nature and comports well with molecular discovery. Second, reductionism continues to be debated and is increasingly rejected by scientists. Philosophers have likewise moved away from reduction toward integration across fields or integrative explanations covering several levels of organization. Third, although the gene concept has undergone substantial transformation and even fragmentation, it still enjoys widespread use by molecular biologists, which has prompted philosophers to understand the empirical reasons for this. At the same time, it has been argued that the notion of ‘genetic information’ is largely an empty metaphor, which generates the illusion of explanatory understanding without offering an adequate explanation of molecular and developmental mechanisms.
Many viewers of time travel movies and readers of time travel fiction see loops where there are none. The loops they think are there are persistent cognitive illusions. In what follows I explain why they are illusions and how the illusion arises.
This article discusses the following issues about space and time: whether they are absolute or relative, whether they depend on minds, what their topological and metrical structures may be, McTaggart’s argument against the reality of time, the ensuing split between static and dynamic theories of time, problems with presentism, and the possibility of time travel. Our opening questions are posed in the following query from Kant: What, then, are space and time? Are they real existences? Are they only determinations or relations of things, yet such as would belong to things even if they were not intuited?
Karl Popper argued in 1974 that evolutionary theory contains no testable laws and is therefore a metaphysical research program. Four years later, he said that he had changed his mind. Here we seek to understand Popper’s initial position and his subsequent retraction. We argue, contrary to Popper’s own assessment, that he did not change his mind at all about the substance of his original claim. We also explore how Popper’s views have ramifications for contemporary discussion of the nature of laws and the structure of evolutionary theory.
The rise of medically unexplained conditions like fibromyalgia and chronic fatigue syndrome in the United States looks remarkably similar to the explosion of neurasthenia diagnoses in the late nineteenth century. In this paper, I argue the historical connection between neurasthenia and today’s medically unexplained conditions hinges largely on the uncritical acceptance of naturalism in medicine. I show how this cultural acceptance shapes the way in which we interpret and make sense of nervous distress while, at the same time, neglecting the unique social and historical forces that continue to produce it. I draw on the methods of hermeneutic philosophy to expose the limits of naturalism and forward an account of health and illness that acknowledges the extent to which we are always embedded in contexts of meaning that determine how we experience and understand our suffering.
A central question for philosophical psychology is which mental faculties form natural kinds. There is hot debate over the kind status of faculties as diverse as consciousness, seeing, concepts, emotions, constancy and the senses. In this paper, I take emotions and concepts as my main focus, and argue that questions over the kind status of these faculties are complicated by the undeservedly overlooked fact that natural kinds are indeterminate in certain ways. I will show that indeterminacy issues have led to an impasse in the debates over emotions and concepts. I examine possible ways to resolve this impasse, and argue against one of them. I then suggest a different method, which places more emphasis on a close analysis of predictive and explanatory practices in psychology. I argue that when we apply this method, a new position emerges: that it is indeterminate whether concepts or emotions are natural kinds. They are neither determinately natural kinds, nor determinately not natural kinds. Along the way, we will see that natural kinds have been put to two completely different theoretical uses, which have often been blurred together, and that they are ill-suited to fulfil one of them.
You may very well know the Five Books website, where a wide-ranging cast of contributors are asked “to make book recommendations in their area of work and explain their choices in an interview”. The recommendations are often quirky, sometimes even slightly bizarre, but rarely without interest. …
In this paper, I argue that the “positive argument” for Constructive Empiricism (CE), according to which CE “makes better sense of science, and of scientific activity, than realism does” (van Fraassen 1980, 73), is an Inference to the Best Explanation (IBE). But constructive empiricists are critical of IBE, and thus they have to be critical of their own “positive argument” for CE. If my argument is sound, then constructive empiricists are in the awkward position of having to reject their own “positive argument” for CE by their own lights.
Within a few decades it is likely that gene editing technologies will become increasingly viable, safe, and cheap. As scientists uncover the genetic basis for heritable personality traits, including different cognitive styles, parents will face hard choices. Some of these traits will involve trade-offs from the standpoint of the individual's welfare, while others will involve trade-offs between what is best for each and what is good for all. A simple example is extraversion, which positively correlates with subjective well-being and increased sociality, but which negatively correlates with academic performance. Another example is neuroticism, which can lead to increased achievement but also a greater risk of anxiety and depression. Although we think we should generally defer to the informed choices of parents about what kinds of children to create, we argue that decisions to manipulate polygenic personality traits will be much more ethically complicated than choosing our children’s eye color or hair type. We end by defending the principle of regulatory parsimony, which holds that when legislation is necessary to prevent serious harms we should aim for simple laws that apply to all, rather than micro-managing parental choices that shape the cognitive traits of their children.
Is phenomenal consciousness constitutively related to cognitive access? Despite being a fundamental issue for any science of consciousness, its empirical study faces a severe methodological puzzle. Recent years have seen numerous attempts to address this puzzle, either in practice, by offering evidence for a positive or negative answer, or in principle, by proposing a framework for eventual resolution. The present paper critically considers these endeavours, including partial-report, metacognitive and no-report paradigms, as well as the theoretical proposal that we can make progress by studying phenomenal consciousness as a natural kind. It is argued that the methodological puzzle remains obdurately with us and that, for now, we must adopt an attitude of humility towards the phenomenal.