A high heritability estimate usually corresponds to a situation where trait variation is largely caused by genetic variation. However, in some cases of gene-environment covariance, causal intuitions about the sources of trait difference can vary, leading experts to disagree as to how the heritability estimate should be interpreted. We argue that the source of contention for these cases is an inconsistency in the interpretation of the concepts ‘genotype’, ‘phenotype’ and ‘environment’. We propose an interpretation of these terms under which trait variance initially caused by genetic variance is subsumed into the heritability estimate for all cases of gene-environment covariance.
The argument against mind-body identity theory in Naming and Necessity is directed against a theory advocated in Place (1956), Smart (1963), Lewis (1966), and Armstrong (1968). Their psycho-physical identity theory attempted to vindicate the reality of mental processes by identifying pains, sensations, and consciousness itself with brain states and processes. It arose in reaction to phenomenalism and behaviorism, the latter in both its scientific form, illustrated by B.F. Skinner, and its philosophical or “logical” form, illustrated by Gilbert Ryle. Early versions didn’t specify which brain states and processes were identical with pain states, sensation states, or consciousness. That was a job for neuroscientists. The philosophical job was to defeat conceptual objections to the possibility that any such identification could be correct and to articulate the explanatory advantages of incorporating the mental into physical science.
The issue of independent evidence is of central importance to hypothesis testing in evolutionary biology. Suppose you wanted to test the hypothesis that long fur is an adaptation to cold climate and short fur is an adaptation to warm climate. You look at 20 bear species; 10 live in a cold climate and have long fur, and 10 live in a warm climate and have short fur. Is there any reason to think that the data do not confirm the adaptive hypothesis? One worry is that the species in each group resemble each other merely because they inherited their fur length from a common ancestor of the group (and that the temperatures experienced by ancestors and descendants are similar). This influence of ancestor on descendant is often called phylogenetic inertia (e.g., see Harvey and Pagel 1991).
There are many reasons for objecting to quantifying the ‘proof beyond reasonable doubt’ standard of criminal law as a percentage probability. They are divided into ethical and policy reasons, on the one hand, and reasons arising from the nature of logical probabilities, on the other. It is argued that these reasons are substantial and suggest that the criminal standard of proof should not be given a precise number. But those reasons do not rule out a minimal imprecise number. ‘Well above 80%’ is suggested as a standard, implying that any attempt by a prosecutor or jury to take the ‘proof beyond reasonable doubt’ standard to be 80% or less should be ruled out as a matter of law.
Some vegetative state patients show fMRI responses similar to those of healthy controls when instructed to perform mental imagery tasks. Many authors have argued that this provides evidence that such patients are in fact conscious, as response to commands requires intentional agency. I argue for an alternative reading, on which responsive patients have a deficit similar to that seen in severe forms of akinetic mutism. Akinetic mutism is marked by the inability to form and maintain intentions to act. Responsive patients are likely still conscious. However, the route to this conclusion does not support attributions of intentional agency. I argue that aspects of consciousness, rather than broad diagnostic categories, are the more appropriate target of empirical investigation. Investigating aspects of consciousness provides a better method for investigating profound disorders of consciousness.
The aim of the paper is to understand what is involved in the claim that a mental state in general, and love in particular, is based on reasons. Love, like many other mental states, can be evaluated in various ways: it can be considered appropriate, deserved, enriching, perverse, destructive, etc., but this does not mean that love is based on reasons. In this paper I present and defend a test that a mental state has to satisfy if it is to count as based on reasons. This test will be used to construct a new argument in favour of Frankfurt's position that love is not based on reasons.
Vagueness and precision alike are characteristics which can only belong to a representation, of which language is an example. They have to do with the relation between a representation and that which it represents. Apart from representation…there can be no such thing as vagueness or precision… Moreover, David Lewis asserts: “The only intelligible account of vagueness locates it in our thought and language.” And most philosophers nowadays agree that all vagueness is a feature of representations, and, in particular, a feature of language or thought.
Charlie Dunbar Broad (1887–1971) was an English philosopher who for the most part of his life was associated with Trinity College, Cambridge. Broad’s early interests were in science and mathematics. Despite being successful in these, he came to believe that he would never be a first-rate scientist, and turned to philosophy. Broad’s interests were exceptionally wide-ranging. He devoted his philosophical acuity to the mind-body problem; to the nature of perception, memory, introspection, and the unconscious; and to the nature of space, time and causation. He also wrote extensively on the philosophy of probability and induction, ethics, the history of philosophy and the philosophy of religion.
After a sketch of the optimism and high aspirations of History and Philosophy of Science when I first joined the field in the mid-1960s, I go on to describe the disastrous impact of "the strong programme" and social constructivism in history and sociology of science. Despite Alan Sokal's brilliant spoof article, and the "science wars" that flared up partly as a result, the whole field of Science and Technology Studies (STS) is still adversely affected by social constructivist ideas. I then go on to spell out how, in my view, STS ought to develop. It is, to begin with, vitally important to recognize the profoundly problematic character of the aims of science. There are substantial, influential and highly problematic metaphysical, value and political assumptions built into these aims. Once this is appreciated, it becomes clear that we need a new kind of science which subjects problematic aims, and the problematic assumptions inherent in them, to sustained imaginative and critical scrutiny as an integral part of science itself. This needs to be done in an attempt to improve the aims and methods of science as science proceeds. The upshot is that science, STS, and the relationship between the two are all transformed. STS becomes an integral part of science itself, and part of an urgently needed campaign to transform universities so that they become devoted to helping humanity create a wiser world.
Kantian philosophy of space, time and gravity is significantly affected in three ways by particle physics. First, particle physics deflects Schlick’s General Relativity-based critique of synthetic a priori knowledge. Schlick argued that since geometry was not synthetic a priori, nothing was—a key step toward logical empiricism. Particle physics suggests a Kant-friendlier theory of space-time and gravity presumably approximating General Relativity arbitrarily well, massive spin-2 gravity, while retaining a flat space-time geometry that is indirectly observable at large distances. The theory’s roots include Seeliger and Neumann in the 1890s and Einstein in 1917 as well as 1920s-30s physics. Such theories have seen renewed scientific attention since 2000 and especially since 2010 due to breakthroughs addressing early 1970s technical difficulties.
The paper aims to show how mathematical practice, in particular practice involving visual representations, can lead to new mathematical results. The argument is based on a case study from a relatively recent and promising mathematical subject: geometric group theory. The paper discusses how the representation of groups by Cayley graphs made it possible to discover new geometric properties of groups.
The idea that a serious threat to scientific realism comes from unconceived alternatives has been proposed by van Fraassen, Sklar, Stanford and Wray, among others. Peter Lipton’s critique of this threat from underconsideration is examined briefly in terms of its logic and its applicability to the case of space-time and particle physics. The example of space-time and particle physics indicates a generic heuristic for quantitative sciences for constructing potentially serious cases of underdetermination, involving a one-parameter family of rivals Tm (m real and small) that work as a team rather than as a single rival against the default theory T. In important examples this new parameter has a physical meaning (e.g., particle mass) and makes a crucial conceptual difference, shrinking the symmetry group and in some cases putting gauge freedom, formal indeterminism vs. determinism, the presence of the hole argument, etc., at risk. Methodologies akin to eliminative induction or tempered subjective Bayesianism are more demonstrably reliable than the custom of attending only to “our best theory”: they can lead either to a serious rivalry or to improved arguments for the favorite theory. The example of General Relativity (massless spin-2 in particle physics terminology) vs. massive spin-2 gravity, a recent topic in the physics literature, is discussed. Arguably the General Relativity and philosophy literatures have ignored the most serious rival to General Relativity.
The slogan ‘Evidence of evidence is evidence’ may sound plausible, but what it means is far from clear. It has often been applied to connect evidence in the current situation to evidence in another situation. The relevant link between situations may be diachronic (White 2006: 538): is present evidence of past or future evidence of something present evidence of that thing? Alternatively, the link may be interpersonal (Feldman 2007: 208): is evidence for me of evidence for you of something evidence for me of that thing? Such interperspectival links have been discussed because they can destabilize interperspectival disagreements. In their own right they have become the topic of a lively recent debate (Fitelson 2012, Feldman 2014, Roche 2014, Tal and Comesaña 2014).
Okasha, in *Evolution and the Levels of Selection*, convincingly argues that two rival statistical decompositions of covariance, namely contextual analysis and the neighbour approach, are better causal decompositions than the hierarchical Price approach. However, he claims that this result cannot be generalized in the special case of soft selection and argues that the Price approach represents in this case a better option. He provides several arguments to substantiate this claim. In this paper, I demonstrate that these arguments are flawed and argue that neither the Price equation nor the contextual and neighbour partitionings sensu Okasha are adequate causal decompositions in cases of soft selection. The Price partitioning is generally unable to detect cross-level by-products and this naturally also applies to soft selection. Both contextual and neighbour partitionings violate the fundamental principle of determinism that the same cause always produces the same effect. I argue that a fourth partitioning widely used in the contemporary social sciences, under the generic term of ‘hierarchical linear model’ and related to contextual analysis understood broadly, addresses the shortcomings of the three other partitionings and thus represents a better causal decomposition.
For evolution by natural selection to occur, it is classically admitted that the three ingredients of variation, difference in fitness and heredity are necessary and sufficient. In this paper, I show, using simple individual-based models, that evolution by natural selection can occur in populations of entities in which neither heredity nor reproduction is present. Furthermore, I demonstrate by complexifying these models that both reproduction and heredity are predictable Darwinian products (i.e. complex adaptations) of populations initially lacking these two properties but in which new variation is introduced via mutations. Later on, I show that replicators are not necessary for evolution by natural selection, but rather are the ultimate product of such processes of adaptation. Finally, I assess the value of these models in three relevant domains for Darwinian evolution.
The religious phenomenon is a complex one in many respects. In recent years an increasing number of theories on the origin and evolution of religion have been put forward. Each one of these theories rests on a Darwinian framework, but there is a lot of disagreement about which bits of the framework account best for the evolution of religion. Is religion primarily a by-product of some adaptation? Is it itself an adaptation, and if it is, does it benefit individuals or groups? In this chapter, I review a number of theories that link religion to cooperation and show that these theories, contrary to what is often suggested in the literature, are not mutually exclusive. As I present each theory, I delineate an integrative framework that allows distinguishing the explanandum of each theory. Once this is done, it becomes clear that some theories provide good explanations for the origin of religion but not so good explanations for its maintenance, and vice versa. Similarly, some explanations are good explanations for the evolution of religious individual-level traits but not so good explanations for traits hard to define at the individual level. I suggest that to fully understand the religious phenomenon, systematically integrating the different theories and the data is a more successful approach.
In this paper, I identify two major problems with the model of evolutionary transitions in individuality (ETIs) developed by Michod and colleagues, and extended by Okasha, commonly referred to as the “export-of-fitness view”. First, it applies the concepts of viability and fertility inconsistently across levels of selection. This leads Michod to claim that once an ETI is complete, lower-level entities composing higher-level individuals have nil fitness. I argue that this claim is mistaken, propose a correct way to translate the concepts of viability and fertility from one level to the other, and show that once an ETI is complete, neither viability nor fertility of the lower-level entities is nil. Second, the export-of-fitness view does not sufficiently take the parameter of time into account when estimating fitness across levels of selection. As a result, fitness is measured over different periods of time at each level. This ultimately means that fitness is measured in different environmental conditions at each level and misleads Okasha into making the claim that the two levels are ontologically distinct levels of selection. I show that once fitness is measured over the same period of time across levels, the claim about two levels of selection can only be an epistemic one.
In this critical notice of Robert Wright’s The Evolution of God, we focus on the question of whether Wright’s God can be said to be an adaptation in a well-defined sense. We thus evaluate the likelihood of different models of adaptive evolution of cultural ideas at their different levels of selection. Our result is an emphasis on the plurality of mechanisms that may lead to adaptation. By way of conclusion, we assess epistemologically some of Wright’s more controversial claims concerning the directionality of evolution and moral progress.
Altruism is one of the most studied topics in theoretical evolutionary biology. The debate surrounding the evolution of altruism has generally focused on the conditions under which altruism can evolve and whether it is better explained by kin selection or multilevel selection. This debate has occupied the forefront of the stage and left behind a number of equally important questions. One of them, which is the subject of this paper, is whether the word “selection” in “kin selection” and “multilevel selection” necessarily refers to “evolution by natural selection”. I show, using a simple individual-centered model, that once clear conditions for natural selection and altruism are specified, one can distinguish two kinds of evolution of altruism, only one of which corresponds to the evolution of altruism by natural selection, the other resulting from other evolutionary processes.
Drift is often characterized in statistical terms. Yet such a purely statistical characterization is ambiguous, for it admits of multiple physical interpretations. Because of this ambiguity it is important to distinguish what sorts of processes can lead to this statistical phenomenon. After presenting a physical interpretation of drift originating from the most popular interpretation of fitness, namely the propensity interpretation, I propose a different one starting from an analysis of the concept of drift made by Godfrey-Smith. Further on, I show how my interpretation relates to previous attempts to make sense of the notion of expected value in deterministic setups. The upshot of my analysis is a physical conception of drift that is compatible with both a deterministic and an indeterministic world.
Reputation monitoring and the punishment of cheats are thought to be crucial to the viability and maintenance of human cooperation in large groups of non-kin. However, since the cost of policing moral norms must fall to those in the group, policing is itself a public good subject to exploitation by free riders. Recently, it has been suggested that belief in supernatural monitoring and punishment may discourage individuals from violating established moral norms and so facilitate human cooperation. Here we use cross-cultural survey data from a global sample of 87 countries to show that beliefs about two related sources of supernatural monitoring and punishment — God and the afterlife — independently predict respondents' assessment of the justifiability of a range of moral transgressions. This relationship holds even after controlling for frequency of religious participation, country of origin, religious denomination and level of education. As well as corroborating experimental work, our findings suggest that, across cultural and religious backgrounds, beliefs about the permissibility of moral transgressions are tied to beliefs about supernatural monitoring and punishment, supporting arguments that these beliefs may be important promoters of cooperation in human groups.
Hospital in Boston, Massachusetts. That report described the extraordinary surgery, immediately after birth, made possible by the use of computer-aided presurgical planning.1 The media picked up the story, and a first-page article appeared in The New York Times on the same day.2 Two days earlier, on 8 August, conjoined twins were born in Malta in a case that stirred even more media attention. Eventually they underwent surgical separation in the United Kingdom against the parents’ wishes.3
Reading Michael Fox’s “Animal Liberation: A Critique” in this issue was a chastening experience. In the past, when reading the complaints of authors that their critics have misunderstood them, I have tended to believe that some, at least, of the fault must lie with the author. If he has been misunderstood, he must have failed to make his views clear. Now that Fox’s article puts me in the position of complaining author, I wonder if my previous reactions were fair. I cannot find any obscurities in Animal Liberation which could have led Fox to his extraordinary presentation of “my” position.
The demiurge flipped a fair coin. If it landed heads, he created 100 people, of whom 10 had a birthmark on their back. If it landed tails, he created 10 people, of whom 9 had a birthmark on their back. …
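The excerpt breaks off here, but the stated setup already fixes the standard self-locating calculation. A minimal sketch, assuming the self-sampling assumption (you reason as if you were a random member of whichever population the demiurge created) and that what you learn is that you have the birthmark; the function name is mine, not the author's:

```python
from fractions import Fraction

def posterior_heads(p_heads, n_heads, marked_heads, n_tails, marked_tails):
    # Self-sampling assumption: conditional on each coin outcome, you are
    # equally likely to be any one of the people the demiurge created.
    p_mark_given_heads = Fraction(marked_heads, n_heads)   # 10/100
    p_mark_given_tails = Fraction(marked_tails, n_tails)   # 9/10
    joint_heads = p_heads * p_mark_given_heads
    joint_tails = (1 - p_heads) * p_mark_given_tails
    # Bayes' theorem: P(heads | birthmark)
    return joint_heads / (joint_heads + joint_tails)

print(posterior_heads(Fraction(1, 2), 100, 10, 10, 9))  # 1/10
```

On these assumptions the birthmark strongly favours tails: (1/2 · 1/10) against (1/2 · 9/10) yields a posterior of 1/10 for heads, even though heads created ten times as many people.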
In What a Plant Knows, Daniel Chamovitz reports what plant biologists apparently have known for a long time: although plants generally stay in one place (they’re sessile), they actively negotiate their environments. …
Suppose insects are conscious. There are at least a billion insects per human being. So, if insects are conscious, we should be surprised to find ourselves not being an insect. But if insects are not conscious, there is no surprise there. …
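The anthropic argument gestured at here can be made arithmetically explicit. A hedged sketch, assuming a 50/50 prior, the abstract's billion-to-one ratio, and self-sampling over conscious beings; the names and the exact choice of reference class are my assumptions, not the author's:

```python
from fractions import Fraction

INSECTS_PER_HUMAN = 10**9  # the abstract's "at least a billion" figure

def posterior_insects_conscious(prior):
    # If insects are conscious, a randomly sampled conscious being is almost
    # certainly an insect, so "I am human" is extremely unlikely on that
    # hypothesis; if only humans are conscious, it is guaranteed.
    p_human_if_conscious = Fraction(1, INSECTS_PER_HUMAN + 1)
    p_human_if_not = Fraction(1, 1)
    joint_yes = prior * p_human_if_conscious
    joint_no = (1 - prior) * p_human_if_not
    # Bayes' theorem: P(insects conscious | I am human)
    return joint_yes / (joint_yes + joint_no)

post = posterior_insects_conscious(Fraction(1, 2))
print(float(post))  # roughly 1e-9
```

Under these assumptions, finding yourself human drives the posterior for insect consciousness from 1/2 down to about one in a billion, which is one way of cashing out the "surprise" the argument trades on.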
Most of the themes are very well known, so I mention only a lesser-known point. Fisher (1955) criticizes Neyman and Pearson’s 1933 paper for having called his work an example of “inductive behavior”. …
Plato’s Sophist and Statesman use a notion of a model (paradeigma) quite different from the one with which we are familiar from dialogues like the Phaedo, Parmenides, and Timaeus. In those dialogues a paradeigma is a separate Form, an abstract perfect particular, whose nature is exhausted by its own character. Its participants are conceived as likenesses or images of it: they share with the Form the same character, but they also fall short of it because they exemplify not only that character but also its opposite. Mundane beautiful objects are plagued by various sorts of relativity—Helen is beautiful compared to other women, but not beautiful compared to a goddess; she is beautiful in her physical appearance, but not in her soul or her actions; she is beautiful in your eyes, but not in mine, and so on. The Form of the Beautiful, which is supposed to explain her beauty, is simply and unqualifiedly beautiful (Symp. 210e5-211d1).
We explore consequences of the view that to know a proposition your rational credence in the proposition must exceed a certain threshold. In other words, to know something you must have evidence that makes rational a high credence in it. We relate such a threshold view to Dorr et al.’s (Philosophical Studies 170(2):277–287, 2014) argument against the principle they call Fair Coins: “If you know a coin won’t land tails, then you know it won’t be flipped.” They argue for rejecting Fair Coins because it leads to a pervasive skepticism about knowledge of the future. We argue that the threshold view of evidence and knowledge gives independent grounds to reject Fair Coins.
According to a number of theorists (Arpaly 2002, 2003; Arpaly and Schroeder 2014; Markovits 2010), a morally right action has moral worth if and only if it is performed for the right reasons, which are the reasons for which it is right, or the right-making features of the action. I have referred to morally worthy actions as “praiseworthy actions”, though, as we will see, perhaps “esteem-worthy actions” would be more precise, if one were to use Kantian terminology.