The concept of wild food does not play a significant role in contemporary nutritional science, and it is seldom regarded as a salient feature within standard dietary guidelines. The knowledge systems surrounding wild edible taxa are indeed at risk of disappearing. However, recent scholarship in ethnobotany, field biology, and philosophy has demonstrated the crucial role of wild foods for food biodiversity and food security. Knowledge of how to use and consume wild foods is not only a means to deliver high-end culinary offerings, but also a way to foster alternative models of consumption. Our aim in this paper is to provide a conceptual framework for wild foods that can account for diversified wild food ontologies. In the first section of the paper, we survey the main conception of wild foods provided in the literature, which we call the Nature View. We argue that this view falls short of capturing characteristics that are core to a sound account of wilderness in a culinary sense. In the second part of the paper, we provide the foundation for an improved model of wild food, which can countenance the multiple dimensions and degrees characterizing wilderness in the culinary world. In the third part of the paper, we argue that, thanks to its more nuanced ontological analysis, the gradient framework can help ethnobiologists, philosophers, scientists, and policymakers represent and negotiate theoretical conflicts on the nature of wild food.
When I am hungry, I seek some food: an object that is edible, that can feed me, and that is preferably tasty. Finding it seems a very easy task, for there is an alleged natural boundary between what counts as food and what does not, and I can naturally pinpoint that boundary. Nevertheless, on closer inspection, this boundary turns out to be suspicious: a roasted human being is both edible and nutritious, and someone may even find it tasty, and yet it can hardly be considered food. Likewise, a rotten food item is neither edible nor nutritious, and yet it can sometimes be considered food, as in the case of marcescent cheese. Our aim in this paper is to nail down the different conceptions that regulate our understanding of what food is, and then come up with a proper definition. We set forth four different stances: a biological one, i.e., food is what holds certain natural properties; an individual one, i.e., food is what can be eaten by at least one person; an authority one, i.e., food is what is considered so by an authority; and a social one, i.e., food is what is institutionally recognized as food.
Trust is important, but it is also dangerous. It is important because it allows us to depend on others—for love, for advice, for help with our plumbing, or what have you—especially when we know that no outside force compels them to give us these things. But trust also involves the risk that people we trust will not pull through for us, since if there were some guarantee they would pull through, then we would have no need to trust them.[1] Trust is therefore dangerous. What we risk while trusting is the loss of valuable things that we entrust to others, including our self-respect perhaps, which can be shattered by the betrayal of our trust.
The principle of ‘common but differentiated responsibility’ evolved from the notion of the ‘common heritage of mankind’ and is a manifestation of general principles of equity in international law. The principle recognises historical differences in the contributions of developed and developing States to global environmental problems, and differences in their respective economic and technical capacity to tackle these problems. Despite their common responsibilities, important differences exist between the stated responsibilities of developed and developing countries. The Rio Declaration states: “In view of the different contributions to global environmental degradation, States have common but differentiated responsibilities. The developed countries acknowledge the responsibility that they bear in the international pursuit of sustainable development in view of the pressures their societies place on the global environment and of the technologies and financial resources they command.” Similar language exists in the Framework Convention on Climate Change; parties should act to protect the climate system “on the basis of equity and in accordance with their common but differentiated responsibilities and respective capabilities.” The principle of common but differentiated responsibility includes two fundamental elements. The first concerns the common responsibility of States for the protection of the environment, or parts of it, at the national, regional and global levels. The second concerns the need to take into account the different circumstances, particularly each State’s contribution to the evolution of a particular problem and its ability to prevent, reduce and control the threat.
The Knobe effect is the finding that people judge cases of good and bad foreseen effects differently with respect to intention: in cases of bad effects, they tend to attribute intention, but not in cases of good effects. …
I find it surprising that so many people seem to disagree. Maybe we're primed to disagree because it's a convenient excuse for our moral mediocrity. "Gosh," you say, "I do sure wish I could be morally excellent. …
I argue that ‘consent’ language presupposes that the contemplated action is or would be at someone else’s behest. When one does something for another reason — for example, when one elects independently to do something, or when one accepts an invitation to do something — it is linguistically inappropriate to describe the actor as ‘consenting’ to it; but it is also inappropriate to describe them as ‘not consenting’ to it. A consequence of this idea is that ‘consent’ is poorly suited to play its canonical central role in contemporary sexual ethics. But this does not mean that nonconsensual sex can be morally permissible. Consent language, I’ll suggest, carries the conventional presupposition that that which is or might be consented to is at someone else’s behest. One implication will be a new kind of support for feminist critiques of consent theory in sexual ethics.
Games are a distinctive form of art — and very different from many traditional arts. Games work in the medium of agency. Game designers don’t just tell stories or create environments. They tell us what our abilities will be in the game. They set our motivations, by setting the scoring system and specifying the win-conditions. Game designers sculpt temporary agencies for us to occupy. And when we play games, we adopt these designed agencies, submerging ourselves in them, and taking on their specified ends for a while.
The giving and requesting of explanations is central to normative practice. When we tell children that they must act in certain ways, they often ask why, and often we are able to answer them. Sentences like ‘Kicking dogs is wrong because it hurts them’, and ‘You should eat your vegetables because they’re healthy’, are meaningful and ubiquitous.
I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: the latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.
6. We desire love as a function of the relational nature of our being. Ontologically, we are not complete or sufficient unto ourselves. We do not and cannot provide the 'space' (both physical and emotional) we must occupy in order to be what and as we are.
This article sheds light on a response to experimental philosophy that has not yet received enough attention: the reflection defense. According to proponents of this defense, judgments about philosophical cases are relevant only when they are the product of careful, nuanced, and conceptually rigorous reflection. We argue that the reflection defense is misguided: We present five studies (N>1800) showing that people make the same judgments when they are primed to engage in careful reflection as they do in the conditions standardly used by experimental philosophers.
This paper argues that while the classical, essentialist conception of identity is appealing due to its simplicity, it does not adequately capture the complexity of professional or individual identity. The appeal to essentialism in librarianship contributes to serious problems for the profession, such as exclusion and homogeneity in the workplace, high attrition rates among minority librarians, exploitation and alienation of an underrepresented workforce, and stereotyping. This paper examines the theoretical landscape with regard to the identity question and proposes a more fitting alternative to essentialism, namely the relational conception of identity. It then argues philosophically for the adoption of the relational account as a theoretical grounding for understanding the complex, fluid, and emergent nature of librarian identity within our dynamic profession.
To develop a theoretical framework for drawing moral distinctions between instances of sexual misconduct, I defend the “Ameliorative View” of consent, according to which there are three possibilities for what effect, if any, consent has: “fully valid consent” eliminates a wronging, “fully invalid consent” has no normative effect, and “partially valid consent” has an ameliorative effect on a wronging in the respect that it makes the wronging less grave. I motivate the view by proposing a solution to the problem of characterizing the moral effect of consent that is given in response to minor coercion.
Catherine Herfeld: Professor List, what comes to your mind when someone refers to rational choice theory? What do you take rational choice theory to be?

Christian List: When students ask me to define rational choice theory, I usually tell them that it is a cluster of theories, which subsumes individual decision theory, game theory, and social choice theory. I take rational choice theory to be not a single theory but a label for a whole field. In the same way, if you refer to economic theory, that is not a single theory either, but a whole discipline, which subsumes a number of different, specific theories. I am actually very ecumenical in my use of the label ‘rational choice theory’. I am also happy to say that rational choice theory in this broad sense subsumes various psychologically informed theories, including theories of boundedly rational choice. We should not define rational choice theory too narrowly, and we definitely shouldn’t tie it too closely to the traditional idea of homo economicus.
In reasoning, we consider our reasons. When reasoning terminates in an action or a belief, we act or believe for the reasons that our reasoning took into account. These claims seem near platitudinous. But does reasoning involve a sensitivity to reasons that exist quite independently of the deliberation of rational agents? Or is it rather that the facts we take into consideration in reasoning are reasons because they are the premises of good reasoning? Proponents of the ‘reasoning view’ endorse the platitudes and answer the second question in the affirmative. That is to say, they both analyze reasons as premises of good reasoning and explain the normativity of reasons by appeal to their role in good reasoning. The aim of this paper is to cast doubt on the reasoning view, not by addressing the latter, explanatory claim directly, but by providing counterexamples to the alleged platitudes and the corresponding analysis of reasons, counterexamples in which premises of good reasoning towards φ-ing are not reasons to φ.
Psychologists frequently use response time to study cognitive processes, but response time may also be a part of the commonsense psychology that allows us to make inferences about other agents’ mental processes. We present evidence that by age six, children expect that solutions to a complex problem can be produced quickly if already memorized, but not if they need to be solved for the first time. We suggest that children could use response times to evaluate agents’ competence and expertise, as well as to assess the value and relevance of information.
I argue that the science of the soul only covers sublunary living things. Aristotle cannot properly ascribe ψυχή to unmoved movers since they do not have any capacities that are distinct from their activities or any matter to be structured. Heavenly bodies do not have souls in the way that mortal living things do, because their matter is not subject to alteration or generation. These beings do not fit into the hierarchy of soul powers that Aristotle relies on to provide unity to ψυχή. Their living consists in their activities, not in having a capacity for activity.
Have you ever disagreed with your government’s stance about some significant social, political, economic, or even philosophical issue? For example: Healthcare policy? Response to a pandemic? Gender inequality? Structural racism? Drilling in the Arctic? Fracking? Approving or vetoing a military intervention in a foreign country? Transgender rights? Exiting some multi-national political alliance (for instance, the European Union)? The building of a 20-billion-dollar wall? We’re guessing the answer is most likely ‘yes’.
Morality can often seem pretty diverse. There are moral rules governing our physical and sexual interactions with other human beings; there are moral rules relating to how we treat and respect property; there are moral rules concerning the behaviour of officials in government office; and, according to some religions, there are even moral rules for how we prepare and eat food. …
There are three leading theories of normativity: teleology, deontology, and virtue theory. All three types of normative theory countenance values, norms, and virtues. What they disagree on is the order of explanation. Teleology takes values to be the fundamental normative kind and explains norms and virtues in terms of them. Deontology takes norms to be the fundamental normative kind and explains values and virtues in terms of them. And, finally, virtue theory takes virtues to be the fundamental normative kind and explains norms and values in terms of them.
Both scientists and philosophers of science have recently emphasized the importance of promoting transparency in science. For scientists, transparency is a way to promote reproducibility, progress, and trust in research. For philosophers of science, transparency can help address the value-ladenness of scientific research in a responsible way. Nevertheless, the concept of transparency is a complex one. Scientists can be transparent about many different things, for many different reasons, on behalf of many different stakeholders. This paper proposes a taxonomy that clarifies the major dimensions along which approaches to transparency can vary. By doing so, it provides several insights that philosophers and other science-studies scholars can pursue. In particular, it helps address common objections to pursuing transparency in science, it clarifies major forms of transparency, and it suggests avenues for further research on this topic.
Just as different pairs of shoes are useful for different occasions, different masks are useful for different occasions. Here's my collection.

Category 1: Likely to be significantly protective
1.1. 3M 6300 half-face mask with 2091 P100 filters
Summary: Extremely protective for inhalation. …
This post about epistemic injustice and implicit bias by Susanna Siegel is the third post of this week’s series on An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge, 2020). …
This paper is a clarification and development of my interpretation of Sartre’s theory of bad faith in response to Ronald Santoni’s sophisticated critique, published in the same issue. Santoni rightly points out that the central claim of my interpretation is that bad faith is a fundamental project manifested in all our other projects. This paper therefore begins with a clarification of Sartre’s conception of a project, followed by an explanation of his claim that one project is fundamental, grounding an elucidation of the idea that bad faith is a fundamental project. The paper then uses this to address the central themes of Santoni’s critique of my interpretation. I argue that Sartre does not consider us to be ontologically and congenitally disposed to bad faith. The prevalence of bad faith is explained, on my reading of Sartre, by the social pressure to conform to it, which is inherent in the project itself. Santoni is right that this cannot really explain the prevalence of bad faith, but this is a problem with Sartre’s theory, not a problem for my interpretation of it. I then defend my claim that Sartre’s notion of seriousness is merely a strategy of bad faith by outlining an alternative strategy that Sartre does not consider. Finally, I argue that Sartre is right to deny that bad faith is an inherently cynical project, even though it is manipulative and self-serving, and even though it can be cynically motivated.
This post about embodied cognition and implicit bias by Céline Leboeuf is the second post of this week’s series on An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge, 2020). …
In this paper, I explore and probe Joseph Carens’ remarks, in his recent book The Ethics of Immigration, on the immigration status of foreign convicted criminals who have served their sentence, and who wish either to immigrate into our country or who are already here. Carens rejects deportation when it is not called for by considerations of national security, and agrees that considerations of public order can justify barring convicted foreign criminals from entering the country. I broadly agree with his arguments against deportation: my remarks in this respect are clarificatory and exploratory as much as anything else. But (I argue) both his argument for open borders and his scepticism with respect to radical cosmopolitanism are in tension with his claim that past criminal convictions can act as a bar to entry.
This chapter offers an account of the role and place of jus post bellum within just war theory and highlights avenues of inquiry on the aftermath of war that have been largely ignored. The author discusses recent arguments to the effect that jus ad bellum and jus in bello exhaust just war theory and that jus post bellum, far from being a key member of the family, in fact does much better as an outsider. The author claims, on the contrary, that there is ample space for jus post bellum within just war theory; in partial agreement with those arguments, however, the author concedes that a full account of the ethics of war’s aftermath must also draw on other fields of normative inquiry, and fleshes out in greater detail the connections and disconnections between jus post bellum on the one hand and the other two jura on the other.
Causation can be inferred by two distinct patterns of reasoning, each requiring a distinct experimental design. Common, non-statistical causal inference is associated with controlled experiments in basic biomedical research. Statistical inference is associated with Randomized Controlled Trials in clinical research. The main difference between the two patterns of inference hinges on the satisfaction of a comparability requirement, which is in turn dictated by the nature of the objects of study, namely homogeneous vs. heterogeneous populations of biological systems. This distinction entails that the objection according to which randomized experiments fail to provide better evidence for causation because randomization cannot guarantee comparability is mistaken. As far as the validity of the statistical inference is concerned, randomization is not required in order to ensure comparability, but rather to prevent systematic bias which may compromise the accuracy of the intervention.
What is wrong with colonialism? The standard—albeit often implicit—answer to this question has been that colonialism was wrong because it violated the territorial rights of indigenous peoples, where territorial rights were grounded on acquisition theories. Recently, the standard view has come under attack: according to critics, acquisition-based accounts do not provide solid theoretical grounds to condemn colonial relations; indeed, historically they were used to justify colonialism. Various alternative accounts of the wrong of colonialism have been developed. According to some, colonialism involved a violation of territorial rights grounded on legitimate state theory. Others reject all explanations of colonialism’s wrongfulness based on territorial rights, and argue that colonial practices were wrong because they departed from ideals of economic, social, and political association. In this article we articulate and defend the standard view against critics: colonialism involved a procedural wrong; this wrong is not the violation of standards of equality and reciprocity, but the violation of territorial rights; and the best foundation for such territorial rights is acquisition-based, not legitimacy-based. We argue that this issue is not just of historical interest; it has relevant implications for the normative evaluation of contemporary inequalities.