The concept of wild food plays no significant role in contemporary nutritional science, and it is seldom regarded as a salient feature within standard dietary guidelines. The knowledge systems surrounding wild edible taxa are indeed at risk of disappearing. However, recent scholarship in ethnobotany, field biology, and philosophy has demonstrated the crucial role of wild foods for food biodiversity and food security. Knowing how to use and consume wild foods is not only a means of delivering high-end culinary offerings, but also a way to foster alternative models of consumption. Our aim in this paper is to provide a conceptual framework for wild foods that can account for diversified wild food ontologies. In the first section of the paper, we survey the main conception of wild foods found in the literature, which we call the Nature View. We argue that this view falls short of capturing characteristics that are core to a sound account of wilderness in a culinary sense. In the second part of the paper, we lay the foundation for an improved model of wild food, one that can countenance the multiple dimensions and degrees characterizing wilderness in the culinary world. In the third part of the paper, we argue that, thanks to its more nuanced ontological analysis, this gradient framework can serve ethnobiologists, philosophers, scientists, and policymakers in representing and negotiating theoretical conflicts over the nature of wild food.
When I am hungry, I seek some food: an object that is edible, that can feed me, and that is preferably tasty. Finding it seems a very easy task, for there is an alleged natural boundary between what counts as food and what does not, and I can naturally pinpoint that boundary. On closer inspection, however, the boundary turns out to be suspect: a roasted human being is both edible and nutritious, and someone may even find it tasty, yet it can hardly be considered food. Likewise, a rotten food item is neither edible nor nutritious, and yet it is sometimes considered food, as with marcescent cheese. Our aim in this paper is to pin down the different conceptions that regulate our notion of what food is, and then to arrive at a proper definition. We set forth four different stances: a biological one, i.e., food is what holds certain natural properties; an individual one, i.e., food is what can be eaten by at least one person; an authority one, i.e., food is what is considered so by an authority; and a social one, i.e., food is what is institutionally recognized as food.
Trust is important, but it is also dangerous. It is important because
it allows us to depend on others—for love, for advice, for help
with our plumbing, or what have you—especially when we know that
no outside force compels them to give us these things. But trust also
involves the risk that people we trust will not pull through for us,
since if there were some guarantee they would pull through, then we
would have no need to trust them.[1]
Trust is therefore dangerous. What we risk while trusting is the loss
of valuable things that we entrust to others, including our
self-respect perhaps, which can be shattered by the betrayal of our trust.
The principle of ‘common but differentiated responsibility’ evolved from the notion of the ‘common heritage of mankind’ and is a manifestation of general principles of equity in international law. The principle recognises historical differences in the contributions of developed and developing States to global environmental problems, and differences in their respective economic and technical capacity to tackle these problems. Despite their common responsibilities, important differences exist between the stated responsibilities of developed and developing countries. The Rio Declaration states: “In view of the different contributions to global environmental degradation, States have common but differentiated responsibilities. The developed countries acknowledge the responsibility that they bear in the international pursuit of sustainable development in view of the pressures their societies place on the global environment and of the technologies and financial resources they command.” Similar language exists in the Framework Convention on Climate Change; parties should act to protect the climate system “on the basis of equity and in accordance with their common but differentiated responsibilities and respective capabilities.” The principle of common but differentiated responsibility includes two fundamental elements. The first concerns the common responsibility of States for the protection of the environment, or parts of it, at the national, regional and global levels. The second concerns the need to take into account the different circumstances, particularly each State’s contribution to the evolution of a particular problem and its ability to prevent, reduce and control the threat.
Inspired by the work of Stefano Zambelli on these topics, this paper explores the complex nature of the relation between technology and computability. This involves reconsidering the role of computational complexity in economics and then applying it to a particular formulation of the nature of technology as conceived within the Sraffian framework. A crucial element of this is expanding the concept of technique clusters, which allows one to see that the set of possible techniques is of a higher cardinality of infinity than that of the points on a wage-profit frontier. This is associated with potentially deep discontinuities in production functions and a higher form of uncertainty involved in technological change and growth.
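As background for the Sraffian apparatus invoked above, here is a standard textbook sketch (not taken from this paper) of the wage-profit relation: for a technique with nonnegative input matrix \(A\) and direct labour vector \(\ell\), prices of production \(p\) at uniform profit rate \(r\) and wage \(w\) satisfy

```latex
% Price-of-production system for one technique (standard Sraffian form):
p = (1 + r)\,A\,p + w\,\ell
% In the one-commodity ("corn") case, with input coefficient a and labour
% coefficient \ell, normalising the price to 1 gives the wage-profit curve:
w(r) = \frac{1 - (1 + r)\,a}{\ell}
```

Each technique contributes one such downward-sloping curve, and the economy's wage-profit frontier is the outer envelope over the available techniques; the paper's point is that the set of candidate techniques can vastly outrun the points on any such frontier.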
The Knobe effect is that people judge cases of good and bad foreseen side effects asymmetrically with respect to intention: in cases of bad side effects, they tend to attribute intention, but not so in cases of good side effects. …
I find it surprising that so many people seem to disagree. Maybe we're primed to disagree because it's a convenient excuse for our moral mediocrity. "Gosh," you say, "I do sure wish I could be morally excellent. …
Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky-integrate-and-fire neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task. These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.
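The leaky-integrate-and-fire dynamics underlying the model's evidence accumulation can be illustrated with a toy single-neuron simulation. Everything below (the parameter values, the single-neuron scale, the function name) is an assumption chosen for demonstration, not the authors' 8000-neuron implementation.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch, Euler-integrated.
# Illustrative toy only; parameters are arbitrary demonstration values.

def simulate_lif(current, t_max=1.0, dt=0.001,
                 tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """Simulate one LIF neuron driven by a constant input current and
    return the list of spike times (in seconds)."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane potential leaks toward rest and is pushed up by input.
        dv = (-(v - v_rest) + r_m * current) / tau
        v += dv * dt
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
    return spikes

# A stronger input drives earlier and more frequent spiking: the basic
# mechanism behind races between populations accumulating evidence for
# competing choice alternatives.
weak = simulate_lif(current=1.1)
strong = simulate_lif(current=3.0)
print(len(weak), len(strong))
```

In a full model of the kind the abstract describes, many such neurons are wired into populations whose firing rates encode the accumulating evidence, with competition and thresholding implemented by the connections between them.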
I argue that ‘consent’ language presupposes that the contemplated action is or would be at someone else’s behest. When one does something for another reason — for example, when one elects independently to do something, or when one accepts an invitation to do something — it is linguistically inappropriate to describe the actor as ‘consenting’ to it; but it is also inappropriate to describe them as ‘not consenting’ to it. A consequence of this idea is that ‘consent’ is poorly suited to play its canonical central role in contemporary sexual ethics. But this does not mean that nonconsensual sex can be morally permissible. Consent language, I’ll suggest, carries the conventional presupposition that that which is or might be consented to is at someone else’s behest. One implication will be a new kind of support for feminist critiques of consent theory in sexual ethics.
Art works are artefacts and, like all artefacts, are the product of agency. How important is that for our engagement with them? For many artefacts, agency hardly matters. The paperclips on my desk perform their function without me having to think of them as the outputs of agency, though I might on occasion admire their design. But for those artefacts we categorise as works of art, the connection is important: if I treat something as art I need to see how it manifests the choices, preferences, actions and sensibilities of the maker. I am not asked to see it simply as a record of those things. The work is not valuable merely as a conduit to the qualities of the maker; it has final value and not merely instrumental value. Its value depends on its relation to the maker; in Korsgaard’s terms it is value that is final and extrinsic.
Games are a distinctive form of art — and very different from many traditional arts. Games work in the medium of agency. Game designers don’t just tell stories or create environments. They tell us what our abilities will be in the game. They set our motivations, by setting the scoring system and specifying the win-conditions. Game designers sculpt temporary agencies for us to occupy. And when we play games, we adopt these designed agencies, submerging ourselves in them, and taking on their specified ends for a while.
I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: the latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.
6. We desire love as a function of the relational nature of our being. Ontologically, we are not complete or sufficient unto ourselves. We do not and cannot provide the 'space' (both physical and emotional) we must occupy in order to be what and as we are.
This article sheds light on a response to experimental philosophy that has not yet received enough attention: the reflection defense. According to proponents of this defense, judgments about philosophical cases are relevant only when they are the product of careful, nuanced, and conceptually rigorous reflection. We argue that the reflection defense is misguided: We present five studies (N>1800) showing that people make the same judgments when they are primed to engage in careful reflection as they do in the conditions standardly used by experimental philosophers.
This paper argues that while the classical, essentialist conception of identity is appealing for its simplicity, it does not adequately capture the complexity of professional or individual identity. The appeal to essentialism in librarianship contributes to serious problems for the profession, such as exclusion and homogeneity in the workplace, high attrition rates among minority librarians, exploitation and alienation of an underrepresented workforce, and stereotyping. This paper examines the theoretical landscape around the identity question and proposes a more fitting alternative to essentialism: the relational conception of identity. It argues philosophically for adopting the relational account as a theoretical grounding for understanding the complex, fluid, and emergent nature of librarian identity within our dynamic profession.
To develop a theoretical framework for drawing moral distinctions between instances of sexual misconduct, I defend the “Ameliorative View” of consent, according to which there are three possibilities for what effect, if any, consent has: “fully valid consent” eliminates a wronging, “fully invalid consent” has no normative effect, and “partially valid consent” has an ameliorative effect on a wronging in the respect that it makes the wronging less grave. I motivate the view by proposing a solution to the problem of characterizing the moral effect of consent that is given in response to minor coercion.
Psychologists frequently use response time to study cognitive processes, but response time may also be a part of the commonsense psychology that allows us to make inferences about other agents’ mental processes. We present evidence that by age six, children expect that solutions to a complex problem can be produced quickly if already memorized, but not if they need to be solved for the first time. We suggest that children could use response times to evaluate agents’ competence and expertise, as well as to assess the value and relevance of information.
Have you ever disagreed with your government’s stance about some significant social, political, economic, or even philosophical issue? For example: Healthcare policy? Response to a pandemic? Gender inequality? Structural racism? Drilling in the Arctic? Fracking? Approving or vetoing a military intervention in a foreign country? Transgender rights? Exiting some multi-national political alliance (for instance, the European Union)? The building of a 20 billion dollar wall? We’re guessing the answer is most likely ’yes’.
Morality can often seem pretty diverse. There are moral rules governing our physical and sexual interactions with other human beings; there are moral rules relating to how we treat and respect property; there are moral rules concerning the behaviour of officials in government office; and, according to some religions, there are even moral rules for how we prepare and eat food. …
This post about epistemic injustice and implicit bias by Kathy Puddifoot and Jules Holroyd is the fourth and final post of this week’s series on An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge, 2020). …
The movement toward scientific literacy aims to cultivate a public able to make informed decisions for themselves about science in their own lives (e.g., personal health, sustainable practices, &c.) and in their support of social policies, rather than passively accepting the information they are given. Many people continue learning about science — its discoveries, nature, ramifications for society, and so on — through generalist media sources such as newspapers. What are they apt to learn from such sources? This paper examines the ways in which print journalism (sampled from three prominent newspapers) in the 2010s presents science — investigating, in particular, to what extent these sources attend to the methodology or the social–institutional processes by which particular results come about. We make a case for the significance of this question in connection with the public’s understanding and trust of science.
Both scientists and philosophers of science have recently emphasized the importance of promoting transparency in science. For scientists, transparency is a way to promote reproducibility, progress, and trust in research. For philosophers of science, transparency can help address the value-ladenness of scientific research in a responsible way. Nevertheless, the concept of transparency is a complex one. Scientists can be transparent about many different things, for many different reasons, on behalf of many different stakeholders. This paper proposes a taxonomy that clarifies the major dimensions along which approaches to transparency can vary. By doing so, it provides several insights that philosophers and other science-studies scholars can pursue. In particular, it helps address common objections to pursuing transparency in science, it clarifies major forms of transparency, and it suggests avenues for further research on this topic.
Philosophers of science are increasingly interested in engaging with scientific communities, policymakers, and members of the public; however, the nature of this engagement has not been systematically examined. Instead of delineating a specific kind of engaged philosophy of science, as previous accounts have done, this paper draws on literature from outside the discipline to develop a framework for analyzing different forms of broadly engaged philosophy of science according to two key dimensions: social interaction and epistemic integration. Clarifying the many forms of engagement available to philosophers of science can advance future scholarship on engagement and promote more strategic engagement efforts.
Just as different pairs of shoes are useful for different occasions, different masks are useful for different occasions. Here's my collection.

Category 1: Likely to be significantly protective
1.1. 3M 6300 half-face mask with 2091 P100 filters
Summary: Extremely protective for inhalation. …
This post about epistemic injustice and implicit bias by Susanna Siegel is the third post of this week’s series on An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge, 2020). …
This paper is a clarification and development of my interpretation of Sartre’s theory of bad faith in response to Ronald Santoni’s sophisticated critique, published in the same issue. Santoni rightly points out that the central claim of my interpretation is that bad faith is a fundamental project manifested in all our other projects. This paper therefore begins with a clarification of Sartre’s conception of a project, followed by an explanation of his claim that one project is fundamental, grounding an elucidation of the idea that bad faith is a fundamental project. The paper then uses this to address the central themes of Santoni’s critique of my interpretation. I argue that Sartre does not consider us to be ontologically and congenitally disposed to bad faith. The prevalence of bad faith is explained, on my reading of Sartre, by the social pressure to conform to it, which is inherent in the project itself. Santoni is right that this cannot really explain the prevalence of bad faith, but this is a problem with Sartre’s theory, not a problem for my interpretation of it. I then defend my claim that Sartre’s notion of seriousness is merely a strategy of bad faith by outlining an alternative strategy that Sartre does not consider. Finally, I argue that Sartre is right to deny that bad faith is an inherently cynical project, even though it is manipulative and self-serving, and even though it can be cynically motivated.
This post about embodied cognition and implicit bias by Céline Leboeuf is the second post of this week’s series on An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge, 2020). …
In this paper, I explore and probe Joseph Carens’ remarks, in his recent book The Ethics of Immigration, on the immigration status of foreign convicted criminals who have served their sentence and who either wish to immigrate into our country or are already here. Carens rejects deportation when it is not called for by considerations of national security, and accepts that considerations of public order can justify barring convicted foreign criminals from entering the country. I broadly agree with his arguments against deportation: my remarks in this respect are clarificatory and exploratory as much as anything else. But, I argue, both his argument for open borders and his scepticism with respect to radical cosmopolitanism are in tension with his claim that past criminal convictions can act as a bar to entry.
This chapter offers an account of the role and place of jus post bellum within just war theory and highlights avenues of inquiry on the aftermath of war that have been largely ignored. The author discusses recent arguments to the effect that jus ad bellum and jus in bello exhaust just war theory and that jus post bellum, far from being a key member of the family, in fact does much better as an outsider. The author claims, on the contrary, that there is ample space for jus post bellum within just war theory; in partial agreement with those arguments, however, the author concedes that a full account of the ethics of war’s aftermath must also draw on other fields of normative inquiry, and fleshes out in greater detail the connections and disconnections between jus post bellum on the one hand and the other two jura on the other.
Martin Buber (1878–1965) was a prolific author, scholar, literary
translator, and political activist whose writings—mostly in
German and Hebrew—ranged from Jewish mysticism to social
philosophy, biblical studies, religious phenomenology, philosophical
anthropology, education, politics, and art. Most famous among his
philosophical writings is the short but powerful book I and
Thou (1923) where our relation to others is considered as
twofold. The I-it relation prevails between subjects and
objects of thought and action; the I-Thou relation, on the
other hand, obtains in encounters between subjects that exceed the
range of the Cartesian subject-object relation.