Suppose that we are in an infinite Euclidean space, and that a rocket accelerates in such a way that in the first 30 minutes its speed doubles, in the next 15 minutes it doubles again, in the next 7.5 minutes it doubles, and so on. …
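The arithmetic behind the set-up can be made explicit: the doubling intervals form a geometric series, so infinitely many doublings fit into a single hour while the speed grows without bound:

```latex
\sum_{n=1}^{\infty} \frac{30}{2^{n-1}}\ \text{min} \;=\; 30 + 15 + 7.5 + \cdots \;=\; 60\ \text{min},
\qquad v_n = 2^{n} v_0 \;\longrightarrow\; \infty .
```

At the one-hour mark, then, the rocket has completed infinitely many doublings and no finite speed can be assigned to it.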
Parfit sometimes suggested that Act Consequentialism might best be understood, not as a moral theory, but as an external rival to morality. The implicit thought seems to be that morality involves some essentially social mode of thought (perhaps concerned with public codes, or norms of praise and blame, or what principles you'd want everyone to accept), whereas AC does not depend upon any such mode of thought, but nonetheless offers an account of what you really ought (rather than just "morally ought") to do. I'm not a fan of that conception of morality, largely because it seems to devalue it, depriving morality of its interest and significance -- if it is not what you really ought to do, then who cares what you "morally ought" to do? …
Successful communication depends not only on our knowledge of language, but also on our knowledge of context. If a speaker utters the sentence “he is going to get burnt,” we will have to rely on our knowledge of the context in order to grasp what proposition they are trying to express. If there is a mutually salient individual in front of us, whose trousers have just caught on fire, then we will know that this salient individual is the intended referent. If we were talking about our mutual friend Frank, and somebody has just asked how Frank’s latest business deal is going, it will be clear that Frank is the intended referent. Two completely different propositions are expressed in these situations, and without contextual knowledge, we would not be able to tell which proposition was expressed.
The concept of wild food does not play a significant role in contemporary nutritional science, and it is seldom regarded as a salient feature within standard dietary guidelines. The knowledge systems of wild edible taxa are indeed at risk of disappearing. However, recent scholarship in ethnobotany, field biology, and philosophy has demonstrated the crucial role of wild foods for food biodiversity and food security. The knowledge of how to use and consume wild foods is not only a means to deliver high-end culinary offerings, but also a way to foster alternative models of consumption. Our aim in this paper is to provide a conceptual framework for wild foods, which can account for diversified wild food ontologies. In the first section of the paper, we survey the main conception of wild foods provided in the literature, what we call the Nature View. We argue that this view falls short of capturing characteristics that are core to a sound account of wilderness in a culinary sense. In the second part of the paper, we provide the foundation for an improved model of wild food, which can countenance multiple dimensions and degrees characterizing wilderness in the culinary world. In the third part of the paper, we argue that, thanks to a more nuanced ontological analysis, the gradient framework can serve ethnobiologists, philosophers, scientists, and policymakers to represent and negotiate theoretical conflicts on the nature of wild food.
When I’m hungry, I seek food: an object that is edible, that can feed me, and that is preferably tasty. Finding it seems a very easy task, for there is an allegedly natural boundary between what counts as food and what does not, and I can naturally pinpoint that boundary. Nevertheless, on closer inspection, such a boundary turns out to be suspicious: a roasted human being is both edible and nutritious, and someone may even find it tasty, and yet it can hardly be considered food. Likewise, a rotten food item is neither edible nor nutritious, and yet it can sometimes be considered food, as with marcescent cheese. Our aim in this paper is to nail down the different conceptions that regulate our understanding of what food is, and then come up with a proper definition. We set forth four different stances: a biological one, i.e., food is what holds certain natural properties; an individual one, i.e., food is what can be eaten by at least one person; an authority one, i.e., food is what is considered so by an authority; and a social one, i.e., food is what is institutionally recognized as food.
Quantum Field Theory (QFT) is the mathematical and conceptual
framework for contemporary elementary particle physics. It is also a
framework used in other areas of theoretical physics, such as
condensed matter physics and statistical mechanics. In a rather
informal sense QFT is the extension of quantum mechanics (QM), dealing
with particles, over to fields, i.e. systems with an infinite number
of degrees of freedom. (See the entry on
quantum mechanics.) In the last decade QFT has become a more widely discussed
topic in philosophy of science, with questions ranging from
methodology and semantics to ontology.
Trust is important, but it is also dangerous. It is important because
it allows us to depend on others—for love, for advice, for help
with our plumbing, or what have you—especially when we know that
no outside force compels them to give us these things. But trust also
involves the risk that people we trust will not pull through for us,
since if there were some guarantee they would pull through, then we
would have no need to trust
them.
Trust is therefore dangerous. What we risk while trusting is the loss
of valuable things that we entrust to others, including our
self-respect perhaps, which can be shattered by the betrayal of our …
Neurophysiology and neuroanatomy limit the set of possible computations that can be performed in a brain circuit. Although detailed data on individual brain microcircuits are available in the literature, cognitive modellers seldom take these constraints into account. One reason for this is the intrinsic complexity of accounting for mechanisms when describing function. In this paper, we present multiple extensions to the Neural Engineering Framework that simplify the integration of low-level constraints such as Dale’s principle and spatially constrained connectivity into high-level, functional models. We apply these techniques to a recent model of temporal representation in the Granule-Golgi microcircuit in the cerebellum, extending it towards higher degrees of biological plausibility. We perform a series of experiments to analyze the impact of these changes on a functional level. The results demonstrate that our chosen functional description can indeed be mapped onto the target microcircuit under biological constraints. Further, we gain insights into why these parameters are as observed by examining the effects of parameter changes. While the circuit discussed here only describes a small section of the brain, we hope that this work inspires similar attempts at bridging low-level biological detail and high-level function. To encourage the adoption of our methods, we published the software developed for building our model as an open-source library.
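One of the low-level constraints mentioned here, Dale's principle, says that all of a neuron's outgoing connections share a single sign. A minimal sketch of how a functionally derived signed weight matrix can be made to respect this (my own illustration under that assumption, not the paper's NEF extension) is to split it into a nonnegative excitatory pathway and a nonpositive inhibitory one:

```python
# Minimal sketch: splitting a signed weight matrix into an excitatory and an
# inhibitory pathway so that every effective presynaptic unit obeys Dale's
# principle. This is an illustration of the constraint, not the authors' method.

def split_dale(W):
    """Split W (rows: postsynaptic, cols: presynaptic) into W_exc (entries >= 0)
    and W_inh (entries <= 0).

    Conceptually, the negative part is routed through an inhibitory relay
    population, so each unit's outgoing weights all share one sign, while the
    summed effect W_exc + W_inh still realizes the original function.
    """
    W_exc = [[max(w, 0.0) for w in row] for row in W]
    W_inh = [[min(w, 0.0) for w in row] for row in W]
    return W_exc, W_inh

# A toy 2x2 weight matrix with mixed signs (hypothetical numbers).
W = [[0.5, -0.2], [-0.1, 0.3]]
W_exc, W_inh = split_dale(W)
```

Each original weight is recovered as the sum of one sign-constrained excitatory entry and one inhibitory entry, which is the simplest way the biological constraint and the functional description can coexist.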
In this situation, to utter (1a) is to lie, while to utter (1b) is not. Crucially, (1a) is something the speaker believes (indeed knows) to be false, whereas (1b) is something she believes to be true. Yet both utterances are aimed at the same thing: deceiving the hearer into believing that the speaker has not been opening the mail.
[Editor's Note: The following new entry by Timothy O’Connor replaces the former entry on this topic by the previous authors.]
The world appears to contain diverse kinds of objects and
systems—planets, tornadoes, trees, ant colonies, and human
persons, to name but a few—characterized by distinctive features
and behaviors. This casual impression is deepened by the success of
the special sciences, with their distinctive taxonomies and laws
characterizing astronomical, meteorological, chemical, botanical,
biological, and psychological processes, among others. But
there’s a twist, for part of the success of the special sciences
reflects an effective consensus that the features of the composed
entities they treat do not “float free” of features and
configurations of their components, but are rather in some way(s)
dependent on them.
Consider “one thought too many” objections in ethics, on which certain considerations that objectively favor an action are nonetheless a “thought too many”, and it is better to act without them. Examples given in the literature involve using consequentialist reasoning when saving one’s spouse, or visiting a sick friend because of duty. …
My aims in this essay are twofold. First (§§1-4), I want to get clear on the very idea of a theory of the history of philosophy: the idea of an overarching account of the evolution of philosophical reflection since the inception of written philosophy. Second (§§5-8), I want to actually sketch such a global theory of the history of philosophy, which I call the two-streams theory.
It has been frequently observed in the literature that assertions of plain sentences containing predicates like fun and frightening give rise to an acquaintance inference: they imply that the speaker has first-hand knowledge of the item under consideration. The goal of this paper is to develop and defend a broadly expressivist explanation of this phenomenon: acquaintance inferences arise because plain sentences containing subjective predicates are designed to express distinguished kinds of attitudes that differ from beliefs in that they can only be acquired by undergoing certain experiences. Its guiding hypothesis is that natural language predicate expressions lexically specify what it takes for their use to be properly “grounded” in a speaker’s state of mind: what state of mind a speaker must be in for a predication to be in accordance with the norms governing assertion. The resulting framework accounts for a range of data surrounding the acquaintance inference as well as for striking parallels between the evidential requirements on subjective predicate uses and the kind of considerations that fuel motivational internalism about the language of morals. A discussion of how the story can be implemented compositionally and of how it compares with other proposals currently on the market is provided.
Last week, I explained how you can give an accuracy dominance argument for Probabilism without assuming that your inaccuracy measures are additive -- that is, without assuming that the inaccuracy of a whole credence function is obtained by adding up the inaccuracy of all the individual credences that it assigns. …
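The additive case the post sets aside is still the easiest way to see what accuracy dominance amounts to. A worked illustration (my own numbers, not taken from the post): under the additive Brier score, the incoherent credences c(p) = c(¬p) = 0.6 are strictly dominated by the probabilistic credences c′(p) = c′(¬p) = 0.5 at every world.

```python
# Illustration of accuracy dominance under the additive Brier score.
# The numbers are a hypothetical example, not from the post.

def brier(credences, world):
    """Additive Brier inaccuracy: sum of squared distances from truth values
    (1 for true, 0 for false) across the propositions."""
    return sum((truth - c) ** 2 for c, truth in zip(credences, world))

worlds = [(1, 0), (0, 1)]   # p true / p false, for the pair (p, not-p)
incoherent = (0.6, 0.6)     # credences sum to 1.2: not a probability function
coherent = (0.5, 0.5)       # a genuine probability function

for w in worlds:
    # The coherent credences are strictly less inaccurate at every world.
    assert brier(coherent, w) < brier(incoherent, w)
```

At each world the incoherent function scores 0.16 + 0.36 = 0.52 while the coherent one scores 0.25 + 0.25 = 0.5, so moving to the probability function improves accuracy come what may; the post's question is how to run this style of argument once the "sum the scores" assumption is dropped.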
Psycholinguistic studies have repeatedly demonstrated that downward entailing (DE) quantifiers are more difficult to process than upward entailing (UE) ones. We contribute to the current debate on cognitive processes causing the monotonicity effect by testing predictions about the underlying processes derived from two competing theoretical proposals: two-step and pragmatic processing models. We model reaction times and accuracy from two verification experiments (a sentence-picture and a purely linguistic verification task), using the diffusion decision model (DDM). In both experiments, verification of UE quantifier more than half was compared to verification of DE quantifier fewer than half. Our analyses revealed the same pattern of results across tasks: Both non-decision times and drift rates, two of the free model parameters of the DDM, were affected by the monotonicity manipulation. Thus, our modeling results support both two-step (prediction: non-decision time is affected) and pragmatic processing models (prediction: drift rate is affected).
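The two DDM parameters at issue can be made concrete with a toy simulation (a generic textbook-style sketch with made-up parameter values, not the authors' fitted model): noisy evidence accumulates at the drift rate until it hits a decision boundary, and the non-decision time is simply added to the accumulation time.

```python
import random

def ddm_trial(drift, boundary=1.0, non_decision=0.3, dt=0.001, sigma=1.0, rng=None):
    """Simulate one diffusion-decision trial.

    Evidence x drifts at rate `drift` with Gaussian noise until it crosses
    +boundary (a 'correct' response) or -boundary (an 'error'); the response
    time is the accumulation time plus the non-decision component.
    All parameter values here are illustrative, not fitted to data.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    step = sigma * dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + step * rng.gauss(0.0, 1.0)
        t += dt
    return non_decision + t, x > 0

rng = random.Random(0)
trials = [ddm_trial(drift=2.0, rng=rng) for _ in range(200)]
accuracy = sum(correct for _, correct in trials) / len(trials)
mean_rt = sum(rt for rt, _ in trials) / len(trials)
```

A two-step account predicts that the harder DE quantifier lengthens `non_decision`, while a pragmatic-processing account predicts it lowers `drift`; the modeling result reported above is that both parameters shift.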
The principle of ‘common but differentiated responsibility’ evolved from the notion of the ‘common heritage of mankind’ and is a manifestation of general principles of equity in international law. The principle recognises historical differences in the contributions of developed and developing States to global environmental problems, and differences in their respective economic and technical capacity to tackle these problems. Despite their common responsibilities, important differences exist between the stated responsibilities of developed and developing countries. The Rio Declaration states: “In view of the different contributions to global environmental degradation, States have common but differentiated responsibilities. The developed countries acknowledge the responsibility that they bear in the international pursuit of sustainable development in view of the pressures their societies place on the global environment and of the technologies and financial resources they command.” Similar language exists in the Framework Convention on Climate Change; parties should act to protect the climate system “on the basis of equity and in accordance with their common but differentiated responsibilities and respective capabilities.” The principle of common but differentiated responsibility includes two fundamental elements. The first concerns the common responsibility of States for the protection of the environment, or parts of it, at the national, regional and global levels. The second concerns the need to take into account the different circumstances, particularly each State’s contribution to the evolution of a particular problem and its ability to prevent, reduce and control the threat.
The cerebellum is classically described in terms of its role in motor control. Recent evidence suggests that the cerebellum supports a wide variety of functions, including timing-related cognitive tasks and perceptual prediction. Correspondingly, deciphering cerebellar function may be important to advance our understanding of cognitive processes. In this paper, we build a model of eyeblink conditioning, an extensively studied low-level function of the cerebellum. Building such a model is of particular interest, since, as of now, it remains unclear how exactly the cerebellum manages to learn and reproduce the precise timings observed in eyeblink conditioning that are potentially exploited by cognitive processes as well. We employ recent advances in large-scale neural network modeling to build a biologically plausible spiking neural network based on the cerebellar microcircuitry. We compare our simulation results to neurophysiological data and demonstrate how the recurrent Granule-Golgi subnetwork could generate the dynamic representations required for triggering motor trajectories in the Purkinje cell layer. Our model is capable of reproducing key properties of eyeblink conditioning, while generating neurophysiological data that could be experimentally verified.
Inspired by work of Stefano Zambelli on these topics, this paper examines the complex nature of the relation between technology and computability. This involves reconsidering the role of computational complexity in economics and then applying this to a particular formulation of the nature of technology as conceived within the Sraffian framework. A crucial element of this is to expand the concept of technique clusters. This allows for understanding that the set of possible techniques is of a higher cardinality of infinity than that of the points on a wage-profit frontier. This is associated with potentially deep discontinuities in production functions and a higher form of uncertainty involved in technological change and growth.
In a recent paper, Barrio, Tajer and Rosenblatt establish a correspondence between metainferences holding in the strict-tolerant logic of transparent truth ST and inferences holding in the logic of paradox LP. They argue that LP is ST’s external logic and they question whether ST’s solution to the semantic paradoxes is fundamentally different from LP’s. Here we establish that, by parity of reasoning, ST can be related to LP’s dual logic K3. We clarify the distinction between internal and external logic and argue that while ST’s nonclassicality can be granted, its self-dual character does not tie it to LP more closely than to K3.
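The three validity notions in play can be checked mechanically over strong Kleene valuations. A minimal sketch (my own illustration of the standard definitions, not the paper's construction): ST demands a tolerantly true conclusion given strictly true premises, LP is tolerant on both sides, and K3 is strict on both.

```python
from itertools import product

# Strong Kleene values: 0 = false, 0.5 = neither/both, 1 = true.
VALUES = (0.0, 0.5, 1.0)
STRICT, TOLERANT = 1.0, 0.5

def valid(premises, conclusion, prem_std, concl_std, n_atoms):
    """Check validity over all strong Kleene valuations of n_atoms atoms:
    whenever every premise meets the premise standard, the conclusion
    must meet the conclusion standard."""
    for v in product(VALUES, repeat=n_atoms):
        if all(p(v) >= prem_std for p in premises) and conclusion(v) < concl_std:
            return False
    return True

neg = lambda a: 1.0 - a
disj = max

# Explosion: A, not-A |= B.   Excluded middle: |= A or not-A.
explosion = dict(premises=[lambda v: v[0], lambda v: neg(v[0])],
                 conclusion=lambda v: v[1], n_atoms=2)
lem = dict(premises=[], conclusion=lambda v: disj(v[0], neg(v[0])), n_atoms=1)

st = lambda arg: valid(prem_std=STRICT, concl_std=TOLERANT, **arg)
lp = lambda arg: valid(prem_std=TOLERANT, concl_std=TOLERANT, **arg)
k3 = lambda arg: valid(prem_std=STRICT, concl_std=STRICT, **arg)
```

Running the checks shows the duality the abstract trades on: ST validates both explosion and excluded middle, K3 validates explosion but not excluded middle, and LP validates excluded middle but not explosion.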
We present an objection to Beall and Henderson’s recent paper defending a solution to the fundamental problem of conciliar Christology using qua or secundum clauses. We argue that certain claims, the acceptance/rejection of which distinguishes the Conciliar Christian from others, fail to so distinguish on Beall and Henderson’s 0-Qua view. This is because, on their 0-Qua account, these claims are either acceptable both to Conciliar Christians and to those who are not Conciliar Christians, or else acceptable to neither.
This paper presents a novel typed term calculus and a reduction relation for it, and proves that the reduction relation is strongly normalizing—that there are no infinite reduction sequences. The calculus is similar to the simply-typed lambda calculus with an empty type, but with a twist. The simply-typed lambda calculus with an empty type bears a close relation to the →, ⊥ fragment of intuitionistic logic ([Howard; Scherer, 2017; Sørensen and Urzyczyn, 2006]); the calculus to be presented here bears a similar relation to the →, ¬ fragment of a logic known as core logic. Because of this connection, I’ll call the calculus core type theory.
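The baseline the paper twists can be sketched concretely. Below is a minimal type checker for the simply-typed lambda calculus over → and an empty type Bot, with negation encoded as Neg(A) = A → Bot (my own toy encoding of the standard intuitionistic fragment, not the paper's core type theory, whose ¬ is primitive):

```python
from dataclasses import dataclass

# Types: the empty type, base types, and function types.
@dataclass(frozen=True)
class Bot: pass

@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

def Neg(a):
    """Standard encoding of negation in the ->, Bot fragment: Neg(A) = A -> Bot."""
    return Arrow(a, Bot())

# Terms: variables, typed lambda abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def typeof(t, ctx=None):
    """Return the type of term t in context ctx, or raise if t is ill-typed."""
    ctx = ctx or {}
    if isinstance(t, Var):
        return ctx[t.name]
    if isinstance(t, Lam):
        return Arrow(t.ty, typeof(t.body, {**ctx, t.var: t.ty}))
    if isinstance(t, App):
        f = typeof(t.fn, ctx)
        assert isinstance(f, Arrow) and typeof(t.arg, ctx) == f.dom, "ill-typed"
        return f.cod
    raise TypeError(t)

A = Base("A")
# \x:A. \f:Neg A. f x  --  double-negation introduction, of type A -> Neg(Neg A).
dni = Lam("x", A, Lam("f", Neg(A), App(Var("f"), Var("x"))))
```

Under Curry-Howard, well-typed terms like `dni` correspond to proofs in the →, ⊥ fragment; the paper's calculus replays this correspondence for the →, ¬ fragment of core logic.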
Consider three claims:
Virtues when fully developed make it possible to see what is the right thing to do without conscious deliberation. Acting on fully developed virtues is the best way to act. Acting on a pocket oracle, which simply tells you in each case what is to be done, misses out on something important in our action. …
Aquinas thinks that for something to be a law, it must be “for the common good” (in addition to satisfying other conditions). Otherwise, the legislation (as we might still call it) is not really a law, and does not morally require obedience except to avoid chaos. …
The Knobe effect is that people judge cases of good and bad foreseen effects differently with respect to intention: in cases of bad effects, they tend to attribute intention, but not so in cases of good effects. …
One of the central arguments in accuracy-first epistemology -- the one that gets the project off the ground, I think -- is the accuracy-dominance argument for Probabilism. …
While the Principle of Double Effect is mostly discussed in the literature in connection with very bad effects, typically death, that trigger deontic concerns, lately philosophers (e.g., Masek) have been noting that double effect reasoning can be important in much more humdrum situations. …
I find it surprising that so many people seem to disagree. Maybe we're primed to disagree because it's a convenient excuse for our moral mediocrity. "Gosh," you say, "I do sure wish I could be morally excellent. …
Dispositions, intrinsicality, and the problem of fit
Posted on Thursday, 06 Aug 2020
In chapter 3 of The Powers Metaphysic, Neil Williams presents a nice problem for dispositionalists: the "problem of fit". …
There is a well-known version of Russell's paradox concerning the bibliography of all bibliographies which fail to list themselves. The usual analysis of this paradox leads to the conclusion that such a bibliography is self-contradictory and so therefore cannot exist. However, as we show, a more searching analysis leads to a rather different conclusion.
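The usual analysis can be put formally. Writing L(x, y) for "bibliography x lists y", and letting B be the bibliography of exactly those bibliographies that fail to list themselves, the defining condition instantiated at B itself yields the contradiction:

```latex
\forall x\,\bigl(L(B,x) \leftrightarrow \neg L(x,x)\bigr)
\;\;\Longrightarrow\;\;
L(B,B) \leftrightarrow \neg L(B,B).
```

This is the standard ground for concluding that no such bibliography exists; the paper's claim is that a more searching analysis resists this step.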