Epistemic diversity is the capacity to produce diverse and rich epistemic apparatuses for making sense of the world around us. In this paper we discuss whether, and to what extent, different conceptions of knowledge – notably as ‘justified true belief’ and as ‘distributed and embodied cognition’ – hinder or foster epistemic diversity. We then link this discussion to the widespread move in science and philosophy towards monolingual disciplinary environments. We argue that English, despite all appearances, is no Lingua Franca, and we give reasons why epistemic diversity is also deeply hindered in monolingual contexts. Finally, we sketch a proposal for a multilingual academia in which epistemic diversity is fostered.
In this series of posts, I will raise some issues for the logical pluralism of Beall & Restall (hereafter 'B&R') - a much-discussed, topic-revivifying view in the philosophy of logic. My study of their view was prompted by Mark Colyvan, whose course on Philosophy of Logic at Sydney Uni I'm helping to teach this year. …
[Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada on the 24th May 2019. …
The Fine-Tuning Argument claims that the life-permitting ranges of various parameters are so narrow that, absent theism, we should be surprised that the parameters fall into those ranges. The normalizability objection is that if a parameter ξ can take any real value, then any finite life-permitting range of values of ξ counts as a “narrow range”, since every finite range is an infinitesimal portion of the full range from −∞ to ∞. …
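The normalizability point can be made precise with two elementary observations (a sketch, not drawn from the excerpt itself): a "uniform" density over all of $\mathbb{R}$ cannot integrate to 1, and any finite life-permitting interval occupies a vanishing proportion of a symmetric window as the window grows.

```latex
% No constant density over the whole real line is normalizable:
\[
  \int_{-\infty}^{\infty} c \, d\xi = \infty
  \quad \text{for every constant } c > 0 .
\]
% And any finite life-permitting range [a, b] is an ever-smaller
% fraction of the window [-R, R] as R grows without bound:
\[
  \lim_{R \to \infty} \frac{b - a}{2R} = 0
  \quad \text{for every finite interval } [a, b] \subset \mathbb{R} .
\]
```

The second limit is the sense in which *every* finite range counts as "narrow" relative to the full range of a real-valued parameter.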
Let’s say that we want prior probabilities for data that can be encoded as a countably infinite binary sequence. Generalized Solomonoff priors work as follows: We have a language L (in the original setting, it’ll be based on Turing machines) and we generate random descriptions in L in a canonical way (e.g., add an end-of-string symbol to L and randomly and independently generate symbols until you hit the end-of-string symbol, and then conditionalize on the string uniquely describing an infinite binary sequence). …
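The generation procedure described above can be sketched in code. This is a toy illustration only: the alphabet, the `#` end-of-string symbol, and the function names are hypothetical, and the final conditionalization step (keeping only strings that uniquely describe an infinite binary sequence) is omitted. In the original Solomonoff setting the "descriptions" would be Turing-machine programs, not raw binary strings.

```python
import random

# Hypothetical toy "language": descriptions are finite strings over a
# binary alphabet, with '#' playing the role of the end-of-string symbol.
ALPHABET = ["0", "1"]
EOS = "#"

def random_description(rng: random.Random) -> str:
    """Generate symbols i.i.d. and uniformly (EOS included) until EOS appears,
    as in the canonical generation scheme described in the text."""
    symbols = ALPHABET + [EOS]
    out = []
    while True:
        sym = rng.choice(symbols)
        if sym == EOS:
            return "".join(out)
        out.append(sym)

def description_prior(s: str) -> float:
    """Probability that random_description returns exactly s: each of the
    len(s) symbols, plus the terminating EOS, has probability 1/3 here."""
    return (1 / 3) ** (len(s) + 1)
```

Because every symbol draw (including the one that terminates the string) has probability 1/3, shorter descriptions automatically get higher prior weight, which is the characteristic feature of this family of priors.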
How to serve two epistemic masters
Posted on Thursday, 23 May 2019
In a 2018 paper, J. Dmitri Gallow shows that it is difficult to combine
multiple deference principles. The argument is a little complicated,
but the basic idea is surprisingly simple. …
In the Tractatus Wittgenstein argued that there are metaphysical truths. But these are ineffable, for metaphysical sentences try to say what can only be shown. Accordingly, they are pseudo-propositions because they are ill-formed. In the Investigations he no longer thought that metaphysical propositions are pseudo-propositions, but argued that they are either nonsense or norms of descriptions. Popper criticized Wittgenstein’s ideas and argued that metaphysical truths are effable. Yet it is by now clear that he misunderstood Wittgenstein’s arguments (namely that metaphysical propositions are ill-formed because they employ unbound variables) and misguidedly thought that Wittgenstein used the principle of verification for distinguishing empirical propositions from metaphysical propositions. Because Popper developed his philosophy in part as a critique of Wittgenstein’s philosophy, this invites the question of whether these misunderstandings have consequences for his own philosophy. I discuss this question and argue that Popper’s attempt to distinguish metaphysics and science with the aid of a criterion of testability is from Wittgenstein’s perspective misguided. The main problem facing Popper’s philosophy is that alleged metaphysical propositions are not theoretical propositions but rules for descriptions (in the misleading guise of empirical propositions). If Wittgenstein’s ideas are correct, then metaphysical problems are not scientific but grammatical problems which can only be resolved through conceptual investigations.
One of the central philosophical debates prompted by general relativity concerns the status of the metric field. A number of philosophers have argued that the metric field should no longer be regarded as part of the background arena in which physical fields evolve; it should be regarded as a physical field itself. Earman and Norton write, for example, that the metric tensor in general relativity ‘incorporates the gravitational field and thus, like other physical fields, carries energy and momentum’.1 Indeed, they baldly claim that according to general relativity ‘geometric structures, such as the metric tensor, are clearly physical fields in spacetime’.2 On such a view, spacetime itself— considered independently of matter—has no metrical properties, and the mathematical object that best represents spacetime is a bare topological manifold. As Rovelli puts the idea: ‘the metric/gravitational field has acquired most, if not all, the attributes that have characterized matter (as opposed to spacetime) from Descartes to Feynman...
It is widely held that consciousness is a maximal property—a property F such that, “roughly, … large parts of an F are not themselves F”. Naturalists have used maximality, for instance, to respond to Merricks’ worry that on naturalism, if Alice is conscious, so is Alice minus a finger, as they both have a brain sufficient for consciousness (see previous link). …
Fuchs and Peres (2000) claimed that standard Quantum Mechanics needs no interpretation. In this essay, I show the flaws of the arguments presented in support of this thesis. Specifically, I claim that the authors conflate QM with Quantum Bayesianism (QBism) - the most prominent subjective formulation of quantum theory - and thereby themselves endorse a specific interpretation of the quantum formalism. Secondly, I explain the main reasons why QBism should not be considered a physical theory, since it is concerned exclusively with agents’ beliefs and is silent about the physics of the quantum regime. Consequently, the solutions to the quantum puzzles provided by this approach cannot be satisfactory from a physical perspective. Thirdly, I evaluate Fuchs and Peres’ arguments against the non-standard interpretations of QM, showing again the fragility of their claims. Finally, I stress the importance of interpretational work in the context of quantum theory.
Within the context of the Quine-Putnam indispensability argument, one discussion about the status of mathematics is concerned with the ‘Enhanced Indispensability Argument’, which makes explicit in what way mathematics is supposed to be indispensable in science, namely explanatory. If there are genuine mathematical explanations of empirical phenomena, an argument for mathematical platonism could be extracted by using inference to the best explanation. The best explanation of the primeness of the life cycles of Periodical Cicadas is genuinely mathematical, according to Baker (2005, 2009). Furthermore, the result is then also used to strengthen the platonist position (e.g. Baker 2017a). We pick up the circularity problem brought up by Leng (2005) and Bangu (2008). We will argue that Baker’s attempt to solve this problem fails, if Hume’s Principle is analytic. We will also provide the opponent of the Enhanced Indispensability Argument with the so-called ‘interpretability strategy’, which can be used to come up with alternative explanations in case Hume’s Principle is non-analytic.
Suppose that you have been invited to attend an ex-partner’s
wedding and that the best thing you can do is accept the invitation
and be pleasant at the wedding. But, suppose furthermore that if you
do accept the invitation, you’ll freely decide to get inebriated
at the wedding and ruin it for everyone, which would be the worst
outcome. The second best thing to do would be to simply decline the
invitation. In light of these facts, should you accept or decline the
invitation? (Zimmerman 2006: 153). The answer to this question hinges
on the actualism/possibilism debate in ethics, which concerns the
relationship between an agent’s free actions and her moral obligations. …
My grad student Joe Moeller is talking at the 4th Symposium on Compositional Structures this Thursday! He’ll talk about his work with Christina Vasilakopolou, a postdoc here at U.C. Riverside. Together they created a monoidal version of a fundamental construction in category theory: the Grothendieck construction! …
The uneducated person blames others for their failures; those who have just begun to be instructed blame themselves; those whose learning is complete blame neither others nor themselves.1 So says Epictetus, spelling out one tenet of Stoic thought: that blame, whether of oneself or another, has no place in a life wisely lived. To blame is unhealthy and dispensable. This tenet long endeared me to Stoicism. For I was, for many years, what Peter Graham calls a ‘blame sceptic’. That is not to say that I resiled from blaming. Rather, I blamed and then reproached myself for doing so. Since reproaching entails blaming, I thereby compounded my felony. And then, reproaching myself for compounding my felony, I compounded it some more.
In a decade of important work, Stephen Smith has marshaled a number of arguments against what he calls ‘the duty view’ of damages awards in private law. The duty view (which might more revealingly have been called ‘the existing duty view’) is the view according to which ‘damage[s] awards confirm existing legal duties to pay damages.’ Generously, I am credited with advancing ‘the most plausible’ version of the duty view, namely the ‘inchoate duty view’ according to which the court makes determinate, by its award, what was up to then an indeterminate legal duty. Smith and I agree, at least arguendo, that by its award the court fixes the amount that the defendant now has a duty to pay. I merely add: ‘and now has a duty to have paid’. This is the addition that Smith rejects.
I am delighted to announce the next symposium in our series on articles from Neuroscience of Consciousness. Neuroscience of Consciousness is an interdisciplinary journal focused on the philosophy and science of consciousness, and gladly accepts submissions from both philosophers and scientists working in this fascinating field. We have two types of symposia. …
Yesterday, at the invitation of a student, I did a Marian pilgrimage to Walsingham. If you have a chance to go, go. It’s worth it for spiritual reasons. But here I want to reflect on a metaphysics of time question, related to the experience of participating in this venerable institution. …
There has been an ongoing debate about whether desires are beliefs. Call the claim that they are the desire-as-belief thesis (DAB). This paper sets out to impugn the two versions of DAB that have enjoyed the most support in the philosophical literature: the guise of the good and the guise of reasons accounts. According to the guise of the good version of DAB, the desire to φ is identical to the belief that φ is good. According to the guise of reasons version of DAB, the desire to φ is identical to the belief that one has a normative reason to φ. My paper presents a pair of objections to DAB: the first specifically targets the guise of reasons account defended by Alex Gregory, while the second aims to undermine DAB more generally.
Plausibly—though there are some set-theoretic worries that require some care if the language is rich enough—for a fixed language, there are only countably many situations we can describe. Consequently, we only need to do Bayesian epistemology for countably many events. …
The three central tenets of traditional Bayesian epistemology are these:
Precision Your doxastic state at a given time is represented by a credence function, $c$, which takes each proposition $X$ about which you have an opinion and returns a single numerical value, $c(X)$, that measures the strength of your belief in $X$. …
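The Precision tenet can be illustrated with a minimal sketch (the propositions and values below are hypothetical, not from the excerpt): a doxastic state is just a single numerical assignment to each proposition about which the agent has an opinion.

```python
# A credence function c as a plain mapping: one proposition in, one
# numerical degree of belief out -- exactly what Precision requires.
credences = {
    "rain_tomorrow": 0.3,
    "no_rain_tomorrow": 0.7,
}

def c(proposition: str) -> float:
    """Return the single numerical value measuring strength of belief."""
    return credences[proposition]
```

Rival views (e.g. imprecise credences) would replace the single number with a set or interval of values; Precision rules that out by fiat.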
The combination of panpsychism and priority monism leads to priority cosmopsychism, the view that the consciousness of individual sentient creatures is derivative of an underlying cosmic consciousness. It has been suggested that contemporary priority cosmopsychism parallels central ideas in the Advaita Vedānta tradition. The paper offers a critical evaluation of this claim. It argues that the Advaitic account of consciousness cannot be characterized as an instance of priority cosmopsychism, points out the differences between the two views, and suggests an alternative positioning of the Advaitic canon within the contemporary debate on monism and panpsychism.
It is almost unanimously accepted in the moral luck literature that Kant denies resultant moral luck—that is, he denies that the lucky consequence of a person’s action can affect how much praise or blame she deserves. Philosophers often point to the famous good will passage at the beginning of the Groundwork to justify this claim. I argue, however, that this passage does not support Kant’s denial of resultant moral luck. Subsequently, I argue that Kant allows agents to be morally responsible for certain kinds of lucky consequences. Even so, I argue that it is unclear whether Kant ultimately endorses resultant moral luck. The reason is that Kant does not write enough on moral responsibility for consequences to determine definitively whether he thinks that the lucky consequence for which an agent is morally responsible can add to her degree of praiseworthiness or blameworthiness. The clear upshot, however, is that Kant does not deny resultant moral luck.
Need considerations play an important role in empirically informed theories of distributive justice. We propose a concept of need-based justice that is related to social participation and provide an ethical measurement of need-based justice. The β-ε-index satisfies the need-principle, monotonicity, sensitivity, transfer and several ‘technical’ axioms. A numerical example is given.
Curiously, people assign less punishment to a person who attempts and fails to harm somebody if their intended victim happens to suffer the harm for coincidental reasons. This “blame blocking” effect provides important evidence in support of the two-process model of moral judgment (Cushman, 2008). Yet, recent proposals suggest that it might be due to an unintended interpretation of the dependent measure in cases of coincidental harm (Prochownik, 2017; also Malle, Guglielmo, & Monroe, 2014). If so, this would deprive the two-process model of an important source of empirical support. We report and discuss results that speak against this alternative account.
The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. AI4SG has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating, from the analysis of 27 case studies of AI4SG projects, seven ethical factors that are essential for future AI4SG initiatives. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
The standard Catholic view of tubal pregnancy is that it is permissible to remove the tube with the child. The idea seems to be that the danger to the mother comes from the potential rupture of the tube, and hence removal of the tube is removal of that which poses the danger, and the death of the child is a non-intended side-effect, with the action justified by double effect. …
Art can be addressed, not just to individuals, but to groups. Art can even be part of how groups think to themselves – how they keep a grip on their values over time. I focus on monuments as a case study. Monuments, I claim, can function as a commitment to a group value, for the sake of long-term action guidance. Art can function here where charters and mission statements cannot, precisely because of art’s powers to capture subtlety and emotion. In particular, art can serve as the vessel for group emotions, by making emotional content sufficiently public so as to be the object of a group commitment. Art enables groups to guide themselves with values too subtle to be codified.
‘Knowledge-how’ is the knowledge you have when you know how to do something. For example, when you know how to dance the tango, or solve a certain equation, or ride a bike, etc. Influenced by Ryle (1949), the traditional view of knowledge-how had two components: (1) a negative claim (anti-intellectualism) that knowledge-how is not any kind of knowledge-that (or any other propositional attitude state); and (2) a positive claim (abilitism or dispositionalism) that knowledge-how is some kind of ability or complex dispositional state. This traditional Rylean view was, for a long time, a largely unquestioned feature of philosophical orthodoxy. There were occasional challenges to the traditional view but these challenges generated little sustained debate, and did not seriously threaten the orthodox status of Ryleanism.
We argue that comparative psychologists have been too quick to jump to metacognitive interpretations of their data. We examine two such cases in some detail. One concerns so-called “uncertainty monitoring” behavior, which we show to be better explained in terms of first-order estimates of risk. The other concerns informational search, which we argue is better explained in terms of a first-order curiosity-like motivation that directs questions at the environment.