Conspiracy theorists believe that powerful agents are conspiring to achieve their nefarious aims and also to orchestrate a cover-up. People who suffer from impostor syndrome believe that they are not talented enough for the professional positions they find themselves in, and that they risk being revealed as inadequate. These are quite different outlooks on reality, and there is no reason to think that they are mutually reinforcing. Nevertheless, there are intriguing parallels between the patterns of trust and distrust which underpin both conspiracy theorising and impostor thinking. In both cases subjects distrust standard sources of information, instead regarding themselves as having special insight into the underlying facts of the matter. In both cases, seemingly anomalous data takes on special significance. And in both cases, the content of belief dictates the epistemic behaviour of the believer. This paper explores these parallels, to suggest new avenues of research into both conspiracy theorising and impostor syndrome, including questions about whether impostor syndrome inevitably involves a personal failure of rationality, and issues about how, if at all, it is possible to convince others to abandon either conspiracy theories or impostor attitudes.
People are described as suffering from impostor syndrome when they feel that their external markers of success are unwarranted, and fear being revealed as a fraud. Impostor syndrome is commonly framed as a troubling individual pathology, to be overcome through self-help strategies or therapy. But in many situations an individual’s impostor attitudes can be epistemically justified, even if they are factually mistaken: hostile social environments can create epistemic obstacles to self-knowledge. The concept of impostor syndrome prevalent in popular culture needs greater critical scrutiny, as does its source, the concept of impostor phenomenon which features in psychological research.
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s, but this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to materialize. Today, we are once again experiencing a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then outline an alternative approach to language-centric AI, in which we identify a role for philosophy.
Are there nonexistent objects, i.e., objects that do not exist? Some examples
often cited are: Zeus, Pegasus, Sherlock Holmes, Vulcan (the hypothetical planet postulated by the 19th century astronomer Le Verrier), the perpetual
motion machine, the golden mountain, the fountain of youth, the round
square, etc. Some important philosophers have thought that the very
concept of a nonexistent object is contradictory (Hume) or logically
ill-formed (Kant, Frege), while others (Leibniz, Meinong, the Russell
of Principles of Mathematics) have embraced it
wholeheartedly. One of the reasons why there are doubts about the concept of a
nonexistent object is this: to be able to say truly of an object that
it doesn’t exist, it seems that one has to presuppose that it exists,
for doesn’t a thing have to exist if we are to make a true claim about it?
Zombies in philosophy are imaginary creatures designed to illuminate
problems about consciousness and its relation to the physical world. Unlike the ones in films or witchcraft, they are exactly like us in
all physical respects but without conscious experiences: by definition
there is ‘nothing it is like’ to be a zombie. Yet zombies
behave just like us, and some even spend a lot of time discussing
consciousness. Few people, if any, think zombies actually exist. But many hold that
they are at least conceivable, and some that they are possible. It
seems that if zombies really are possible, then physicalism is false
and some kind of dualism is true.
Although commonly confused with republicanism, civic
humanism forms a separate and distinct phenomenon in the history of
Western political thought. Republicanism is a political philosophy
that defends a concept of freedom as non-domination, and identifies
the institutions that protect it (Pettit 1999). In particular,
republicanism stands against two alternative theories of politics. The
first is despotism, especially as manifested in any form of one-man
rule; a republic is self-governing, and so are its denizens. The
second is liberalism, which posits the primacy of the autonomous
individual vis-à-vis public order and government; the
republican values civic engagement in order to realize a form of
liberty achievable only in and through the community.
We standardly evaluate counterfactuals and abilities in temporally asymmetric terms—by keeping the past fixed and holding the future open. Only future events depend counterfactually on what happens now. Past events do not. Conversely, past events are relevant to what abilities one has now in a way that future events are not. Lewis, Sider and others continue to evaluate counterfactuals and abilities in temporally asymmetric terms, even in cases of backwards time travel. I’ll argue that we need more temporally neutral methods. The past shouldn’t always be held fixed, because backwards time travel requires backwards counterfactual dependence. Future events should sometimes be held fixed, because they’re in the causal history of the past, and agents have evidence of them independently of their decisions now. We need temporally neutral methods to maintain connections between causation, counterfactuals and evidence, and if counterfactuals are used to explain the temporal asymmetry of causation.
Our main result so far is a characterization of Schnorr randomness and Martin-Löf randomness in terms of Lévy’s classical upward convergence theorem in martingale theory. This is interesting philosophically because it suggests that randomness notions should be brought to bear on the interpretation of convergence-to-the-truth results.
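The convergence-to-the-truth behaviour at issue can be illustrated numerically. The following sketch is mine, not the authors' (the choice of random variable, the 40-bit horizon, and the seed are all illustrative assumptions): for X = Σᵢ 2⁻ⁱ·bᵢ with i.i.d. fair coin bits bᵢ, the conditional expectation of X given the first n bits is the partial sum plus 2⁻⁽ⁿ⁺¹⁾ (the mean of the unseen tail), and Lévy's upward theorem guarantees it converges to X.

```python
import random

def conditional_expectations(bits):
    """For X = sum_i 2^-i * b_i with i.i.d. fair bits b_i, return
    E[X | b_1, ..., b_n] for each n: the partial sum plus 2^-(n+1),
    the mean of the unseen tail.  Levy's upward theorem says these
    conditional expectations converge to X itself."""
    partial, out = 0.0, []
    for n, b in enumerate(bits, start=1):
        partial += b * 2.0 ** -n
        out.append(partial + 2.0 ** -(n + 1))  # E[X | first n bits]
    return out

rng = random.Random(0)
bits = [rng.randint(0, 1) for _ in range(40)]
x = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
errs = [abs(c - x) for c in conditional_expectations(bits)]
# After seeing n bits the error is at most 2^-(n+1): convergence to the truth.
```

The error bound halves with every observed bit, which is the almost-sure convergence of the theorem made visible in a single sample path.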
Suppose a dog lives forever. Assuming the dog stays roughly dog-sized, there is only a finite number of configurations of the dog’s matter (disregarding insignificant differences on the order of magnitude of a Planck length, say). …
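The counting point is the pigeonhole principle applied to a deterministic dynamics: a trajectory through a finite state space must eventually revisit a state, and from then on it cycles. A toy sketch, with an invented 1000-state "configuration space" standing in for the dog's matter (the update rule is arbitrary, chosen only to be deterministic):

```python
def find_recurrence(step, initial_state):
    """Iterate a deterministic update until some state repeats.
    Returns (time_of_first_repeat, cycle_length).  Pigeonhole: with
    finitely many states, a repeat is guaranteed within |states| steps,
    and determinism then forces the trajectory to cycle forever."""
    seen = {}
    state, t = initial_state, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t, t - seen[state]

# Toy "dog": 1000 possible configurations, arbitrary deterministic dynamics.
N = 1000
step = lambda s: (7 * s + 3) % N
t, cycle = find_recurrence(step, 0)
assert t <= N        # a repeat must occur within N steps
```

With finitely many configurations and infinite time, eternal recurrence is forced, whatever the dynamics.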
Extended cognition theorists argue that cognitive processes constitutively depend on resources that are neither organically composed, nor located inside the bodily boundaries of the agent, provided certain conditions on the integration of those processes into the agent’s cognitive architecture are met. Epistemologists, however, worry that in so far as such cognitively integrated processes are epistemically relevant, agents could thus come to enjoy an untoward explosion of knowledge. This paper develops and defends an approach to cognitive integration—cluster-model functionalism—which finds application in both domains of inquiry, and which meets the challenge posed by putative cases of cognitive or epistemic bloat.
It is widely accepted that doxa, which plays a major role in Plato’s and Aristotle’s epistemologies, is the Ancient counterpart of belief. We argue against this consensus: doxa is not generic taking-to-be-true, but instead something closer to mere opinion. We then show that Plato shows little sign of interest in the generic notion of belief; it is Aristotle who systematically develops that notion, under the rubric of hupolêpsis (usually translated as ‘supposition’), a much-overlooked notion that is, we argue, central to his epistemology. We close by considering the significance of this development, outlining the shifts in epistemological concerns enabled by the birth of belief as a philosophical notion.
I will contrast the two main approaches to the foundations of statistical mechanics: the individualist (Boltzmannian) approach and the ensemblist approach (associated with Gibbs). I will indicate the virtues of each, and argue that the conflict between them is perhaps not as great as often imagined.
It is a staple of sermons on love that we are required to love our neighbor, not like them. I think this is true. But it seems to me that in many cases, perhaps even most cases, _dis_liking people is a moral flaw. …
Here is an interesting metaphysical thesis about mathematics: Σ⁰₁ alethic Platonism. According to Σ⁰₁ alethic Platonism, every sentence about arithmetic with only one unbounded existential quantifier (i.e., an existential quantifier that ranges over all natural numbers, rather than all the natural numbers up to some bound), i.e., every Σ⁰₁ sentence, has an objective truth value. …
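A Σ⁰₁ sentence asserts that some natural number satisfies a decidable property, and such sentences are exactly the semi-decidable ones: an unbounded search halts with a witness iff the sentence is true. A sketch of that logical shape (the example predicates and function name are mine, purely for illustration):

```python
from itertools import count

def sigma01_search(decidable_property, bound=None):
    """Semi-decision procedure for a Sigma^0_1 sentence
    'there exists n with P(n)': try n = 0, 1, 2, ... and halt with a
    witness if one turns up.  With no bound, the search runs forever
    exactly when the sentence is false -- truth is verifiable,
    falsity is not."""
    ns = range(bound) if bound is not None else count()
    for n in ns:
        if decidable_property(n):
            return n      # witness found: the sentence is true
    return None           # no witness below the bound

# A true Sigma^0_1 sentence: some n > 1 divides 91 (witness: 7).
assert sigma01_search(lambda n: n > 1 and 91 % n == 0, bound=100) == 7
```

The asymmetry between verifying and refuting is what gives the Σ⁰₁ class its special status in debates about objective truth values.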
We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. …
Influenced by the renaissance of general relativity that came to pass in the 1950s, the character of cosmology fundamentally changed in the 1960s as it became a well-established empirical science. Although observations came to dominate its practice, extra-theoretical beliefs and principles reminiscent of methodological debates in the 1950s continued to play an important tacit role in cosmological considerations. Specifically, belief in cosmologies that modeled a “closed universe” based on Machian insights remained influential. The rise of the dark matter problem in the early 1970s serves to illustrate this hybrid methodological character of cosmological science.
In a recent paper [“Quantum Mechanics in a Time-Asymmetric Universe: On the Nature of the Initial Quantum State”, The British Journal for the Philosophy of Science, 2018], Chen uses density matrix realism to solve the puzzles of the arrow of time and the meaning of the quantum state. In this paper, I argue that density matrix realism is problematic and, in particular, inconsistent with the latest results about the reality of the wave function.
Can we reverse time to before this hypefest started? The purpose of this post is mostly just to signal-boost Konstantin Kakaes’s article in MIT Technology Review, entitled “No, scientists didn’t just ‘reverse time’ with a quantum computer.” The title pretty much says it all—but if you want more, you should read the piece, which includes the following droll quote from some guy calling himself “Director of the Quantum Information Center at the University of Texas at Austin”:
If you’re simulating a time-reversible process on your computer, then you can ‘reverse the direction of time’ by simply reversing the direction of your simulation. …
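The quoted point can be made concrete: if every update step of a simulation is invertible, "reversing time" is just applying the inverse steps in reverse order. A minimal sketch with an invented reversible map (this is my toy illustration, not the experiment discussed in the article):

```python
def step(state, m=2**32):
    """A simple invertible update on pairs of 32-bit integers."""
    x, y = state
    return (y, (x + y * y) % m)

def unstep(state, m=2**32):
    """Exact inverse of step: recover x from the stored y and output."""
    y, z = state
    return ((z - y * y) % m, y)

start = (123, 456)
s = start
for _ in range(1000):   # run the simulation "forward in time"
    s = step(s)
for _ in range(1000):   # "reverse the direction of time"
    s = unstep(s)
assert s == start       # the initial state is recovered exactly
```

Nothing thermodynamically remarkable happens here: reversibility of the rule, not any quantum resource, is doing all the work.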
Ernest Sosa gave a lovely and fascinating talk yesterday at UC Riverside on the importance of "firsthand intuitive insight" in philosophy. It has me thinking about the extent to which we ought, or ought not, defer to ethical experts when we are otherwise inclined to disagree with their conclusions. …
In this essay, I discuss Dennett’s From Bacteria to Bach and Back: The Evolution of Minds (hereafter From Bacteria) and Godfrey-Smith’s Other Minds: The Octopus and The Evolution of Intelligent Life (hereafter Other Minds) from a methodological perspective. I show that both books instantiate what I call ‘synthetic philosophy.’ Their authors are both Darwinian philosophers of science who draw on each other’s work (with considerable mutual admiration). In what follows I first elaborate on synthetic philosophy in light of From Bacteria and Other Minds; I also explain my reasons for introducing the term; and I close by looking at the function of Darwinism in contemporary synthetic philosophy.
Over the last 20 years, the concept of natural selection has been highly debated in the philosophy of biology. Yet, most discussions on this topic have focused on the questions of whether natural selection is a causal process and whether it can be distinguished from drift. In this paper, I identify another sort of problem with respect to natural selection. I show that, in so far as a classical definition of fitness includes the transmission of a type between generations as part of the definition, it seems difficult to see how the fitness of an entity, following this definition, could be description independent. In fact, I show that by including type transmission as part of the definition of fitness, changing the grain at which the type of an entity is described can change the fitness of that entity. If fitness is not grain-of-description independent, this further propagates to the process of natural selection itself. I call this problem the ‘reference grain problem’. I show that it can be linked to the reference class problem in probability theory. I tentatively propose two solutions to it.
Born on the island of Martinique under French colonial rule, Frantz
Omar Fanon (1925–1961) was one of the most important writers in
black Atlantic theory in an age of anti-colonial liberation struggle. His work drew on a wide array of poetry, psychology, philosophy, and
political theory, and its influence across the global South has been
wide, deep, and enduring. In his lifetime, he published two key
original works: Black Skin, White Masks (Peau noire,
masques blancs) in 1952 and The Wretched of the Earth
(Les damnés de la terre) in 1961. Collections of essays,
A Dying Colonialism (L’an V de la révolution
algérienne, 1959) and Toward the African
Revolution (Pour la révolution africaine), posthumously
published in 1964, round out a portrait of a radical thinker in
motion, moving from the Caribbean to Europe to North Africa to
sub-Saharan Africa and transforming his thinking at each stop.
A more polished version of this article appeared on Nautilus on 2019 February 28. This version has some more material.

How I Learned to Stop Worrying and Love Algebraic Geometry
In my 50s, too old to become a real expert, I have finally fallen in love with algebraic geometry. …
The absolute pessimistic induction states that earlier theories, although successful, were abandoned, so current theories, although successful, will also be abandoned. By contrast, the relative pessimistic induction states that earlier theories, although superior to their predecessors, were discarded, so current theories, although superior to earlier theories, will also be discarded. Some pessimists would have us believe that the relative pessimistic induction avoids empirical progressivism. I argue, however, that it has the same problem as the absolute pessimistic induction, viz., either its premise is implausible or its conclusion does not probably follow from its premise.
Bell’s Theorem is the collective name for a family of
results, all of which involve the derivation, from a condition on
probability distributions inspired by considerations of local
causality, together with auxiliary assumptions usually thought of as
mild side-assumptions, of probabilistic predictions about the results
of spatially separated experiments that conflict, for appropriate
choices of quantum states and experiments, with quantum mechanical
predictions. These probabilistic predictions take the form of
inequalities that must be satisfied by correlations derived from any
theory satisfying the conditions of the proof, but which are violated,
under certain circumstances, by correlations calculated from quantum theory.
According to epistemicists, there is a precise height which separates people who are tall from those who are not tall, though we can never know what it is. This view has struck many as preposterous, but it is harder to resist than one might think. For what seems most hard to accept about it—that vague words like ‘tall’ impose unknowable semantic boundaries—is also a commitment of alternative, nonclassical semantic theories. To resist epistemicism, we need two things: an argument that vague terms cannot impose unknowable semantic boundaries, and a sketch of a viable alternative—a theory of meaning that does without unknowable boundaries. I attempt to provide both.
The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once. …
I found this article, apparently by Ted Nordhaus and Alex Trembath, to be quite thought-provoking. At times it sinks too deep into the moment’s politics for my taste, given that the issues it raises will probably be confronting us for the whole 21st century. …
The Higgs naturalness principle served as the basis for the so far failed prediction that signatures of physics beyond the Standard Model (SM) would be discovered at the LHC. One influential formulation of the principle, which prohibits fine tuning of bare SM parameters, rests on the assumption that a particular set of values for these parameters constitutes the “fundamental parameters” of the theory, and serves to mathematically define the theory. On the other hand, an old argument by Wetterich suggests that fine tuning of bare parameters merely reflects an arbitrary, inconvenient choice of expansion parameters and that the choice of parameters in an EFT is therefore arbitrary. We argue that these two interpretations of Higgs fine tuning reflect distinct ways of formulating and interpreting effective field theories (EFTs) within the Wilsonian framework: the first takes an EFT to be defined by a single set of physical, fundamental bare parameters, while the second takes a Wilsonian EFT to be defined instead by a whole Wilsonian renormalization group (RG) trajectory, associated with a one-parameter class of physically equivalent parametrizations. From this latter perspective, no single parametrization constitutes the physically correct, fundamental parametrization of the theory, and the delicate cancellation between bare Higgs mass and quantum corrections appears as an eliminable artifact of the arbitrary, unphysical reference scale with respect to which the physical amplitudes of the theory are parametrized. While the notion of fundamental parameters is well motivated in the context of condensed matter field theory, we explain why it may be superfluous in the context of high energy physics.
While science is taken to differ from non-scientific activities in virtue of its methodology, metaphysics is usually defined in terms of its subject matter. However, many traditional questions of metaphysics are addressed in a variety of ways by science, making it difficult to demarcate metaphysics from science solely in terms of their subject matter. Are the methodologies of science and metaphysics sufficiently distinct to act as criteria of demarcation between the two? In this chapter we focus on several important overlaps in the methodologies used within science and metaphysics in order to argue that focusing solely on methodology is insufficient to offer a sharp demarcation between metaphysics and science, and consider the consequences of this for the wider relationship between science and metaphysics.