This paper aims to build a bridge between two areas of philosophical research, the structure of kinds and metaphysical modality. Our central thesis is that kinds typically involve super-explanatory properties, and that these properties lie behind all substantial cases of metaphysical necessity.
Consciousness raises a range of philosophical questions. We can distinguish between the How?, Where?, and What? questions. First, how does consciousness relate to other features of reality? Second, where are conscious phenomena located in reality? And, third, what is the nature of consciousness?
The rapid expansion of psychological research on unconscious processes has brought with it a similar expansion in philosophical discussion of what to make of these processes. One such discussion has asked whether we can be responsible for actions produced by unconscious processes – whether we should be praised or blamed for them. A venerable view holds conscious intentions to be necessary for agency and action. This straightforwardly rules out behaviours caused by unconscious processes as instances of responsibility-apt action. Levy (2014) provides perhaps the most comprehensive argument for this point, but it is often implicitly assumed. In fact, many arguments nominally aimed at supporting the claim that we can be responsible for automatically produced actions do so either by tracing a relation between the unconscious process and some conscious process, such that conscious processing still really shoulders the weight (Wigley 2007; Levy and Bayne 2004), or by advocating externalism about responsibility for unconscious action (Washington and Kelly 2016).
This is a critical discussion of Paul Humphreys’s fusion view of emergence, focusing on the basal loss feature of his ontology. The discussion yields some general morals for special science ontology. 1. Introduction. In a series of papers (1996, 1997a, 1997b, and 2000), Paul Humphreys presents an original vision of what special science ontology might be. Humphreys’s speculative proposal—call it fusion emergentism—is based on “taking singular interactions [‘fusions’] as the basis of one form of emergentism” (Humphreys 1996, 53). What is most distinctive in fusion emergentism is Humphreys’s property fusion operation, which takes property instances (at the ith level) and generates an emergent property instance (at the i + 1st level) with novel causal powers. When property instances at the generating ith level are fused, the individual property instances are destroyed and are nonindividuable within the emergent fusion existing at the i + 1st level. Call this the basal loss feature of fusion emergentism.
Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: First, the 'purity' of a data model is not a measure of its epistemic reliability. Instead, it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved 'vicariously', such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate (or inadequate) for particular purposes.
We assess the arguments for recognising functionally integrated multi-species consortia as genuine biological individuals, including cases of so-called ‘holobionts’. We provide two examples in which the same core biochemical processes that sustain life are distributed across a consortium of individuals of different species. Although the same chemistry features in both examples, proponents of the holobiont as a unit of evolution would recognize one of the two cases as a multi-species individual, whilst they would consider the other a compelling case of ecological dependence between separate individuals. Some widely used arguments in support of the ‘holobiont’ concept apply equally to both cases, suggesting that those arguments have misidentified what is at stake when seeking to identify a new level of biological individuality. One important aspect of biological individuality is evolutionary individuality. In line with other work on the evolution of individuality, we show that our cases can be distinguished by focusing on the fitness alignment between the partners of the consortia. We conclude that much of the evidence currently presented for the ubiquity and importance of multi-species individuals is simply not to the point, at least unless the issue of biological individuality is firmly divorced from the question of evolutionary individuality.
In this paper I critically evaluate Reisman and Forber’s (Philos Sci 72(5):1113–1123, 2005) arguments that drift and natural selection are population-level causes of evolution based on what they call the manipulation condition. Although I agree that this condition is an important step for identifying causes of evolutionary change, it is insufficient. Following Woodward, I argue that the invariance of a relationship is another crucial parameter to take into consideration for causal explanations. Starting from Reisman and Forber’s example of drift, and after briefly presenting the criterion of invariance, I show that once both the manipulation condition and the criterion of invariance are taken into account, drift, in this example, is better understood as an individual-level rather than a population-level cause. Finally, I concede that it is legitimate to interpret natural selection and drift as population-level causes when they rely on genuinely indeterministic events and in some cases of frequency-dependent selection.
Excerpts from the Preface:
The Statistics Wars:
Today’s “statistics wars” are fascinating: They are at once ancient and up to the minute. They reflect disagreements on one of the deepest, oldest, philosophical questions: How do humans learn about the world despite threats of error due to incomplete and variable data? …
The rise and fall of spectators performing “the wave” in a football stadium offers an analogy for how brain waves ripple across the cortex and lower brain. In both, the underlying actors (humans, neurons) serve multiple roles. First, in the stadium, each spectator dutifully passes along each wave to his neighbors. Second, any motivated spectator can initiate his own wave and enlist his neighbors’ support to broadcast it to the rest of the stadium. Third, a spectator can perceive incoming waves, and retain a memory of historical wave patterns (frequency and amplitude changes) in his local, private notebook. Fourth, a spectator can scour his library of existing notebooks (assuming he has these with him) to compare new incoming wave patterns with legacy patterns. Fifth, a spectator can assign himself a unique name within the stadium. Sixth, a spectator can broadcast (via waves) an inquiry to any other spectator in the stadium and receive a reply, addressing the other spectator by their unique name. Seventh, a spectator can train himself to learn more specifics and subtleties about his environment and make this skill available to any other spectator who requests it.
My student Brandon Coya has finished his thesis!

• Brandon Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2018.

It’s about networks in engineering. …
The stupidest story I ever wrote (it was a long flight)
PDQP/qpoly = ALL
I’ve put up a new paper. Unusually for me these days, it’s a very short and simple one (8 pages)—I should do more like this! Here’s the abstract:
We show that combining two different hypothetical enhancements to quantum computation—namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. …
All the legal maneuvers, the decades of recriminations, came down in the end to two ambiguous syllables. No one knew why old man Memeson had named his two kids “Laurel” and “Yanny,” or why his late wife had gone along with it. …
This chapter defends a (minimal) realist conception of progress in scientific understanding in the face of the ubiquitous plurality of perspectives in science. The argument turns on the counterfactual-dependence framework of explanation and understanding, which is illustrated and evidenced with reference to different explanations of the rainbow.
But the scanty wisdom of man, on entering into an affair which looks well at first, cannot discern the poison that is hidden in it, as I have said above of hectic fevers. Therefore, if he who rules a principality cannot recognize evils until they are upon him, he is not truly wise; and this insight is given to few. …
Among the philosophical disciplines transmitted to the Arabic and Islamic world from the Greeks, metaphysics was of paramount importance, as its pivotal role in the overall history of the transmission of Greek thought into Arabic makes evident. The beginnings of Arabic philosophy coincide with the production of the first extensive translation of Aristotle’s Metaphysics, within the circle of translators associated with the founder of Arabic philosophy, al-Kindī. The so-called “early” or “classical” phase of falsafa ends with the largest commentary on the Metaphysics available in Western philosophy, by Ibn Rushd (Averroes).
It is often thought that the vindication of experimental work lies in its capacity to be revelatory of natural systems. I challenge this idea by examining laboratory experiments in ecology. A central task of community ecology involves combining mathematical models and observational data to identify trophic interactions in natural systems. But many ecologists are also lab scientists: constructing microcosm or ‘bottle’ experiments, physically realizing the idealized circumstances described in mathematical models. What vindicates such ecological experiments? I argue that ‘extrapolationism’, the view that ecological lab work is valuable because it generates truths about natural systems, does not exhaust the epistemic value of such practices. Instead, bottle experiments also generate ‘understanding’ of both ecological dynamics and empirical tools. Some lab-work, then, aids theoretical understanding, as well as targeting hypotheses about nature.
It is often assumed that one couldn’t finitely specify a nonmeasurable set. In this post I will argue for two theses:
It is possible that someone finitely specifies a nonmeasurable set. It is possible that someone finitely specifies a nonmeasurable set and reasonably believes—and maybe even knows—that she is doing so. …
Zero provides a challenge for philosophers of mathematics with realist inclinations. On the one hand it is a bona fide number, yet on the other it is linked to ideas of nothingness and non-being. This paper provides an analysis of the epistemology and metaphysics of zero. We develop several constraints and then argue that a satisfactory account of zero can be obtained by integrating recent work in numerical cognition with a philosophical account of absence perception.
Each thing is fundamental. Not only is no thing any more or less real than any other, but no thing is prior to another in any robust ontological sense. Thus, no thing can explain the very existence of another, nor account for how another is what it is. I reach this surprising conclusion by undermining two important positions in contemporary metaphysics: hylomorphism and hierarchical views employing so-called building relations, such as grounding. The paper has three main parts. First, I observe that hylomorphism is alleged by its proponents to solve various philosophical problems. However, I demonstrate, in light of a compelling account of explanation, that these problems are actually demands to explain what cannot be but inexplicable. Second, I show how my argument against hylomorphism illuminates an account of the essence of a thing, thereby providing insight into what it is to exist. This indicates what a thing, in the most general sense, must be, and yields a correlative account of the structure in reality. Third, I argue that this account of structure is incompatible not only with hylomorphism, but also with any hierarchical view of reality. Although hylomorphism and the latter views are quite different, representing distinct philosophical traditions, I maintain they share untenable accounts of structure and fundamentality and so should be rejected on the same grounds.
Let us motivate this claim in more detail. Experimentation is a key element when characterizing simulation modeling, precisely because it occurs in two varieties. The first variety has been called theoretical model, computer, or numerical experiments. We prefer to call them simulation experiments. They are used to investigate the behavior of models. Clearly, simulation offers new possibilities for conducting experiments of this sort and hence for investigating models beyond what is tractable by theoretical analysis. We are interested in how simulation experiments function in simulation modeling. Importantly, relevant properties of simulation models can be known only by simulation experiments. There are two immediate and important consequences. First, simulation experiments are unavoidable in simulation modeling. Second, when researchers construct a model and want to find out how possible elaborations of the current version perform, they will have to conduct repeated experiments.
We provide a novel perspective on “regularity” as a property of representations of the Weyl algebra. We first critique a proposal by Halvorson [2004, “Complementarity of representations in quantum mechanics”, Studies in History and Philosophy of Modern Physics 35(1), pp. 45–56], who argues that the non-regular “position” and “momentum” representations of the Weyl algebra demonstrate that a quantum mechanical particle can have definite values for position or momentum, contrary to a widespread view. We show that there are obstacles to such an interpretation of non-regular representations. In Part II, we propose a justification for focusing on regular representations, pace Halvorson, by drawing on algebraic methods.
A critical survey of some attempts to define ‘computer’, beginning with some informal ones (from reference books, and definitions due to H. Simon, A.L. Samuel, and M. Davis), then critically evaluating those of three philosophers (J.R. Searle, P.J. Hayes, and G. Piccinini), and concluding with an examination of whether the brain and the universe are computers.
The sustained failure of efforts to design an infinite lottery machine using ordinary probabilistic randomizers is traced back to a problem familiar to set theorists: there are no constructive prescriptions for probabilistically non-measurable sets. Yet construction of such sets is required if we are to be able to read the result of an infinite lottery machine that is built from ordinary probabilistic randomizers. All such designs face a dilemma: they can provide an accessible (readable) result with probability zero; or an inaccessible result with probability greater than zero.
“If a statistical analysis is clearly shown to be effective … it gains nothing from being … principled,” according to Terry Speed in an interesting IMS article (2016) that Harry Crane tweeted about a couple of days ago [i]. …
My thesis in this paper is a fairly simple one, and one, I believe, that is fairly simple to support on rational grounds – although I imagine it will prove controversial among some dedicated to a strictly ‘scientific’ understanding of life. But the thesis can be stated simply enough: A materialist interpretation of evolutionary theory cannot account for the subjective dimension of life, and, in particular, cannot account for that aspect of life of most concern to religion, its spiritual aspirations. Indeed, not only can it not account for desire of a spiritual sort, it cannot account for desire at all, not even the desire for physical survival, which it presupposes.
There exists a common view that for theories related by a ‘duality’, dual models typically may be taken ab initio to represent the same physical state of affairs, i.e. to correspond to the same possible world. We question this view, by drawing a parallel with the distinction between ‘interpretational’ and ‘motivational’ approaches to symmetries.
Recent literature on noncausal explanation raises the question of whether explanatory monism, the thesis that all explanations submit to the same analysis, is true. The leading monist proposal holds that all explanations support change-relating counterfactuals. We provide several objections to this monist position.
Objects are central in visual, auditory, and tactual perception. But what counts as a perceptual object? I address this question via a structural unity schema, which specifies how a collection of parts must be arranged to compose an object for perception. On the theory I propose, perceptual objects are composed of parts that participate in causally sustained regularities. I argue that this theory falls out of a compelling account of the function of object perception, and illustrate its applications to multisensory perception. I also argue that the account avoids problems faced by standard views of visual and auditory objects.
I consider the issue of whether perceptual content is “rich” or “thin,” using the lens of psychosemantics. As a case study, I examine Neander’s (2017) recent psychosemantic theory of perceptual representations, which supports a thin view of perceptual content. I argue that the view faces difficulties, and that these difficulties trace directly to the component that makes it thin-friendly. I show that this sort of issue is not unique to Neander’s theory—it also arises for Dretske’s and Fodor’s accounts. I then articulate a more general challenge for any psychosemantic theorist seeking to retain a systematically thin view of perceptual content. I conclude that a viable psychosemantics of perception is unlikely to support the thin view.