-
Is the ideal of value neutrality in science (a) achievable, (b) desirable, and (c) not detrimental? Axel van den Berg and Tay Jeong (2022) passionately defend the ideal of value neutrality. In this reply, I would like to fine-tune some of their arguments as well as refute others. While there seems to be a broad consensus among philosophers of science that value neutrality is not achievable, one could still defend it as an ideal for the sciences (including the social sciences) to aspire to. However, I argue that the ideal of value neutrality advanced by van den Berg and Jeong is detrimental and therefore not desirable. Instead, we should adjust our view of science towards scientific pluralism and perspectivism, combined with strategies for dealing productively with values in science. The latter approach is, pace van den Berg and Jeong, more conducive to democracy and egalitarianism than the ideal of value neutrality.
-
Participatory and collaborative approaches in sustainability science and public health research contribute to co-producing evidence that can support interventions by involving diverse societal actors that range from individual citizens to entire communities. However, existing philosophical accounts of evidence are not adequate to deal with the kind of evidence generated and used in such approaches.
-
There is a profound lack of respect, tolerance, and empathy in contemporary politics. Within the past few decades, political opponents have steadily grown to dislike, distrust, fear, and loathe each other; moreover, members of polarized groups perceive one another as closed minded, arrogant, and immoral. However, new empirical research suggests that intellectual humility may be useful in bridging political divisions. For this reason, a growing number of psychologists and philosophers maintain that intellectual humility is an antidote to some of democracy’s ills.
-
Q: Jonathan Rauch’s Kindly Inquisitors is against cancel culture, and against punishing hate speech—sounds interesting. Tell me about it. A: It’s more broadly a defense of what Rauch calls “liberal science.” Liberal science “uses intellectual resources efficiently, it settles differences of opinion peacefully, and it inherently blocks the political manipulation of knowledge.” The alternatives don’t, or don’t do those things as well. …
-
In 2015 the Laser Interferometer Gravitational Wave Observatory (‘LIGO’), comprising observatories in Hanford, WA and Livingston, LA, detected gravitational waves for the first time. In the “discovery” paper the LIGO-Virgo Collaboration describe this event, “GW150914”, as the first “direct detection” of gravitational waves and the first “direct observation” of a binary black hole merger (Abbott et al. 2016, 061102–1). Prima facie, these are somewhat puzzling claims. First, there is something counter-intuitive about describing such a sophisticated experiment as a “direct” detection, insofar as this suggests that the procedure was simple or straightforward. Even strong gravitational waves produce only a tiny change in the length of the 4 km interferometer arms.
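To give a sense of the scale (a back-of-the-envelope calculation using the published peak strain of GW150914, a figure not quoted in this excerpt): with peak strain $h \approx 10^{-21}$, the induced change in arm length is roughly $\Delta L = h \cdot L \approx 10^{-21} \times 4\,\mathrm{km} \approx 4 \times 10^{-18}\,\mathrm{m}$, hundreds of times smaller than the diameter of a proton.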
-
In this paper, we consider two ways in which traditional approaches to testing lay moral theories have oversimplified our picture of moral psychology. Based on thought experiments (e.g., Foot 1967 and Thomson 1976) concerning the moral permissibility of certainly killing one person to certainly save five, psychological experiments (e.g., Cushman et al. …
-
The concept of white ignorance refers to phenomena of not-knowing that are produced by and reinforce systems of white supremacist domination and exploitation. I distinguish two varieties of white ignorance, belief-based white ignorance and practice-based white ignorance. Belief-based white ignorance consists in an information deficit about systems of racist oppression. Practice-based white ignorance consists in unresponsiveness to the political agency of persons and groups subject to racist oppression. Drawing on the antebellum political thought of Black abolitionists Frederick Douglass and Harriet Jacobs, I contend that an antiracist politics that conceives of its epistemic task in terms of combating practice-based white ignorance offers a more promising frame for liberatory struggle. A focus on practice-based white ignorance calls for a distinctive form of humility that involves recognition of the limits of one’s own political agency in relation to others, which is integral to democratic relations between free, equal, yet mutually dependent persons.
-
This paper discusses the role of data within scientific reasoning and as evidence for theoretical claims, arguing for the idea that data can yield theoretically grounded models and can be inferred, predicted, or explained from/by such models. Contrary to Bogen and Woodward's skepticism regarding the feasibility and epistemic relevance of data-to-theory and theory-to-data inferences, we draw upon the literature on scientific artificial intelligence to advocate that: a) many models are routinely inferred and predicted from the data and routinely used to infer and predict data; b) such models can, at least in some contexts, play the role of theoretical devices.
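As a toy illustration of the two directions of inference at issue (hypothetical data and a deliberately simple model, not an example from the paper), in Python:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: noisy measurements of an underlying linear process.
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

    # Data-to-model inference: fit a model from the data.
    slope, intercept = np.polyfit(x, y, deg=1)

    # Model-to-data inference: use the fitted model to predict new data points.
    x_new = np.array([11.0, 12.0])
    print(slope * x_new + intercept)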
-
A couple of years ago I showed how to construct hyperreal finitely additive probabilities on infinite sets that satisfy certain symmetry constraints and have the Bayesian regularity property that every possible outcome has non-zero probability. …
-
Despite F. A. Hayek’s apparent rejection of the very idea of social justice, this essay develops a theory of social justice from entirely Hayekian components. Hayek recognizes two concepts of social justice—local and holistic. Local social justice identifies principles that can be used to judge the justice of certain specific economic outcomes. Hayek rejects this conception of social justice on the grounds that specific economic outcomes are not created by moral agents, such that social justice judgments are a category mistake, like the idea of a “moral stone” (Hayek 1978, 78). But if one understands social justice as the principles that ought to govern the social order as a whole, as John Rawls ([1971] 1999) did, then Hayek is on board. Hayek agrees with Rawls that we cannot use contractarian principles to evaluate particular economic outcomes, and he supports Rawls’s attempt to identify the general principles that should govern social systems (Hayek 1978, 100).
-
This essay explores and criticizes Matteo Bonotti’s argument that parties and partisans in a publicly justified polity should appeal primarily, if not exclusively, to accessible justificatory reasons to fulfill their political duties. I argue that political parties should only support coercive policies if they rationally believe that the coercive law or policy in question can be publicly justified to those subject to the law or policy in terms of their own private—specifically intelligible—reasons. I then explore four practical differences between our two approaches. In contrast to Bonotti’s accessible reasons approach, the intelligibility approach (1) facilitates the provision of assurance between citizens and political officials, (2) requires that parties and partisans support fewer coercive policies, (3) allows more exemptions from generally applicable laws, and (4) facilitates logrolling and alliance formation.
-
Although fundamental arguments have been presented to support the value-laden nature of all scientific research, they appear to be difficult to apply to at least some cases of basic research in physics. I explain why this is the case. I argue that basic research in physics is, in a very specific sense, often value-laden to a lesser degree. To spell this out, I refer to the different signal-to-noise ratios that can be achieved in different fields of research. I also argue that having a very low degree of value-ladenness in the very specific sense that I identify does not mean that the research in question is not value-laden at all.
-
Richard Thaler and Cass Sunstein have defended “nudging” people into making better choices. A nudge, they claim, “is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.” A nudge, then, is a kind of intervention, but not a coercive one: “To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.” Examples abound, from automatic enrollment in savings programs to the deployment of housefly images in urinals to reduce spillage.
-
Democratic theorists have proposed a number of competing justifications for democratic order, but no theory has achieved a consensus. While expecting consensus may be unrealistic, I nonetheless contend that we can make progress in justifying democratic order by applying competing democratic theories to different stages of the democratic process. In particular, I argue that the selection of political officials should be governed in accord with aggregative democracy. This process should prize widespread participation, political equality, and proper preference aggregation. I then argue that the selection of public policies by political officials should be governed in accord with deliberative democracy. This process should prize high quality deliberation and political equality. A process democracy is a democracy that joins an aggregative process for selecting officials with a deliberative process for selecting policies. Democracy is justified and legitimate when it is structured in this way.
-
Bayesian epistemology is broadly concerned with providing norms for rational belief and learning using the mathematics of probability theory. But many authors have worried that the theory is too idealized to accurately describe real agents. In this paper I argue that Bayesian epistemology can describe more realistic agents while retaining sufficient generality by introducing ideas from a branch of mathematics called computable analysis. I call this program computable Bayesian epistemology. I situate this program by contrasting it with an ongoing debate about ideal versus bounded rationality. I then present foundational ideas from computable analysis and demonstrate their usefulness by proving the main result: on countably generated spaces there are no computable, finitely additive probability measures. On this basis I argue that bounded agents cannot have finitely additive credences, and so countable additivity is the appropriate norm of rationality. I conclude by discussing prospects for this research program.
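For readers who want the operative distinction spelled out (standard textbook definitions, not quoted from the abstract): a probability measure $P$ is finitely additive if $P(A \cup B) = P(A) + P(B)$ for any disjoint events $A$ and $B$, and countably additive if $P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$ for any pairwise disjoint sequence $A_1, A_2, \ldots$ The result above, as stated, targets measures that satisfy only the weaker, finitely additive condition.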
-
What Is It Like to Be a (Rawlsian) Liberal? Some Comments on Alexandre Lefebvre’s Liberalism as a Way of Life
Liberalism is today mostly understood as a political doctrine that is essentially about how constitutions and laws should be designed to regulate the exercise of state power and to guarantee that institutions meet some fairness criteria. …
-
Number Nativism is the view that humans innately represent precise natural numbers. Despite a long and venerable history, it is often considered hopelessly out of touch with the empirical record. I argue that this is a mistake. After clarifying Number Nativism and distancing it from related conjectures, I distinguish three arguments which have been seen to refute the view. I argue that, while popular, two of these arguments miss the mark, and fail to place pressure on Number Nativism. Meanwhile, a third argument is best construed as a challenge: rather than refuting Number Nativism, it challenges its proponents to provide positive evidence for their thesis and show that this can be squared with apparent counterevidence. In response, I introduce psycholinguistic work on The Tolerance Principle (not yet considered in this context), propose that it is hard to make sense of without positing precise and innate representations of natural numbers, and argue that there is no obvious reason why these innate representations couldn’t serve as a basis for mature numeric conception.
-
- Author of Statistical Methods in Online A/B Testing
- Founder of Analytics-Toolkit.com
- Statistics instructor at CXL Institute
In online experimentation, a.k.a. online A/B testing, one is primarily interested in estimating whether and how different user experiences affect key business metrics such as average revenue per user. …
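As a minimal sketch of the kind of estimate at stake (hypothetical data and effect size; the post's own methods may well differ), a Welch two-sample t-test comparing average revenue per user between a control and a variant:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical per-user revenue for a control (A) and a variant (B).
    revenue_a = rng.exponential(scale=10.0, size=5000)  # control, ARPU ~ $10.00
    revenue_b = rng.exponential(scale=10.5, size=5000)  # variant, ARPU ~ $10.50

    # Welch's t-test: does the variant shift average revenue per user?
    t_stat, p_value = stats.ttest_ind(revenue_b, revenue_a, equal_var=False)

    # Point estimate of the lift with an approximate 95% confidence interval.
    diff = revenue_b.mean() - revenue_a.mean()
    se = np.sqrt(revenue_a.var(ddof=1) / len(revenue_a)
                 + revenue_b.var(ddof=1) / len(revenue_b))
    print(f"lift = {diff:.3f} +/- {1.96 * se:.3f}, p = {p_value:.4f}")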
-
No-lose theorems state that—no matter what the result of an experiment will be—there will be a relevant epistemic gain if the experiment is performed. Here I provide an analysis of such theorems, looking at examples from particle physics. I argue that no-lose theorems indicate the pursuitworthiness of experiments by partially decoupling the expected epistemic gain of an experiment from the ex-ante probability that the primarily intended outcome is achieved. While an experiment’s pursuitworthiness typically depends on the ex-ante probability that the intended outcome is realized, this is not the case if there is a no-lose theorem in place. I argue that this works only if (1) the theorem’s win condition is attainable with reasonable effort, (2) the theorem’s underlying assumptions are plausible, and (3) all potential experimental outcomes are epistemically relevant. I also explore the consequences of no-lose theorems for considerations of scientific pursuitworthiness. First, no-lose theorems can play an important role in assessing the risk associated with investing in a research project. Second, no-lose experiments can enhance scientists’ agreement about the pursuitworthiness of experiments. My analysis also shows that no-lose theorems can face a number of limitations in these contexts.
-
A common worry about mathematical platonism is how we could know about an independent realm of mathematical facts. The same kind of worry arises for moral realism: if there are irreducible moral facts, how could we have access to them? …
-
Political liberalism’s central commitments to recognizing reasonable pluralism and institutionalizing a substantive conception of justice are inconsistent. If reasonable pluralism applies to conceptions of justice as deeply as it applies to conceptions of the good, then some reasonable people will reject even many liberal conceptions of justice as unreasonable. If so, then imposing these conceptions of justice on citizens violates the liberal principle of legitimacy and related public justification requirements (Rawls 2005, xliv). Political liberal justice is thereby pitted against reasonable pluralism about conceptions of justice, or justice pluralism. I argue that the inconsistency must be resolved in favor of justice pluralism; political liberals must abandon their commitment to institutionalizing a substantive conception of justice. My argument is not that political liberalism is false, but that it must change, and that the change will prove significant.
-
Our aim in this article is to argue that the public reason project, as initiated by John Rawls and others 25 years ago, is evolving into two distinct projects, with one having clear advantages over the other. The public reason literature is no longer an intramural debate between people with similar foundational commitments, but two new projects with fundamentally different goals and starting assumptions. These two projects derive from the resolution of a tension within Rawls’s thought, specifically the conflict between what Rawls called pro tanto justification and full justification. Pro tanto justification concerns the justification of a political conception of justice that “takes into account only political values,” such that the justification of a political conception is “complete” in that political values can be “suitably ordered, or balanced, so that those values alone give a reasonable answer by public reason to all or nearly all questions concerning constitutional essentials and basic justice.” But its very name suggests the possibility that pro tanto justification might be overridden by citizens’ comprehensive doctrines. Full justification follows, occurring when each citizen “accepts a political conception and fills out its justification by embedding it in some way into the citizen’s comprehensive doctrine as either true or reasonable.” That is, each citizen must figure out how to order or weigh political values against her nonpolitical values. A political conception itself “gives no guidance in such questions” because it does not address nonpolitical values.
-
This chapter is interested in the epistemology of algorithms. As I intend to approach the topic, this is an issue about epistemic justification. Current approaches to justification emphasize the transparency of algorithms, which entails elucidating their internal mechanisms, such as functions and variables, and demonstrating how (or that) these produce outputs. Thus, the mode of justification through transparency is contingent on what can be shown about the algorithm and, in this sense, is internal to the algorithm. In contrast, I advocate for an externalist epistemology of algorithms that I term computational reliabilism (CR). While I have previously introduced and examined CR in the field of computer simulations ([42, 53, 4]), this chapter extends this reliabilist epistemology to encompass a broader spectrum of algorithms utilized in various scientific disciplines, with a particular emphasis on machine learning applications. At its core, CR posits that an algorithm’s output is justified if it is produced by a reliable algorithm. A reliable algorithm is one that has been specified, coded, used, and maintained utilizing reliability indicators. These reliability indicators stem from formal methods, algorithmic metrics, expert competencies, cultures of research, and other scientific endeavors. The primary aim of this chapter is to delineate the foundations of CR, explicate its operational mechanisms, and outline its potential as an externalist epistemology of algorithms.
-
Open to Debate’s July debate between Peter Singer and Alice Crary, on the topic ‘Does the Effective Altruism Movement Get Giving Right?’, is now online. I was invited to participate as one of four “outside experts” to ask a question towards the end of the debate. …
-
In this paper I investigate whether there are any cases in which it is rational for a person to hold inconsistent beliefs and, if there are, just what implications this might have for the theory of epistemic justification. A number of issues will crop up along the way – including the relation between justification and rationality, the nature of defeat, the possibility of epistemic dilemmas, the importance of positive epistemic duties, and the distinction between transitional and terminal attitudes.
-
In this paper, I develop an explicitly normativist account of logic, according to which logic directs agents to means that are optimal relative to their ends and evaluates agents with respect to their cognitive performances. I hope to illuminate the nature of logic and advance debates about the normativity of logic by taking seriously the idea that logic is analogous to ethics, as suggested by Frege and Peirce. I describe and respond to anti-normativist arguments from Harman (1984, 1986), Russell (2020), and Tajer (2022).
-
Does the visual system adapt to number? For more than fifteen years, most researchers have assumed that the answer is an unambiguous “yes”. Against this prevailing orthodoxy, we recently took a critical look at the phenomenon, questioning its existence on both empirical and theoretical grounds, and providing an alternative explanation for extant results (the old news hypothesis). We subsequently received two critical responses. Burr, Anobile, and Arrighi rejected our critiques wholesale, arguing that the evidence for number adaptation remains overwhelming. Durgin questioned our old news hypothesis — preferring instead a theory about density adaptation he has championed for decades — but also highlighted several ways in which our arguments do pose serious challenges for proponents of number adaptation. Here, we reply to both. We first clarify our position regarding number adaptation. Then, we respond to our critics’ concerns, highlighting seven reasons why we remain skeptical about number adaptation. We conclude with some thoughts about where the debate may head from here.
-
The standard definition of a gauge transformation in the constrained Hamiltonian formalism traces back to Dirac (1964): a gauge transformation is a transformation generated by an arbitrary combination of first-class constraints. On the basis of this definition, Dirac argued that one should extend the form of the Hamiltonian in order to include all of the gauge freedom. However, there have been some recent dissenters of Dirac’s view. Notably, Pitts (2014) argues that a first-class constraint can generate “a bad physical change” and therefore that extending the Hamiltonian in the way suggested by Dirac is unmotivated. In this paper, I use a geometric formulation of the constrained Hamiltonian formalism to argue that there is a flaw in the reasoning used by both sides of the debate, but that correct reasoning supports the standard definition and the extension to the Hamiltonian. In doing so, I clarify two conceptually different ways of understanding gauge transformations, and I pinpoint what it would take to deny that the standard definition is correct.
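For orientation, in standard notation (textbook material following Dirac 1964, not drawn from the abstract itself): given first-class constraints $\phi_a \approx 0$, a gauge transformation of a phase-space function $F$ on Dirac's definition is $\delta F = \varepsilon^a \{F, \phi_a\}$ for arbitrary parameters $\varepsilon^a$, and the extended Hamiltonian adds every first-class constraint to the canonical Hamiltonian, $H_E = H_c + \lambda^a \phi_a$.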
-
There are myriad techniques industry actors use to shape the public understanding of science. While a naive view of this sort of influence might assume these techniques typically involve fraud and/or outright deception, the truth is more nuanced. The aim of this paper is to analyze one common technique where industry actors fund and share research that is accurate and (often) high quality, but nonetheless misleads the public on important matters of fact. The technique in question involves reshaping the causal understanding of some phenomenon with distracting information. We call this industrial distraction. We use case studies and causal models to illustrate how industrial distraction works, and how it can negatively impact belief and decision making even for rational learners. As we argue, this analysis is relevant to discussions about science policy, and also to philosophical and social scientific debates about how to define and understand misleading content.
-
Theories of qualitative probability provide a justification for the use of numerical probabilities to represent an agent’s degrees of belief. If a qualitative probability relation satisfies a set of well-known axioms then there is a probability measure that is compatible with that relation. In the particular case of subjective probability this means that we have sufficient conditions for representing an agent as having probabilistic beliefs. But the classical results are not constructive; there is no general method for calculating the compatible measure from the qualitative relation. To address this problem, this paper introduces the theory of computable qualitative probability. I show that there is an algorithm that computes a probability measure from a qualitative relation in highly general circumstances. Moreover, I show that given a natural computability requirement on the qualitative relation, the resulting probability measure is also computable. Since computable probability is a growing interest in Bayesian epistemology, this result provides a valuable interpretation of that notion.
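For context, the core of the “well-known axioms” is de Finetti's qualitative probability conditions (standard material, not spelled out in the abstract): $\succeq$ is a total preorder on events satisfying $A \succeq \emptyset$ for every event $A$, $\Omega \succ \emptyset$, and, whenever $C$ is disjoint from both $A$ and $B$, $A \succeq B$ if and only if $A \cup C \succeq B \cup C$. Representation theorems such as Savage's add further structural conditions to guarantee a compatible numerical measure.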