This paper generalises Enelow’s (J Polit 43(4):1062–1089, 1981) and Lehtinen’s (Theory Decis 63(1):1–40, 2007b) model of strategic voting under amendment agendas by allowing any number of alternatives and any voting order. The generalisation enables studying utilitarian efficiencies in an incomplete information model with a large number of alternatives. Furthermore, it allows for studying how strategic voting affects path-dependence. Strategic voting increases utilitarian efficiency even when there are more than three alternatives. The existence of a Condorcet winner does not guarantee path-independence if the voters engage in strategic voting under incomplete information. A criterion for evaluating path-dependence, the degree of path-dependence, is proposed, and the generalised model is used to study how strategic voting affects it. When there is a Condorcet winner, strategic voting inevitably increases the degree of path-dependence, but when there is no Condorcet winner, strategic voting decreases path-dependence. Computer simulations show, however, that on average it increases the degree of path-dependence.
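As a minimal illustration of the path-dependence at issue, the sketch below (my own toy construction, with sincere rather than strategic voters) runs an amendment agenda: alternatives are voted on pairwise in a fixed order, and the majority winner of each vote survives to meet the next alternative. With a Condorcet cycle, the outcome depends entirely on the voting order.

```python
from itertools import permutations

def majority_prefers(profile, a, b):
    """True if a strict majority of voters ranks a above b."""
    wins = sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))
    return wins > len(profile) / 2

def amendment_winner(profile, order):
    """Run the amendment agenda over `order` with sincere voters."""
    current = order[0]
    for challenger in order[1:]:
        if majority_prefers(profile, challenger, current):
            current = challenger
    return current

# A Condorcet cycle: a beats b, b beats c, c beats a under pairwise majority.
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

outcomes = {order: amendment_winner(profile, order)
            for order in permutations(("a", "b", "c"))}
# With a three-way cycle, the alternative introduced last always wins under
# sincere voting, so every voting order yields a different outcome.
print(outcomes)
```

The paper's point is that once voters behave strategically under incomplete information, such order-dependence can arise even when a Condorcet winner exists, which this sincere-voting toy cannot show.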
The most common argument against the use of rational choice models outside economics is that they make unrealistic assumptions about individual behavior. We argue that whether the falsity of assumptions matters in a given model depends on which factors are explanatorily relevant. Since the explanatory factors may vary from application to application, effective criticism of economic model building should be based on model-specific arguments showing how the result really depends on the false assumptions. However, some modeling results in imperialistic applications are relatively robust with respect to unrealistic assumptions.
Political science and economic science . . . make use of the same language, the same mode of abstraction, the same instruments of thought and the same method of reasoning. (Black 1998, 354)

Proponents as well as opponents of economics imperialism agree that imperialism is a matter of unification: providing a unified framework for social scientific analysis. Uskali Mäki distinguishes between derivational and ontological unification and argues that the latter should serve as a constraint on the former. We explore whether, in the case of rational-choice political science, self-interested behavior can be seen as a common causal element and solution concepts as the common derivational element, and whether the former constrains the use of the latter. We find that this is not the case. Instead, what is common to economics and rational-choice political science is a set of research heuristics and a focus on institutions with similar structures and forms of organization.
This paper examines the welfare consequences of strategic voting under the Borda rule in a comparison of utilitarian efficiencies in simulated voting games under two behavioural assumptions: expected utility-maximising behaviour and sincere behaviour. Utilitarian efficiency is higher in the former than in the latter. Strategic voting increases utilitarian efficiency particularly if the distribution of preference intensities correlates with voter types. The Borda rule is shown to have two advantages: strategic voting is beneficial even if some but not all voter types engage in strategic behaviour, and even if the voters’ information is based on unreliable signals.
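The sincere-versus-strategic contrast under the Borda rule can be made concrete with a toy profile (my own construction, not the paper's simulation model): with Borda scores (2, 1, 0) over three alternatives, a voter who ranks x above y above z can change the winner from y to x by insincerely burying y.

```python
def borda_winner(profile, alternatives):
    """Borda winner with scores (2, 1, 0); ties broken alphabetically."""
    scores = {a: 0 for a in alternatives}
    for ranking in profile:
        for points, a in enumerate(reversed(ranking)):
            scores[a] += points
    return max(sorted(alternatives), key=lambda a: scores[a])

alts = ("x", "y", "z")
others = [("y", "x", "z"), ("y", "x", "z"), ("y", "z", "x"), ("x", "z", "y")]

sincere = [("x", "y", "z")] + others    # the manipulator reports truthfully
strategic = [("x", "z", "y")] + others  # the manipulator buries y

# Sincere scores: x 6, y 7, z 2, so y wins. The manipulation drops y to a
# 6-6 tie with x, which the alphabetical tie-break resolves in x's favour.
print(borda_winner(sincere, alts))
print(borda_winner(strategic, alts))
```

The paper's games are richer (expected-utility maximisers with preference intensities and noisy signals), but the basic incentive to misreport under Borda is the one shown here.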
This paper reconsiders the discussion of ordinal utilities versus preference intensities in voting theory. It is shown by way of an example that arguments concerning observability and risk-attitudes that have been presented in favour of Arrow’s Independence of Irrelevant Alternatives (IIA), and against utilitarian evaluation, fail due to strategic voting. The failure of these two arguments is then used to justify utilitarian evaluation of voting outcomes. Given a utilitarian viewpoint, it is then argued that strategy-proofness is not normatively acceptable. Social choice theory is criticised not just by showing that some of its most important conditions are not normatively acceptable, but also by showing that the very idea of imposing conditions on a social choice function under the assumption of sincere behaviour does not make much sense: satisfying a condition does not guarantee that a voting rule actually has the properties that the condition confers on it under sincere behaviour. IIA, the binary intensity IIA, and monotonicity are used as illustrations of this phenomenon.
Many years ago, I was climbing Sgùrr na Banachdich with my friend Alex. It's a mountain in the Black Cuillin, a horseshoe of summits that surround Loch Coruisk at the southern end of the Isle of Skye. …
In a recent paper, Justin D’Ambrosio (2020) has offered an empirical argument in support of a negative solution to the puzzle of Macbeth’s dagger—namely, the question of whether, in the famous scene from Shakespeare’s play, Macbeth sees a dagger in front of him. D’Ambrosio’s strategy consists in showing that “seeing” is not an existence-neutral verb; that is, that the way it is used in ordinary language is not neutral with respect to whether its complement exists. In this paper, we offer an empirical argument in favor of an existence-neutral reading of “seeing”. In particular, we argue that existence-neutral readings are readily available to language users. We thus call into question D’Ambrosio’s argument for the claim that Macbeth does not see a dagger. According to our positive solution, Macbeth sees a dagger, even though there is not a dagger in front of him.
Effective altruism is based on a very simple idea: we should do the most good we can. Obeying the usual rules about not stealing, cheating, hurting, and killing is not enough, or at least not enough for those of us who have the good fortune to live in material comfort, who can feed, house, and clothe ourselves and our families and still have money or time to spare. …
Cheap talk has often been thought incapable of supporting the emergence of cooperation because costless signals, easily faked, are unlikely to be reliable (Zahavi and Zahavi, 1997). I show how, in a social network model of cheap talk with reinforcement learning, cheap talk does enable the emergence of cooperation, provided that individuals also temporally discount the past. This establishes one mechanism that suffices for moving a population of initially uncooperative individuals to a state of mutually beneficial cooperation even in the absence of formal institutions.
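A stripped-down sketch of the mechanism named here (my own toy, not the paper's social network model): two agents repeatedly play a Stag Hunt preceded by a round of costless signalling, and each learns by Roth-Erev-style reinforcement both which signal to send and which action to take given the signal received. The constant DELTA below implements the temporal discounting of the past that the result requires.

```python
import random

random.seed(0)

SIGNALS = (0, 1)
ACTIONS = ("C", "D")                     # cooperate (stag) / defect (hare)
PAYOFF = {("C", "C"): 4, ("C", "D"): 0,  # row player's payoff
          ("D", "C"): 3, ("D", "D"): 3}
DELTA = 0.95                             # discount on accumulated propensities

def draw(propensities):
    """Sample a key with probability proportional to its propensity."""
    keys = list(propensities)
    return random.choices(keys, weights=[propensities[k] for k in keys])[0]

class Agent:
    def __init__(self):
        self.sig = {s: 1.0 for s in SIGNALS}                        # what to say
        self.act = {(s, a): 1.0 for s in SIGNALS for a in ACTIONS}  # what to do

    def signal(self):
        return draw(self.sig)

    def action(self, heard):
        return draw({(heard, a): self.act[(heard, a)] for a in ACTIONS})[1]

    def learn(self, sent, heard, did, payoff):
        # Discount the past, then reinforce what was just done.
        for k in self.sig:
            self.sig[k] *= DELTA
        for k in self.act:
            self.act[k] *= DELTA
        self.sig[sent] += payoff
        self.act[(heard, did)] += payoff

a, b = Agent(), Agent()
for _ in range(2000):
    sa, sb = a.signal(), b.signal()
    xa, xb = a.action(sb), b.action(sa)
    a.learn(sa, sb, xa, PAYOFF[(xa, xb)])
    b.learn(sb, sa, xb, PAYOFF[(xb, xa)])

coop_share = sum(a.act[(s, "C")] for s in SIGNALS) / sum(a.act.values())
print(f"agent a's propensity share on cooperation: {coop_share:.2f}")
```

The paper's claim concerns whole populations on a network; this two-agent version only exhibits the update rule (decay past propensities by DELTA, then add the realised payoff) that makes recent successes outweigh a long uncooperative history.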
This paper examines two questions about scientists’ search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? On the first question, we argue, pace Weisberg and Muldoon (2009), that a search strategy which deliberately seeks novel research approaches need not be optimal. On the second question, we argue that they have not shown that there are epistemic reasons for the division of cognitive labor, and we identify the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model and show that one should be skeptical about the benefits of social learning in epistemically complex environments.
As many Western countries emerged from initial periods of lockdown in spring 2020, they had brought COVID-19 infection rates down significantly. This was followed, however, by more drastic second and third waves of viral spread, which many of these same countries are struggling to bring under control, even with the implementation of further periods of lockdown. Could this have been prevented by policymakers? We revisit two strategies that were the focus of much discussion during the early stages of the pandemic, and which were implemented in several Western countries, albeit in a weakened form. These strategies both proceed by targeting certain segments of the population, while allowing others to go about their lives unhindered. The first suggests selectively isolating those who would most likely suffer severe adverse effects if infected – in particular the elderly. The second involves identifying and quarantining those who are likely to be infected through a contact tracing app that would centrally store users’ information. We suggest that both strategies showed promise in preventing the need for further lockdowns, albeit in a significantly more stringent form than anything that was implemented in Western countries. We then proceed to an ethical evaluation of these more stringent policies. We contend that selective isolation strategies face severe ethical problems due to their discriminatory nature, while the ethical issues with a more aggressive contact tracing regime can be mitigated. This analysis has implications for how to respond effectively and ethically to future pandemics, and perhaps contains lessons on how to successfully emerge from our current predicament.
Economic policy evaluations require social welfare functions for variable-size populations. Two important candidates are critical-level generalized utilitarianism (CLGU) and rank-discounted critical-level generalized utilitarianism, which was recently characterized by Asheim and Zuber (2014) (AZ). AZ introduce a novel axiom, existence of egalitarian equivalence (EEE). First, we show that, under some uncontroversial criteria for a plausible social welfare relation, EEE suffices to rule out the Repugnant Conclusion of population ethics (without AZ’s other novel axioms). Second, we provide a new characterization of CLGU: AZ’s set of axioms is equivalent to CLGU when EEE is replaced by the axiom same-number independence.
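For concreteness, the two candidate criteria can be stated schematically (my notation, not the authors'; g is an increasing transform of lifetime utility, c the critical level, and β ∈ (0,1) the rank-discount factor in the spirit of Asheim and Zuber's proposal):

```latex
% Critical-level generalized utilitarianism (CLGU):
W_{\mathrm{CLGU}}(u_1,\dots,u_n) \;=\; \sum_{i=1}^{n} \bigl( g(u_i) - g(c) \bigr)

% Rank-discounted CLGU, with utilities ordered from the worst off,
% u_{[1]} \le u_{[2]} \le \dots \le u_{[n]}:
W_{\mathrm{RDCLGU}}(u_1,\dots,u_n) \;=\; \sum_{i=1}^{n} \beta^{\,i} \bigl( g(u_{[i]}) - g(c) \bigr)
```

Adding a life at the critical level c leaves the first sum unchanged, while the rank weights β^i in the second make additional lives count progressively less, which is what blocks the Repugnant Conclusion.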
Where do journal editors look to find someone to referee your manuscript (in the typical “double blind” review system in academic journals)? One obvious place to look is the reference list in your paper. …
Leo Strauss was a twentieth-century German Jewish émigré
to the United States whose intellectual corpus spans ancient, medieval
and modern political philosophy and includes, among others, studies of
Plato, Maimonides, Machiavelli, Hobbes, Spinoza, and Nietzsche. Strauss wrote mainly as a historian of philosophy and most of his
writings take the form of commentaries on important thinkers and their
writings. Yet as he put it: “There is no inquiry into the
history of philosophy that is not at the same time a
philosophical inquiry” (PL, p. 41). While much of his
philosophical project involved an attempt to rethink pre-modern
philosophy, the impetus for this reconsideration and the philosophical
problems that vexed Strauss most were decidedly modern.
For the uninitiated, the dense nature of mathematical language can act as an obscuring force. With this essay we aim to bring two classical results of discrete mathematics into the light. To this end we analyze winning strategies in a certain class of solitaire games. The gains are non-standard proofs of the results of Kőnig and Vizing. For the standard treatment of these results, see . (For a dense and obscure version of the non-standard proofs presented here, see .) First, let’s introduce the games.
Joseph Henrich's ambitious tome, The WEIRDest People in the World, is driving me nuts. It's good enough and interesting enough that I want to read it. Henrich's general idea is that people in Western, Educated, Industrial, Rich, Democratic (WEIRD) societies differ psychologically from people in more traditionally structured societies, and that the family policies of the Catholic Church in medieval Europe lie at the historical root of this difference. …
In some severely uncertain situations, exemplified by climate change and novel pandemics, policymakers lack a reasoned basis for assigning probabilities to the possible outcomes of the policies they must choose between. I outline and defend an uncertainty averse, egalitarian approach to policy evaluation in these contexts. The upshot is a theory of distributive justice which offers especially strong reasons to guard against individual and collective misfortune.
People care very much about being listened to. In everyday talk, we make moral-sounding judgements of people as listeners: praising a doctor who listens well even if she does not have a ready solution, or blaming a boss who does not listen even if the employee manages to get her situation addressed. In this sense, listening is a normative behaviour: we ought to be good listeners. Whilst several disciplines have addressed the normative importance of interpersonal listening—particularly sociology, psychology, and media and culture studies—analytic philosophy does not have a framework for dealing with listening as a normative interpersonal behaviour. Listening usually gets reduced to mere speech-parsing (in philosophy of language) or to a matter of belief and trust in the testimony of credible knowers (in social epistemology). My preliminary task is to analyse why this reductive view is taken for granted in the discipline, to diagnose the problem behind the reduction, and to propose a more useful alternative approach.
Traditionally, the mechanism design literature has been primarily focused on settings where the bidders’ valuations are independent. However, in settings where valuations are correlated, much stronger results are possible. For example, the entire surplus of efficient allocations can be extracted as revenue. These stronger results are true, in theory, under generic conditions on parameter values. But in practice, they are rarely, if ever, implementable due to the stringent requirement that the mechanism designer knows the distribution of the bidders’ types exactly. In this work, we provide a computationally efficient and sample-efficient method for designing mechanisms that can robustly handle imprecise estimates of the distribution over bidder valuations. This method guarantees that the selected mechanism will perform at least as well as any ex-post mechanism with high probability. The mechanism also performs nearly optimally with sufficient information and correlation. Further, we show that when the distribution is not known, and must be estimated from samples from the true distribution, a sufficiently high degree of correlation is essential to implement optimal mechanisms. Finally, we demonstrate through simulations that this new mechanism design paradigm generates mechanisms that perform significantly better than traditional mechanism design techniques given sufficient samples.
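The paper's mechanisms for correlated bidders are far more involved, but the general move of designing from samples rather than from an exactly known distribution can be illustrated with a much simpler, hypothetical example: choosing the reserve price of a single-item second-price auction by maximising average revenue over sampled bid profiles.

```python
import random

random.seed(1)

def revenue(reserve, bids):
    """Revenue of a second-price auction with a reserve on one bid profile."""
    above = sorted((b for b in bids if b >= reserve), reverse=True)
    if not above:
        return 0.0          # no bid clears the reserve: item unsold
    if len(above) == 1:
        return reserve      # one qualifying bidder pays the reserve
    return above[1]         # otherwise the second-highest qualifying bid

def best_reserve_from_samples(samples):
    """Pick the reserve maximizing average revenue over sampled bid profiles."""
    candidates = sorted({b for profile in samples for b in profile})
    return max(candidates,
               key=lambda r: sum(revenue(r, p) for p in samples) / len(samples))

# Hypothetical environment: two bidders with independent uniform [0, 1] values
# (the paper's interesting case is correlated values; this is only a toy).
samples = [(random.random(), random.random()) for _ in range(2000)]
r_hat = best_reserve_from_samples(samples)
print(f"estimated optimal reserve: {r_hat:.2f}")  # theory predicts 0.5 here
```

With enough samples the empirical maximiser approaches the theoretically optimal reserve of 0.5 for this environment; the paper's contribution is to give such sample-based guarantees for the much harder correlated-values setting.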
The article at hand presents the results of a literature review on the ethical issues related to scientific authorship. These issues are understood as questions and/or concerns about obligations, values or virtues in relation to reporting, authorship and publication of research results. For this purpose, the Web of Science core collection was searched for English resources published between 1945 and 2018, and a total of 324 items were analyzed. Based on the review of the documents, ten ethical themes have been identified, some of which entail several ethical issues. Ranked by frequency of occurrence, these themes are: 1) attribution, 2) violations of the norms of authorship, 3) bias, 4) responsibility and accountability, 5) authorship order, 6) citations and referencing, 7) definition of authorship, 8) publication strategy, 9) originality, and 10) sanctions. In mapping these themes, the current article explores major ethical issues and provides a critical discussion about the application of codes of conduct, various understandings of culture, and contributing factors to unethical behavior.
Many believe that employment can be wrongfully exploitative, even if it is consensual and mutually beneficial. At the same time, it may seem third parties should not do anything to preclude or eliminate such arrangements, given these same considerations of consent and benefit. I argue that there are perfectly sensible, intuitive ethical positions that vindicate this “Reasonable View.” The view requires such defense because the literature often suggests that there is no theoretical space for it. I respond to arguments for the clearest symptom of this obscuration: the so-called nonworseness claim that a consensual, mutually beneficial transaction cannot be “morally worse” than its absence. In addition to making space for the Reasonable View, this serves my dialectical goal of encouraging distinct attention to first- and third-party obligations.
Many consider Nozick’s “utility monster”—a being more efficient than ordinary people at converting resources into well-being, with no upper limit—to be a damning counterexample to utilitarianism. But our intuitions may be reversed by considering a variation in which the utility monster starts from a baseline status of massive suffering. This suggests a rethinking of the force of the original objection.
It has been claimed that a unique feature of human culture is that it accumulates beneficial modifications over time. On the basis of a couple of methodological considerations, we here argue that, perhaps surprisingly, there is insufficient evidence for a proper test of this claim. And we indicate what further research would be needed to firmly establish the cumulativeness of human culture.
Tommaso Campanella (Stilo, 1568–Paris, 1639) was one of the
most important philosophers of the late Renaissance. Although his
best-known work today is the utopian text La città del
Sole (The City of the Sun), his thought was extremely
complex and engaged with all fields of learning. The fundamental core
of his thinking, which will be examined in this article, was concerned
with the philosophy of nature (what would nowadays be called science),
magic, political theory and natural religion.
Over the past three years, I have returned to one question over and over again: how does technology reshape our moral beliefs and practices? In his classic study of medieval technology, Lynn White Jr argues that simple technological changes can have a profound effect on social moral systems. …
In defence of pluralism

Recently, after a couple of hours discussing a problem in the philosophy of mathematics, a colleague mentioned that he wanted to propose a sort of pluralism as a solution. We were debating the foundations of mathematics, and he wanted to consider the claim that there might be no single unique foundation, but rather many different foundations, no one of them better than the others. …
It is one of the great good fortunes of my life that I was able to count Dick as a friend for almost 40 years. I first met him shortly after I arrived at the University in 1975 as a new assistant professor in the Philosophy Department. I moved to California in 1999, but the friendship continued at a distance after that.
Arrhenius’s impossibility theorems purport to demonstrate that no population axiology can satisfy each of a small number of intuitively compelling adequacy conditions. However, it has recently been pointed out that each theorem depends on a dubious assumption: Finite Fine-Grainedness. This assumption states that there exists a finite sequence of slight welfare differences between any two welfare levels. Denying Finite Fine-Grainedness makes room for a lexical population axiology which satisfies all of the compelling adequacy conditions in each theorem. Therefore, Arrhenius’s theorems fail to prove that there is no satisfactory population axiology. In this paper, I argue that Arrhenius’s theorems can be repurposed. Since all of our population-affecting actions have a non-zero probability of bringing about more than one distinct population, it is population prospect axiologies that are of practical relevance, and amended versions of Arrhenius’s theorems demonstrate that there is no satisfactory population prospect axiology. These impossibility theorems do not depend on Finite Fine-Grainedness, so lexical views do not escape them.
[My thanks to Zach Barnett for writing the following guest post...]

At its best, philosophy encourages us to challenge our deepest and most passionately held convictions. No paper does this more forcefully than John Taurek’s “Should the Numbers Count?” Taurek’s paper challenges us to justify the importance of numbers in ethics. Six people are in trouble. …
This paper explores the feasibility of offering a restorative justice (RJ) approach in cases of domestic violence (DV). I argue that widely used RJ processes—such as ‘conferencing’ —are unlikely to be sufficiently safe or effective in cases of DV, at least as these processes are standardly designed and practiced (Sections 1-6). I then support the view that if RJ is to be used in cases of DV, then new specialist processes will need to be co-designed with key stakeholders to ensure they embody not only RJ principles, but also feminist theory and the concept of transformative justice (Section 7).