1.
    Neofregeanism and structuralism are among the most promising recent approaches to the philosophy of mathematics. Yet both have serious costs. We develop a view, structuralist neologicism, which retains the central advantages of each while avoiding their more serious costs. The key to our approach is using arbitrary reference to explicate how mathematical terms, introduced by abstraction principles like Hume’s, refer. Focusing on numerical terms, we argue that this allows us to treat abstraction principles as implicit definitions serving to determine all (known) properties of the numbers, achieving a key neofregean advantage, while preserving the key structuralist advantage that which objects play the number role doesn’t matter.
    Found 8 hours, 54 minutes ago on Jack Woods's site
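    For readers who want the abstraction principle spelled out: Hume's Principle is standardly rendered as the biconditional below (a standard textbook statement, not quoted from the paper; the #F and ≈ notation is the usual one).
    % Hume's Principle: the number of Fs is the number of Gs iff
    % the Fs and the Gs are equinumerous (one-to-one correlated).
    \[
      \#F = \#G \;\longleftrightarrow\; F \approx G ,
    \]
    % where F \approx G unpacks as the second-order claim
    \[
      \exists R\,\bigl[\forall x\,\bigl(Fx \rightarrow \exists! y\,(Gy \wedge Rxy)\bigr)
      \wedge \forall y\,\bigl(Gy \rightarrow \exists! x\,(Fx \wedge Rxy)\bigr)\bigr].
    \]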
  2.
    It has long been an attraction of Reformed apologetics that it validates the beliefs of Christians with no special philosophical or historical training. Surely, it seems, it should not be necessary for the little child, the kindly old lady, or the hard-working farmer, not called to abstract argument, to have explicit evidences for Christian belief in order to be epistemically justified in it. In this context, the evidentialist is easily cast as the grumpy uncle of the Christian family, setting impossibly high standards for ordinary people and perhaps even for himself, and implying that God is not pleased by a childlike act of faith. By arguing for the instigation of the Holy Spirit or some other spiritual faculty available to the unlearned, Reformed apologists champion the intuitively plausible position that one need not be a philosopher to hold legitimately to Christian belief.
    Found 9 hours, 2 minutes ago on Lydia McGrew's site
  3.
    Discussions of psychological measurement are largely disconnected from issues of measurement in the natural sciences. We show that there are interesting parallels and connections between the two, by focusing on a real and detailed example (temperature) from the history of science. More specifically, our novel approach is to study the issue of validity based on the history of measurement in physics, which will lead to three concrete points that are relevant for the validity debate in psychology. First of all, studying the causal mechanisms underlying the measurements can be crucial for evaluating whether the measurements are valid. Secondly, psychologists would benefit from focusing more on the robustness of measurements. Finally, we argue that it is possible to make good science based on (relatively) bad measurements, and that the explanatory success of science can contribute to justifying the validity of measurements.
    Found 9 hours, 50 minutes ago on Markus I. Eronen's site
  4.
    Mechanistic explanation and metascientific reductionism are two recent and widely discussed approaches to explanation and reduction in neuroscience. I will argue that these are incompatible and that mechanistic explanation has a stronger case, especially when it is combined with James Woodward’s manipulationist model of causal explanation.
    Found 9 hours, 50 minutes ago on Markus I. Eronen's site
  5.
    There is a growing consensus among philosophers of science that scientific endeavors of understanding the human mind or the brain exhibit explanatory pluralism. Relatedly, several philosophers have in recent years defended an interventionist approach to causation that leads to a kind of causal pluralism. In this talk, I explore the consequences of these recent developments in philosophy of science for some of the central debates in philosophy of mind. First, I argue that if we adopt explanatory pluralism and the interventionist approach to causation, our understanding of physicalism has to change, and this leads to what I call pluralistic physicalism. Secondly, I show that this pluralistic physicalism is not endangered by the causal exclusion argument.
    Found 9 hours, 50 minutes ago on Markus I. Eronen's site
  6.
    Brownian computers are supposed to illustrate how logically reversible mathematical operations can be computed by physical processes that are thermodynamically reversible or nearly so. In fact, they are thermodynamically irreversible processes that are the analog of an uncontrolled expansion of a gas into a vacuum.
    Found 11 hours, 12 minutes ago on John Norton's site
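    For comparison, the textbook entropy balance for the uncontrolled expansion invoked above, the free expansion of an ideal gas into a vacuum, is (a standard result, not a quotation from the paper):
    % N molecules expand freely from volume V_1 into a total volume V_2 > V_1
    % at temperature T; no work is done and no heat is exchanged, yet entropy
    % is created:
    \[
      \Delta S \;=\; N k \,\ln\!\frac{V_2}{V_1} \;>\; 0 ,
    \]
    % with k Boltzmann's constant. The positive entropy creation is what marks
    % the process as thermodynamically irreversible.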
  7.
    In the later pages of the notebook, as Einstein let general covariance slip away, he devised and abandoned a new proposal for his gravitational field equations. This same proposal, revived nearly three years later, opened passage to his final theory. In abandoning it in the notebook, Einstein had all but lost his last chance of deliverance. This chapter reports and develops our group’s accounts of this decision. Einstein’s later accounts of this decision blame it upon what he called the “fateful prejudice” of misinterpreting the Christoffel symbols. We suggest that Einstein’s aberrant use and understanding of coordinate systems and coordinate conditions was as important as another fateful prejudice.
    Found 11 hours, 15 minutes ago on John Norton's site
  8.
    The epistemic state of complete ignorance is not a probability distribution. In it, we assign the same, unique, ignorance degree of belief to any contingent outcome and each of its contingent, disjunctive parts. That this is the appropriate way to represent complete ignorance is established by two instruments, each individually strong enough to identify this state. They are the principle of indifference (PI) and the notion that ignorance is invariant under certain redescriptions of the outcome space, here developed into the ‘principle of invariance of ignorance’ (PII). Both instruments are so innocuous as almost to be platitudes. Yet the literature in probabilistic epistemology has misdiagnosed them as paradoxical or defective since they generate inconsistencies when conjoined with the assumption that an epistemic state must be a probability distribution. To underscore the need to drop this assumption, I express PII in its most defensible form as relating symmetric descriptions and show that paradoxes still arise if we assume the ignorance state to be a probability distribution.
    1. Introduction. In one ideal, a logic of induction would provide us with a belief state representing total ignorance that would evolve towards different belief states as new evidence is learned. That the Bayesian system cannot be such a logic follows from well-known, elementary considerations. In familiar paradoxes to be discussed here, the notion that indifference over outcomes requires equality of probability rapidly leads to contradictions. If our initial ignorance is sufficiently great, there are so many ways to be indifferent that the resulting equalities contradict the additivity of the probability calculus. We can properly assign equal probabilities in a prior probability distribution only if our ignorance is not complete and we know enough to be able to identify which is the right partition of the outcome space over which to exercise indifference. While a zero value can denote ignorance in alternative systems such as that of Shafer-Dempster, representing ignorance by zero probability fails in more than one way.
    Found 11 hours, 15 minutes ago on John Norton's site
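    A minimal worked instance of the kind of contradiction described above (my toy example, not one of the paper's own cases): indifference applied to two different partitions of the same outcome space assigns the same proposition two different probabilities.
    % Partition 1: {A, not-A}. Indifference gives
    \[
      P(A) = P(\neg A) = \tfrac{1}{2}.
    \]
    % Partition 2: A = A_1 v A_2 with A_1, A_2 exclusive, so indifference
    % over {A_1, A_2, not-A} gives
    \[
      P(A_1) = P(A_2) = P(\neg A) = \tfrac{1}{3},
      \qquad
      P(A) = P(A_1) + P(A_2) = \tfrac{2}{3} \neq \tfrac{1}{2}.
    \]
    % Additivity plus indifference over both partitions is inconsistent.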
  9.
    Landauer's Principle asserts that there is an unavoidable cost in thermodynamic entropy creation when data is erased. It is usually derived from incorrect assumptions, most notably, that erasure must compress the phase space of a memory device or that thermodynamic entropy arises from the probabilistic uncertainty of random data. Recent work seeks to prove Landauer’s Principle without using these assumptions. I show that the processes assumed in the proof, and in the thermodynamics of computation more generally, can be combined to produce devices that both violate the second law and erase data without entropy cost, indicating an inconsistency in the theoretical system. Worse, the standard repertoire of processes selectively neglects thermal fluctuations. Concrete proposals for how we might measure dissipationlessly and expand single molecule gases reversibly are shown to be fatally disrupted by fluctuations. Reversible, isothermal processes on molecular scales are shown to be disrupted by fluctuations that can only be overcome by introducing entropy creating, dissipative processes.
    Found 11 hours, 19 minutes ago on John Norton's site
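    For reference, the usual quantitative form of Landauer's Principle (the standard statement, not the paper's own derivation):
    % Erasing one bit of data creates at least k ln 2 of thermodynamic entropy,
    % equivalently dissipates at least kT ln 2 of heat into an environment at
    % temperature T (k is Boltzmann's constant):
    \[
      \Delta S_{\mathrm{erase}} \;\geq\; k \ln 2 ,
      \qquad
      Q_{\mathrm{dissipated}} \;\geq\; k T \ln 2 .
    \]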
  10.
    Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channels beyond that already employed by traditional simulations: the inference by ordinary argumentation of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
    Found 11 hours, 22 minutes ago on John Norton's site
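    A minimal sketch of the sort of simple Monte Carlo simulation the abstract mentions analysing (my illustration, not necessarily one of the paper's examples): the familiar estimate of pi by random sampling. The randomness licenses nothing on its own; the conclusion follows by ordinary inference from the assumptions that the samples are uniform on the unit square and that the law of large numbers applies.
    import random

    def estimate_pi(n_samples: int = 100_000, seed: int = 0) -> float:
        """Estimate pi by sampling points uniformly in the unit square."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:   # point falls inside the quarter circle
                hits += 1
        # Area of quarter circle / area of square = pi/4, so scale by 4.
        return 4.0 * hits / n_samples

    if __name__ == "__main__":
        print(estimate_pi())           # roughly 3.14 for large n_samples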
  11.
    Thought experiments in science are merely picturesque argumentation. I support this view in various ways, including the claim that it follows from the fact that thought experiments can err but can still be used reliably. The view is defended against alternatives proposed by my cosymposiasts.
    1. Introduction. A scientist—a Galileo, Newton, Darwin or Einstein—presents us with some vexing problem. We are perplexed. In a few words of simple prose, the scientist then conjures up an experiment, purely in thought. We follow, replicating its falling bodies or spinning buckets in our minds, and our uncertainty evaporates. We know the resolution and somehow we sense that we knew it all along. That moment of realization is exquisite, and it is difficult to resist the sense that something of profound epistemic moment has just transpired.
    Found 11 hours, 23 minutes ago on John Norton's site
  12.
    Two fundamental errors led Einstein to reject generally covariant gravitational field equations for over two years as he was developing his general theory of relativity. The first is well known in the literature. It was the presumption that weak, static gravitational fields must be spatially flat and a corresponding assumption about his weak field equations. I conjecture that a second hitherto unrecognized error also defeated Einstein's efforts. The same error, months later, allowed the hole argument to convince Einstein that all generally covariant gravitational field equations would be physically uninteresting.
    Found 11 hours, 28 minutes ago on John Norton's site
  13.
    Curie’s principle asserts that every symmetry of a cause manifests as a symmetry of the effect. It can be formulated as a tautology that is vacuous until it is instantiated. However, instantiation requires us to know the correct way to map causal terminology onto the terms of a science. Causal metaphysics has failed to provide a unique, correct way to carry out the mapping. Thus, successful or unsuccessful instantiation merely reflects our freedom of choice in the mapping.
    Found 11 hours, 32 minutes ago on John Norton's site
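    One standard regimentation that makes the near-tautologous character of the principle visible (my rendering of the usual setup, not the paper's): let a deterministic law L take cause C to effect E = L(C), and let g be a symmetry of the law in the sense that L(g · C) = g · L(C). Then
    \[
      g \cdot C = C
      \;\Longrightarrow\;
      g \cdot E \;=\; g \cdot L(C) \;=\; L(g \cdot C) \;=\; L(C) \;=\; E ,
    \]
    % so any symmetry of the cause is a symmetry of the effect. The substantive
    % work lies entirely in deciding what counts as C, E and L, which is the
    % mapping problem the abstract describes.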
  14.
    A fundamental commitment of Aristotelianism seems to be that all reality supervenes on substances and accidents. If according to worlds w1 and w2 there are the same substances and accidents, then w1 = w2. …
    Found 18 hours, 12 minutes ago on Alexander Pruss's Blog
  15.
    Bad and good luck alike can do you in—morally or epistemically speaking. For luck undermines the moral or epistemic credit that might have otherwise been coming your way. So it's no surprise that thinking about luck is all the rage these days—among philosophers, that is. Many accounts of luck are modal. They say that luck is at least in part a matter of what could have been. So you enjoy good luck in some respect, for example, just if that condition is good for you and things could very well have gone otherwise. There's something to the thought, no doubt. But in this note, I will show that one important expression of it is wrong. My target:
        Consensus: S is lucky with respect to event e only if: possibly, e does not occur. Put differently: S is lucky with respect to proposition p only if: possibly, p is false.
    Everyone who gives a modal account of luck endorses Consensus. To be sure, the modal theorists disagree about details (some maintain, for example, that worlds in which e does not occur must be sufficiently nearby to actuality; others say there are additional conditions on luck that concern control and interests). But Consensus or theses that obviously entail it are widely endorsed.
    Found 19 hours, 56 minutes ago on Andrew Bailey's site
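    The Consensus thesis in compact modal notation (my rendering; the diamond reads 'possibly'):
    \[
      \mathrm{Lucky}(S, e) \;\rightarrow\; \Diamond\,\neg \mathrm{Occurs}(e),
      \qquad
      \mathrm{Lucky}(S, p) \;\rightarrow\; \Diamond\,\neg p .
    \]
    % S is lucky with respect to e (or p) only if e could have failed to
    % occur (or p could have been false).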
  16.
    Realism is generally assumed to be the correct position with regard to psychological research and the measurement of psychological attributes in psychometrics. Borsboom et al. (2003), for instance, argued that the choice of a reflective measurement model necessarily implies a commitment to the existence of psychological constructs as well as a commitment to the belief that empirical testing of measurement models can justify their correspondence with real causal structures. Hood (2013) deemphasized Borsboom et al.’s position and argued that the choice of a reflective measurement model does not necessarily require ontological commitments, though, in his view, it does necessitate a commitment to minimal epistemic realism. Although these arguments are formulated with regard to psychological research, they can actually be generalized to other disciplines in social sciences that use similar methodologies and statistical techniques. In Hood’s opinion, empiricism does not suffice to provide an adequate account of the choice of reflective measurement models given that this choice requires an appeal to causal explanations. In this paper, we argue against Hood and answer this challenge, providing epistemic foundations for social science research that do not appeal to realism.
    Found 19 hours, 56 minutes ago on PhilSci Archive
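    For readers outside psychometrics, a reflective measurement model has the standard latent-variable form below (a textbook formulation, not drawn from Borsboom et al. or Hood); the dispute above concerns what fitting such a model commits one to.
    % Each observed indicator x_i is modelled as an effect of the latent
    % attribute eta, with loading lambda_i and error term epsilon_i:
    \[
      x_i \;=\; \lambda_i\,\eta + \varepsilon_i , \qquad i = 1, \dots, n .
    \]
    % 'Reflective' marks the direction of the arrows: the indicators reflect
    % (are caused by) the latent variable, rather than defining it.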
  17.
    I investigate the principle ‘anything goes’ within the context of general relativity. After a few preliminaries, I show a sense in which the universe is unknowable from within this context; I suggest that we ‘keep our options open’ with respect to competing models of it. Given the state of affairs, proceeding counter-inductively seems to be especially appropriate; I use this method to blur some of the usual lines between ‘reasonable’ and ‘unreasonable’ models of the universe. Along the way, one is led to a useful collection of variant theories of general relativity – each theory incompatible with the standard formulation. One may contrast one variant theory with another in order to understand foundational questions within ‘general relativity’ in a more nuanced way. I close by sketching some of the work ahead if we are to embrace such a pluralistic methodology.
    Found 19 hours, 57 minutes ago on PhilSci Archive
  18.
    A regularity theory of causation analyses type-level causation in terms of Boolean difference-making. The essential ingredient that helps this theoretical framework overcome the well-known problems of Hume’s and Mill’s classical regularity theoretic proposals is a principle of non-redundancy: only redundancy-free Boolean dependency structures track causation. The first part of this paper argues that the recent regularity theoretic literature has not consistently implemented this principle, for it disregarded two important types of redundancies: componential and structural redundancies. The second part then develops a new variant of a regularity theory that does justice to all types of redundancies and, thereby, provides the first all-inclusive notion of Boolean difference-making.
    Found 19 hours, 58 minutes ago on PhilSci Archive
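    As a rough computational gloss on the non-redundancy requirement (a sketch under my own simplified rendering of 'redundancy-free', not the paper's formal apparatus, which also covers componential and structural redundancies): a disjunctive-normal-form condition is minimally redundancy-free only if no disjunct, and no literal within a disjunct, can be dropped without changing the truth function.
    from itertools import product

    Literal = tuple[str, bool]          # ("A", True) means A; ("A", False) means ~A
    DNF = list[frozenset[Literal]]      # disjunction of conjunctions of literals

    def variables(dnf: DNF) -> list[str]:
        return sorted({v for conj in dnf for (v, _) in conj})

    def evaluate(dnf: DNF, assignment: dict[str, bool]) -> bool:
        return any(all(assignment[v] == val for (v, val) in conj) for conj in dnf)

    def equivalent(f: DNF, g: DNF, vs: list[str]) -> bool:
        return all(
            evaluate(f, dict(zip(vs, bits))) == evaluate(g, dict(zip(vs, bits)))
            for bits in product([False, True], repeat=len(vs))
        )

    def redundancy_free(dnf: DNF) -> bool:
        vs = variables(dnf)
        # No whole disjunct may be removable ...
        for i in range(len(dnf)):
            if equivalent(dnf, dnf[:i] + dnf[i + 1:], vs):
                return False
        # ... and no literal inside a disjunct may be removable.
        for i, conj in enumerate(dnf):
            for lit in conj:
                weakened = dnf[:i] + [conj - {lit}] + dnf[i + 1:]
                if equivalent(dnf, weakened, vs):
                    return False
        return True

    # (A & B) v A is not redundancy-free: the first disjunct is idle.
    print(redundancy_free([frozenset({("A", True), ("B", True)}), frozenset({("A", True)})]))  # False
    print(redundancy_free([frozenset({("A", True), ("B", True)})]))                            # True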
  19.
    Political theorists sometimes defend the value of idealistic normative theories by arguing that they help specify principles for evaluating feasible solutions to political problems. But this defense is ambiguous between three interpretations, two of which I show to be nonstarters. The third interpretation says (roughly) that a description of a normatively ideal society can provide useful information about the evaluative criteria that we should use when comparing social possibilities. I show why this claim rings hollow as a defense of idealistic normative theories. Put roughly, because ideal theories bracket the causes and consequences of many real-world social problems, they have little to say about the criteria that are most helpful for evaluating solutions to these problems. One upshot is that, if specifying principles of comparative evaluation is a principal task of political theory, then we need nonideal theory as much as — if not more than — ideal theory.
    Found 23 hours ago on PhilPapers
  20.
    Since 2016, there has been an explosion of academic work and journalism that fixes its subject matter using the terms ‘fake news’ and ‘post-truth’. In this paper, I argue that this terminology is not up to scratch, and that academics and journalists ought to completely stop using the terms ‘fake news’ and ‘post-truth’. I set out three arguments for abandonment. First, that ‘fake news’ and ‘post-truth’ do not have stable public meanings, entailing that they are either nonsense, context-sensitive, or contested. Secondly, that these terms are unnecessary, because we already have a rich vocabulary for thinking about epistemic dysfunction. Thirdly, I observe that ‘fake news’ and ‘post-truth’ have propagandistic uses, meaning that using them legitimates anti-democratic propaganda, and runs the risk of smuggling bad ideology into conversations.
    Found 23 hours ago on PhilPapers
  21.
    While ideal (surgical) interventions are acknowledged by many as valuable tools for the analysis of causation, recent discussions have shown that, since there are no ideal interventions on upper-level phenomena that non-reductively supervene on their underlying mechanisms, interventions cannot—contrary to a popular opinion—ground an informative analysis of constitution. This has led some to abandon the project of analyzing constitution in interventionist terms. By contrast, this paper defines the notion of a horizontally surgical intervention, and argues that, when combined with some innocuous metaphysical principles about the relation between upper and lower levels of mechanisms, that notion delivers a sufficient condition for constitution. This, in turn, strengthens the case for an interventionist analysis of constitution.
    Found 1 day, 15 hours ago on Michael Baumgartner's site
  22.
    In this paper, I do three things. First, I say what I mean by a ‘companions in guilt’ argument in meta-ethics. Second, I distinguish between two kinds of argument within this family, which I call ‘arguments by entailment’ and ‘arguments by analogy’. Third, I explore the prospects for companions in guilt arguments by analogy. During the course of this discussion, I identify a distinctive variety of argument, which I call ‘arguments by absorption’. I argue that this variety of argument (at least in the version considered here) inherits some of the weaknesses of standard arguments by analogy and entailment without obviously adding to their strength.
    Found 1 day, 16 hours ago on Hallvard Lillehammer's site
  23.
    In Scientific Ontology, I attempt to describe the nature of our investigations into what there is and associated theorizing in a way that respects the massive contributions of the sciences to this endeavor, and yet does not shy away from the fact that the endeavor itself is inescapably permeated by philosophical commitments. While my interest is first and foremost in what we can learn from the sciences about ontology, it quickly extends to issues that go well beyond scientific practices themselves, for two reasons. For one thing, it is not merely the case that philosophical considerations are relevant to ontological judgments even in the sciences; additionally, there are good philosophical reasons to believe that different assessments of these considerations are rationally permissible, which entails that rational agents may well come to different conclusions about scientific ontology in ways that admit of no ultimate resolution, in principle. Secondly, given this defensible variability of assessment, we have good reason to regard some disputes about whether particular patches of ontological theorizing deserve the label “scientific,” as opposed to “non-” or “un-scientific,” as ultimately irresolvable as well. All of this may be controversial, but I take it to be a true description of our precariously human epistemic condition in the realm of ontology.
    Found 2 days, 9 hours ago on Anjan Chakravartty's site
  24.
    A subject S's belief that Q is well-grounded if and only if it is based on a reason of S that gives S propositional justification for Q. Depending on the nature of S's reason, the process whereby S bases her belief that Q on it can vary. If S's reason is non-doxastic––like an experience that Q or a testimony that Q––S will need to form the belief that Q as a spontaneous and immediate response to that reason. If S's reason is doxastic––like a belief that P––S will need to infer her belief that Q from it. The distinction between these two ways in which S's beliefs can be based on S's reasons is widely presupposed in current epistemology but––we argue in this paper––is not exhaustive. We give examples of quite ordinary situations in which a well-grounded belief of S appears to be based on S's reasons in neither of the ways described above. To accommodate these recalcitrant cases, we introduce the notion of enthymematic inference and defend the thesis that S can base a belief that Q on doxastic reasons P1, P2, …, Pn via inferring enthymematically Q from P1, P2, …, Pn.
    Found 2 days, 12 hours ago on Luca Moretti's site
  25.
    In this paper I deal with epistemological issues that stem from the hypothesis that reasoning is not only a means of transmitting knowledge from premise-beliefs to conclusion-beliefs, but also a primary source of knowledge in its own right. The idea is that one can gain new knowledge on the basis of suppositional reasoning. After making some preliminary distinctions, I argue that there are no good reasons to think that purported examples of knowledge grounded on pure reasoning are just examples of premise-based inferences in disguise. Next, I establish what kinds of true propositions can to a first approximation be known on the basis of pure reasoning. Finally, I argue that beliefs that are competently formed on the basis of suppositional reasoning satisfy both externalist and internalist criteria of justification.
    Found 2 days, 23 hours ago on PhilPapers
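    A schematic illustration of the sort of suppositional reasoning at issue (my example, not the paper's): a conditional proof whose conclusion rests on no premise-belief at all.
    % Lines 1-2 lie inside the scope of the supposition (marked by indentation).
    \[
    \begin{array}{rll}
      1. & \quad P \wedge Q & \text{supposition} \\
      2. & \quad P          & \text{from 1, conjunction elimination} \\
      3. & (P \wedge Q) \rightarrow P & \text{from 1--2, conditional proof; the supposition is discharged}
    \end{array}
    \]
    % Whatever justification line 3 enjoys comes from the reasoning itself,
    % not from antecedently believed premises.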
  26.
    I previously explained how Seth Lazar's first objection to my view was confused. His second, however, is more interesting. Lazar writes: Chappell thinks the objection has to do only with attitudes. His token-pluralistic utilitarianism can, in its deontic verdicts, be extensionally identical to token-monistic utilitarianism (according to which only aggregate well-being is non-instrumentally valuable), but preferable since it encourages us to adopt the appropriate attitude to the losses inflicted in the pursuit of the overall good. …
    Found 3 days, 2 hours ago on Philosophy, et cetera
  27.
    Plato’s argument against poetry in Republic 10 is perplexing. He condemns not all poetry, but only “however much of it is imitative (hosê mimêtikê)” (595a). A metaphysical charge against certain works of poetry – that they are forms of imitation, “at a third remove from the truth” – is thus used to justify an ethical charge: that these works cripple our thought and corrupt our souls.
    Found 3 days, 5 hours ago on Jessica Moss's site
  28.
    After presenting the current situation of epistemological research in Latin America and part of its history, this entry will address five topics: skepticism (especially in its Pyrrhonian stripe), core epistemology, formal epistemology, Wittgenstein’s thought in connection with epistemology and skepticism, and epistemology of law. It should be noted from the outset that the entry does not purport to provide a comprehensive account of epistemology in Latin America, but rather to paint a general picture of it by focusing on the main issues that have been discussed within that field. We will take into consideration the work of those scholars who have written (in Spanish, Portuguese, or English) on epistemological issues independently of both whether they are currently based in Latin America and whether they have worked in a non-Latin American country for a considerable part of their careers.
    Found 3 days, 14 hours ago on Diego Machuca's site
  29.
    Consider the view that each individual possesses a haecceity: a non-qualitative property the instantiation of which is both necessary and sufficient to be that very individual. One reason to believe in the existence of haecceities – perhaps the reason, and certainly the most influential – is that they are required to explain why any one individual is numerically distinct from another even when the two are qualitatively exactly alike, as it is with the pair of iron spheres famously discussed by Max Black (1952). According to this line of reasoning, what explains why Black’s iron spheres are two in number cannot merely be a difference in their qualitative properties. Rather, it is the fact that they possess two different haecceities, the numerical distinctness of which is taken as brute.
    Found 3 days, 15 hours ago on PhilPapers
  30.
    Classical logic requires each singular term to denote an object in the domain of quantification—which is usually understood as the set of “existing” objects. Free logic does not. Free logic is therefore useful for analyzing discourse containing singular terms that either are or might be empty. A term is empty if it either has no referent or refers to an object outside the domain. Most free logics have been first-order, their quantifiers ranging over individuals. Recently, however, some work on higher-order free logics has appeared. Corine Besson (2009) argues that internalist theories of natural kinds require second-order free logics whose quantifiers range over kinds, and she finds precedent for this idea ranging as far back as Cocchiarella (1986).
    Found 3 days, 16 hours ago on Wes Morriston's site
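    The contrast in a nutshell (a standard textbook formulation, not quoted from the paper): classical logic licenses existential generalization from any singular term, whereas free logic demands an existence premise, so empty terms generate no unwanted existential commitments.
    % Classical existential generalization:
    \[
      Fa \;\vdash\; \exists x\, Fx .
    \]
    % Its free-logic counterpart adds the existence predicate E!:
    \[
      Fa,\; E!a \;\vdash\; \exists x\, Fx ,
    \]
    % so 'Vulcan orbits the Sun' does not by itself yield 'something orbits
    % the Sun' when 'Vulcan' is empty.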