Logical monists and pluralists disagree about how many correct logics there are; the monists say there is just one, the pluralists that there are more. Could it turn out that both are wrong, and that there is no logic at all? Such a view might with justice be called logical nihilism, and here I’ll assume a particular gloss on what that means: nihilism is the view that there are no laws of logic, so that all candidates—e.g. the law of excluded middle, modus ponens, disjunctive syllogism, et al.—fail. Nihilism might sound absurd, but the view has come up in recent discussions of logical pluralism. Some pluralists have claimed that different logics are correct for different kinds of case, e.g. classical logic for consistent cases and paraconsistent logics for dialetheic ones. Monists have responded by appealing to a principle of generality for logic: a law of logic must hold for absolutely all cases, so that only those principles that feature in all of the pluralist’s systems count as genuine laws of logic. The pluralist replies that the monist’s insistence on generality collapses monism into nihilism, because, they maintain, every logical law fails in some cases.
Berkeley’s ‘master argument’ for idealism has been the subject of extensive criticism. Two of his strongest critics, A.N. Prior and J.L. Mackie, argue that due to various logical confusions on the part of Berkeley, the master argument fails to establish his idealist conclusion. Prior (1976) argues that Berkeley’s argument ‘proves too little’ in its conclusion, while Mackie (1964) contends that Berkeley confuses two different kinds of self-refutation in his argument. In this paper, I put forward a defence of the master argument based on intuitionistic logic. I argue that, analysed along these lines, Prior’s and Mackie’s criticisms fail to undermine Berkeley’s argument.
Whereas Bayesians have proposed norms such as probabilism, which requires immediate and permanent certainty in all logical truths, I propose a framework on which credences, including credences in logical truths, are rational because they are based on reasoning that follows plausible rules for the adoption of credences. I argue that my proposed framework has many virtues. In particular, it resolves the problem of logical omniscience.
In the societal tradeoffs problem, each agent perceives certain quantitative tradeoffs between pairs of activities, and the goal is to aggregate these tradeoffs across agents. This is a problem in social choice; specifically, it is a type of quantitative judgment aggregation problem. A natural rule for this problem was axiomatized by Conitzer et al. [AAAI 2016]; they also provided several algorithms for computing the outcomes of this rule. In this paper, we present a significantly improved algorithm and evaluate it experimentally. Our algorithm is based on a tight connection to minimum-cost flow that we exhibit. We also show that our algorithm cannot be improved without breakthroughs on min-cost flow.
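To get a feel for the aggregation problem, here is a toy sketch (mine, not the algorithm from the paper): for a single pair of activities, aggregating multiplicative tradeoffs by minimizing total deviation in log-space reduces to taking a median of logs.

```python
import math

def aggregate_tradeoffs(ratios):
    """Aggregate one pairwise tradeoff across agents.

    Each agent reports a ratio r > 0 ("one unit of activity A is worth
    r units of activity B").  Working in log-space, the value that
    minimizes the sum of absolute deviations from the agents' reports
    is the median of the logs.
    """
    logs = sorted(math.log(r) for r in ratios)
    n = len(logs)
    mid = n // 2
    med = logs[mid] if n % 2 == 1 else 0.5 * (logs[mid - 1] + logs[mid])
    return math.exp(med)

# Three agents report tradeoffs of 2, 4, and 8 between two activities.
print(aggregate_tradeoffs([2, 4, 8]))  # ≈ 4.0
```

With more than two activities the aggregated log-tradeoffs must also be consistent along paths (the tradeoff from A to C should equal A-to-B times B-to-C), which, as I read the abstract, is where the connection to minimum-cost flow does the real work.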
Excerpts from the Preface:
The Statistics Wars:
Today’s “statistics wars” are fascinating: They are at once ancient and up to the minute. They reflect disagreements on one of the deepest, oldest, philosophical questions: How do humans learn about the world despite threats of error due to incomplete and variable data? …
We show that combining two different hypothetical enhancements to quantum computation—namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. This complements a related result due to Raz. The proof uses locally decodable codes.
My student Brandon Coya has finished his thesis! • Brandon Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2018. It’s about networks in engineering. …
PDQP/qpoly = ALL
I’ve put up a new paper. Unusually for me these days, it’s a very short and simple one (8 pages)—I should do more like this! Here’s the abstract:
We show that combining two different hypothetical enhancements to quantum computation—namely, quantum advice and non-collapsing measurements—would let a quantum computer solve any decision problem whatsoever in polynomial time, even though neither enhancement yields extravagant power by itself. …
My favourite fallacy is the fallacy fallacy. It’s the fallacy of thinking that something is a fallacy when it isn’t. This paper concerns a high-profile instance, namely the phenomenon of hindsight bias. Roughly, it is the phenomenon of being more confident that some body of evidence supports a hypothesis when one knows that the hypothesis is true, than when one doesn’t.
Decision-makers face severe uncertainty when they are not in a position to assign precise probabilities to all of the relevant possible outcomes of their actions. Such situations are common—novel medical treatments and policies addressing climate change are two examples. Many decision-makers respond to such uncertainty in a cautious manner and are willing to incur a cost to avoid it. There are good reasons for taking such a cautious, uncertainty-averse attitude to be permissible. So far, however, there has been very little work on developing a theory of distributive justice which incorporates it. We aim to remedy this lack. We put forward a novel, uncertainty-averse egalitarian view of distributive justice. We analyse when the twin aims of reducing inequality and limiting the burdens of severe uncertainty are congruent and when they conflict, and highlight several practical implications of the proposed view. We also demonstrate that if uncertainty aversion is permissible, then utilitarians must relinquish a favourite argument against egalitarianism.
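One standard way to make uncertainty aversion precise (my illustration; the paper need not adopt this exact rule) is Γ-maximin: when no single precise probability is warranted, rank actions by their worst-case expected utility over the set of admissible distributions.

```python
def gamma_maximin(actions, distributions):
    """Pick the action maximizing worst-case expected utility.

    `actions` maps an action name to its utility in each state;
    `distributions` is the set of probability functions the agent
    regards as admissible (severe uncertainty: no single precise prior).
    """
    def worst_case_eu(utilities):
        return min(sum(p * u for p, u in zip(dist, utilities))
                   for dist in distributions)
    return max(actions, key=lambda a: worst_case_eu(actions[a]))

# Two states; the agent only knows the first state's probability
# lies somewhere between 0.2 and 0.8.
dists = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
acts = {"safe": (50, 50), "risky": (100, 0)}
print(gamma_maximin(acts, dists))  # → safe
```

The cautious agent here forgoes the risky option's higher best-case payoff: exactly the willingness to incur a cost to avoid severe uncertainty that the abstract describes.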
It is often assumed that one couldn’t finitely specify a nonmeasurable set. In this post I will argue for two theses:
1. It is possible that someone finitely specifies a nonmeasurable set.
2. It is possible that someone finitely specifies a nonmeasurable set and reasonably believes—and maybe even knows—that she is doing so. …
Zero provides a challenge for philosophers of mathematics with realist inclinations. On the one hand it is a bona fide number, yet on the other it is linked to ideas of nothingness and non-being. This paper provides an analysis of the epistemology and metaphysics of zero. We develop several constraints and then argue that a satisfactory account of zero can be obtained by integrating recent work in numerical cognition with a philosophical account of absence perception.
Let us motivate this claim in more detail. Experimentation is a key element when characterizing simulation modeling, exactly because it occurs in two varieties. The first variety has been called theoretical model, computer, or numerical experiments. We prefer to call them simulation experiments. They are used to investigate the behavior of models. Clearly simulation offers new possibilities for conducting experiments of this sort and hence investigating models beyond what is tractable by theoretical analysis. We are interested in how simulation experiments function in simulation modeling. Importantly, relevant properties of simulation models can be known only by simulation experiments. There are two immediate and important consequences. First, simulation experiments are unavoidable in simulation modeling. Second, when researchers construct a model and want to find out how possible elaborations of the current version perform, they will have to conduct repeated experiments.
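A minimal example of a simulation experiment in this sense (my own toy model, not one from the paper): sweep a parameter of a simple iterated model and record long-run behavior that would be tedious to derive analytically.

```python
def simulate(r, x0=0.5, steps=1000, keep=100):
    """Iterate the logistic map x -> r*x*(1-x) and return the tail."""
    x = x0
    tail = []
    for i in range(steps):
        x = r * x * (1 - x)
        if i >= steps - keep:
            tail.append(x)
    return tail

# The "experiment": sweep the parameter and count how many distinct
# long-run values the model settles into at each setting.
for r in (2.8, 3.2, 3.5):
    distinct = len({round(x, 6) for x in simulate(r)})
    print(r, distinct)
```

Running the model is the only practical way to learn these properties at a given parameter setting, which is the point the passage makes about simulation experiments being unavoidable.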
Scientists are generally subject to social pressures, including pressures to conform with others in their communities, that affect achievement of their epistemic goals. Here we analyze a network epistemology model in which agents, all else being equal, prefer to take actions that conform with those of their neighbors. This preference for conformity interacts with the agents’ beliefs about which of two (or more) possible actions yields the better outcome. We find a range of possible outcomes, including stable polarization in belief and action. The model results are sensitive to network structure. In general, though, conformity has a negative effect on a community’s ability to reach accurate consensus about the world.
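As a rough illustration of the kind of dynamics at issue (a sketch of mine, not the authors' model), here is a toy agent-based run in which agents on a ring sometimes conform to their neighbors and sometimes follow noisy private evidence favoring the objectively better action:

```python
import random

def run(n_agents=20, rounds=200, conformity=0.7, seed=1):
    """Toy conformity dynamics on a ring.

    Each agent holds action 0 or 1; action 1 is objectively better.
    Each round one agent either conforms to its two ring neighbors
    (with probability `conformity`, and only when they agree) or
    follows a noisy private trial that favors action 1.
    """
    rng = random.Random(seed)
    acts = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(rounds):
        i = rng.randrange(n_agents)
        left, right = acts[i - 1], acts[(i + 1) % n_agents]
        if rng.random() < conformity:
            if left == right:          # conform to unanimous neighbors
                acts[i] = left
        else:                          # noisy evidence favoring action 1
            acts[i] = 1 if rng.random() < 0.8 else 0
    return acts

final = run()
print(sum(final), "of", len(final), "agents end on the better action")
```

With high conformity, locally unanimous pockets of the worse action can persist, which gives a flavor of the stable polarization the abstract reports.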
We provide a novel perspective on “regularity” as a property of representations of the Weyl algebra. We first critique a proposal by Halvorson [2004, “Complementarity of representations in quantum mechanics”, Studies in History and Philosophy of Modern Physics 35(1), pp. 45–56], who argues that the non-regular “position” and “momentum” representations of the Weyl algebra demonstrate that a quantum mechanical particle can have definite values for position or momentum, contrary to a widespread view. We show that there are obstacles to such an interpretation of non-regular representations. In Part II, we propose a justification for focusing on regular representations, pace Halvorson, by drawing on algebraic methods.
A critical survey of some attempts to define ‘computer’, beginning with some informal ones (from reference books, and definitions due to H. Simon, A.L. Samuel, and M. Davis), then critically evaluating those of three philosophers (J.R. Searle, P.J. Hayes, and G. Piccinini), and concluding with an examination of whether the brain and the universe are computers.
The sustained failure of efforts to design an infinite lottery machine using ordinary probabilistic randomizers is traced back to a problem familiar to set theorists: there are no constructive prescriptions for probabilistically non-measurable sets. Yet construction of such sets is required if we are to be able to read the result of an infinite lottery machine that is built from ordinary probabilistic randomizers. All such designs face a dilemma: they can provide an accessible (readable) result with probability zero; or an inaccessible result with probability greater than zero.
Since Keenan & Stavi (1986), exceptive phrases (EPs) like but/except Luciano have been taken to modify generalized quantifiers by subtracting their complements from the ‘host’ quantifier’s domain. Thus, an EP entails a negatively-restricted relative clause (NRR).
“If a statistical analysis is clearly shown to be effective … it gains nothing from being … principled,” according to Terry Speed in an interesting IMS article (2016) that Harry Crane tweeted about a couple of days ago [i]. …
There exists a common view that for theories related by a ‘duality’, dual models may typically be taken ab initio to represent the same physical state of affairs, i.e. to correspond to the same possible world. We question this view, by drawing a parallel with the distinction between ‘interpretational’ and ‘motivational’ approaches to symmetries.
guest post by Christian Williams
Mike Stay has been doing some really cool stuff since earning his doctorate. He’s been collaborating with Greg Meredith, who studied the π-calculus with Abramsky, and then conducted impactful research and design in the software industry before some big ideas led him into the new frontier of decentralization. …
We advocate and develop a states-based semantics for both nominal and adjectival confidence reports, as in Ann is confident/has confidence that it’s raining, and their comparatives Ann is more confident/has more confidence that it’s raining than that it’s snowing. Other examples of adjectives that can report confidence include sure and certain. Our account adapts Wellwood’s account of adjectival comparatives in which the adjectives denote properties of states, and measure functions are introduced compositionally. We further explore the prospects of applying these tools to the semantics of probability operators. We emphasize three desirable and novel features of our semantics: (i) probability claims only exploit qualitative resources unless there is explicit compositional pressure for quantitative resources; (ii) the semantics applies to both probabilistic adjectives (e.g., likely) and probabilistic nouns (e.g., probability); (iii) the semantics can be combined with an account of belief reports that allows thinkers to have incoherent probabilistic beliefs (e.g. thinking that A & B is more likely than A) even while validating the relevant purely probabilistic claims (e.g. validating the claim that A & B is never more likely than A). Finally, we explore the interaction between confidence-reporting discourse (e.g., I am confident that...) and belief-reports about probabilistic discourse (e.g., I think it’s likely that...).
they each know that this is so, and so on. It need not matter how rationality is understood in the present context, as long as it entails the following: that if a rational agent knows he can obtain m by performing one of two alternative actions, n by performing the other, and m is better by his standards, then he performs the first alternative: he …
Classical logic is characterized by the familiar truth-value semantics, in which an interpretation assigns one of two truth values to any propositional letter in the language (in the propositional case), and a function from a power of the domain to the set of truth values in the predicate case. Truth values of composite sentences are assigned on the basis of the familiar truth functions. This abstract semantics immediately yields an applied semantics, in the sense that the truth value of an interpreted sentence is given by the truth value of that sentence in an interpretation in which the propositional variables are given the truth values of the statements that interpret them. So if p is interpreted as the statement “Paris is in France” and q as “London is in Italy”, then the truth value of “p ∨ q” is |p ∨ q|, where the interpretation | | is given by |p| = T and |q| = F. And since |A ∨ B| is defined to be T just in case |A| = T or |B| = T, we have |p ∨ q| = T.
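The excerpt's worked example can be checked mechanically; a minimal sketch:

```python
# Truth functions for the classical connectives.
def v_or(a, b):  return a or b
def v_and(a, b): return a and b
def v_not(a):    return not a

# Applied semantics: interpret p as "Paris is in France" (true) and
# q as "London is in Italy" (false), then evaluate the compound.
interp = {"p": True, "q": False}
print(v_or(interp["p"], interp["q"]))  # → True
```

The truth value of the interpreted disjunction depends only on the truth values |p| and |q|, exactly as the passage describes.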
It has been argued that an epistemically rational agent’s evidence is subjectively mediated through some rational epistemic standards, and that there are incompatible but equally rational epistemic standards available to agents. This supports Permissiveness, the view according to which one or multiple fully rational agents are permitted to take distinct incompatible doxastic attitudes towards P (relative to a body of evidence). In this paper, I argue that the above claims entail the existence of a unique and more reliable epistemic standard. My strategy relies on Condorcet’s Jury Theorem. This gives rise to an important problem for those who argue that epistemic standards are permissive, since the reliability criterion is incompatible with such a type of Permissiveness.
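The jury theorem that the argument relies on is easy to compute with directly: for an odd number n of independent voters, each correct with probability p > 1/2, the probability that the majority verdict is correct grows with n.

```python
from math import comb

def majority_correct(p, n):
    """Probability that a majority of n independent voters, each
    correct with probability p, reaches the correct verdict (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

# With individual competence 0.6, group reliability grows with size.
for n in (1, 3, 11, 101):
    print(n, round(majority_correct(0.6, n), 3))
```

This monotone growth is what lets aggregation over equally rational but incompatible standards single out a more reliable standard, which is the pressure the paper puts on Permissiveness.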
Posted on Tuesday, 08 May 2018
A might counterfactual is a statement of the form 'if so-and-so were the case then such-and-such might be the case'. I used to think that there are different kinds of might counterfactuals: that sometimes the 'might' takes scope over the entire conditional, and other times it does not. …
I argue for a full mathematisation of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: “physics from no physics”. Although this may seem an oxymoron, it is the royal road to complete logical coherence, and hence to the falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only to the interpretation of the mathematics, ranging from the axioms to the final theorems. By contrast, the postulates of the two current major physical theories either have no physical interpretation (as with von Neumann’s axioms for quantum theory) or contain physical primitives such as “clock”, “rigid rod”, “force”, and “inertial mass” (as with special relativity and mechanics). A purely mathematical theory of the kind proposed here, though with a limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which the natural sciences can firmly rely. Such a theory is what I consider to be the solution to Hilbert’s Sixth Problem. I argue that a prototype of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretic derivation of quantum theory and free quantum field theory.
Much of the discussion of set-theoretic independence, and whether or not we could legitimately expand our foundational theory, concerns how we could possibly come to know the truth value of independent sentences. This paper pursues a slightly different tack, examining how we are ignorant of issues surrounding their truth. We argue that a study of how we are ignorant reveals a need for an understanding of set-theoretic explanation and motivates a pluralism concerning the adoption of foundational theory.
In the framework of Brans-Dicke (BD) theory, the present study determines the time dependence of the BD parameter, the energy density and the equation of state (EoS) parameter of the cosmic fluid in a universe expanding with acceleration, preceded by a phase of deceleration. For this purpose, a scale factor has been chosen for the present model such that the deceleration parameter obtained from it shows a signature flip with time. Considering the dark energy to be responsible for the entire pressure, the time evolution of the energy parameters for matter and dark energy and of the EoS parameter for dark energy have been determined. A term representing interaction between matter and dark energy has been calculated. Its negative value at the present time indicates conversion of matter into dark energy. It is evident from the present study that the nature of the dependence of the scalar field upon the scale factor plays a very important role in governing the time evolution of the cosmological quantities studied here. This model has an inherent simplicity in the sense that it allows one to determine the time evolution of dark energy for a homogeneous and isotropic universe, without involving any self-interaction potential or cosmological constant in the formulation.
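For illustration (the abstract does not state the specific form used), a standard choice of scale factor whose deceleration parameter exhibits the required signature flip is:

```latex
a(t) = A \sinh^{2/3}(\alpha t), \qquad
H \equiv \frac{\dot a}{a} = \frac{2}{3}\,\alpha \coth(\alpha t), \qquad
q \equiv -\frac{\ddot a \, a}{\dot a^{2}} = -1 + \frac{3}{2}\,\operatorname{sech}^{2}(\alpha t).
```

Here $q \to 1/2$ (deceleration) as $t \to 0$ and $q \to -1$ (acceleration) as $t \to \infty$, with the flip occurring where $\cosh^{2}(\alpha t) = 3/2$.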
guest post by Matteo Polettini
Suppose you receive an email from someone who claims “here is the design of a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. …