In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it.
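The "deliberately adding noise" technique has a standard instance on the differential-privacy side: the Laplace mechanism. The sketch below is a minimal illustration, not code from the source; the function names `laplace_noise` and `private_count` are my own. It relies on the standard fact that a counting query has sensitivity 1 (adding or removing one user changes the count by at most 1), so Laplace noise of scale 1/ε gives ε-DP.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-DP via the Laplace mechanism.

    A counting query has sensitivity 1 (one user changes the count by at
    most 1), so noise with scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of users over 65, with epsilon = 0.5.
ages = [23, 67, 45, 71, 34, 80, 19, 66]
noisy = private_count(ages, lambda a: a >= 65, epsilon=0.5)
```

The returned value is the true count plus zero-mean noise, so repeated queries average toward the truth while any single answer leaks at most ε about one user.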
The thesis is: the orders of infinite smallness which occur in the early calculus can be understood in terms of what we now call a valuation.
Re. Chapter 4: Probably the most well-known application of model theory has been Robinson’s development of nonstandard analysis to resuscitate the historical notion of an infinitesimal. We tried to further this development by showing how it could be used to handle the orders of infinite smallness in the early calculus.
Parametric context-sensitivity is a recognized but under-theorized form of context-sensitivity—under-theorized especially as compared to the sort of indexicality Kaplan [1977/1989] brought into focus. I single it out for attention here, using three case studies as stalking horses. The idea is to show that getting straight about parametric context-sensitivity is clarifying for a number of foundational semantic issues. Starting with modals, I argue that parametric context-sensitivity problematizes prevailing definitions of context-sensitivity, and offer alternative definitions. Turning next to variables (pronouns, and things like them), I bring out the way in which parametric context-sensitivity problematizes the idea that content is compositional, coming at Rabern’s insights from another direction. Clarity about the possibilities for parametrically contextualist analyses of pronouns helps in distinguishing several kinds of monstrous operations, and helps bring into focus what is at issue in the question whether (as Santorio has recently argued) the role of context in semantics is entirely post-semantic. (In a sequel to this paper, I turn to the third case study: indicative conditionals.)
In previous chapters, we saw how Mādhyamika philosophers, especially Candrakīrti, have elaborated on the notion of two truths. One puzzling feature of the notion of two truths concerns the notion of truth. The Mādhyamika, particularly the Prāsaṅgika, holds that everything is empty of intrinsic nature (śūnya). This has been thought to mean that, for the Prāsaṅgika, a conventional truth is an unreflective endorsement of what people already accept (lokaprasiddha), as supposed by Kamalaśīla. A consequence of this is that the normative role of truth is flattened, and thus the conventional authority of epistemic practices is undermined. (See the passage from the Sarvadharmaniḥsvabhāvasiddhi quoted on page 000.) The problem Kamalaśīla points out is not just that the truth of conventional truths is unexplained but also that no sophisticated analysis of anything can be given. Thus, the Prāsaṅgika is trapped in the dismal slough of pure conventionalism.
As I explained in the previous post, CSR’s account of scientific representation is based on the neuroscientific account of the brain-world relationship. The neuroscientific account is presented in terms of the Predictive Processing Theory (PPT) and the Free Energy Principle (FEP) as developed by Karl Friston and others. …
We celebrated Jerzy Neyman’s Birthday (April 16, 1894) last night in our seminar: here’s a pic of the cake. My entry today is a brief excerpt and a link to a paper of his that we haven’t discussed much on this blog: Neyman, J. …
To understand something involves some sort of commitment to a set of propositions comprising an account of the understood phenomenon. Some take this commitment to be a species of belief; others, such as Elgin and I, take it to be a kind of cognitive policy. This paper takes a step back from debates about the nature of understanding and asks when this commitment involved in understanding is epistemically appropriate, or ‘acceptable’ in Elgin’s terminology. In particular, appealing to lessons from the lottery and preface paradoxes, it is argued that this type of commitment is sometimes acceptable even when it would be rational to assign arbitrarily low probabilities to the relevant propositions. This strongly suggests that the relevant type of commitment is sometimes acceptable in the absence of epistemic justification for belief, which in turn implies that understanding does not require justification in the traditional sense. The paper goes on to develop a new probabilistic model of acceptability, based on the idea that the maximally informative accounts of the understood phenomenon should be optimally probable. Interestingly, this probabilistic model ends up being similar in important ways to Elgin’s proposal to analyze the acceptability of such commitments in terms of ‘reflective equilibrium’.
The Blog of Scott Aaronson. If you take just one piece of information from this blog: quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once. …
A foundational commitment of Aristotelian philosophy is that all facts are grounded in which substances exist and which features intrinsic to substances, namely forms and accidents, exist. But it is possible for the past to have been different without there being any difference in what substances and features intrinsic to substances presently exist. …
Neyman April 16, 1894 – August 5, 1981
My second Jerzy Neyman item, in honor of his birthday, is a little play that I wrote for Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018):
A local acting group is putting on a short theater production based on a screenplay I wrote: “Les Miserables Citations” (“Those Miserable Quotes”) . …
For readers unfamiliar with epistemology, a very brief overview of the subject may be useful prior to reading the papers by Hayes and McCarthy linking epistemology and Artificial Intelligence.
Work in both animals and humans has demonstrated that the brain specifically tracks the space near the body—the so-called peripersonal space (PPS). These representations appear to be multimodal and expressed in body-centered coordinates. They also play an important role in defense of the body from threat, manual action within PPS, and the use of tools—the latter, notably, ‘extending’ PPS to encompass the tool itself. Yet different authors disagree about important aspects of these representations, including how many there are. I suggest that the questions about the nature and number of PPS representations cannot be separated from the question of the mathematical basis of the corresponding representational spaces. I distinguish cartographic from functional bases for representation, suggesting that the latter is both a plausible account and supports a single-representation view. I conclude with reflections on functional bases and what they show about representation in cognitive science.
In the previous post, I have remarked that the existing forms of SR do not use the full capacity of their logical frameworks to account for a substantial relation between the structure of the scientific theories and reality. …
Qing philosophy refers to the topography of the intellectual terrain
of seventeenth- and eighteenth-century China, which sported coherent
patterns and modes of intellection and argumentation among the texts
and writings of the scholars in the period. In accordance with the
current historiographical convention, the time-span fell within the
so-called “late imperial” era that encompassed the
transition from the Ming dynasty (1368–1644) to the Qing
(1644–1911), as well as the first half of the Qing imperium. That Qing philosophy is distinguished as an independent subject and
entity, with its presumed boundaries and prominent features, is not
merely a function of chronology.
Generally regarded as one of the most important philosophers to write
in English, David Hume (1711–1776) was also well known in his
own time as an historian and essayist. A master stylist in any genre,
his major philosophical works—A Treatise of Human
Nature (1739–1740), the Enquiries concerning Human
Understanding (1748) and concerning the Principles of
Morals (1751), as well as his posthumously published
Dialogues concerning Natural Religion (1779)—remain
widely and deeply influential. Although Hume’s more conservative contemporaries denounced his
writings as works of scepticism and atheism, his influence is evident
in the moral philosophy and economic writings of his close friend Adam Smith.
Albert Einstein’s bold assertion of the form-invariance of the equation of a spherical light wave with respect to inertial frames of reference (1905) became, in the space of six years, the preferred foundation of his theory of relativity. Early on, however, Einstein’s universal light-sphere invariance was challenged on epistemological grounds by Henri Poincaré, who promoted an alternative demonstration of the foundations of relativity theory based on the notion of a light ellipsoid. A third figure of light, Hermann Minkowski’s light cone, also provided a new means of envisioning the foundations of relativity. Drawing in part on archival sources, this paper shows how an informal, international group of physicists, mathematicians, and engineers, including Einstein, Paul Langevin, Poincaré, Hermann Minkowski, Ebenezer Cunningham, Harry Bateman, Otto Berg, Max Planck, Max Laue, A. A. Robb, and Ludwig Silberstein, employed figures of light during the formative years of relativity theory in their discovery of the salient features of the relativistic worldview.
In a recent paper (Synthese, 2019. https://doi.org/10.1007/s11229-019-02101-3), Oldofredi presents a critical analysis of my mentalistic formulation of the measurement problem of quantum mechanics. Here I answer these criticisms, and explain more clearly why the formulation is helpful for understanding and solving the measurement problem.
The Federal Communications Commission (FCC) auction was a new kind of auction used to allocate licences for the use and exploitation of the electromagnetic spectrum in the United States. This auction set a methodological standard of design and engineering in economics; its design adopted some properties from the traditional English and Dutch auctions and also added innovative properties, such as multiple rounds in which bidders can return unwanted items. Unlike the English and the Dutch auctions, the FCC auction was designed and built by social scientists. The large revenue it raised was hailed as proof of the success of mechanism design theory. This success led some European governments to hire mechanism designers to design and implement similar auctions for the allocation of licences on the electromagnetic spectrum. The success was due not only to the knowledge available from mechanism design theory but also to the practical knowledge of experimental economists, who performed the experiments testing the rules and mechanisms, producing data crucial for the design and implementation of the new auction. In this article, I present a methodological account of the FCC auction design, discussing two main components of it, namely the blueprint produced by mechanism designers and the experiments performed to produce the data missing from the blueprint. I also evaluate this blueprint using the types of design and the principles of minimal analogy and type-hierarchies.
Abū Naṣr al-Fārābī (Iraq,
c. 870–c. 950) devoted his career to introducing the work of
Aristotle to educated Arabic-speaking citizens of the Islamic Empire. Several of his major writings are lost in whole or part. But many of
his books explaining Aristotle’s Organon (the
collection of Aristotle’s writings on logic and related
subjects) have survived, and the number of them available in Western
translations is increasing steadily. For general information on
al-Fārābī see the entry on
Al-Farabi. Al-Fārābī studies the various roles of language in
human life and society. He emphasises the use of language to convey
information, to ask questions and resolve disagreements, and to
describe distinctions and classifications.
Today is Jerzy Neyman’s birthday. I’ll post various Neyman items this week in recognition of it, starting with a guest post by Aris Spanos. Happy Birthday Neyman! A. Spanos
A Statistical Model as a Chance Mechanism
Jerzy Neyman (April 16, 1894 – August 5, 1981), was a Polish/American statistician[i] who spent most of his professional career at the University of California, Berkeley. …
Previously I introduced the
problem of scientific representation and remarked that Cognitive Structural
Realism (CSR) aims to address it. CSR is (evidently) a version of SR, but it is
also the inheritor of Ronald Giere and colleagues’ Cognitive Models of Science Approach.
The Sleeping Beauty Problem is a polarizing thought experiment involving a fair coin toss, memory erasure and temporal uncertainty. Despite its simplicity there is no agreed upon solution. In this work I put forward a set of arguments that support the so-called Halfer or 1/2 solution to the problem, while undermining the competing Thirder or 1/3 solution. In analyzing Elga’s original argument for the 1/3 solution, I bring to light a subtle but clear contradiction in his reasoning using temporal logic. Temporal reasoning also helps to neutralize the main criticisms against the 1/2 solution. Surprisingly, for some questions of probability or credence, it appears we need to distinguish between an event that has yet to occur, and the same event after it has already occurred. Knowledge that an event has been decided (without knowing the result) can be a type of admissible evidence when updating credences.
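The Halfer/Thirder split can be read as a disagreement about the reference class over which credence is computed: the fraction of coin tosses that land heads, versus the fraction of awakenings that follow heads. The simulation below is my own illustration of that numerical split, not an argument from the paper; it simply implements the standard protocol (heads: one awakening, tails: two).

```python
import random

def simulate(trials: int, seed: int = 0):
    """Simulate Sleeping Beauty: heads -> one awakening, tails -> two.

    Returns (fraction of trials in which the coin lands heads,
             fraction of awakenings that occur within a heads trial).
    """
    rng = random.Random(seed)
    heads_trials = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_trials += 1
            heads_awakenings += awakenings
    return heads_trials / trials, heads_awakenings / total_awakenings

per_toss, per_awakening = simulate(100000)
# per_toss tends to 1/2 (the Halfer's reference class);
# per_awakening tends to 1/3 (the Thirder's reference class).
```

The simulation does not adjudicate the dispute; it only makes explicit that the two answers are correct frequencies for two different questions, which is precisely where the temporal-reasoning analysis above does its work.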
Metametaphysics concerns foundational metaphysics. Questions of foundational metaphysics include: What is the subject matter of metaphysics? What are its aims? What is the methodology of metaphysics? Are metaphysical questions coherent? If so, are they substantive or trivial in nature? Some have claimed that the notion of grounding is useful in addressing such questions. In this chapter, we introduce some core debates about whether—and, if so, how—grounding should play a role in metametaphysics.
Questions about moral character have recently come to occupy a central
place in philosophical discussion. Part of the explanation for this
development can be traced to the publication in 1958 of G. E. M.
Anscombe’s seminal article “Modern Moral
Philosophy.” In that paper Anscombe argued that Kantianism and
utilitarianism, the two major traditions in western moral philosophy,
mistakenly placed the foundation for morality in legalistic notions
such as duty and obligation. To do ethics properly, Anscombe argued,
one must start with what it is for a human being to flourish or live
well. That meant returning to some questions that mattered deeply to
the ancient Greek moralists.
Medieval philosophical texts are written in a variety of literary
forms, many peculiar to the period, like the summa or
disputed question; others, like the commentary, dialogue, and axiom,
are also found in ancient and modern sources but are substantially
different in the medieval period from the classical or modern
instantiations of these forms. Many philosophical texts also have a
highly polemical style and/or seem deferential to the authoritative
sources they cite. Further, medieval philosophical thinkers operated
under the threat of censure from political and religious authority,
moving them, some have argued, to write esoterically or indirectly to
protect themselves from persecution for their true views.
There are different ways to formalise roughly the same knowledge, which negatively affects ontology reuse and alignment and other tasks such as formalising competency questions automatically. We aim to shed light on, and make more precise, the intuitive notion of such ‘representation styles’ by characterising their inherent features and the dimensions along which a style may differ. This has led to a total of 28 different traits partitioned over 10 dimensions. Operationalisability was assessed through an evaluation of 30 ontologies on those dimensions and their applicable values. This showed that the dimensions and values are feasible to use, and resulted in three easily recognisable types of ontologies. Most ontologies clearly had one trait or the other, whereas some were inherently mixed due to the inclusion of different and conflicting design decisions.
Suppose Alice has an inconsistent probabilistic assignment PA. Then, famously, there is a series of bets on single propositions (call these binary bets) that is a Dutch Book against Alice: i.e., Alice by her lights will accept each bet, and is guaranteed to lose money. …
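The classical construction behind this can be shown with a few lines of arithmetic. The specific incoherent prices below (Alice assigns 0.6 to both A and ¬A, summing to 1.2) are my own illustrative numbers, not from the excerpt; the bookie sells her a binary bet on each proposition at her own prices, and she loses the overround in every world.

```python
def binary_bet_payoff(price: float, stake: float, wins: bool) -> float:
    """Net payoff of buying, for price * stake, a bet paying stake if true."""
    return (stake if wins else 0.0) - price * stake

# Alice incoherently assigns P(A) = 0.6 and P(not-A) = 0.6 (sum 1.2 > 1).
# By her lights each bet is fair at these prices, so she accepts both.
p_a, p_not_a, stake = 0.6, 0.6, 1.0

# World 1: A is true.  World 2: A is false.
loss_if_a = (binary_bet_payoff(p_a, stake, True)
             + binary_bet_payoff(p_not_a, stake, False))
loss_if_not_a = (binary_bet_payoff(p_a, stake, False)
                 + binary_bet_payoff(p_not_a, stake, True))

# In both worlds Alice's net payoff is -(p_a + p_not_a - 1) * stake = -0.2:
# a guaranteed loss equal to the excess of her prices over 1.
```

The guaranteed loss is exactly the amount by which her two prices overshoot 1, which is why coherence (prices on a proposition and its negation summing to 1) is precisely the condition for immunity to this binary-bet book.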
Like me, you might have naively speculated that the more truth you know, the better you’ll do in gambling scenarios. But this is mistaken, at least when taken in the strong sense that there is a guarantee of doing better (or even just as well). …
At least in anglophone countries, Spinoza’s reputation as a
political thinker is eclipsed by his reputation as a rationalist
metaphysician. Nevertheless, Spinoza was a penetrating political
theorist whose writings have enduring significance. In his two
political treatises, Spinoza advances a number of forceful and original
arguments in defense of democratic governance, freedom of thought and
expression, and the subordination of religion to the state. On the
basis of his naturalistic metaphysics, Spinoza also offers trenchant
criticisms of ordinary conceptions of right and duty. And his account
of civil organization stands as an important contribution to the development of constitutionalism and the rule of law.