1.
    Quantum Field Theory (QFT) is the mathematical and conceptual framework for contemporary elementary particle physics. It is also a framework used in other areas of theoretical physics, such as condensed matter physics and statistical mechanics. In a rather informal sense QFT is the extension of quantum mechanics (QM), dealing with particles, over to fields, i.e. systems with an infinite number of degrees of freedom. (See the entry on quantum mechanics.) In the last decade QFT has become a more widely discussed topic in philosophy of science, with questions ranging from methodology and semantics to ontology.
    Found 17 hours, 53 minutes ago on Stanford Encyclopedia of Philosophy
  2.
    Neurophysiology and neuroanatomy limit the set of possible computations that can be performed in a brain circuit. Although detailed data on individual brain microcircuits are available in the literature, cognitive modellers seldom take these constraints into account. One reason for this is the intrinsic complexity of accounting for mechanisms when describing function. In this paper, we present multiple extensions to the Neural Engineering Framework that simplify the integration of low-level constraints such as Dale’s principle and spatially constrained connectivity into high-level, functional models. We apply these techniques to a recent model of temporal representation in the Granule-Golgi microcircuit in the cerebellum, extending it towards higher degrees of biological plausibility. We perform a series of experiments to analyze the impact of these changes on a functional level. The results demonstrate that our chosen functional description can indeed be mapped onto the target microcircuit under biological constraints. Further, we gain insights into why these parameters are as observed by examining the effects of parameter changes. While the circuit discussed here only describes a small section of the brain, we hope that this work inspires similar attempts at bridging low-level biological detail and high-level function. To encourage the adoption of our methods, we published the software developed for building our model as an open-source library.
    Found 21 hours, 28 minutes ago on Chris Eliasmith's site
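    A minimal sketch of one way a constraint like Dale’s principle can be built into weight solving (an illustration under invented toy data, not the authors’ published library):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy setup: 30 excitatory and 10 inhibitory presynaptic neurons, their
# firing rates A over 200 sample stimuli, and a target input current J
# for one postsynaptic neuron.
n_exc, n_inh, n_samples = 30, 10, 200
A = rng.random((n_samples, n_exc + n_inh))   # non-negative firing rates
J = rng.standard_normal(n_samples)           # desired postsynaptic current

# Dale's principle: a neuron's outgoing weights are all excitatory (>= 0)
# or all inhibitory (<= 0). Negating the inhibitory columns turns the
# sign-constrained problem into a single non-negative least-squares solve.
A_signed = np.hstack([A[:, :n_exc], -A[:, n_exc:]])
w_signed, residual = nnls(A_signed, J)

w = np.concatenate([w_signed[:n_exc], -w_signed[n_exc:]])
assert (w[:n_exc] >= 0).all() and (w[n_exc:] <= 0).all()
print("residual:", residual)
```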
  3.
    Last week, I explained how you can give an accuracy dominance argument for Probabilism without assuming that your inaccuracy measures are additive -- that is, without assuming that the inaccuracy of a whole credence function is obtained by adding up the inaccuracy of all the individual credences that it assigns. …
    Found 1 day, 3 hours ago on M-Phi
  4.
    Psycholinguistic studies have repeatedly demonstrated that downward entailing (DE) quantifiers are more difficult to process than upward entailing (UE) ones. We contribute to the current debate on cognitive processes causing the monotonicity effect by testing predictions about the underlying processes derived from two competing theoretical proposals: two-step and pragmatic processing models. We model reaction times and accuracy from two verification experiments (a sentence-picture and a purely linguistic verification task), using the diffusion decision model (DDM). In both experiments, verification of the UE quantifier ‘more than half’ was compared to verification of the DE quantifier ‘fewer than half’. Our analyses revealed the same pattern of results across tasks: Both non-decision times and drift rates, two of the free model parameters of the DDM, were affected by the monotonicity manipulation. Thus, our modeling results support both two-step (prediction: non-decision time is affected) and pragmatic processing models (prediction: drift rate is affected).
    Found 3 days, 9 hours ago on Jakub Szymanik's site
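    Both parameters the analyses single out (non-decision time and drift rate) are easy to see in a forward simulation of the DDM. A hedged sketch with invented parameter values (the paper fits the model to data rather than simulating it):

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, non_decision=0.3,
                 noise=1.0, dt=1e-3, rng=None):
    """One DDM trial: evidence x drifts from 0 toward +/- threshold;
    non_decision adds encoding/motor time to the boundary-crossing time."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + non_decision, x > 0  # (reaction time, upper boundary hit)

rng = np.random.default_rng(1)
# Invented values: the DE quantifier gets a lower drift rate and a longer
# non-decision time than the UE quantifier, mirroring the two effects.
rts_ue = [simulate_ddm(1.2, non_decision=0.30, rng=rng)[0] for _ in range(500)]
rts_de = [simulate_ddm(0.8, non_decision=0.38, rng=rng)[0] for _ in range(500)]
print(np.mean(rts_ue), np.mean(rts_de))  # DE slower on average
```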
  5.
    [177] Shenoy, Prakash P. (1991), On Spohn’s Rule for Revision of Beliefs. International Journal of Approximate Reasoning 5, 149–181. [178] Spirtes, Peter, Glymour, Clark & Scheines, Richard (2000), Causation, Prediction, and Search.
    Found 3 days, 9 hours ago on Franz Huber's site
  6.
    The cerebellum is classically described in terms of its role in motor control. Recent evidence suggests that the cerebellum supports a wide variety of functions, including timing-related cognitive tasks and perceptual prediction. Correspondingly, deciphering cerebellar function may be important to advance our understanding of cognitive processes. In this paper, we build a model of eyeblink conditioning, an extensively studied low-level function of the cerebellum. Building such a model is of particular interest, since, as of now, it remains unclear how exactly the cerebellum manages to learn and reproduce the precise timings observed in eyeblink conditioning that are potentially exploited by cognitive processes as well. We employ recent advances in large-scale neural network modeling to build a biologically plausible spiking neural network based on the cerebellar microcircuitry. We compare our simulation results to neurophysiological data and demonstrate how the recurrent Granule-Golgi subnetwork could generate the dynamic representations required for triggering motor trajectories in the Purkinje cell layer. Our model is capable of reproducing key properties of eyeblink conditioning, while generating neurophysiological data that could be experimentally verified.
    Found 3 days, 9 hours ago on Chris Eliasmith's site
  7.
    Inspired by work of Stefano Zambelli on these topics, this paper considers the complex nature of the relation between technology and computability. This involves reconsidering the role of computational complexity in economics and then applying this to a particular formulation of the nature of technology as conceived within the Sraffian framework. A crucial element of this is to expand the concept of technique clusters. This allows for understanding that the set of possible techniques is of a higher infinite cardinality than that of the points on a wage-profit frontier. This is associated with potentially deep discontinuities in production functions and a higher form of uncertainty involved in technological change and growth.
    Found 3 days, 9 hours ago on Barkley Rosser's site
  8.
    In a recent paper, Barrio, Tajer and Rosenblatt establish a correspondence between metainferences holding in the strict-tolerant logic of transparent truth ST and inferences holding in the logic of paradox LP. They argue that LP is ST's external logic and they question whether ST's solution to the semantic paradoxes is fundamentally different from LP's. Here we establish that by parity of reasoning, ST can be related to LP's dual logic K3. We clarify the distinction between internal and external logic and argue that while ST's nonclassicality can be granted, its self-dual character does not tie it to LP more closely than to K3.
    Found 3 days, 9 hours ago on David Ripley's site
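    For readers who do not have the three logics in mind, the standard strong-Kleene presentation (values 1, 1/2, 0; the textbook formulation, not the paper's own notation) makes the claimed duality easy to see:

```latex
% Premises can be read strictly (value 1) or tolerantly (value >= 1/2):
\begin{align*}
\Gamma \vDash_{K3} \varphi &\iff \forall v\,\bigl[(\forall \gamma \in \Gamma)\; v(\gamma) = 1 \;\Rightarrow\; v(\varphi) = 1\bigr]\\
\Gamma \vDash_{LP} \varphi &\iff \forall v\,\bigl[(\forall \gamma \in \Gamma)\; v(\gamma) \geq \tfrac{1}{2} \;\Rightarrow\; v(\varphi) \geq \tfrac{1}{2}\bigr]\\
\Gamma \vDash_{ST} \varphi &\iff \forall v\,\bigl[(\forall \gamma \in \Gamma)\; v(\gamma) = 1 \;\Rightarrow\; v(\varphi) \geq \tfrac{1}{2}\bigr]
\end{align*}
```

    ST reads its premises like K3 and its conclusions like LP, which is why parity-of-reasoning arguments can relate it to either logic.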
  9.
    This paper presents a novel typed term calculus and reduction relation for it, and proves that the reduction relation is strongly normalizing—that there are no infinite reduction sequences. The calculus is similar to the simply-typed lambda calculus with an empty type, but with a twist. The simply-typed lambda calculus with an empty type bears a close relation to the →, ⊥ fragment of intuitionistic logic ([Howard, 1980; Scherer, 2017; Sørensen and Urzyczyn, 2006]); the calculus to be presented here bears a similar relation to the →, ¬ fragment of a logic known as core logic. Because of this connection, I’ll call the calculus ‘core type theory’.
    Found 3 days, 13 hours ago on David Ripley's site
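    For background on the Curry-Howard reading invoked here, the simply-typed side looks as follows (a standard sketch; the core-type-theory rules themselves are in the paper, and plausibly the "twist" concerns how absurdity and negation are handled):

```latex
% Simply-typed lambda calculus with an empty type 0: abstraction and
% application mirror the implication rules; the eliminator for 0 mirrors
% ex falso quodlibet.
\[
\frac{\Gamma,\, x : A \;\vdash\; t : B}{\Gamma \;\vdash\; \lambda x.\, t : A \to B}
\qquad
\frac{\Gamma \;\vdash\; s : A \to B \qquad \Gamma \;\vdash\; t : A}{\Gamma \;\vdash\; s\, t : B}
\qquad
\frac{\Gamma \;\vdash\; t : 0}{\Gamma \;\vdash\; \mathsf{abort}(t) : A}
\]
```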
  10.
    For a PDF of this post, see here. One of the central arguments in accuracy-first epistemology -- the one that gets the project off the ground, I think -- is the accuracy-dominance argument for Probabilism. …
    Found 4 days, 12 hours ago on M-Phi
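    The structure of that argument shows up already in a two-cell example (my numbers, not the post's): credences that violate Probabilism are Brier-dominated by probabilistic ones.

```python
# Credences (0.6, 0.6) over {H, not-H} sum to 1.2, violating Probabilism;
# the probabilistic (0.5, 0.5) is strictly more accurate in every world.
def brier(c_h, c_not_h, h_true):
    truth = (1.0, 0.0) if h_true else (0.0, 1.0)
    return (c_h - truth[0]) ** 2 + (c_not_h - truth[1]) ** 2

for h_true in (True, False):
    print(h_true, brier(0.6, 0.6, h_true), brier(0.5, 0.5, h_true))
# Both worlds: 0.52 for (0.6, 0.6) vs 0.50 for (0.5, 0.5).
```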
  11.
    Decision making (DM) requires the coordination of anatomically and functionally distinct cortical and subcortical areas. While previous computational models have studied these subsystems in isolation, few models explore how DM holistically arises from their interaction. We propose a spiking neuron model that unifies various components of DM, then show that the model performs an inferential decision task in a human-like manner. The model (a) includes populations corresponding to dorsolateral prefrontal cortex, orbitofrontal cortex, right inferior frontal cortex, pre-supplementary motor area, and basal ganglia; (b) is constructed using 8000 leaky-integrate-and-fire neurons with 7 million connections; and (c) realizes dedicated cognitive operations such as weighted valuation of inputs, accumulation of evidence for multiple choice alternatives, competition between potential actions, dynamic thresholding of behavior, and urgency-mediated modulation. We show that the model reproduces reaction time distributions and speed-accuracy tradeoffs from humans performing the task. These results provide behavioral validation for tasks that involve slow dynamics and perceptual uncertainty; we conclude by discussing how additional tasks, constraints, and metrics may be incorporated into this initial framework.
    Found 5 days, 15 hours ago on Chris Eliasmith's site
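    For readers unfamiliar with the neuron model named in (b), here is a minimal leaky integrate-and-fire simulation (a generic textbook sketch, not the model's 8000-neuron network):

```python
import numpy as np

def lif_spike_times(current, tau_rc=0.02, tau_ref=0.002, v_th=1.0, dt=1e-3):
    """Leaky integrate-and-fire: voltage v leaks toward the input current
    and emits a spike (with reset and refractory period) at threshold."""
    v, refractory, spikes = 0.0, 0.0, []
    for step, j in enumerate(current):
        if refractory > 0.0:
            refractory -= dt
        else:
            v += (dt / tau_rc) * (j - v)
        if v >= v_th:
            spikes.append(step * dt)
            v, refractory = 0.0, tau_ref
    return spikes

# One second of constant suprathreshold input current.
print(lif_spike_times(np.full(1000, 1.5)))
```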
  12.
    Spatial cognition relies on an internal map-like representation of space provided by hippocampal place cells, which in turn are thought to rely on grid cells as a basis. Spatial Semantic Pointers (SSPs) have been introduced as a way to represent continuous spaces and positions via the activity of a spiking neural network. In this work, we further develop the SSP representation to replicate the firing patterns of grid cells. This adds biological realism to the SSP representation and links biological findings with a larger theoretical framework for representing concepts. Furthermore, replicating grid cell activity with SSPs results in greater accuracy when constructing place cells. Improved accuracy is a result of grid cells forming the optimal basis for decoding positions and place cell output. Our results have implications for modelling spatial cognition and more general cognitive representations over continuous variables.
    Found 5 days, 15 hours ago on Chris Eliasmith's site
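    A sketch of the SSP encoding the abstract builds on, via fractional binding (circular-convolution powers computed in the Fourier domain). Dimensionality and random seed are arbitrary, and the grid-cell-specific constraints the paper adds are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256  # representation dimensionality (arbitrary)

# Base vector defined by unit-magnitude Fourier coefficients.
phases = rng.uniform(-np.pi, np.pi, d // 2 + 1)
phases[0] = 0.0    # keep the DC coefficient real
phases[-1] = 0.0   # keep the Nyquist coefficient real
F = np.exp(1j * phases)

def ssp(x):
    """Spatial Semantic Pointer for position x: the base vector raised to
    the (fractional) circular-convolution power x."""
    return np.fft.irfft(F ** x, n=d)

# Similarity falls off with distance between encoded positions, which is
# what makes place-cell-like decoding possible.
print(ssp(2.0) @ ssp(2.0), ssp(2.0) @ ssp(2.3), ssp(2.0) @ ssp(5.0))
```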
  13.
    Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, your future, better informed self, or the objective chances. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn’t defer to your doctor by aligning your credence in the de se thought ‘I am sick’ with the doctor’s credence in that same de se thought. Nor should you defer to the objective chances by setting your credence in the thought ‘The actual winner wins’ equal to the objective chance that the actual winner wins. Here, I generalize principles of expert deference to handle these kinds of problem cases.
    Found 6 days, 15 hours ago on PhilPapers
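    For reference, the principles at issue have a standard schematic form (textbook statements; the paper's generalization is not reproduced here), with C your credence function, C_E the expert's, and ch the objective chances:

```latex
\[
C\bigl(p \mid C_E(p) = x\bigr) = x
\qquad\qquad
C\bigl(p \mid \mathrm{ch}(p) = x\bigr) = x
\]
```

    The de se counterexamples arise because ‘I am sick’ can have different truth-conditions for you and for the doctor, so aligning credences in ‘the same thought’ misfires.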
  14.
    Here’s a paper on categories where the morphisms are open physical systems, and composing them describes gluing these systems together: • John C. Baez, David Weisbart and Adam Yassine, Open systems in classical mechanics. …
    Found 6 days, 19 hours ago on Azimuth
  15.
    Consumption decisions are partly influenced by values and ideologies. Consumers care about global warming as well as about child labor and fair trade. Incorporating values into the consumer’s utility function will often violate monotonicity, in case consumption hurts cherished values in a way that isn’t offset by the hedonic benefits of material consumption. We distinguish between intrinsic and instrumental values, and argue that the former tend to introduce discontinuities near zero. For example, a vegetarian’s preferences would be discontinuous near zero amount of animal meat. We axiomatize a utility representation that captures such preferences and discuss the measurability of the degree to which consumers care about such values.
    Found 1 week ago on Itzhak Gilboa's site
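    The vegetarian case can be put schematically (my rendering of the abstract's claim, not the paper's axiomatization): with m the amount of meat consumed and y all other consumption,

```latex
\[
\lim_{m \to 0^{+}} U(m, y) \;<\; U(0, y)
\]
```

    so utility jumps as soon as any positive amount is consumed: the intrinsic value makes U discontinuous (and locally non-monotonic) at m = 0.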
  16.
    Relying on some auxiliary assumptions, usually considered mild, Bell’s theorem proves that no local theory can reproduce all the predictions of quantum mechanics. In this work, we introduce a fully local, superdeterministic model that, by explicitly violating settings independence—one of these auxiliary assumptions, requiring statistical independence between measurement settings and systems to be measured—is able to reproduce all the predictions of quantum mechanics. Moreover, we show that, contrary to widespread expectations, our model can break settings independence without an initial state that is too complex to handle, without visibly losing all explanatory power and without outright nullifying all of experimental science. Still, we argue that our model is unnecessarily complicated and does not offer true advantages over its non-local competitors. We conclude that, while our model does not appear to be a viable contender to their non-local counterparts, it provides the ideal framework to advance the debate over violations of statistical independence via the superdeterministic route.
    Found 1 week ago on PhilSci Archive
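    The auxiliary assumption being violated has a standard one-line formulation: with λ the complete state of the measured system and (a, b) the measurement settings, settings independence requires

```latex
\[
\rho(\lambda \mid a, b) = \rho(\lambda)
\]
```

    and a superdeterministic model is precisely one on which this equality fails.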
  17.
    The relation between causal structure and cointegration and long-run weak exogeneity is explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.
    Found 1 week ago on Kevin D. Hoover's site
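    For reference, the textbook definition doing the work here: two I(1) series are cointegrated when some linear combination of them is stationary,

```latex
\[
y_t,\, x_t \sim I(1)
\quad\text{and}\quad
z_t = y_t - \beta x_t \sim I(0) \ \text{for some } \beta
\]
```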
  18.
    In linguistics, the dominant approach to the semantics of plurals appeals to mereology. However, this approach has received strong criticisms from philosophical logicians who subscribe to an alternative framework based on plural logic. In the first part of the article, we offer a precise characterization of the mereological approach and the semantic background in which the debate can be meaningfully reconstructed. In the second part, we deal with the criticisms and assess their logical, linguistic, and philosophical significance. We identify four main objections and show how each can be addressed. Finally, we compare the strengths and shortcomings of the mereological approach and plural logic. Our conclusion is that the former remains a viable and well-motivated framework for the analysis of plurals.
    Found 1 week ago on David Nicolas's site
  19.
    Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: (i) the quantum state is a complete description of a physical system; (ii) the quantum (superposition) state is grounded in its terms, and (iii) the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realist account of the quantum state. This raises the question of whether there are other viable forms of quantum fragmentalism.
    Found 1 week ago on PhilPapers
  20.
    According to an increasingly popular view in epistemology and philosophy of mind, beliefs are sensitive to contextual factors such as practical factors and salient error possibilities. A prominent version of this view, called credal sensitivism, holds that the context-sensitivity of belief is due to the context-sensitivity of degrees of belief or credence. Credal sensitivism comes in two variants: while credence-one sensitivism (COS) holds that maximal confidence (credence one) is necessary for belief, threshold credal sensitivism (TCS) holds that belief consists in having credence above some threshold, where this threshold doesn’t require maximal confidence. In this paper, I argue that COS has difficulties in accounting for three important features about belief: i) the compatibility between believing p and assigning non-zero credence to certain error possibilities that one takes to entail not-p, ii) the fact that outright beliefs can occur in different strengths, and iii) beliefs held by unconscious subjects. I also argue that TCS can easily avoid these problems. Finally, I consider an alleged advantage of COS over TCS in terms of explaining beliefs about lotteries. I argue that lottery cases are rather more problematic for COS than TCS. In conclusion, TCS is the most plausible version of credal sensitivism.
    Found 1 week ago on PhilPapers
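    Stated schematically (my compact rendering of the abstract's definitions), with Cr_c the agent's credences in context c and t_c a contextually determined threshold:

```latex
\[
\text{COS:}\quad \mathrm{Bel}_c(p) \iff \mathrm{Cr}_c(p) = 1
\qquad\qquad
\text{TCS:}\quad \mathrm{Bel}_c(p) \iff \mathrm{Cr}_c(p) \geq t_c \ \ (t_c < 1)
\]
```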
  21.
    The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or concepts of which one was previously unaware? The economists Karni and Vierø (2013, 2015) make a proposal in this spirit. Philosophers have adopted effectively the same rule: revision in response to growing awareness should not affect the relative probabilities of propositions in one’s ‘old’ epistemic state. The rule is compelling, but only under the assumptions that its advocates introduce. It is not a general requirement of rationality, or so we argue. We provide informal counterexamples. And we show that, when awareness grows, the boundary between one’s ‘old’ and ‘new’ epistemic commitments is blurred. Accordingly, there is no general notion of conservative change in this setting.
    Found 1 week ago on PhilPapers
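    The two rules being compared have compact statements: conditionalisation for learning that E is true, and the Karni-Vierø-style rule for growing awareness, which preserves the odds among ‘old’ propositions:

```latex
\[
P_{\mathrm{new}}(\cdot) = P_{\mathrm{old}}(\cdot \mid E)
\qquad\qquad
\frac{P_{\mathrm{new}}(A)}{P_{\mathrm{new}}(B)} = \frac{P_{\mathrm{old}}(A)}{P_{\mathrm{old}}(B)}
\quad \text{for all old } A, B
\]
```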
  22.
    If the Past Hypothesis underlies the arrows of time, what is the status of the Past Hypothesis? In this paper, I examine the role of the Past Hypothesis in the Boltzmannian account and defend the view that the Past Hypothesis is a candidate fundamental law of nature. Such a view is known to be compatible with Humeanism about laws, but as I argue it is also supported by a minimal non-Humean “governing” view. Some worries arise from the non-dynamical and time-dependent character of the Past Hypothesis as a boundary condition, the intrinsic vagueness in its specification, and the nature of the initial probability distribution. I show that these worries do not have much force, and in any case they become less relevant in a new quantum framework for analyzing time’s arrows—the Wentaculus. Hence, both Humeans and minimalist non-Humeans should embrace the view that the Past Hypothesis is a candidate fundamental law of nature and welcome its ramifications for other parts of philosophy of science.
    Found 1 week, 1 day ago on PhilSci Archive
  23.
    It has been argued in various places that measurement-induced collapses in Orthodox Quantum Mechanics yield a genuine structural (or intrinsic) quantum arrow of time. In this paper, I will critically assess this proposal. I begin by distinguishing between a structural and a non-structural arrow of time. After presenting the proposal of a collapse-based arrow of time in some detail and discussing some criticisms it has faced, I argue, first, that any quantum arrow of time in Orthodox Quantum Mechanics cannot be defined for the entire universe and, second, that it requires non-dynamical information to be established. Consequently, I conclude that any quantum arrow of time in Orthodox Quantum Mechanics is, at best, local and non-structural, deflating the original proposal.
    Found 1 week, 2 days ago on PhilSci Archive
  24.
    The notion of time reversal has caused some recent controversy in philosophy of physics. In this paper, I claim that the notion is more complex than usually thought. In particular, I contend that any account of time reversal presupposes, explicitly or implicitly, an answer to the following questions: (a) What is time-reversal symmetry predicated of? (b) What sorts of transformations should time reversal perform, and upon what? (c) What role does time-reversal symmetry play in physical theories? Each dimension, I argue, not only admits divergent answers, but also opens a dimension of analysis that feeds the complexity of time reversal: modal, metaphysical, and heuristic, respectively. The comprehension of this multi-dimensionality, I conclude, shows how philosophically rich the notion of time reversal is in philosophy of physics.
    Found 1 week, 2 days ago on PhilSci Archive
  25.
    A widespread view in physics holds that the implementation of time reversal in standard quantum mechanics must be given by an anti-unitary operator. In foundations and philosophy of physics, however, there has been some discussion about the conceptual grounds of this orthodoxy, largely relying on either its obviousness or its mathematical-physical virtues. My aim in this paper is to substantively change the traditional structure of the debate by highlighting the philosophical commitments underlying the orthodoxy. I argue that the persuasive force of the orthodoxy greatly depends on a relationalist metaphysics of time and a by-stipulation view of time-reversal invariance. Only with such philosophical background can the orthodoxy of time reversal in standard quantum mechanics succeed and be properly justified.
    Found 1 week, 2 days ago on PhilSci Archive
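    The orthodoxy in question is the textbook claim that time reversal is implemented by an anti-unitary operator T, i.e. T = UK with U unitary and K complex conjugation, so that

```latex
\[
\langle T\psi,\, T\phi \rangle = \overline{\langle \psi,\, \phi \rangle}
\qquad\text{and}\qquad
T\, i\, T^{-1} = -i
\]
```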
  26.
    Catherine Herfeld: Professor List, what comes to your mind when someone refers to rational choice theory? What do you take rational choice theory to be? Christian List: When students ask me to define rational choice theory, I usually tell them that it is a cluster of theories, which subsumes individual decision theory, game theory, and social choice theory. I take rational choice theory to be not a single theory but a label for a whole field. In the same way, if you refer to economic theory, that is not a single theory either, but a whole discipline, which subsumes a number of different, specific theories. I am actually very ecumenical in my use of the label ‘rational choice theory’. I am also happy to say that rational choice theory in this broad sense subsumes various psychologically informed theories, including theories of boundedly rational choice. We should not define rational choice theory too narrowly, and we definitely shouldn’t tie it too closely to the traditional idea of homo economicus.
    Found 1 week, 3 days ago on Christian List's site
  27.
    The idea that logic is in some sense normative for thought and reasoning is a familiar one. Some of the most prominent figures in the history of philosophy including Kant and Frege have been among its defenders. The most natural way of spelling out this idea is to formulate wide-scope deductive requirements on belief which rule out certain states as irrational. But what can account for the truth of such deductive requirements of rationality? By far, the most prominent responses draw in one way or another on the idea that belief aims at the truth. In this paper, I consider two ways of making this line of thought more precise and I argue that they both fail. In particular, I examine a recent attempt by Epistemic Utility Theory to give a veritist account of deductive coherence requirements. I argue that despite its proponents’ best efforts, Epistemic Utility Theory cannot vindicate such requirements.
    Found 1 week, 3 days ago on PhilPapers
  28.
    Let L be a sentential (object) language containing atoms ‘A’, ‘B’, . . . , and two logical connectives ‘&’ and ‘→’. In addition to these two logical connectives, L will also contain another binary connective ‘⇒’, which is intended to be interpreted as the English indicative. In the meta-language for L, we will have two meta-linguistic operations: ‘⊩’ and ‘⊢’. ‘⊩’ is a binary relation between individual sentences in L. It will be interpreted as “single premise entailment” (or “single premise deducibility in L”). ‘⊢’ is a monadic predicate on sentences of L. It will be interpreted as “logical truth of the logic of L” (or “theorem of the logic of L”). We will not presuppose anything about the relationship between ‘⊩’ and ‘⊢’. Rather, we will state explicitly all assumptions about these meta-theoretic relations that will be required for Gibbard’s Theorem. Below, I report a new version of Gibbardian Collapse. First, two preliminary remarks: (a) the “if. . . then” and “and” I’m using in the meta-meta-language of L to state the assumptions of the theorem are assumed to be classical, and (b) these assumptions are all schematic (i.e., they are to be interpreted as allowing any instances that can be formed from sentences of L). We begin with eight (8) background assumptions, which are purely formal renditions of some of Gibbard’s presuppositions in his collapse argument. Think of this as a (very weak) background logic for ⟨⇒, &⟩.
    Found 1 week, 3 days ago on Branden Fitelson's site
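    For context, a standard statement of the collapse result being formalized (the familiar textbook version, not the excerpt's new one): if the indicative ⇒ satisfies import-export, entails the material conditional ⊃, and is a theorem whenever its antecedent entails its consequent, then it collapses into ⊃.

```latex
% (IE)  A \Rightarrow (B \Rightarrow C) \ \dashv\vdash\ (A \,\&\, B) \Rightarrow C
% (MC)  A \Rightarrow B \ \vdash\ A \supset B
% (SE)  \text{if } A \vdash B, \text{ then } \vdash A \Rightarrow B
% Together these force:
\[
(A \Rightarrow B) \ \dashv\vdash\ (A \supset B)
\]
```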
  29.
    What are the truth conditions of want ascriptions? According to a highly influential and fruitful approach, championed by Heim (1992) and von Fintel (1999), the answer is intimately connected to the agent’s beliefs: ⌜S wants p⌝ is true iff within S’s belief set, S prefers the p worlds to the ¬p worlds. This approach faces a well known and as-yet unsolved problem, however: it makes the entirely wrong predictions with what we call (counter)factual want ascriptions, wherein the agent either believes p or believes ¬p—e.g., ‘I want it to rain tomorrow and that is exactly what is going to happen’ or ‘I want this weekend to last forever but of course it will end in a few hours’. We solve this problem. The truth conditions for want ascriptions are, we propose, connected to the agent’s conditional beliefs. We bring out this connection by pursuing a striking parallel between (counter)factual and non-(counter)factual want ascriptions on the one hand and counterfactual and indicative conditionals on the other.
    Found 1 week, 5 days ago on PhilPapers
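    Schematically, the Heim/von Fintel clause described above (my notation, requires stmaryrd for the semantic brackets): with Dox_S the worlds compatible with S's beliefs and ≻_S the agent's preference,

```latex
\[
\llbracket S\ \text{wants}\ p \rrbracket = 1
\iff
(\mathrm{Dox}_S \cap p) \;\succ_S\; (\mathrm{Dox}_S \cap \neg p)
\]
```

    The (counter)factual problem is then immediate: if S already believes p, Dox_S ∩ ¬p is empty and the comparison degenerates, which is what the authors' move to conditional beliefs is designed to repair.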
  30.
    Albert and Callender have challenged the received view that theories like classical electrodynamics and non-relativistic quantum mechanics are time-reversal invariant. According to their view of time-reversal invariance, these theories are not time-reversal invariant. If so, then the important metaphysical implication is that space-time must have a temporal orientation. There is a large debate over the best way of viewing time-reversal invariance, with many philosophers defending the standard notion contra Albert and Callender. In this paper, we will not be concerned so much with that aspect of the debate, but rather focus our attention on an aspect of the Albert and Callender view that has received little attention, namely the role of ontology. In the type of theories under consideration, the ontology is actually underdetermined. We will argue that with a suitable choice of ontology, these theories are in fact time-reversal invariant according to their view.
    Found 1 week, 5 days ago on PhilSci Archive