Alexandru Baltag (Amsterdam)
Collective learning versus informational cascades: towards a logical approach to social information flow
I look at some examples of both successful collective learning (the “wisdom of the crowds”) and its distortions (informational cascades, a.k.a. the “madness of the crowds”). I argue that the standard Bayesian analysis, though useful, is insufficient for a full understanding of these phenomena. I use ideas from (Dynamic) Epistemic Logic, Belief Revision Theory and Formal Learning Theory to analyze these examples, as a first step towards a more general investigation of the logical dynamics of social information.
Johan van Benthem (Amsterdam & Stanford)
Computation and social agency: what and how
Hans van Ditmarsch (Nancy)
Jan van Eijck (Amsterdam)
Propositional dynamic logic as a multi-agent strategy logic
Propositional Dynamic Logic (PDL) was invented as a logic for reasoning about regular programming constructs. We propose a new perspective on PDL as a multi-agent strategic logic (MASL). This logic for strategic reasoning has group strategies as first-class citizens, and brings game logic closer to standard modal logic. We demonstrate that MASL can express key notions of game theory, social choice theory and voting theory in a natural way; we give a sound and complete proof system for MASL, and show that MASL encodes coalition logic. Next, we extend the language to epistemic multi-agent strategic logic (EMASL) and give examples of what it can express, and of how it can be used to pose new questions in epistemic social choice theory. (talk cancelled)
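As background for readers less familiar with PDL, its language can be sketched as follows (this is the standard presentation of PDL over atomic programs, not the specific MASL syntax of the talk):

```latex
\[
\begin{aligned}
\varphi \;&::=\; p \;\mid\; \neg\varphi \;\mid\; \varphi \wedge \varphi \;\mid\; [\pi]\varphi
  && \text{(formulas)} \\
\pi \;&::=\; a \;\mid\; \pi \,;\, \pi \;\mid\; \pi \cup \pi \;\mid\; \pi^{*} \;\mid\; \varphi?
  && \text{(programs: atomic, sequence, choice, iteration, test)}
\end{aligned}
\]
```

Here $[\pi]\varphi$ reads “after every execution of $\pi$, $\varphi$ holds”; the MASL reading reinterprets the program constructs as (group) strategies.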
Amanda Friedenberg (Tempe)
Bargaining under strategic uncertainty
In this talk we will consider a situation where bargainers face strategic uncertainty, i.e., uncertainty about how others negotiate. A trivial implication of bargaining under strategic uncertainty is that there may be bargaining impasses. Can such impasses persist if bargainers are strategic, in the sense of engaging in forward induction reasoning? The paper shows that the answer is yes. In fact, such impasses can persist even in a situation where forward induction reasoning implies “no on-path strategic uncertainty,” i.e., where it is transparent to the bargainers that all strategic uncertainty is about surprise moves in negotiations. The paper goes on to characterize the behavioral implications of these assumptions.
Nina Gierasimczuk (Amsterdam)
Conclusive update and computability
I will present an original way of analyzing the computational aspects of propositional update. The framework is based on the paradigm of formal learning theory. I will explain how the inductive inference mechanism can be used to study the problem of convergence to certainty (i.e., the possibility of “conclusive update”). I will relate this problem to the concept of finite identifiability in formal learning theory. I will introduce preset learners, learning functions that explicitly use conclusive symptoms, as well as the concept of the fastest learner, which produces the right conjecture on any input string that objectively leaves only one choice open. We will see how minimal symptoms influence the speed of finite identification. Throughout the talk I will outline and discuss the epistemic-logic motivations for considering finite identification in the context of conclusive update. The results presented in this talk come from joint work with Dick de Jongh.
Patrick Girard (Auckland)
Logical dynamics of belief change in the community
In this paper we explore the relationship between norms of belief revision that may be adopted by members of a community and the resulting dynamic properties of the distribution of beliefs across that community. We show that at a qualitative level many aspects of social belief change can be obtained from a very simplistic model, which we call ‘threshold influence’. In particular, we focus on the question of what makes the beliefs of a community stable under various dynamical situations. In addition, we consider refinements and alternatives to the ‘threshold’ model. The most significant alternative is to move to consideration of plausibility judgements rather than mere beliefs. We show first that some such change is mandated by difficult problems with belief-based dynamics related to the need to decide on an order in which different beliefs are considered. Secondly, we show that the resulting plausibility-based account results in a dynamical system that is non-deterministic at the level of beliefs. Nonetheless, the plausibility-based account lacks certain intuitively desirable features, such as the preservation of transitivity.
Stephan Hartmann (Munich)
Updating on conditionals = Kullback-Leibler + causal structure
Modeling how to learn an indicative conditional has been a major challenge for Formal Epistemologists. One proposal to meet this challenge is to request that the posterior probability distribution minimizes the Kullback-Leibler divergence to the prior probability distribution, taking the learned information into account as a constraint (expressed as a conditional probability statement). This proposal has been criticized in the literature on the basis of several clever examples. In this paper, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question. The talk is based on joint work with Soroush Rafiee-Rad. (talk cancelled)
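For reference, the minimization at issue can be stated as follows (a standard formulation, not taken from the talk itself): given a prior $P$ over a finite space $\Omega$, the posterior $P'$ is the distribution that minimizes

```latex
\[
D_{\mathrm{KL}}(P' \,\|\, P) \;=\; \sum_{\omega \in \Omega} P'(\omega)\,\log\frac{P'(\omega)}{P(\omega)}
\]
```

subject to the constraint expressing the learned conditional, e.g. $P'(B \mid A) = q$ for learning “if $A$ then $B$” with strength $q$.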
Vincent Hendricks (Copenhagen & New York)
Social proof in extensive games
This is an analysis of the extensive game SkipGambleQuit (SGQ) played by socially aware agents, agents revising their beliefs in the light of social proof provided by the choices of others. SGQ may be viewed as a model of monotonically accumulating gambling: the payoffs of the involved agents rise the longer they stay in the gamble, until they start to decrease. The optimal strategy is to gamble until the crash point, then quit. How agents decide when to quit under uncertainty is investigated by constructing a Doxastic Epistemic Temporal Logic model of SGQ, in which agents can have expectations about Nature’s action, as well as their co-players’ moves and expectations. A procedure is defined by which agents may extract information about others’ expectations from their previous actions, whereby social proof is generated to aid decision-making. This is joint work with Rasmus K. Rendsvig, and the talk will be presented jointly.
Ziv Hellman (Tel Aviv)
Deludedly agreeing to agree
We study conditions relating to the impossibility of agreeing to disagree in models of interactive KD45 belief (in contrast to models of S5 knowledge, which are used in nearly all the agreement literature). Agreement and disagreement are studied under models of belief in both non-probabilistic decision models and probabilistic belief revision of priors. We show that even when the truth axiom is not assumed, players will find it impossible to agree to disagree under fairly broad conditions.
Hannes Leitgeb (Munich)
Rational belief: four approaches, one theory
What should a joint theory of rational belief and rational degrees of belief look like? While the former concept will contribute principles of doxastic logic, the latter will contribute principles of probability theory, but how can we make sense of their interaction? I will present four different approaches to answering this question: as it happens, all of them will ultimately justify one and the same joint theory of belief and degrees of belief. At the end of the talk, I will take some preliminary steps towards extending the theory to social belief.
Christian List (London)
Reasons for (prior) belief in Bayesian epistemology
Bayesian epistemology tells us how we should move from prior to posterior beliefs in light of new evidence or information, but says little about where our prior beliefs come from. It offers few resources to describe some prior beliefs as rational or well-justified, and others as irrational or unreasonable. A different strand of epistemology takes the central epistemological question to be not how to change one’s beliefs in light of new evidence, but what reasons justify a given set of beliefs in the first place. We offer an account of rational belief formation that closes some of the gap between Bayesianism and its reason-based alternative, formalizing the idea that an agent can have reasons for his or her (prior) beliefs, in addition to evidence or information in the ordinary Bayesian sense. Our analysis of reasons for belief is part of a larger programme of research on the role of reasons in rational agency. This is joint work with Franz Dietrich.
Erik J. Olsson (Lund)
Should scientists communicate and, if so, how much?
The talk focuses on communication in groups where group members (“inquirers”) attend to a common question and are primarily interested in the group “payoff” (and not in their possible individual gain). Suppose that we wish to maximize group competence in solving the common problem and that the group members are given, so that what we can vary is the communication structure of the group. How, in such a case, should we “hook people up”? The common-sense answer would be “the more group members can communicate with each other, the more competent the group will become”. But studies in various disciplines, including economics and cognitive psychology, undermine the common-sense view. These studies have found that having a lot of communication links can be detrimental to group competence and performance. In the talk I report a further (simulation) study pointing in the same direction. Finally, I discuss the interpretation of these surprising findings.
Andrés Perea (Maastricht)
Plausibility orderings in dynamic games
Plausibility orderings play an important role in belief revision theory, as they can be used to characterize certain well-known classes of belief revision policies. In this talk we concentrate on dynamic games, in which players may have to revise their beliefs about the opponents’ strategy choices – but also about the opponents’ conditional beliefs – during the game. We investigate to what extent the forward induction concept of extensive form rationalizability (Pearce 1984, Battigalli 1997) and the backward induction concept of common belief in rationality (Perea 2012) can be characterized by plausibility orderings.
Hans Rott (Regensburg)
A puzzle about disagreement
This talk addresses the situation in which one speaker says “A” and a second speaker says “not-A”. I investigate how to determine whether the speakers have a substantive (genuine) disagreement, or merely a verbal (terminological) disagreement due to their using different concepts. Starting from an idealizing distinction between interesting (controversial) and basic (uncontroversial) statements, I show that a major puzzle arises if we accept four principles that, taken individually, all look very plausible. As an example, I consider the question of whether Sarai lied in the story told in Genesis 12.
HOSTS (LOGICIC PROJECT MEMBERS)
and other members of The Amsterdam Dynamics Group