Reliability and Belief Revision
The content of this lecture is discussed more thoroughly in the NASSLLI 2012 course “Belief revision meets Formal Learning Theory” taught by Nina Gierasimczuk. In the lecture we will briefly discuss the following approaches to the reliability of belief revision policies:
Inquiry via contraction-based belief revision (Martin & Osherson 1998)
We will discuss the class of “rational” scientists that keep revising their beliefs in the light of incoming data, starting from some background theory. Inquiry is initiated from a set of formulas, and each incoming datum is a formula of the same language. Hence, belief revision is a function of two arguments that yields a new belief set. As in the well-known AGM paradigm, the function is defined to work in two steps: first, formulas that are incompatible with the new datum are removed from the belief set (belief contraction), and then the new datum is added, completing the revision. We will devote some attention to the process of contraction (maxichoice and stringent contraction) and to (iterated) revision defined from it. Then we will focus on the central topic: linguistic scientists based on revision, i.e., learning functions that change their belief sets, and hence their hypotheses, by means of the belief revision policy defined earlier, and we will discuss their inductive inference power. We will also analyze the importance of the background theory and how it can be augmented to facilitate inquiry. Time permitting, we will end by discussing the possibility of efficient inquiry via belief revision so understood.
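The two-step shape of revision described above can be illustrated by a toy sketch. This is not Martin & Osherson's formal construction: here the language is restricted to propositional literals, so "incompatible with the new datum" reduces to containing the datum's negation, and contraction simply drops that literal before the datum is added.

```python
def neg(lit):
    """Negation of a propositional literal, e.g. "p" <-> "~p"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def revise(beliefs, datum):
    """Two-step AGM-style revision in a literal-only toy language:
    first contract (remove what contradicts the datum),
    then expand (add the datum)."""
    contracted = {b for b in beliefs if b != neg(datum)}  # contraction
    return contracted | {datum}                           # expansion

print(revise({"p", "~q"}, "q"))  # -> {'p', 'q'}
```

In a full propositional language, contraction must instead select among maximal non-implying subsets of the belief set (maxichoice picks one such subset), which is where the interesting choices arise.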
Reliability of belief revision on possible histories (Kelly 1998)
The results rest on Spohn’s approach to belief revision: an agent’s epistemic state is given by an assignment of degrees of implausibility to possible worlds; the actual belief is the proposition satisfied by the possible worlds of implausibility degree zero. We will discuss various revision operators that differ in how they change the implausibility order on possible worlds. The framework of inductive inquiry adopted here is that of prediction: the successive propositions received by the agent are true reports of successive outcomes of some discrete, sequential experiment. The goal of learning is to arrive at a belief state informative enough to predict how the sequence may evolve in the unbounded future. The agent’s task is to stabilize on such a hypothesis for each outcome sequence admitted by the inductive problem. Within this framework of inductive inquiry we will analyze the learning power (reliability) of concrete belief revision methods of Spohn (1988), Boutilier (1993), Nayak (1994), Goldszmidt and Pearl (1994), and Darwiche and Pearl (1997). Time permitting, we will discuss the concept of inductive amnesia: a property of belief revision operators that signifies the trade-off between remembering past data and the ability to predict future outcomes of the experiment.
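As a minimal sketch of the Spohn-style setting, the snippet below implements (A, n)-conditionalization over a finite set of worlds: the incoming proposition's most plausible worlds are shifted to rank 0, its complement's to rank n, and the belief is read off as the rank-zero worlds. The representation (worlds as strings, ranks in a dict) is an illustrative assumption, not Spohn's or Kelly's formalism.

```python
def spohn_conditionalize(kappa, prop, n=1):
    """(A, n)-conditionalization of a ranking (implausibility) function.
    kappa: dict mapping each world to a non-negative rank.
    prop: the set of worlds where the incoming proposition holds.
    Assumes both prop and its complement are non-empty in kappa's domain."""
    min_in = min(r for w, r in kappa.items() if w in prop)
    min_out = min(r for w, r in kappa.items() if w not in prop)
    return {w: (r - min_in if w in prop else r - min_out + n)
            for w, r in kappa.items()}

def belief(kappa):
    """The belief: worlds of implausibility degree zero."""
    return {w for w, r in kappa.items() if r == 0}

kappa = {"w1": 0, "w2": 1, "w3": 2}          # w1 currently believed
revised = spohn_conditionalize(kappa, {"w2", "w3"}, n=1)
print(belief(revised))                       # -> {'w2'}
```

The operators compared in the lecture (Boutilier, Nayak, Goldszmidt–Pearl, Darwiche–Pearl) differ precisely in how such a shift of the implausibility order is performed.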
Truth-tracking of belief revision policies on plausibility states (Baltag et al. 2011, Gierasimczuk 2010)
On the side of belief revision we will follow the semantics of dynamic epistemic logic: the agent’s belief is the content of the possible worlds the agent considers most plausible; revision changes the current belief and may also modify the plausibility order. In this context we will be concerned with the limiting properties of belief revision understood as learning: identifying the actual world within the initial domain of the epistemic state. We will see that the ability to learn reliably is related to the ability to separate hypotheses by observations; hence, learnability can be viewed as a topological separation property. The results will concern mostly the conditions for universality of a belief revision policy (i.e., for a belief revision method to be as powerful as full identification in the limit). This will lead to identifying factors that influence the (non-)universality of a belief revision policy: the prior conditions for belief revision (e.g., well-founded plausibility states); the type of incoming information (e.g., entirely truthful as opposed to partially erroneous); and the properties of belief-revision-based learning functions (e.g., conservatism).
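A toy version of this learning scenario, assuming truthful observations and update by world elimination (not the authors' full formal setting), can be sketched as follows: worlds are sets of observable facts, the prior is a plausibility order, each observation deletes the worlds it falsifies, and the conjecture is always the most plausible surviving world.

```python
def learn(plausibility_order, observations):
    """Belief-revision-based learning by elimination (toy sketch).
    plausibility_order: list of worlds, most plausible first,
    where each world is a frozenset of observable facts.
    Each truthful observation removes the worlds that do not
    satisfy it; the conjecture is the most plausible survivor."""
    remaining = list(plausibility_order)
    for obs in observations:
        remaining = [w for w in remaining if obs in w]
    return remaining[0] if remaining else None

order = [frozenset({"a"}), frozenset({"a", "b"})]
print(learn(order, []))     # -> frozenset({'a'})
print(learn(order, ["b"]))  # -> frozenset({'a', 'b'})
```

Whether a learner of this kind stabilizes on the actual world for every world in the domain is exactly the universality question discussed above, and it depends on the prior order (e.g., well-foundedness) and on how revision reacts to the data.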
Baltag, A., Gierasimczuk, N., and Smets, S. (2011). Belief Revision as a Truth-Tracking Process, in: Krzysztof R. Apt (Ed.): Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge (TARK-2011), ACM 2011.
Gierasimczuk, N. (2010). Knowing One’s Limits. Logical Analysis of Inductive Inference (Chapter 4), PhD Thesis, University of Amsterdam.
Kelly, K. (1998). Iterated Belief Revision, Reliability, and Inductive Amnesia, Erkenntnis, Vol. 50, pp. 11-58.
Kelly, K. (1998). The Learning Power of Iterated Belief Revision, in: Proceedings of the Seventh TARK Conference, Itzhak Gilboa (ed.), pp. 111-125.
Kelly, K., Schulte, O., and Hendricks, V. (1997). Reliable Belief Revision, in: Logic and Scientific Methods, M. L. Dalla Chiara, et al. (eds.), Dordrecht: Kluwer.
Martin, E., and Osherson, D. (1997). Scientific Discovery Based on Belief Revision, The Journal of Symbolic Logic, Vol. 62, No. 4, pp. 1352-1370.
Martin, E., and Osherson, D. (1998). Elements of Scientific Inquiry, Cambridge: MIT Press.