Nina Gierasimczuk, Believing in Formal Learning Theory
This talk brings together two types of learning.
The first kind, getting to know about facts, is formalized and
analyzed in the domain of belief revision and the diverse frameworks
of doxastic logics. The main aim here is to formalize the elementary dynamics
of knowledge and epistemic attitudes towards incoming information. The second
kind, learning as a process, is studied within the framework of formal learning
theory. In this framework a general concept (language, grammar, theory) gets
to be identified by an agent on the basis of some elementary data (sentences,
results of experiments) over a long period of time. The learning agent is allowed
to change his mind along the way, and the process is successful if it results in
convergence to an appropriate hypothesis. In a sense this kind of learning is
built on top of the first kind: it consists of an iteration of simple revision steps.
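This iterative picture can be sketched as a toy program. The fragment below (the representation and names are our own illustration, not part of the talk) shows a learner whose conjecture is exactly the set of facts seen so far; on a sound and complete stream of positive data its conjectures stabilize on the target concept, which is the sense of success in identification in the limit.

```python
# Toy sketch of learning-in-the-limit as iterated simple revisions.
# The target concept is a set of atomic propositions; the stream
# enumerates its elements (possibly with repetitions).

def learner(observed):
    """Conjecture: exactly the facts observed so far."""
    return frozenset(observed)

def run(stream):
    """Feed the stream item by item, recording each conjecture."""
    observed, conjectures = set(), []
    for fact in stream:
        observed.add(fact)
        conjectures.append(learner(observed))
    return conjectures

# A sound and complete positive stream for the concept {p, q, r}:
conjectures = run(["p", "q", "p", "r", "q", "r"])

# Once the last new fact ("r") has appeared, the conjecture never
# changes again -- the learner has converged on the target concept:
assert conjectures[-1] == frozenset({"p", "q", "r"})
assert len(set(conjectures[3:])) == 1
```

The sketch also makes the "built on top" relation concrete: each step is a single elementary revision, and success is a property of the infinite run, not of any one step.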
In this talk we propose a way to use the framework of learning theory
to evaluate belief-revision policies. On the inductive inference side, we
are interested in the paradigm of language learning. As the possible
concepts to be inferred we
take sets of atomic propositions. Therefore, receiving new data corresponds to
getting to know about facts. On the side of belief revision we follow
the lines of
dynamic epistemic logic (see van Benthem, 2007). Hence, we interpret the
agent's current beliefs (his hypothesis) as the content of those possible
worlds that he considers most plausible. Revision not only results in a change
of the current hypothesis, but can also induce a modification of the agent's
plausibility order. We are mainly concerned with identifiability in the limit
(Gold, 1967). In the
first part we restrict ourselves to learning from sound and complete streams of
positive data. We show that learning methods based on belief revision via conditioning (update) and lexicographic revision are universal, i.e.,
prior conditions, those methods are as powerful as identification in the limit.
Those prior conditions, the agent’s prior dispositions for belief
revision, play a
crucial role here. We show that in some cases, these priors cannot be modeled
using standard belief-revision models (based on well-founded preorders), but
only using generalized models (based on simple preorders). Furthermore, we draw conclusions about a tension between conservatism and learning power
by showing that the very popular, most ‘conservative’ belief-revision
methods fail to be universal. Then we turn
to the case of learning from both positive and negative data. Here, along with
information about facts, the agent receives negative data about things that do
not hold in the actual world. We again assume these streams to be truthful and
we draw conclusions about iterated belief revision governed by such streams.
This enriched framework allows us to consider the occurrence of erroneous information. Provided that errors occur finitely often and are always eventually
corrected, we show that the lexicographic revision method is still reliable, but
more conservative methods fail.
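The contrast can be illustrated with a minimal sketch (the list-based representation of a plausibility order and the helper names below are our own simplification, not the talk's formal models): lexicographic revision promotes all worlds satisfying the incoming datum above the rest, while a conservative policy promotes only the single most plausible satisfying world. On a stream containing an erroneous datum that is later corrected, the lexicographic learner still ends up believing the actual world.

```python
# A plausibility order as a list of possible worlds, most plausible
# first; each world is the set of atomic facts true in it.
# "p" is a positive datum (p holds), "~p" a negative one (p fails).

def holds(datum, world):
    if datum.startswith("~"):
        return datum[1:] not in world
    return datum in world

def lexicographic(order, datum):
    """Promote ALL worlds satisfying the datum above the rest,
    preserving the relative order within each group."""
    sat = [w for w in order if holds(datum, w)]
    return sat + [w for w in order if not holds(datum, w)]

def minimal(order, datum):
    """A conservative policy: promote only the single most plausible
    satisfying world and leave everything else in place."""
    for i, w in enumerate(order):
        if holds(datum, w):
            return [w] + order[:i] + order[i + 1:]
    return order

worlds = [frozenset(), frozenset({"p"}), frozenset({"p", "q"})]

# The two policies already reorder the lower tiers differently:
assert lexicographic(worlds, "p")[1] != minimal(worlds, "p")[1]

# An erroneous datum "~p", later corrected by "p", then "q":
order = worlds
for datum in ["~p", "p", "q"]:
    order = lexicographic(order, datum)
assert order[0] == frozenset({"p", "q"})  # the actual world on top
```

In this tiny finite run the conservative policy happens to reach the same top world; the failure of conservative methods concerns reliability over arbitrary infinite streams, which no finite example can exhibit.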
Our results can be interpreted as showing that applying certain types of
rules in certain contexts can be analyzed in terms of whether they can be relied
upon in the ‘quest for the truth’ (the analysis of inductive inference in terms of
reliability was first provided by Kelly, 1996). In this setting
we can naturally treat the procedural aspect of iterated belief revision, examine
some intermediate stages of such iterations, and relate them to the ultimate
success of a belief-revision policy.
The results presented in this talk come from joint work with Alexandru
Baltag and Sonja Smets.
van Benthem, J. (2007). Dynamic logic for belief revision. Journal of Applied
Non-Classical Logics, 17(2):129–155.
Gold, E. M. (1967). Language identification in the limit. Information and Control,
10(5):447–474.
Kelly, K. (1996). The Logic of Reliable Inquiry. Oxford University Press.