#### Introduction:

### Formal Learning Theory

Learning theory emerged as an attempt to formalize and understand the process of **language acquisition**. It models language learners as functions that, on ever larger finite samples of a language, output conjectures (grammars) for the language in question. In the context of computability theory, this concept is generalized: learners are number-theoretic functions that, on finite samples of a recursive set, output indices encoding Turing machines, in an attempt to find an index of a machine that generates the set. By analogy with a child, who learns to use a language creatively by inferring an appropriate set of rules from finite samples, learning functions are required to **stabilize** on a value that encodes a finite set of rules for generating the language.
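The idea of stabilizing on a correct conjecture can be made concrete with a toy example (the class, learner, and function names below are my own illustration, not part of any standard library). Consider the class of languages L_n = {0, 1, ..., n}, with the index n serving as the "grammar". A learner that always conjectures the largest number seen so far identifies this class in the limit: on any positive-data presentation of L_n, its conjectures eventually settle on n.

```python
def learner(sample):
    """Conjecture a grammar (here: the index n of L_n) from the data seen so far."""
    return max(sample) if sample else 0

def conjectures_on(text):
    """Run the learner on every finite prefix of a text and collect its conjectures."""
    return [learner(text[:t]) for t in range(1, len(text) + 1)]

# A text (positive-data presentation) for L_3 = {0, 1, 2, 3}, repetitions allowed:
text = [1, 0, 1, 3, 2, 3, 0]
print(conjectures_on(text))  # -> [1, 1, 1, 3, 3, 3, 3]: stabilizes on 3
```

Note that the learner never *knows* it has converged: a larger number could still appear. This is exactly what separates identification in the limit from finite identification, where the learner must at some point commit to a single, final conjecture.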

Learning theory imposes **computational constraints**. Learning functions are most often identified with computational devices, which leads to the assumption that they are recursive. There are at least three interrelated reasons why learning theory has developed in this direction: one comes from cognitive science (Church's Thesis in its psychological version); one is practical (the need to implement learning algorithms); and one is theoretical (limiting recursion is in itself a mathematically interesting subject for logic and theoretical computer science). One might say that the domain has been taken over by successful, ultimately **reliable** functions. The observation that reliability is the feature distinguishing successful learning functions from other possible **mind-change policies** led to a relaxation of the recursive paradigm. Learning theory has been re-interpreted as a framework for analyzing the procedural aspects of science, and became **a study of information flow and general inquiry**. This resulted in the treatment of formal learning theory as the mathematical embodiment of a **normative epistemology**.

This lecture will cover the basic concepts of formal learning theory: identification in the limit vs. finite identification, language learning vs. function learning, tell-tale sets, locking sequences, computability constraints, efficiency and rationality constraints on learning functions, various types of data sequences, etc. We will also give a glimpse of particular learning strategies in automata and grammar inference.

If time allows, we will play the inductive inference card game The New Eleusis. The game is designed to mimic the theoretical process of scientific inquiry via experiments. The rules of the game can be found here.

#### Bibliography:

Angluin, D. (1980). Inductive inference of formal languages from positive data. Information and Control, 45(2):117-135.

Angluin, D. and Smith, C. H. (1983). Inductive inference: Theory and methods. ACM Computing Surveys, 15(3):237-269.

Gold, E. M. (1967). Language identification in the limit. Information and Control, 10:447-474.

Jain, S., Osherson, D., Royer, J. S., and Sharma, A. (1999). Systems that Learn. Cambridge, MA: MIT Press.

Martin, E. and Osherson, D. (1998). Elements of Scientific Inquiry. Cambridge, MA: MIT Press.