In the papers "Belief Revision as a Truth-tracking Process" (TARK 2011) and "Truth-Tracking by Belief Revision" (2014), we analyze the learning power of iterated belief revision methods. In particular, we focus on the universality of belief revision methods as learning methods: we check whether a given belief revision method can learn everything that is learnable. We consider three such methods: conditioning, lexicographic revision, and minimal revision. Our main results show that conditioning and lexicographic revision are universal on arbitrary epistemic states, provided that the observational setting is sound and complete (only true data are observed, and all true data are eventually observed) and that a non-standard (non-well-founded) prior plausibility relation is allowed. We also show that a standard (well-founded) belief-revision setting is in general too narrow for this, and that minimal revision is not universal. The papers further explore how well these methods perform when observational errors (false observations) may occur. This work provides one of the first insights into the truth-tracking power of belief revision methods when they are used for learning.
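To illustrate how the three revision methods differ, here is a minimal sketch, under simplifying assumptions not made in the papers: an epistemic state is a finite, totally ordered list of possible worlds (index 0 = most plausible, so the order is well-founded), each world is a set of atomic propositions, and an observation is a single atom. The function names and data representation are illustrative choices, not the papers' formalism.

```python
def conditioning(order, atom):
    """Conditioning: discard all worlds inconsistent with the observation."""
    return [w for w in order if atom in w]

def lexicographic(order, atom):
    """Lexicographic revision: promote all consistent worlds above the
    inconsistent ones, preserving the relative order within each group."""
    return ([w for w in order if atom in w] +
            [w for w in order if atom not in w])

def minimal(order, atom):
    """Minimal (conservative) revision: promote only the most plausible
    consistent world to the top; leave everything else unchanged."""
    best = next(w for w in order if atom in w)
    return [best] + [w for w in order if w is not best]

# Prior plausibility order: {'q'} is most plausible, then {'p'}, then {'p','q'}.
prior = [{'q'}, {'p'}, {'p', 'q'}]
print(conditioning(prior, 'p'))   # [{'p'}, {'p', 'q'}]
print(lexicographic(prior, 'p'))  # [{'p'}, {'p', 'q'}, {'q'}]
print(minimal(prior, 'p'))        # [{'p'}, {'q'}, {'p', 'q'}]
```

After observing 'p', all three methods come to believe 'p' (it holds in the new most plausible world), but they retain different amounts of the prior ordering, which is what drives their different long-run learning power.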
In a later paper, "On the solvability of inductive problems: A study in epistemic topology" (TARK 2015), we investigate inductive problem-solving and learning by doxastic agents. We provide topological characterizations of two key notions, solvability and learnability, and use them to prove that AGM-style belief revision is "universal": every solvable problem is solvable by AGM conditioning.
At TARK 2019, we presented our latest work in this direction in the paper "Learning Probabilities: Towards a Logic of Statistical Learning", in which we propose a model for forming beliefs about, and learning, unknown probabilities.