The problem of belief change–the question of how an intelligent agent should update its belief state in response to newly learned information–is a central issue in knowledge representation. It bears deep formal and conceptual connections to a number of AI subdisciplines, including nonmonotonic reasoning and belief merging, as well as to social choice theory.
A substantial and sophisticated literature on this topic has emerged over the past two to three decades, spanning several disciplines. In spite of this, widespread disagreement remains over one of the most fundamental aspects of the core model. While the rationality constraints governing single changes in view have long been well understood, the principles governing the outcome of a succession of such changes–so-called iterated change–remain surprisingly unsettled. This half-day tutorial offers an up-to-date and systematic survey of work on this vexed question, doubling as an introduction to the study of belief change more generally.
Slides, including an extensive bibliography at the end.