IJCAI 2019 Tutorial:
Iterated Belief Change
Instructors
- Richard Booth (Cardiff University, UK)
- Jake Chandler (La Trobe University, Melbourne, Australia)
Description
The problem of belief change, that is, the question of how an intelligent agent should update its belief state in response to new information, is a crucial issue in knowledge representation. It bears deep formal and conceptual connections to a number of AI subdisciplines, including nonmonotonic reasoning, belief merging, and social choice.
A substantial and sophisticated literature on this topic has emerged over the past two to three decades, spanning a number of disciplines. In spite of this, widespread disagreement remains on one of the most fundamental aspects of the core model. While the rationality constraints on single changes in view have long been well understood, the principles governing the outcome of a succession of such changes (so-called iterated change) remain surprisingly unsettled. This half-day tutorial offers an up-to-date and systematic survey of work on this vexed question, and doubles as an introduction to the study of belief change more generally.
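To make the notion of iterated change concrete, here is a minimal illustrative sketch (not taken from the tutorial materials) of one well-known operator from the literature, Boutilier's natural revision, applied to an epistemic state represented as a ranking of the worlds of a small propositional language. The two-atom vocabulary, the function names, and the Python encoding are assumptions made purely for illustration.

```python
# Illustrative sketch only: iterated revision of a ranking-based epistemic state.
# Worlds of a hypothetical two-atom language are ranked by implausibility;
# the agent's beliefs are whatever holds in all minimally ranked worlds.
from itertools import product

ATOMS = ("p", "q")  # illustrative vocabulary

def all_worlds():
    """Every truth assignment over ATOMS, as a dict mapping atom -> bool."""
    return [dict(zip(ATOMS, vals)) for vals in product([False, True], repeat=len(ATOMS))]

def beliefs(rank, worlds):
    """The most plausible (minimally ranked) worlds, which determine the belief set."""
    m = min(rank)
    return [w for i, w in enumerate(worlds) if rank[i] == m]

def natural_revision(rank, worlds, phi):
    """Boutilier's 'natural' revision: the most plausible phi-worlds become the
    unique minimum, while the relative order of all other worlds is preserved."""
    phi_idx = {i for i, w in enumerate(worlds) if phi(w)}
    m = min(rank[i] for i in phi_idx)  # assumes phi is satisfiable
    return [0 if (i in phi_idx and rank[i] == m) else rank[i] + 1
            for i in range(len(worlds))]

ws = all_worlds()
rank = [0] * len(ws)                                     # initially no contingent beliefs
rank = natural_revision(rank, ws, lambda w: w["p"])      # first learn p ...
rank = natural_revision(rank, ws, lambda w: not w["q"])  # ... then learn not-q
print(beliefs(rank, ws))                                 # [{'p': True, 'q': False}]
```

Other operators studied in the iterated-revision literature, such as lexicographic revision, differ precisely in how they rearrange the remaining worlds after each step; which policy is the rational one is the kind of question at stake in work on iterated change.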
Outline of the Tutorial
- General introduction
- Single-shot belief change
- Iterated revision
- Iterated contraction
- Iterated change in conditional beliefs
- Open problems
Target Audience and Prerequisites
- Researchers at all levels in knowledge representation with an interest in belief change.
- Prerequisites:
- Propositional logic
- Some prior knowledge of qualitative belief change theory is useful but not essential.
Slides
Slides, including an extensive bibliography at the end.