Nina Gierasimczuk (Technical University of Denmark)
The Dynamics of True Belief: Learning by Revision and Merge
Successful learning can be understood as convergence to true beliefs. Can a particular belief revision method generate a universal learning method, i.e., one that learns everything that is learnable by any learning method? We know that conditioning, lexicographic revision, and minimal revision differ with respect to their learning power: the first two can drive universal learning mechanisms (provided that the observations include only and all true data, and that a non-well-founded prior plausibility relation is allowed), while minimal revision cannot. Learning in the presence of noise further complicates the situation. Given a fairness condition, which says that only finitely many input errors can occur and that every error is eventually corrected, lexicographic revision is still universal, while the other two methods are not. Similar questions can be posed in the context of multi-agent belief revision, where a group revises its collective conjectures via a combination of belief revision and belief merge. What are the facilitators and the obstructors of group learning? As before, I will address these issues using the tools of formal learning theory.
The first part of the talk, concerning the single-agent case, is based on the 2019 Studia Logica paper “Truth-Tracking by Belief Revision” (joint work with Alexandru Baltag and Sonja Smets). The second, multi-agent part will report on the preliminary results obtained jointly with Zoé Christoff and Thomas Skafte.
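As background, here is a minimal sketch of the three one-step revision policies compared above, assuming the standard presentation on plausibility preorders: $W$ is the set of worlds, $\leq$ the prior plausibility preorder (lower means more plausible), and $P \subseteq W$ the incoming observation; the talk's precise definitions may differ in detail.
\[
\begin{aligned}
\text{Conditioning:}\quad & W' = P, \qquad {\leq'} \;=\; {\leq} \cap (P \times P),\\
\text{Lexicographic revision:}\quad & w \leq' v \;\Leftrightarrow\; (w \in P \wedge v \notin P) \,\vee\, (w, v \in P \wedge w \leq v) \,\vee\, (w, v \notin P \wedge w \leq v),\\
\text{Minimal revision:}\quad & \text{the } {\leq}\text{-minimal worlds of } P \text{ become the most plausible worlds; } \leq \text{ is otherwise unchanged.}
\end{aligned}
\]
Iterating one of these one-step policies along a stream of observations yields a learning method; the universality results above concern which of the iterated policies converge to true belief whenever convergence is achievable at all.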
Bio: Nina Gierasimczuk is an Associate Professor in Logic at the Department of Applied Mathematics and Computer Science, Technical University of Denmark. Her main research interests lie in the logical aspects of learning in both single- and multi-agent contexts, and involve computational learning theory, modal logic, and computability theory. Her current projects focus on symbolic learning in artificial intelligence in the context of action models, belief revision, and multi-agent systems. She is also interested in the coordination mechanisms involved in natural language evolution, and in the role of logic in cognitive science and the visual arts. She is inspired by various styles of truth-pursuit, e.g., artistic vs. scientific, qualitative vs. quantitative.
Nina obtained her Ph.D. in Computer Science at the Institute for Logic, Language and Computation (University of Amsterdam, 2010) with Johan van Benthem and Dick de Jongh as advisors, an MA in Philosophy under the supervision of Marcin Mostowski (University of Warsaw, 2005), and a BFA from Gerrit Rietveld Academy, tutored by Manel Esparbé I Gasca, Willem van Weelden, and Q.S. Serafijn (Amsterdam, 2021).
Vered Shwartz (University of British Columbia, Canada)
Nonmonotonic Reasoning in Natural Language
Nonmonotonic reasoning is a core human reasoning ability that has been studied in classical AI but mostly overlooked in modern natural language processing (NLP). The main paradigm in NLP today is training supervised neural networks on large amounts of data to perform narrow tasks. While this paradigm has proved beneficial for many language understanding and generation tasks, such models lack reasoning abilities such as reading between the lines and drawing tentative conclusions from incomplete information. Conversely, the earlier approach of symbolic AI is often too rigid to cope with natural language, which is underspecified and messy.
In this talk I will present several recent papers addressing nonmonotonic reasoning in natural language, including abductive, counterfactual, and defeasible reasoning. State-of-the-art neural models still lag behind human performance on these new tasks and datasets, setting the stage for future neuro-symbolic approaches and for new directions in nonmonotonic reasoning in natural language.
Bio: Vered Shwartz is a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington. She will join the Department of Computer Science at the University of British Columbia as an Assistant Professor in fall 2021. Previously, Vered completed her PhD in Computer Science from Bar-Ilan University in 2019. Her research interests include commonsense reasoning, computational semantics and pragmatics, and multiword expressions.
Tran Cao Son (New Mexico State University, USA)
Model Reconciliation and Its Applications in Explainable Planning
Explainable AI (xAI) has become a prominent research topic. Answering questions such as “why this?”, “why not that?”, “when does it work?”, “when do you fail?”, etc., has been a critical part of xAI. The model reconciliation problem is a popular paradigm within the explainable AI planning (xAIP) community that has been proposed as a way to address some of these questions. In this presentation, we introduce a generalization of the model reconciliation problem. We discuss its logic-based formalization, a notion of explanation in the model reconciliation problem, algorithms for solving the problem, and its application in xAIP.
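For orientation, a hedged sketch of the model reconciliation setting as it is usually stated in classical planning (the generalization discussed in the talk is broader, so the notation below is only illustrative): the agent plans with its own model $M^{R}$, the human judges the plan against a possibly different model $M^{H}$, and an explanation is a set of model updates that brings the two into sufficient agreement.
\[
\mathrm{MRP} \;=\; \bigl\langle \pi^{*}, \langle M^{R}, M^{H} \rangle \bigr\rangle, \qquad \pi^{*} \text{ optimal in } M^{R};
\]
\[
\text{a solution is an explanation } \mathcal{E} \text{ such that } \pi^{*} \text{ is also optimal in the updated human model } M^{H} + \mathcal{E}.
\]
Questions like “why this?” and “why not that?” can then be answered by exhibiting such an $\mathcal{E}$, ideally a small one.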