KR and AI Research Talks

Information

Speakers 

Presentation slides from all the speakers are available in this shared folder.

Schedule 

Talk 1: 16:00 - 16:30

Title: BeliefFlow: A Framework for Logic-Based Belief Diffusion via Iterated Belief Change
Speaker: Nicolas Schwind (AIST, Japan)
Abstract: We present BeliefFlow, a novel framework for representing how logical beliefs spread among interacting agents within a network. In a Belief Flow Network (BFN), agents communicate asynchronously. The agents' beliefs are represented using epistemic states, which encompass their current beliefs and conditional beliefs guiding future changes. When communication occurs between two connected agents, the receiving agent changes its epistemic state using an improvement operator, a well-known type of rational iterated belief change operator that generalizes belief revision operators. We show that BFNs satisfy appealing properties, leading to two significant outcomes. First, in any BFN with strong network connectivity, the beliefs of all agents converge towards a global consensus. Second, within any BFN, we show that it is possible to compute an optimal strategy for influencing the global beliefs. This strategy, which involves controlling the beliefs of a minimal number of agents through bribery, can be identified from the topology of the network and computed in polynomial time.
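As a rough intuition for the convergence result, here is a toy Python sketch (our own illustration, not the BeliefFlow semantics: the worlds, ranking-based epistemic states, and one-step "improvement" operator below are simplified stand-ins for the constructions discussed in the talk):

```python
# Toy belief diffusion: epistemic states are ranking functions over two
# worlds ("p" and "q"); lower rank = more plausible.

def belief(ranks):
    """An agent believes the worlds of minimal rank."""
    m = min(ranks.values())
    return {w for w, r in ranks.items() if r == m}

def improve(receiver, sent_belief):
    """One-step improvement: make the sender's believed worlds one unit more plausible."""
    for w in sent_belief:
        receiver[w] -= 1

# Three agents on a directed cycle 0 -> 1 -> 2 -> 0 (strongly connected).
agents = [
    {"p": 0, "q": 1},   # believes p
    {"p": 1, "q": 0},   # believes q
    {"p": 0, "q": 1},   # believes p
]
edges = [(0, 1), (1, 2), (2, 0)]

for _ in range(5):  # repeated pairwise communication rounds
    for i, j in edges:
        improve(agents[j], belief(agents[i]))

print([sorted(belief(a)) for a in agents])  # all three agents end up believing p
```

In this toy run the initial disagreement washes out after a couple of rounds and all agents settle on the same belief, mirroring (in miniature) the consensus guarantee for strongly connected BFNs.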

Talk 2: 16:30 - 17:00

Title: Towards Epistemic-Doxastic Planning with Observation and Revision
Speaker: Elise Perrotin (AIST, Japan)
Abstract: Epistemic planning is useful in situations where multiple agents have different knowledge and beliefs about the world, such as in robot-human interaction. One aspect that has been largely neglected in the literature is planning with observations in the presence of false beliefs. This is a particularly challenging problem because it requires belief revision. We introduce a simple specification language for reasoning about actions with knowledge and belief. We demonstrate our approach on well-known false-belief tasks such as the Sally-Anne Task and compare it to other action languages. Our logic leads to an epistemic planning formalism that is expressive enough to model second-order false-belief tasks, yet has the same computational complexity as classical planning.
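The Sally-Anne Task mentioned above can be illustrated with a minimal Python sketch (an illustration only; the talk's specification language handles revision and second-order beliefs far more generally — here beliefs are flat assignments updated only on observed events):

```python
# Minimal Sally-Anne-style false-belief model: agents revise their beliefs
# about the marble's location only when they observe the acting event.

world = {"marble": "basket"}
beliefs = {"sally": {"marble": "basket"}, "anne": {"marble": "basket"}}

def act(change, observers):
    """Apply a physical change; only observing agents revise their beliefs."""
    world.update(change)
    for agent in observers:
        beliefs[agent].update(change)

# Sally leaves the room; Anne moves the marble, unobserved by Sally.
act({"marble": "box"}, observers=["anne"])

print(world["marble"])              # box
print(beliefs["sally"]["marble"])   # basket: Sally now holds a false belief
print(beliefs["anne"]["marble"])    # box
```

Second-order versions of the task (e.g. Anne's belief about Sally's belief) require nesting such belief states, which is where the expressiveness of the epistemic-doxastic formalism comes in.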

Talk 3: 17:00 - 17:30

Title: Interpretable Decision Tree Ensemble Learning with Abstract Argumentation for Binary Classification
Speaker: Teeradaj Racharak (JAIST, Japan)
Abstract: We marry two powerful ideas: decision tree ensembles for rule induction and abstract argumentation for aggregating inferences from diverse decision trees, producing better predictive performance and greater intrinsic interpretability than state-of-the-art ensemble models. Our approach, called Arguing Tree Ensemble, is a self-explainable model that first learns a group of decision trees from a given dataset. It then treats all decision trees as knowledgeable agents and lets them argue with each other to reach a prediction. Unlike conventional ensemble methods, this proposal offers full transparency into the prediction process, so AI users are able to interpret and diagnose the prediction's output.
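The flavor of argumentation-based aggregation can be sketched in a few lines of Python (our own toy illustration: the hand-written rules below stand in for learned decision-tree branches, and the specificity-based defeat relation is an assumed simplification, not the attack relation used in Arguing Tree Ensemble):

```python
# Toy argumentation-based aggregation: each applicable rule "agent" puts
# forward an argument (its conditions, its label); conflicting arguments
# attack each other, and a more specific rule defeats a less specific one.

def predict(x, rules):
    # Arguments put forward by the rules that apply to instance x.
    args = [(conds, label) for conds, label in rules
            if all(x.get(f) == v for f, v in conds.items())]
    # An argument is accepted if no conflicting, strictly more specific
    # argument defeats it.
    accepted = [a for a in args
                if not any(b[1] != a[1] and len(b[0]) > len(a[0]) for b in args)]
    labels = {label for _, label in accepted}
    return labels.pop() if len(labels) == 1 else None  # None: unresolved conflict

rules = [
    ({"outlook": "sunny"}, "no"),
    ({"outlook": "sunny", "humidity": "normal"}, "yes"),  # more specific exception
]

print(predict({"outlook": "sunny", "humidity": "high"}, rules))    # no
print(predict({"outlook": "sunny", "humidity": "normal"}, rules))  # yes
```

Because the winning argument carries its rule conditions with it, the prediction comes with an explicit, human-readable justification — the transparency property the abstract emphasizes.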