Research

World models, causality, and intelligent tutoring systems

This ongoing research project aims to design autonomous agents that learn representations and world models from scratch, with a focus on generalization and sample efficiency. The generalization and transfer capabilities of world models can be improved by constraining their structure to be causal, while intrinsic motivations can be leveraged to make world model learning more sample-efficient. Preliminary work on this topic successfully combined these two ideas:

Annabi, L. (2022). Intrinsically Motivated Learning of Causal World Models. In 5th International Workshop on Intrinsically Motivated Open-ended Learning (IMOL 2022).
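
To make the combination concrete, here is a minimal sketch, entirely my own illustration rather than the paper's algorithm: a linear world model whose weights are pushed toward a sparse, causal-graph-like structure by an L1 penalty, trained on transitions collected by a curiosity-driven policy that favors actions on which a small model ensemble disagrees (a common intrinsic motivation proxy). The toy environment, dimensions and learning rates below are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)
S, A, lr, l1 = 4, 2, 0.05, 1e-3
# Small ensemble of linear models s' ~ W [s; a], differing only in their init.
ensemble = [rng.normal(scale=0.1, size=(S, S + A)) for _ in range(2)]

def step_env(s, a):
    # Toy dynamics with sparse ground truth: s1 drives s0, action a0 drives s2.
    M = 0.9 * np.eye(S)
    M[0, 1] = 0.5
    B = np.zeros((S, A))
    B[2, 0] = 1.0
    return M @ s + B @ a + rng.normal(scale=0.1, size=S)

s = np.zeros(S)
for t in range(3000):
    cands = rng.normal(size=(8, A))  # candidate actions
    preds = np.stack([[W @ np.concatenate([s, a]) for a in cands]
                      for W in ensemble])
    disagreement = preds.std(axis=0).sum(axis=1)  # epistemic-value proxy
    a = cands[disagreement.argmax()]              # curiosity-driven choice
    s_next = step_env(s, a)
    x = np.concatenate([s, a])
    for W in ensemble:
        W += lr * np.outer(s_next - W @ x, x)  # SGD on prediction error
        W -= lr * l1 * np.sign(W)              # L1 shrinkage -> sparse structure

    s = s_next

# Large entries should roughly align with the true links (untuned toy demo).
print(np.round(ensemble[0], 2))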

These approaches are particularly interesting for RL agents where little or no human knowledge is available to build the world model. One such area is intelligent tutoring, where the RL agent interacts with its environment (a student) by recommending educational content (lectures or exercises) and receiving test results as observations. Building a world model then amounts to building a model of the student and of the knowledge structure of the domain to be taught. Previous work has shown how this knowledge structure can be learned from random interactions with students:

Annabi, L. and Nguyen, S.M. (2023, November). Prerequisite Structure Discovery in Intelligent Tutoring Systems. In 2023 IEEE International Conference on Development and Learning (ICDL) (pp. 176-181). IEEE. 
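
As a rough illustration of the idea, assuming a toy student simulator and a naive estimator of my own invention (not the method of the ICDL paper): if students who have mastered skill A succeed at skill B far more often than students who have not, A is a candidate prerequisite of B. All names, probabilities and thresholds below are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n_students, n_skills = 500, 3
true_prereq = {2: [0, 1]}  # hypothetical ground truth: skill 2 needs 0 and 1

def attempt(mastery, skill):
    # Toy student: success is likely only if all prerequisites are mastered.
    ok = all(mastery[p] for p in true_prereq.get(skill, []))
    return rng.random() < (0.85 if ok else 0.15)

# Random teaching policy: log (mastery snapshot, skill attempted, outcome).
logs = []
for _ in range(n_students):
    mastery = rng.random(n_skills) < 0.5
    for skill in rng.permutation(n_skills):
        logs.append((mastery.copy(), int(skill), attempt(mastery, skill)))

# Score i -> j by the success gap at j between students with and without i.
for i in range(n_skills):
    for j in range(n_skills):
        if i == j:
            continue
        with_i = [s for m, k, s in logs if k == j and m[i]]
        without_i = [s for m, k, s in logs if k == j and not m[i]]
        gap = np.mean(with_i) - np.mean(without_i)
        if gap > 0.3:
            print(f"skill {i} looks like a prerequisite of skill {j} (gap = {gap:.2f})")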


Predictive coding, active inference and long-term memory

During my PhD, I focused on designing neural architectures for prediction, inference and learning in long-term memories, using ideas from the literature on the free-energy principle, active inference and predictive coding. One contribution of this work was to draw a connection between the variational inference / predictive coding framework and another family of long-term memory models, Modern Continuous Hopfield Networks, which use self-attention mechanisms:

Annabi, L., Pitti, A., & Quoy, M. (2022). On the relationship between variational inference and auto-associative memory. Advances in Neural Information Processing Systems, 35, 37497-37509. 
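
For context, the retrieval update of a Modern Continuous Hopfield Network (Ramsauer et al., 2020) already has the form of self-attention: with stored patterns as the columns of X and a query xi, one update step reads xi <- X softmax(beta X^T xi), i.e. a softmax-weighted average of the memories. Below is a minimal sketch of this update only; the paper's broader variational-inference connection is not reproduced here, and all dimensions are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
d, n_patterns, beta = 32, 5, 1.0
X = rng.normal(size=(d, n_patterns))  # memories stored as columns

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

def retrieve(xi, n_steps=3):
    # MCHN update: attention over stored patterns; typically converges fast.
    for _ in range(n_steps):
        xi = X @ softmax(beta * X.T @ xi)
    return xi

# Query with a corrupted version of pattern 0 and recover the clean memory.
query = X[:, 0] + 0.3 * rng.normal(size=d)
out = retrieve(query)
cos = out @ X[:, 0] / (np.linalg.norm(out) * np.linalg.norm(X[:, 0]))
print("cosine similarity to pattern 0:", round(float(cos), 3))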

Another contribution of this thesis was the design of several predictive coding networks, as well as neural architectures implementing active inference, that can process temporal data (see Chapter 3 of my thesis and the following publication):

Annabi, L., Pitti, A., & Quoy, M. (2021). Bidirectional interaction between visual and motor generative models using predictive coding and active inference. Neural Networks, 143, 638-656.
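
As a flavor of how such networks operate, here is a minimal single-layer temporal predictive coding sketch, my simplification rather than any of the thesis architectures: a latent state is inferred by gradient descent on sensory and temporal prediction errors, and the generative weights are learned with local, Hebbian-like updates. All dimensions and learning rates are hand-picked and untuned.

import numpy as np

rng = np.random.default_rng(3)
obs_dim, lat_dim = 8, 4
W = rng.normal(scale=0.1, size=(obs_dim, lat_dim))  # generative: latent -> sensory
V = rng.normal(scale=0.1, size=(lat_dim, lat_dim))  # latent temporal dynamics
mu = np.zeros(lat_dim)
lr_inf, lr_learn = 0.1, 0.005

def observe(t):
    # Hypothetical sensory stream: phase-shifted sinusoids.
    return np.sin(0.1 * t + np.arange(obs_dim))

for t in range(2000):
    x = observe(t)
    mu_prev = mu.copy()
    mu = V @ mu_prev  # prior prediction of the next latent state
    for _ in range(10):  # inference: gradient descent on the free energy
        eps_x = x - W @ mu         # sensory prediction error
        eps_mu = mu - V @ mu_prev  # temporal prediction error
        mu += lr_inf * (W.T @ eps_x - eps_mu)
    # Learning: local updates of the form (prediction error x activity).
    W += lr_learn * np.outer(x - W @ mu, mu)
    V += lr_learn * np.outer(mu - V @ mu_prev, mu_prev)

print("final sensory error norm:", round(float(np.linalg.norm(x - W @ mu)), 3))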