Social skills training targets the behaviors needed for success in social interactions. However, traditional classroom instruction is often insufficient for teaching effective communication: one-to-one practice in realistic scenarios is preferable to lecture-style information delivery.
This work introduces a framework that allows instructors to collaborate with large language models to dynamically design realistic scenarios in which students practice communication. Our framework uses these scenarios to enable student rehearsal, provide immediate feedback, and visualize performance for both students and instructors. Unlike traditional intelligent tutoring systems, instructors can easily co-create scenarios with a large language model without technical skills. Additionally, the system generates new scenario branches in real time when no existing option fits the student's response.
Collaboration with: Illumia Labs, EdTeKLA Research Group and IRL Lab, University of Alberta.
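The real-time branching step above can be sketched as follows. This is a minimal illustration, not the framework's actual implementation: `match_branch`, `generate_branch`, and the keyword-matching scheme are hypothetical, and the LLM call is stubbed out as a placeholder.

```python
# Illustrative sketch of real-time scenario branching. The LLM call in
# generate_branch is a hypothetical placeholder, not a real API.

def match_branch(student_response, branches):
    """Return the first existing branch whose keywords appear in the response."""
    text = student_response.lower()
    for branch in branches:
        if any(keyword in text for keyword in branch["keywords"]):
            return branch
    return None

def generate_branch(scenario, student_response):
    """Placeholder for an LLM call that drafts a new branch on the fly."""
    return {
        "keywords": [],
        "prompt": f"(generated) In '{scenario}', respond to: {student_response}",
    }

def next_branch(scenario, student_response, branches):
    """Reuse an existing branch if one fits; otherwise create a new one."""
    matched = match_branch(student_response, branches)
    return matched if matched is not None else generate_branch(scenario, student_response)
```

The design choice mirrored here is that generation is a fallback: authored branches take priority, and the model is only invoked when the student's response falls outside them.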
We propose an AI-based pilot trainer that helps students learn to fly aircraft. First, an AI agent uses behavioral cloning to learn flying maneuvers from qualified flight instructors. The system then compares the agent's decisions against the student's actions to detect errors and provide corrective feedback. The initial work presents an instantiation of the pilot trainer, focusing on the straight-and-level flight maneuver and automatically providing formative feedback to the human student.
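The behavioral-cloning-plus-feedback loop can be sketched in miniature. This toy example is an assumption-laden illustration, not the trainer itself: it clones a one-dimensional linear "policy" from instructor demonstrations by least squares and flags student actions that deviate from it.

```python
# Toy behavioral-cloning sketch (illustrative only): fit a 1-D linear
# policy a = w * s to instructor (state, action) demonstrations, then
# flag student actions that deviate from the cloned policy.

def clone_policy(demos):
    """Least-squares fit of action = w * state from (state, action) pairs."""
    num = sum(s * a for s, a in demos)
    den = sum(s * s for s, _ in demos)
    return num / den

def check_student(w, state, student_action, tol=0.1):
    """Return (expected_action, error_flag) for formative feedback."""
    expected = w * state
    return expected, abs(student_action - expected) > tol

# Hypothetical instructor demonstrations: the instructor applies a = 2 * s.
demos = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = clone_policy(demos)
```

A real trainer would use high-dimensional flight state and a learned model, but the shape of the loop is the same: imitate the expert, then measure the student against the imitation.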
The extended work details the subsequent phase of the AI-based pilot trainer, building upon previous work that employed behavioral cloning for straight and level flight. This expanded system introduces three critical flight maneuvers: descending, climbing, and turning. To achieve expert-level performance in these more complex tasks, we leverage reinforcement learning (RL). The RL agent interacts with a simulated flight environment, continuously learning and optimizing its control policies to master the execution of these maneuvers. This approach allows the AI to develop robust and adaptable flying skills beyond direct imitation.
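The RL loop described above can be illustrated on a deliberately tiny problem. This is a hedged sketch, not the project's method: tabular Q-learning on a hypothetical five-level "altitude" chain, where the agent learns when to climb and when to descend to reach a target level.

```python
import random

# Toy Q-learning sketch (illustrative only): the agent learns to reach a
# target altitude level in a 5-state chain by climbing (+1) or descending (-1).

TARGET = 2          # desired altitude level (states 0..4 are hypothetical)
ACTIONS = (-1, +1)  # descend / climb

def step(state, action):
    """Simulated environment: move one level, reward closeness to TARGET."""
    next_state = min(4, max(0, state + action))
    reward = 1.0 if next_state == TARGET else -abs(next_state - TARGET)
    return next_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over short episodes."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(5)
        for _ in range(10):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)          # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
```

The actual system replaces this toy chain with a full flight simulator and the table with a learned policy, but the interaction pattern (act, observe reward, update the policy) is the same.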
Collaboration with: Illumia Labs, EdTeKLA Research Group and IRL Lab, University of Alberta.
We are working on MAML-KT, a meta-learning approach for fast personalization in knowledge tracing. Instead of training a single global sequence model, MAML-KT learns an adaptable initialization that can specialize to an individual student with only a few interactions. This directly targets the cold-start phase, when timely feedback matters most but data are scarce, while remaining model-agnostic and easy to integrate into existing tutoring pipelines. Our results indicate improved early-phase accuracy and stable adaptation as more interactions arrive, and our scenario analysis clarifies when meta-learning provides the largest gains versus when attention-style models catch up on longer, more diverse sequences. The goal is a practical recipe for rapid, data-efficient personalization that enhances real-world intelligent tutoring systems.
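The "adaptable initialization" idea can be sketched in one dimension. This is a first-order MAML-style illustration under toy assumptions, not MAML-KT itself: each "student" is a linear task y = c * x with a different slope, and the meta-learner finds an initialization from which one gradient step specializes well to any of them.

```python
# First-order MAML-style sketch (illustrative, not MAML-KT): meta-learn
# an initialization w0 for a scalar model y = w * x so that one gradient
# step adapts it to each hypothetical "student's" own slope.

def grad(w, data):
    """Gradient of mean squared error for the model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def adapt(w0, data, alpha=0.1):
    """Inner loop: one gradient step of fast per-student specialization."""
    return w0 - alpha * grad(w0, data)

def meta_train(tasks, w0=0.0, alpha=0.1, beta=0.05, steps=200):
    """Outer loop (first-order): update w0 using the gradient at the adapted weights."""
    for _ in range(steps):
        for data in tasks:
            w_adapted = adapt(w0, data, alpha)
            w0 -= beta * grad(w_adapted, data)
    return w0

# Two hypothetical students with slopes 1 and 3.
students = [
    [(1.0, 1.0), (2.0, 2.0)],
    [(1.0, 3.0), (2.0, 6.0)],
]
w0 = meta_train(students)
```

The point of the sketch is the cold-start behavior: the meta-learned w0 sits between the students' solutions, so a single interaction-sized update moves it markedly toward the right per-student model.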
We study safe offline reinforcement learning that learns critics underestimating both reward and cost, conceptually doubling down on safety at the expense of optimism.
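One common way to obtain underestimates, shown here purely as a hedged illustration (the project's actual critic construction may differ), is to keep a small ensemble of critics and take the ensemble minimum as the pessimistic value.

```python
# Illustrative sketch of underestimation via an ensemble minimum: both
# the reward critic and the cost critic are represented by ensembles,
# and the pessimistic estimate is the minimum over ensemble members.

def pessimistic(ensemble, state, action):
    """Lower-bound value estimate: minimum over critic ensemble members."""
    return min(q(state, action) for q in ensemble)

# Hypothetical toy critics evaluated at one (state, action) pair.
reward_ensemble = [lambda s, a: 1.0, lambda s, a: 0.8, lambda s, a: 1.2]
cost_ensemble = [lambda s, a: 0.5, lambda s, a: 0.3]

r_lb = pessimistic(reward_ensemble, 0, 0)  # underestimated reward
c_lb = pessimistic(cost_ensemble, 0, 0)    # underestimated cost
```

By construction the minimum is no larger than any individual critic's estimate, which is the underestimation property the summary refers to.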