Understanding how learners learn and providing informed feedback is a primary goal of the learning analytics community. However, data from a single modality, such as pre-post tests, self-reported questionnaires, trace data, images or video, or eye gaze, is insufficient to model learners’ learning processes. Hence, researchers combine data from multiple sensors to model learners. To date, no studies have jointly considered students’ learning-centered emotions, eye-gaze features, and trace (log) data in an open-ended learning environment (OELE), and temporally aligning these modalities remains a challenge. This project proposes a framework that aligns facial expressions and eye gaze with the learner’s actions on a common timeline. We conducted a pilot study to identify the challenges of multimodal data alignment and to propose methods that offer clearer insight into learning processes. The study was conducted in the MEttLE OELE and collected more than six hours of data. After applying the proposed alignment framework, we discuss cases that illustrate its use.
Ashwin T.S., Pathan R., Nath D., Rajendran R. Alignment and Emotion Summarization Framework of Multimodal Data in Open-ended Learning Environment [under review]
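The framework itself is under review, but the core idea, attaching each modality to a shared timeline anchored on the trace log, can be illustrated. Below is a minimal sketch of timestamp-based alignment; the column names, sample data, and 500 ms tolerance window are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of time-based multimodal alignment (illustrative only;
# column names, sample data, and tolerance are assumptions, not the
# paper's actual framework).
import pandas as pd

# Learner actions from the OELE trace log serve as the anchor stream.
actions = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 10:00:00.2",
                                 "2023-01-01 10:00:03.7"]),
    "action": ["open_model", "run_simulation"],
})

# Facial-expression (emotion) stream, e.g. one label per second.
emotions = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 10:00:00",
                                 "2023-01-01 10:00:04"]),
    "emotion": ["engaged", "confused"],
})

# Eye-gaze stream, e.g. fixated area of interest per sample.
gaze = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 10:00:00.1",
                                 "2023-01-01 10:00:03.9"]),
    "aoi": ["task_panel", "graph"],
})

# merge_asof requires sorted keys; attach to each logged action the
# nearest emotion and gaze sample within a tolerance window.
aligned = actions.sort_values("timestamp")
for stream in (emotions, gaze):
    aligned = pd.merge_asof(
        aligned,
        stream.sort_values("timestamp"),
        on="timestamp",
        direction="nearest",
        tolerance=pd.Timedelta("500ms"),
    )

print(aligned)  # one row per action, enriched with nearby emotion/gaze
```

Anchoring on the trace log is a natural choice here because learner actions are the sparsest, most semantically meaningful events; the denser emotion and gaze streams are then resampled toward them rather than the reverse.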