Novel interactions make it possible to create rich, real-world e-learning experiences. Gathering learning traces with wearable devices also opens up the scope of multimodal learning analytics.
What are our goals?
Create multimodal interactions, such as interactive drawing-based learning activities
Trace learning experiences using multimodal data in an integrated platform
Make sense of multimodal data in specific reflective task contexts
What are our objectives?
Explore the multimodal interaction space and collect multimodal data with wearable devices to make sense of learning experiences.
What technology are we using?
We use the interactive learning tools in the LEAF platform. We are also extending the functionality of the GOAL system to support multimodal learning analytics studies.
People involved
Dr. Rwitajit MAJUMDAR, Kyoto University, Japan
Dr. Brendan FLANAGAN, Kyoto University, Japan
Dr. Huiyong LI, Kyoto University, Japan
Dr. Hiroaki OGATA, Kyoto University, Japan
Ms. Duygu ŞAHIN, Kyoto University, Japan
Ms. Yuan Yuan YANG, Kyoto University, Japan
Related publications
Majumdar R., Şahin D., Yang Y. and Li H. (2021) Preparations for Multimodal Analytics of an Enactive Critical Thinking Episode. Accepted at Embodied@ICCE 2021.
Majumdar R., Yoshitake D., Flanagan B., Ogata H. (2021) ReDrEw: A Learning Analytics Enhanced Learning Design of a Drawing-based Knowledge Organization Task. Accepted at ICALT 2021.
Majumdar R., Şahin D., Kondo T., Li H., Yang Y.Y., Flanagan B. and Ogata H. (2021) Enabling Multimodal Reading Analytics through GOAL Platform. In the Companion Proceedings of LAK 2021.