Research 

Tcherly

Tcherly is an evidence-based, teacher-facing dashboard that aggregates online self-reports from students during a lecture (online or offline). We designed a novel feedback-seeking mechanism in which students report the engagement and difficulty levels they experienced during a lecture, along with a quick appraisal of the reasons behind their experience. This feedback (DEBE) is aggregated to suggest improvable sections to teachers and to help them design discussion questions. We intend to continue this project to explore the metacognitive potential of such feedback for students, the possibility of using DEBE to design an objective teacher-evaluation system, and the cognitive-affective dynamics during the learning of complex information. We foresee two Tcherly verticals slowly emerging out of this research, one for flipped-classroom orchestration and the other for in-class lectures, with overlapping but not identical affordances. Please contact Pankaj Chavan for more details.
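To make the aggregation step concrete, here is a minimal sketch of how per-timestamp difficulty/engagement reports could be binned into lecture segments and flagged as improvable. The field names, scales, segment width, and cutoffs are illustrative assumptions, not Tcherly's actual pipeline.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical DEBE self-report: (seconds into lecture, difficulty 1-5,
# engagement 1-5, free-text reason). Fields and scales are assumptions.
reports = [
    (95, 4, 2, "too fast"),
    (110, 5, 2, "new notation"),
    (130, 4, 3, "lost the thread"),
    (610, 2, 5, "good example"),
]

SEGMENT = 120  # assumed 2-minute lecture segments

def improvable_segments(reports, diff_cutoff=3.5, eng_cutoff=2.5):
    """Flag segments whose mean difficulty is high and mean engagement low."""
    bins = defaultdict(list)
    for t, difficulty, engagement, reason in reports:
        bins[t // SEGMENT].append((difficulty, engagement, reason))
    flagged = []
    for seg, entries in sorted(bins.items()):
        d = mean(e[0] for e in entries)
        g = mean(e[1] for e in entries)
        if d >= diff_cutoff and g <= eng_cutoff:
            flagged.append((seg * SEGMENT, (seg + 1) * SEGMENT,
                            d, g, [e[2] for e in entries]))
    return flagged

for start, end, d, g, reasons in improvable_segments(reports):
    print(f"{start}-{end}s  difficulty={d:.1f}  engagement={g:.1f}  {reasons}")
```

The attached reasons give the teacher a quick qualitative read on why a flagged segment was difficult or disengaging.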

Gaze Analytics

Eye tracking data has been shown to capture learner attention and cognitive processing in specific learning contexts. We investigate whether eye tracking data can yield generalizable features that predict and explain learning outcomes in educational videos. A combination of eye tracking features, including fixation distribution, gaze synchrony, and pupil diameter, is used to characterise both the learner and the video from an information-processing standpoint. Preliminary results have been promising, and we are currently modelling the attentional and cognitive factors conditional on learner prior knowledge and the nature of the educational videos.
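As a rough illustration of the kind of feature extraction involved, the sketch below computes simplified stand-ins for these features from per-learner gaze samples: dispersion as the spread of gaze positions, synchrony as the (negated) mean distance from the group's median gaze, and mean pupil diameter. The data layout and feature definitions are assumptions for illustration; the study's actual measures may differ.

```python
import numpy as np

# Assumed input: per-learner gaze samples aligned to a shared video timeline.
# rows = timestamps, columns = (x, y, pupil_diameter); values are synthetic.
rng = np.random.default_rng(0)
gaze = {f"learner_{i}": rng.random((300, 3)) for i in range(5)}

def gaze_features(gaze):
    """Simplified per-learner features: fixation dispersion, gaze synchrony
    with the group, and mean pupil diameter."""
    xy = np.stack([g[:, :2] for g in gaze.values()])   # (learners, t, 2)
    group_median = np.median(xy, axis=0)               # group gaze per frame
    features = {}
    for (name, g), pos in zip(gaze.items(), xy):
        dispersion = pos.std(axis=0).mean()            # spread of gaze points
        synchrony = -np.linalg.norm(pos - group_median, axis=1).mean()
        features[name] = {
            "dispersion": dispersion,
            "synchrony": synchrony,    # higher = closer to the group's gaze
            "mean_pupil": g[:, 2].mean(),
        }
    return features

for name, f in gaze_features(gaze).items():
    print(name, {k: round(v, 3) for k, v in f.items()})
```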

Please contact David John and Shiv Negi for more details.

Eye Gaze Interaction (EGI) Tasks

Eye tracking can be used effectively to explore problem solving by individuals, and the gaze interaction by itself may offer clues to the problem-solving process. In this project we explore the possibility of using gaze interactions to design 'metaproblems', where the student (the learner) is not expected to solve problems directly but to comment on the strategies, knowledge, errors, and solutions of another individual (the model) based on their interpretation of the model's gaze pattern. We position EGI as distinct from, yet closely related to, EMME (eye movement modelling examples), which uses gaze data from experts to guide exploration by novices. Please contact David John for more details.

Expert-Novice Differences

Experts and novices do not form a true binary classification. Rather, expertise is a continuum along which a novice slowly evolves into an expert. Although this is vaguely recognised in the literature, the binary classification remains quite common. One reason could be that it is difficult to track the behavioural, cognitive, and affective transformations that a person goes through over time (T1, T2, etc.) during this evolution. We are tracking this process with the help of sensor and traditional data. Epistemic network analysis of such data is expected to reveal the slow yet deliberate process that unfolds in classrooms, or more generally in any training scenario. Please contact Amit Paikrao and Sonika Pal for more details.
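As a toy illustration of the bookkeeping behind an epistemic network, the sketch below counts co-occurrences of behaviour codes within a sliding window of observations at two time points. The codes, window size, and data are invented; an actual analysis would add normalisation and dimensional reduction via an ENA toolkit.

```python
from itertools import combinations
from collections import Counter

# Hypothetical coded observations for one trainee at two time points (T1, T2).
sessions = {
    "T1": [{"trial_error"}, {"trial_error", "monitoring"}, {"tool_use"}],
    "T2": [{"planning", "monitoring"}, {"planning", "tool_use"},
           {"monitoring", "tool_use"}],
}

def cooccurrences(coded, window=2):
    """Count code co-occurrences within a sliding window of observations,
    the core bookkeeping behind an epistemic network."""
    counts = Counter()
    for i in range(len(coded)):
        stanza = set().union(*coded[max(0, i - window + 1): i + 1])
        counts.update(combinations(sorted(stanza), 2))
    return counts

for t, coded in sessions.items():
    print(t, dict(cooccurrences(coded)))
```

Comparing the T1 and T2 counts is what lets the network expose how the connections between behaviours shift as a novice moves along the continuum.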

Learning with Subtitled Media

Subtitles are widely used in educational media, and the subtitle language is critical for sensemaking of such videos. The socioeconomic diversity of India has produced a learner population whose English proficiency ranges from highly conversant to extremely rudimentary. We are studying the suitability of the subtitle language vis-a-vis the learner's level of English proficiency. So far, we have found that native-language subtitling may be needed for the vast majority of learners in India. Furthermore, we found that even learners who are proficient in English do better with English subtitles (for videos with English voiceovers) than without. We have triangulated our findings using eye tracking, EEG, and self-reports. Please contact Shiv Negi for more details.

Multimodal Analytics

Spatial thinking is a cornerstone of many subjects, such as earth science, chemistry, engineering design, architecture, and mathematics. Traditionally, accuracy scores, concurrent think-alouds, and retrospective self-reports have been used to understand aspects of spatial thinking such as strategy use and gender differences. We now have the affordance of collecting high-frequency sensor data during problem solving to better understand group and individual differences in spatial thinking. We have collected data from 38 participants as they solved mental rotation problems, together with a large number of pre- and post-tests. The data include human observation, surveys, self-reports, interviews, psychometric tests, eye tracking, electrodermal activity (EDA), electroencephalography (EEG), and facial emotion.
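A recurring practical step with such a collection is aligning streams recorded at different native rates onto a shared clock. Below is a minimal sketch using pandas; the stream names, rates, values, and the 100 ms target rate are invented for illustration, and an actual pipeline would use device-specific synchronisation.

```python
import pandas as pd

# Hypothetical streams at different sampling rates; timestamps in seconds.
eda = pd.DataFrame({"t": [0.0, 0.25, 0.50, 0.75],
                    "eda": [0.31, 0.33, 0.38, 0.36]})
gaze = pd.DataFrame({"t": [0.0, 0.016, 0.033, 0.05],
                     "x": [412, 415, 430, 431]})

def align(streams, rate="100ms"):
    """Resample each stream onto a shared clock and join them column-wise."""
    resampled = []
    for df in streams:
        idx = pd.to_timedelta(df["t"], unit="s")
        resampled.append(df.drop(columns="t").set_index(idx)
                           .resample(rate).mean())
    return pd.concat(resampled, axis=1).interpolate()

print(align([eda, gaze]))
```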

The dataset has been designed to serve multiple communities, namely learning analytics, spatial thinking, and multimodal analytics. It will be made public in the near future.

Please contact Kabyashree Khanikar and Karishma Khan for more details or download the study design poster made by Karishma.