With the rise of Generative AI (GenAI) search tools, searching for information is becoming increasingly convenient. However, increased reliance on these tools raises concerns regarding how they impact users' thinking and learning.
This work explores how metacognitive prompts that encourage people to pause, reflect, assess their comprehension, and consider alternative perspectives can help them engage more actively and critically with information generated by GenAI tools. Based on a formative study with 40 college students comparing their behaviors and thought processes with and without metacognitive prompts, we developed an adaptive system called MetaCues that provides timely cues to guide learners in becoming more thoughtful and intentional in their search and learning process.
Paper to be presented at ASIS&T'25
Through two field studies (N = 72 and N = 97), we explored whether data science learners, in guided collaboration with LLMs, can generate helpful hints for incorrect programming assignments while learning deeply through the hint-writing process. We compared three learning designs in which we varied whether and at what stage AI assistance was provided, and evaluated them based on their impact on student learning, engagement, and the quality of the generated hints.
We found that deferring AI assistance, that is, requiring students to write a hint on their own before receiving AI help, leads them to write significantly higher-quality hints and to engage more actively. This underscores the importance of student-AI interaction designs that promote active student engagement with AI.
Paper 1 presented at LAK'24 and awarded Best Short Paper 🏆; Paper 2 under review.
To better support learners in introductory data science courses, we qualitatively analyzed 47 students' incorrect assignment submissions in a data manipulation course (covering pandas, NumPy, etc.), conducted a log analysis of student interactions in JupyterLab, and interviewed data science instructors.
We grouped student mistakes into the categories shown below. For each mistake category, we provided actionable pedagogical recommendations and insights for developing scalable assessment and feedback-generation tools. For instance, we suggested features that can be extracted from data science code to detect student mistakes and provide feedback using machine learning methods.
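As a rough illustration of the kind of features we mean, the sketch below uses Python's ast module to count surface-level signals in a student's pandas submission (which API methods are called, how often columns are indexed, how many string literals are hard-coded); vectors of such features could then be fed to a standard classifier. The specific features, the example submission, and the suggestion of a scikit-learn-style classifier are illustrative assumptions, not the exact pipeline from the paper or the deployed tool.

```python
# Illustrative sketch (not the paper's exact pipeline): extract simple
# features from a student's pandas submission that an ML model could use
# to flag likely mistake categories.
import ast
from collections import Counter

SUBMISSION = """
import pandas as pd
df = pd.read_csv("grades.csv")
avg = df.groupby("student")["score"].mean()
top = df[df["score"] > 90]
"""

def extract_features(code: str) -> dict:
    """Count surface-level signals (API calls, subscripting, literals)."""
    tree = ast.parse(code)
    calls = Counter()
    subscripts = 0
    str_literals = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            calls[node.func.attr] += 1           # e.g. read_csv, groupby, mean
        elif isinstance(node, ast.Subscript):
            subscripts += 1                      # e.g. df["score"]
        elif isinstance(node, ast.Constant) and isinstance(node.value, str):
            str_literals += 1                    # hard-coded column names / paths
    return {
        "uses_groupby": calls["groupby"],
        "uses_mean": calls["mean"],
        "n_subscripts": subscripts,
        "n_string_literals": str_literals,
    }

print(extract_features(SUBMISSION))
# A vector of such features per submission could then be fed to a
# standard classifier (e.g. in scikit-learn) trained on labeled mistakes.
```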
These insights were translated into a personalized hint generation tool, developed by researchers at Microsoft, which was deployed across two semesters in the University of Michigan's Master of Applied Data Science program.
Paper presented at SIGCSE'24
Learnersourcing is a pedagogically supported form of crowdsourcing in which students generate educational artifacts, such as questions or hints, as part of their learning. This work takes a student-centered approach to understanding how we can design learnersourcing systems in which students remain motivated, learn deeply, and generate high-quality output.
Through a field experiment over 4 months with 3,661 students in the Coursera MOOC “Introduction to Data Science in Python”, we identified choice-based learnersourcing as a scalable personalized learning design for MOOCs. Further, we identified factors influencing student motivation to engage in learnersourcing and provided insights for designing learnersourcing activities that promote student agency.
Paper presented at L@S'21 and awarded Best Paper 🏆
We also wrote a paper (presented at L@S'22) contributing a student-centric framework for designing learnersourcing systems.
This work presents an automated and scalable method for generating visual assessments at different difficulty levels for early childhood learners.
First, to understand the process of creating visual assessments and the associated challenges, we interviewed primary school teachers. We found that creating multiple-choice questions (MCQs) at varying difficulty levels and finding relevant images for them can be very time-consuming. Based on these findings, we developed a novel approach that uses image semantics to generate visual MCQs, in which the answer options are presented as images. Further, we contributed a metric for measuring the semantic similarity between two images and used it to source images for MCQ options; a rough illustration of this idea appears below.
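For a flavor of how semantic similarity between images can drive option sourcing, here is a minimal sketch that scores candidate option images against the correct answer image using cosine similarity over embedding vectors, keeping candidates that are related but not near-duplicates. The embedding step, the thresholds, and the selection rule are assumptions for illustration only; the actual metric and sourcing procedure are described in the paper.

```python
# Rough illustration only: rank candidate distractor images by how
# semantically close they are to the correct answer image, using cosine
# similarity over embedding vectors. Any pretrained image encoder could
# produce the embeddings; the thresholds here are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_distractors(answer_vec, candidates, k=3, lo=0.4, hi=0.9):
    """Keep candidates related to the answer (sim >= lo) but not so close
    that they are effectively correct too (sim <= hi); return the top k."""
    scored = [
        (name, cosine_similarity(answer_vec, vec))
        for name, vec in candidates.items()
    ]
    plausible = [(n, s) for n, s in scored if lo <= s <= hi]
    plausible.sort(key=lambda item: item[1], reverse=True)
    return plausible[:k]

# Toy example: a few candidates are noisy variants of the answer embedding
# (semantically "related"); the rest are unrelated random vectors.
rng = np.random.default_rng(0)
answer = rng.normal(size=128)
candidates = {
    f"related_{i}.png": answer + rng.normal(scale=0.8, size=128) for i in range(5)
}
candidates.update({f"unrelated_{i}.png": rng.normal(size=128) for i in range(15)})
print(pick_distractors(answer, candidates))
```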