Comparing Crowdsourced Teacher Video Feedback

In this study we compare video feedback from three different teachers on isomorphic questions. Designed as a randomized controlled trial, this work explores the effectiveness of crowdsourcing tutoring explanations at a small scale (N=3 teachers). All students receive an overview of the assignment and a three-question pretest. Students who 'test out' by answering every pretest question correctly are excluded from the experiment and directed straight to the posttest. Students who make at least one error on the pretest are randomly assigned to one of the conditions shown below. Three teachers (A, B, and C) were each asked to create tutoring in the form of short videos (10-60 seconds): a set of hints for each of three problems, which students can request on demand. After completing the problem set, all students are directed to the posttest. We hope to compare differential learning gains by teacher and by singular versus mixed teacher delivery.

Teacher A's Feedback

Teacher B's Feedback

Teacher C's Feedback

Mixed Feedback

Problem 1... Teacher A Hint Series

Problem 2... Teacher B Hint Series

Problem 3... Teacher C Hint Series

Control (Traditional Text Feedback)
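The routing described above (test out of the pretest, otherwise random assignment to one of the five conditions) can be sketched as follows. This is a minimal illustration, not the study's actual implementation; the function and condition labels are assumptions based on the design described here.

```python
import random

# The five experimental conditions listed above (labels assumed).
CONDITIONS = [
    "Teacher A's Feedback",
    "Teacher B's Feedback",
    "Teacher C's Feedback",
    "Mixed Feedback",
    "Control (Traditional Text Feedback)",
]

def route_student(pretest_correct):
    """Route a student based on a three-question pretest.

    pretest_correct: list of three booleans, one per pretest question.
    Students who answer every question correctly 'test out' and go
    straight to the posttest; everyone else is randomly assigned to
    one of the five conditions.
    """
    if all(pretest_correct):
        return "posttest"
    return random.choice(CONDITIONS)
```

For example, `route_student([True, True, True])` returns `"posttest"`, while a student with any pretest error is assigned uniformly at random to a condition.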


Selent, D., & Heffernan, N. (In preparation). Can we reliably tell the difference between two teachers' explanations?