Crowdsourcing Student-Generated Videos: Comparing video with text, and comparing two different student videos

This study is currently in progress in ASSISTments. Its goal is to compare the helpfulness of text feedback with that of video feedback, and to compare the effectiveness of different video feedback for the same problem.

Problem Set

A preview of the problem set can be seen here.

First, the student is asked to confirm that they can access the video (and audio) components of the problem set. If the student does not pass this video check, they are effectively removed from the study. After that, the problem set proceeds as a normal skill builder, in which the student must answer 3 questions correctly in a row to continue.
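The skill-builder completion rule above can be sketched as a small function. This is only an illustration of the "3 correct in a row" mastery rule as described; the function name and the list-of-booleans representation are assumptions, not ASSISTments' actual implementation.

```python
def completed_skill_builder(responses, streak_needed=3):
    """Return True once the response log contains the required number of
    consecutive correct answers (sketch of the mastery rule described above).

    responses: list of booleans, True for a correct answer.
    """
    streak = 0
    for correct in responses:
        streak = streak + 1 if correct else 0
        if streak >= streak_needed:
            return True
    return False
```

For example, a student who answers correct, wrong, then three correct in a row would finish the skill builder, while a student whose streak is broken before reaching three would keep receiving problems.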

Video check

Text Feedback

The students in this group are asked to answer a conditional question. At the moment, the subsequent actions are the same whether they answer it correctly or not. The student then continues to answer questions until they get 3 right in a row.

Video Feedback

The students in this group are asked to answer a "marked" question. If they get it wrong, they are randomly shown a video from either Tom or Charlie. The results on the subsequent problems will tell us the difference in effectiveness between the two videos' feedback.
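The random assignment step can be sketched as follows. The video names come from the study; the function itself, its signature, and the string labels are hypothetical, meant only to illustrate the coin-flip assignment on a wrong answer.

```python
import random

def feedback_for(answer_correct, rng=random):
    """Sketch of the video-feedback condition: on a wrong answer to the
    marked question, randomly pick one of the two student videos; on a
    correct answer, no feedback video is shown."""
    if answer_correct:
        return None  # correct answer: no feedback video
    # Wrong answer: assign one of the two videos uniformly at random.
    return rng.choice(["Tom's video", "Charlie's video"])
```

Because assignment is uniform between the two videos, differences in performance on the subsequent problems can be attributed to the videos themselves rather than to which students saw them.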

Tom's Video

Charlie's Video

The IQP report of the two students, Duc and Long, is here. It will soon be published as a student report available from the WPI library, but for now, to help other researchers, its Google Doc version is available. That data is available here.