Through crowdsourcing in education, student feedback can come from the crowd rather than from one teacher alone. This greatly speeds up the process of giving feedback and advice, since it is not limited to a single person, and the diversity of opinion and skill in the crowd can also improve the accuracy of performance evaluation [Jiang, 2018]. Moreover, students can give feedback to one another and respond to it to improve their own coursework. A course led by Cathy Davidson explored this topic: she had students grade and give feedback on one another's work, with the goal of teaching them responsibility and how to judge the quality of work correctly. The expectation was that the crowdsourcing students would collectively establish a clear standard. However, there are limitations to crowdsourcing student feedback from the students themselves [Davidson, 2009]. For one, it is difficult to create a single standard by which feedback is given on all assignments. Substantial redundancy, meaning several reviewers per assignment, is needed to ensure high-quality feedback, which suggests that crowdsourcing may work well for small assignments but not for larger ones. Finally, students giving feedback to each other is not an entirely fair process: the feedback may not correspond with what the teacher wants or with what is actually being taught in the class [Weld].
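To make the redundancy point concrete, the sketch below shows one common way to aggregate several peer scores for the same assignment and flag submissions where reviewers disagree too much to trust the result. This is not taken from any of the cited systems; the scores, threshold, and escalation rule are all assumptions for illustration.

```python
from statistics import mean, stdev

# Hypothetical redundant peer reviews: each submission is scored by
# several students rather than a single grader.
reviews = {
    "essay-01": [4, 5, 4],
    "essay-02": [2, 5, 3],   # high disagreement between reviewers
    "essay-03": [5, 5, 5],
}

DISAGREEMENT_LIMIT = 1.0  # assumed threshold; would be tuned per rubric

for submission, scores in reviews.items():
    spread = stdev(scores)
    if spread > DISAGREEMENT_LIMIT:
        # Reviewers disagree too much: route to the instructor rather
        # than trusting the crowd average.
        print(f"{submission}: escalate to instructor (spread={spread:.2f})")
    else:
        print(f"{submission}: crowd grade {mean(scores):.1f}")
```

Even this crude rule illustrates why redundancy is costly: every assignment must be read several times before the crowd's judgment can be accepted.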
Crowdsourcing for student feedback has advanced along with the increasingly technological era we live in. One example comes from professor Ashok Goel at Georgia Tech, who designed an AI to act as a teaching assistant for a course he was teaching. The AI learned from the questions students posed to it and from the oversight of the human TAs, so both students and teaching assistants made up the crowd. The crowd helped tune the AI's responses to questions as well as its interactions with students, and the feedback this AI gave students turned out to be 97% accurate. Another example of an AI used for student feedback comes from a study by professor Ogan at Carnegie Mellon, in which the AI answered questions by purposefully making mistakes so that students could learn from the wrong ways of solving problems. This turned out to be an ineffective way to provide feedback, as it caused students to lose interest [Gambineri, 2017].
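The article does not describe Goel's implementation, but the human-in-the-loop pattern it hints at, where the bot answers only when it is confident and human TAs handle the rest, can be sketched as follows. The question bank, threshold value, and string-similarity matching are all assumptions made for illustration, not Goel's actual method.

```python
from difflib import SequenceMatcher

# Hypothetical FAQ bank built from past student questions whose
# answers human TAs have already written and approved.
approved_answers = {
    "when is assignment 1 due": "Assignment 1 is due Friday at 5pm.",
    "where do i submit my project": "Submit projects through the course portal.",
}

CONFIDENCE = 0.75  # assumed cutoff below which a human TA takes over

def route_question(question: str) -> str:
    """Answer from the approved bank if the match is confident,
    otherwise escalate to a human teaching assistant."""
    best_score, best_answer = 0.0, None
    for known, answer in approved_answers.items():
        score = SequenceMatcher(None, question.lower(), known).ratio()
        if score > best_score:
            best_score, best_answer = score, answer
    if best_score >= CONFIDENCE:
        return best_answer
    return "Forwarded to a human TA for review."

print(route_question("When is Assignment 1 due?"))
print(route_question("Can I get an extension for medical reasons?"))
```

The design choice here is the threshold: answering only high-confidence questions is what lets the crowd of TAs keep correcting the bot on everything it cannot yet handle.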
[Image: Ashok Goel, via Georgia Tech on Twitter, 10 Aug. 2018.]

Another example of crowdsourcing that uses AI to give students feedback is a project called Caesar, created by Mason Tang and overseen by Rob Miller at MIT. Although it focuses mainly on dividing work into chunks, crowdsourcing comes into play later on: after Caesar's pre-processor partitions the code students have written into chunks, the chunks are routed to the crowd for review. The crowd is made up of students, staff, and other users with some background in coding. This is efficient because people in the crowd review different parts of a student's code and respond with feedback to that student, and the skill set of the crowd leads to high-quality feedback in a short amount of time [Tang, 2011].
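Tang's paper describes the actual pipeline in detail; the fragment below is only a rough sketch of the two steps named here, partitioning a submission into chunks and assigning each chunk to several reviewers. The fixed chunk size and round-robin reviewer assignment are invented for illustration; Caesar itself partitions and routes more intelligently.

```python
import itertools

def chunk_code(source: str, lines_per_chunk: int = 10) -> list[str]:
    """Partition a student submission into fixed-size line chunks.
    (A real pre-processor would split along method or class boundaries.)"""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

def route_chunks(chunks: list[str], reviewers: list[str],
                 reviews_per_chunk: int = 2) -> dict[int, list[str]]:
    """Round-robin each chunk to several reviewers so every piece
    of code receives redundant, independent feedback."""
    pool = itertools.cycle(reviewers)
    return {i: [next(pool) for _ in range(reviews_per_chunk)]
            for i in range(len(chunks))}

submission = "\n".join(f"line {n}" for n in range(25))  # stand-in for student code
chunks = chunk_code(submission)
print(route_chunks(chunks, ["student-A", "staff-B", "alum-C"]))
```

Splitting the code first is what makes the crowd efficient: no single reviewer has to read the whole submission, yet every chunk is still seen by more than one person.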
The final AI that uses crowdsourcing to provide feedback is a little different from the others: rather than giving feedback to students, it gives feedback to instructors. This AI, named Hubert, is essentially a bot that chats with students in a private-messaging format. Here the crowd is made up of students, while the people relaying information to the crowd are teachers: the teachers describe their class and how it is run in an effort to obtain feedback from students on what could be improved in the course. Although this happens on a much smaller scale than the AIs discussed previously, it is still a form of crowdsourcing for feedback. The issue being addressed is the functionality of a particular course, and this small task is given to a large number of people to see whether, collectively, they come up with a better way to run it [Lieberman, 2018].
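Lieberman's article describes Hubert as helping instructors sort and process free-text evaluations. A crude keyword-based grouping like the one below illustrates the basic idea of turning many chat responses into a digest for the instructor; the categories, keywords, and sample responses are made up, and Hubert's actual processing is surely more sophisticated.

```python
from collections import defaultdict

# Hypothetical free-text responses collected by a course chatbot.
responses = [
    "The lectures move too fast for me.",
    "More practice problems before exams would help.",
    "Lecture slides are great but the pace is too fast.",
]

# Assumed theme keywords; a real system would use NLP, not keyword lists.
themes = {"pacing": ["fast", "slow", "pace"],
          "materials": ["slides", "problems", "readings"]}

digest = defaultdict(list)
for text in responses:
    for theme, keywords in themes.items():
        if any(word in text.lower() for word in keywords):
            digest[theme].append(text)

# Summarize how many comments fall under each theme for the instructor.
for theme, comments in digest.items():
    print(f"{theme}: {len(comments)} comment(s)")
```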
[Jiang, 2018] Jiang, Yuchao, et al. “A Review on Crowdsourcing for Education: State of the Art of Literature and Practice.” June 2018.
[Davidson, 2009] Davidson, Cathy. “How To Crowdsource Grading.” HASTAC, 26 July 2009, www.hastac.org/blogs/cathy-davidson/2009/07/26/how-crowdsource-grading.
[Weld] Weld, Daniel S., et al. “Personalized Online Education — A Crowdsourcing Challenge.”
[Gambineri, 2017] Gambineri, Giacomo. “A Professor Built an AI Teaching Assistant for His Courses - and It Could Shape the Future of Education.” Business Insider, 22 Mar. 2017, www.businessinsider.com/a-professor-built-an-ai-teaching-assistant-for-his-courses-and-it-could-shape-the-future-of-education-2017-3.
[Tang, 2011] Tang, Mason. “Caesar: A Social Code Review Tool for Programming Education.” MIT, 22 Aug. 2011.
[Lieberman, 2018] Lieberman, Mark. “Hubert AI Helps Instructors Sort and Process Student Evaluation Feedback.” Inside Higher Ed, 7 Mar. 2018, www.insidehighered.com/digital-learning/article/2018/03/07/hubert-ai-helps-instructors-sort-and-process-student-evaluation.
Georgia Tech. “Ashok Goel.” Twitter.com, 10 Aug. 2018, twitter.com/georgiatech/status/1027983015933341696.