Shuchi Grover
This paper surveyed various initiatives to teach AI to a K-12 audience and analyzed ways to make AI education more effective and inclusive. Much of this information will be helpful as we start our projects, and the paper also raises considerations we will need to keep in mind as we work to make our tools most effective. As I read, I was surprised by the emphasis on ML education in addition to AI education, rather than on AI education alone. Later, though, the paper explains that this is due to ML's current prominence and to stress the importance of including ML topics in AI education. I also found it interesting that the paper emphasizes "data agency" and "computational thinking" over ordinary programming skills. As the world becomes increasingly data-driven, the necessity of data agency becomes apparent, and it is important that K-12 students learn to use and analyze data in tandem with the problem-solving skills that computational thinking entails. However, even basic coding education is absent from many schools, so integrating AI education into K-12 public education will be difficult. Given this obstacle, the paper also discusses integrating AI education into other subjects. This is a really interesting idea that would let schools include AI education without totally restructuring their curricula. Still, I wonder to what extent AI literacy can be achieved by integrating AI education into other subjects without any dedicated focus on AI concepts and algorithms.
Saniya Vahedian Movahed and Fred Martin
The AI-powered chatbot was tested with many students, and learning outcomes were measured in several ways: observing student interaction with the tool, audio-recording conversations with the students, screen-recording students' use of the chatbot, and administering a survey after the students had interacted with it. The researchers measured engagement by graphing the number of questions each student asked the chatbot, on the assumption that more questions indicate greater engagement and interest. They also analyzed the conversations to qualitatively assess student trust in the chatbot, student curiosity, and students' tendency to anthropomorphize it. Through this, they found that many students tested the chatbot by asking questions they already knew the answers to, and trusted it if it responded correctly. They also found that students were likely to anthropomorphize the chatbot, asking it questions as if they were talking to a person rather than an AI, which indicated that the students were relating to the chatbot and trusted it. Furthermore, they conducted post-surveys and interviews to gauge student learning, analyzing them to see how students would integrate the chatbot into their lives, how much they trusted it, how willing they were to tell it secrets and confide in it, and how they related to it. This paper discusses in great detail how the authors measured student learning outcomes, especially how they used qualitative data from interviews and surveys to measure constructs like trust, curiosity, and anthropomorphizing. This will be very helpful as we design our own methods for testing student learning outcomes with the tools we develop in this course.