We were tasked with writing about how this study assessed student learning outcomes. However, from what I can tell, student learning was not directly measured in this study. Instead, the study focused on children's attitudes regarding the AI tool they interacted with.
One interesting way they measured attitudes was by tracking how many questions the students asked and which topics those questions covered. Question counts served as a proxy for general engagement with the AI tool (the majority of students asked one to three questions), while the topics revealed what the students most wanted to learn about.
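As a rough illustration (this is not the researchers' actual analysis, and the log format and field names are invented), tallying engagement and topic interest from interaction logs might look something like this:

```python
from collections import Counter

# Hypothetical interaction log: one record per question a student asked the AI tool.
# The field names ("student", "topic") are assumptions for illustration only.
questions = [
    {"student": "S01", "topic": "animals"},
    {"student": "S01", "topic": "space"},
    {"student": "S02", "topic": "space"},
    {"student": "S03", "topic": "dinosaurs"},
    {"student": "S03", "topic": "space"},
    {"student": "S03", "topic": "animals"},
]

# Engagement: how many questions each student asked.
questions_per_student = Counter(q["student"] for q in questions)

# Interest: which topics students asked about most often.
topic_counts = Counter(q["topic"] for q in questions)

print(questions_per_student)        # Counter({'S03': 3, 'S01': 2, 'S02': 1})
print(topic_counts.most_common())   # topics ranked by frequency
```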
The researchers noticed that students who did not initially trust the AI tool tested it with a question whose answer they already knew. When the tool answered correctly, those students concluded that it was trustworthy and began asking questions whose answers they did not know.
One notable methodological choice was the use of two different post-survey methods. The researchers began by asking questions verbally, then used those responses to write an improved post-survey questionnaire, which they used for quantitative analysis. This is a clever approach to the problem of deciding what to ask students.
The researchers also posed an open-ended question: "How would you describe [the tool] to your friends?" They categorized the responses into themes, and the students' answers to this prompt helped reinforce the quantitative findings from the other questions.
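Thematic coding of open-ended answers is typically done by human coders, but a toy keyword matcher like the one below (the themes and keywords are invented for illustration) conveys the basic idea of mapping free-text responses onto themes:

```python
# Hypothetical themes and keyword lists; real thematic analysis relies on human judgment.
THEMES = {
    "fun":        ["fun", "cool", "awesome"],
    "helpful":    ["helps", "learn", "answers"],
    "human-like": ["talks", "friend", "person"],
}

def code_response(response: str) -> list[str]:
    """Return every theme whose keywords appear in the response."""
    text = response.lower()
    return [theme for theme, words in THEMES.items()
            if any(word in text for word in words)]

# Example answers to "How would you describe [the tool] to your friends?"
for answer in ["It's like a robot friend that talks to you",
               "It answers anything and helps you learn"]:
    print(answer, "->", code_response(answer))
```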
The first thing that jumped out at me from this paper is the explicit distinction between "AI Literacy" and "AI Education": the former describes general, non-technical knowledge and awareness, while the latter covers the technical concepts that let learners understand how AI works deeply enough to create and work with AI tools themselves, such as through APIs. This distinction is important because the design of an AI Literacy program will often differ substantially from that of an AI Education curriculum.
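To make the "working with AI through APIs" end of that spectrum concrete, here is a minimal sketch of what an AI Education learner might write; the particular library and model here (the OpenAI Python SDK and gpt-4o-mini) are my own example, not something drawn from the paper:

```python
# Minimal example of calling a hosted AI model through an API.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```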
This paper draws attention to the issue of "misperceptions" about AI concepts and capabilities, and the need to address them in both students and educators. AI literacy and education must focus on clearing up these misperceptions by explaining how AI differs from human intelligence, what its limitations are, and how it works. This last point is especially challenging: explaining the underlying mechanisms of AI technologies in depth is rarely feasible in a K-12 environment, and many things will have to be left as "black boxes."
Another notable point in this paper is that because many AI concepts are so new, there is little research on how best to teach AI to students. Because of that, Grover suggests using and building on existing approaches to teaching Computer Science, which have been studied far more extensively. One of these approaches holds that, although AI and CS education is a new subject and should be treated as one, it must also be integrated into existing subjects by utilizing "teachers' pre-existing pedagogical and content expertise." This approach allows non-specialist teachers with already-full curricula to integrate AI concepts into even non-STEM classes.