In "Does Any AI-Based Activity Contribute To Develop AI Conception" by Baldoni et al., the researchers present a largely unexplored approach in AI education research: a large-scale comparative study. Almost 200 children, with an average age of 11, participated. All of the children first took a three-lesson programming course, after which a pre-test was administered. Subsequently, roughly half of the children continued with "standard teaching," while the other half received four lessons on AI using child-friendly programming languages. At the conclusion of these four lessons, all of the children took a post-assessment. Both the pre- and post-assessments consisted of questions on computational thinking and on the concept of an artificial mind.
Baldoni et al. focused on determining whether a specialized course on artificial intelligence would improve a child's understanding of AI more than a general programming course would. To support their hypothesis, they emphasized a large sample size and combined a new metric with a well-cited one, the AMS scale and AMOS, respectively, to obtain their results.
I enjoyed reading about Baldoni et al.'s research, particularly their use of a large number of children rather than a single class. While I understand the AI concepts presented to the children, I am unsure why these specific concepts were chosen, and we were not given access to the entirety of the post-assessment. Given that the assessments were likely general in nature, it was sensible of Baldoni et al. to cover many high-level activities. However, other research (Davy Tsz Kit Ng et al.) has shown that children develop a better understanding of AI when given a deep dive into artificial intelligence tasks.