Matteo Baldoni et al.
This paper presents a thorough study of whether AI-focused activities helped students develop a conception of AI more than programming activities alone. The authors investigated this through a case study with 12 fifth- and sixth-grade classes in Italy.
The study carefully measured student learning outcomes in computational thinking, conception of the artificial mind, and conception of the human mind. The researchers administered pretests and posttests to all students to measure how the different activities affected learning. The instruments consisted of a computational thinking test, the AMS scale, and the AMOS 8-15 questionnaires. I found it interesting how each instrument was carefully selected or designed to measure the specific topics covered in the activities. The computational thinking test covered the programming concepts taught during the programming-based activities; the AMS scale measured how students viewed the robot's mind (the artificial mind), whereas the AMOS 8-15 questionnaire measured aspects of the human mind.
The authors also tested the pretest and posttest results for statistical significance before drawing conclusions. After we collect data at the school with our own projects, we should likewise check for statistical significance. However, we will need to consider how our limited number of participants will affect the results, since small samples reduce statistical power and make it harder to detect real differences.
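As a rough sketch of what such a check could look like for our own data, the snippet below compares paired pretest and posttest scores with both a paired t-test and the nonparametric Wilcoxon signed-rank test (the latter is often safer for small samples, since it does not assume normally distributed differences). The scores here are made-up placeholder values, not data from the study.

```python
# Sketch of a pretest/posttest significance check for a small paired
# sample. All score values below are hypothetical placeholders.
from scipy.stats import ttest_rel, wilcoxon

pretest  = [12, 15,  9, 14, 11, 13, 10, 16, 12, 14]
posttest = [15, 16, 13, 16, 17, 18, 18, 23, 22, 23]

# Paired t-test: assumes the per-student score differences are roughly
# normal, which is hard to verify with very few participants.
t_stat, t_p = ttest_rel(posttest, pretest)

# Wilcoxon signed-rank test: a nonparametric alternative that only
# uses the ranks of the differences.
w_stat, w_p = wilcoxon(posttest, pretest)

print(f"paired t-test: t={t_stat:.2f}, p={t_p:.4f}")
print(f"wilcoxon:      W={w_stat:.2f}, p={w_p:.4f}")
```

Reporting both tests side by side is a reasonable precaution with a small class-sized sample: if they disagree, the normality assumption behind the t-test deserves a closer look.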
The results of this study were fascinating: the two groups performed similarly on the pretests and posttests, indicating that the AI activities did not give students a stronger conception of AI. This suggests that simply introducing more AI activities will not foster stronger AI literacy; the activities used need to be carefully tested and selected in order to maximize their benefit to students.