K-12 Machine Learning Article Summary
Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro, S. (2023). AI Literacy in K-12: A Systematic Literature Review. International Journal of STEM Education, 10(29). https://doi.org/10.1186/s40594-023-00418-7
The article undertakes a systematic literature review to comprehensively explore the global landscape of teaching Artificial Intelligence to K-12 students. Using the Scopus database, the study seeks to provide a nuanced understanding of AI literacy initiatives in educational settings worldwide. With 179 documents reviewed, the investigation spans various geographic locations, offering insights into the methods, challenges, and effectiveness of AI education strategies on a global scale.
The article discusses initiatives in the United States, China, Singapore, Germany, and Canada that showcase the diverse approaches to integrating AI education in K-12. In the U.S., the AI4K12 initiative and the Massachusetts Institute of Technology's curriculum highlight collaborative efforts among developers, teachers, and students. China's Ministry of Education integrates AI into secondary school curricula, with initiatives like AI4Future emphasizing co-creation. Singapore's interactive AI learning program faces challenges due to a shortage of trained teachers. Germany's national initiative includes a 6-module course, while Canada focuses on philosophical, conceptual, and practical aspects.
The literature underscores various approaches to AI literacy, encompassing curriculum design, subject integration, student perspectives, teacher training, resource design, and gender diversity. Curriculum development models range from technical-scientific to learner-focused. Core competencies include understanding AI concepts, applications, and ethics/security. Proposals stress the involvement of teachers in co-creating curricula and advocate for computational thinking models. Both integration into traditional subjects and short-term courses/modules are suggested for including AI. Student-focused studies explore attitudes, intentions, and interests. Teacher training is pivotal, emphasizing competencies and co-designing educational proposals. Overall, this article contains a wealth of information on approaches to teaching AI to K-12 students, as it synthesizes several years of research on the subject.
Three Concepts In AI/ML/DS
An area I would love to explore is machine learning, and I would like to create a tool that teaches K-12 students how it works. A basic concept would be a simple interactive storytelling game that takes input from the student and incorporates that input into the story in some way. For example, the student could make up a setting and a character, and procedural events could occur. I believe this would fit multiple themes in the five big ideas, but it would most likely concern how computers can learn from data.
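To make the "learning from data" idea concrete, here is a minimal, purely illustrative sketch of how such a story tool might work: the student types in a short story, the program counts which word tends to follow which (a bigram model), and then continues the story from those learned patterns. All names here are hypothetical, not from the article.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """The 'learning from data' step: record which words follow which."""
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def continue_story(model, start, length=8, seed=0):
    """Generate a continuation by sampling the learned next-words."""
    rng = random.Random(seed)  # fixed seed so the demo is repeatable
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # no data for this word, so the model cannot continue
        out.append(rng.choice(choices))
    return " ".join(out)

# The student "teaches" the program with their own story text:
story = "the dragon flew over the castle and the dragon roared at the knight"
model = train_bigrams(story)
print(continue_story(model, "the"))
```

Because every output word pair was seen in the student's own text, this sketch makes the connection between training data and generated output very visible, which seems like the key teaching point.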
Another idea would be some sort of game powered by AI that could be dynamic, like a detective game where you can ask certain questions to figure out a mystery. Ideally the game would be able to reset itself with a procedurally generated mystery, with the main focus being on the interactions between the suspects (program) and the investigator (player/student). I would imagine this fits with the fourth big idea, since it would tackle the interaction between the agent and the student.
Another idea delving into the machine learning aspect would be some sort of game or system that allows the user to teach the program what an art piece looks like, somewhat like how Stable Diffusion works, and then lets the program attempt to replicate it. I don't know exactly how the program would work for that last idea, but I believe allowing the student to tinker with the program would be important in teaching how the AI works.
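Full image generation is likely out of scope for a K-12 tool, but a simpler stand-in for "teach the program by example" is a nearest-neighbor classifier: the student supplies labeled example drawings, and the program guesses a new drawing's label by finding the most similar example. This is my own hypothetical sketch (recognition rather than replication), not anything described in the sources.

```python
def distance(a, b):
    """How different two drawings are, counted pixel by pixel."""
    return sum(pa != pb for pa, pb in zip(a, b))

class TeachableClassifier:
    """Students 'teach' by example; the program guesses via the closest match."""
    def __init__(self):
        self.examples = []  # list of (pixels, label) pairs

    def teach(self, pixels, label):
        self.examples.append((pixels, label))

    def guess(self, pixels):
        # Nearest neighbor: return the label of the most similar example
        return min(self.examples, key=lambda ex: distance(ex[0], pixels))[1]

# Tiny 3x3 "drawings" flattened into lists; 1 = filled pixel
cross  = [0,1,0, 1,1,1, 0,1,0]
square = [1,1,1, 1,0,1, 1,1,1]

clf = TeachableClassifier()
clf.teach(cross, "cross")
clf.teach(square, "square")
print(clf.guess([0,1,0, 1,1,0, 0,1,0]))  # a slightly messy cross -> "cross"
```

Letting students add or remove examples and watch the guesses change is exactly the kind of tinkering with the algorithm that the Pop-Bots article recommends.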
Reflection on Pop-Bots
I found it interesting that the article focuses on how the kids perceived the robots and works from there to teach them the basic concepts. This article would be really useful to come back to when developing the project in the future, especially the part toward the end where it gives recommendations such as opening up the black box and allowing the children to tinker with the algorithm. It is also interesting how they assessed the children using a theory-of-mind assessment; that is most likely something to consider while developing the project, to ensure that the student learning from the program can understand how the algorithm learns and why it outputs what it does. Another part I found interesting was toward the beginning of the article, where it states that when a robot did a simple task, the children described its actions in mechanical terms, but for more complex actions they used more psychological words. I also thought it was interesting that the children's perceptions were less consistent among themselves and that individual experiences affected how they viewed robots; accounting for this will be challenging, but it is helpful that the article describes what aspects could influence it. When we actually design our projects in the future, I will definitely reflect on this article to guide how I design my program so that it teaches K-12 students effectively.
Reflection on the Mahipal et al. Paper
This article was interesting and showcases an example that will most likely be similar to what we do for our project in the future, so there is a lot to learn from it and apply to the project's development. What I found interesting was that a portion of the users became frustrated when the artificial intelligence got something wrong, and the reason was that the tool was trained on the entire canvas rather than a small portion of it. While this sort of problem is unfortunate and hard to foresee, it reinforces the importance of pretesting our programs to ensure typical user behavior is accounted for. Something I will most likely do is show my program to some friends to surface issues I had not anticipated. That said, even when the program guessed incorrectly, that seemed useful for teaching how AI tools can be flawed and unreliable at times, and how the algorithm learns. With that understanding, students can develop a better sense of what AI really is and dispel some of the misconceptions they might have about it. It also seems important to start with a simple demonstration of the AI and then teach the basic concepts involved, instead of explaining the concepts outright before the students have a foundation in what they are using.