Hi there 👋,
My name is Andrew Farrell and welcome to my website for Developing AI Tools for Children (AI4K)! I'm an undergraduate CS major at the University of Massachusetts Lowell working on the AI4K course to learn more about educational research and develop a tool which helps children learn about AI. In the upcoming months, I will be posting updates here on what I've been working on and learning about. Feel free to reach out to me on LinkedIn if you have any questions or check out my GitHub.
🌱
To build a great educational tool, the first thing to do is a literature review on what is already out there on the subject. I'll be reading two papers and talking a bit about what I found interesting. I'll also be brainstorming with ChatGPT to explore possible project avenues and sharing my favorite results along with the respective prompt down below.
Lee, I., Ali, S., Zhang, H., DiPaola, D., & Breazeal, C. (2021, March). Developing middle school students' AI literacy. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (pp. 191-197).
The research conducted in this paper was very well done. When I first imagined teaching kids about AI literacy, I had only really considered enabling students to become more technically informed about what AI is. I imagined gamifying the concepts of neural networks or some other AI-related topic so a younger audience could more easily understand and engage with them. Upon reading the article, I was very interested to learn about the curriculum's emphasis on not only a concrete technical understanding of AI concepts, but also a more abstract overview of AI's implications for both ethics and future career opportunities. I was really impressed with how the team took conceptually difficult topics and made them accessible to the students through a variety of exercises. It was somewhat surprising to me how popular the activities revolving around ethics and bias within AI were among the children, ranking among the highest in the post-experience surveys. Not only were the students engaged in the discussions and exercises, but they were doing so while learning remotely. I understand how challenging online learning can be for many children, and a curriculum which has been shown to be effective in an online environment may help make AI education more accessible to students in the future. The curriculum serves as a great example of giving kids an understanding of both how, and why, AI will be critical in the future. When designing my project for teaching about AI, I will give more consideration to how abstract ideas like bias can be conveyed to students instead of providing a technical understanding alone.
Long, D., & Magerko, B. (2020, April). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16).
I found both the methods and conclusions of this paper to be of great interest. Firstly, I was unfamiliar with how a scoping study is executed, and I enjoyed learning about how the data collection process operated. The authors used resources I would not have considered -- such as the syllabi of several collegiate-level AI and ML courses -- to help pin down exactly which learning outcomes matter most. This may be standard operating procedure for a study of this kind; however, I thought it was quite resourceful on the authors' part. As for the data collected, I think the sections most applicable to what I will be creating here came from the "Design Considerations", where the authors gave ideas for what may be involved when educating about AI. Some of the design considerations were more generic and could be applied to any type of teaching -- such as leveraging learners' interests -- however, there was one I felt has been at the crux of much AI education. The first design consideration, explainability, has been a challenge in AI for some time now. Black-box algorithms may sometimes produce excellent function approximation, but they also leave anyone looking into the model's workings a bit at a loss. Not only does this make debugging a model an onerous task, but it also makes it difficult to learn about a model's reasoning. I believe this is one of the most important design considerations and should be given extensive thought for any type of AI-related education. Overall, I believe the paper achieved its goal of giving a clear and robust definition of AI literacy which could be used in the future when designing educational material.
I want to create a tool which will help engage children to learn about what AI is and why it is significant. The student age would be about middle school and younger. Any ideas?
Here are a few ideas for creating a tool to engage middle school children in learning about AI:
Create interactive and fun quizzes and games that test children's knowledge of AI concepts, this will help them retain the information better.
Develop a virtual reality (VR) or augmented reality (AR) experience that allows children to explore different AI applications and see how they work in real life.
Use simple, easy-to-understand language and visuals to explain complex AI concepts, like using animations, infographics, and illustrations.
Provide children with a simple AI programming environment, like Scratch or Blockly, where they can create their own AI projects, this will help them understand the process of AI development and application.
Use storytelling and role-playing activities to help children understand the potential impact of AI on society, this will help them develop critical thinking skills.
Develop interactive AI-based characters, like chatbots or virtual assistants, which can answer children's questions and help them navigate the tool.
Use real-life examples and case studies of AI in action, like how AI is being used in healthcare, education or industry, this will help children understand the practical application of AI.
Create a community of children using the tool, where they can share their projects and experiences with each other, this will increase the engagement and motivation of the kids.
Incorporate hands-on learning activities such as coding challenges, AI-based games and puzzles, to keep the children engaged and interested.
Make the tool adaptive to the child's age, skill level and interest, so they can progress at their own pace, this will help them retain the information better.
Remember that the key to engaging children in learning about AI is to make it interactive, fun, and easy to understand.
I tried a few different prompts and GPT seems to pretty consistently emphasize the same core ideas for educating children. From the papers above, it was clear children responded well to ethical education about AI. One of the options which might be well suited for such a topic is suggestion #5, allowing students to think about AI and its effects from different points of view. This could take many forms as a tool, such as one where students engage with the challenges of bias in AI by using different curated datasets. I also really like the idea of hands-on learning challenges and puzzles which could give kids a fun way to learn about the field.
Note: If you haven't checked out ChatGPT yet, go make an account and spend a few minutes playing with it. You might be surprised what it can do!
This week, I read 5 papers which each contribute to the field of AI for kids in different ways. Along with the reading, I also brainstormed a few concepts for possible project routes in the next few weeks. Finally, I worked on completing my IRB Human Subject Training.
📘📗📙📘📗📙
Convolutional Neural Networks (CNNs) are a fascinating and complicated part of artificial intelligence. Being able to teach a child about CNNs, or even simpler multilayer perceptrons, is a feat on its own. I think the part I appreciated the most in this paper was the use of paper examinations to test student understanding of how filters are related to particular feature maps. Firstly, this helped students make intuitive visual connections about how CNNs work. Second, since the examination just used paper and pencil, it can be utilized by students who might not have access to a portable computer in the classroom, though such access is becoming more common over time. Much of what I consider to be effective teaching relies on coursework being intuitive to students by building on ideas and connections they have already made, and this paper is a good example of that. I would have liked to see a further explanation of what work was involved in teaching AI ethics and bias from the Gender Shades work and how it related to the DoodleIt image recognition visualization web app. It was exciting to read about the workshop which engaged K-12 educators and incentivized the use of DoodleIt in the classroom. It is one thing for computer science workers to want to educate others about AI; however, without the interest of the educators themselves, no progress would be made towards preparing the next generations for AI. I'd be interested to see how educators of non-technical subjects might incorporate conversations about AI into their coursework.
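To make the filter-to-feature-map relationship concrete, here is a minimal sketch (my own illustration, not DoodleIt's code) of a single convolutional filter sliding over a tiny image to produce a feature map:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over an image (valid padding) to build a feature map."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny "image": a single bright vertical stripe on a dark background.
image = np.zeros((6, 6))
image[:, 2] = 1.0

# A classic vertical-edge filter: it responds strongly wherever
# brightness changes from left to right.
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])

feature_map = convolve2d(image, vertical_edge)
print(feature_map)  # the peaks mark where the filter "found" vertical edges
```

The pencil-and-paper exercise from the study essentially has students trace this loop by hand for a few kernel positions.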
There will always need to be some level of simplification or, at the very least, modification of sufficiently complex subject material before it can be effectively taught to an individual or group. This is especially applicable when the target audience consists of young children who are still developing their perspective-taking skills. To this extent, measuring a student's performance in understanding an artificial intelligence agent is deeply tied to their ability to understand the thoughts and motives of other individuals. The Theory of Mind Assessment gave researchers valuable insights into how students understood how an AI might 'think' and served as a strong benchmark prior to, and after, the PopBots AI activities. Together, the two assessments and the questionnaire produced a very cohesive set of qualitative and quantitative data which shows how different age groups and individuals with different experiences perceive AI. As mentioned in the discussion section, student learning outcomes may have been better measured if students were given more time to engage with the PopBot exercises, and outside of a group setting. One of the remarkable aspects of AI research is how it is of interest to sociologists, psychologists, neuroscientists, and computer scientists alike. Being such an interdisciplinary field, I think all research which aims to educate students about AI should certainly include topics which span outside of the field's technical aspects. One additional assessment I would have liked to see in the study was a student reflection on how AI bias might occur and its impacts on the interactions a student may have.
Upon initially reading the paper, I was surprised that Theory of Mind (ToM) is not more commonly studied with regards to humans understanding robots. When designing machines for human-robot interaction (HRI), my assumption is that oftentimes the question is "how can the robot best understand what the human is thinking and cooperate?" I personally do not have any research experience in HRI, but this gap in study would make intuitive sense. It goes without saying that humans are extremely complex, and for a robot to understand the motives and goals of a human is a substantially difficult task; but it is certainly not the only part of interaction. Mutual ToM analysis seems as though it will become increasingly important in modern society as humans engage with automated drivers, assistants, and other robotic services on a daily basis. Taking a step back, I think the difference in research quantity between analyzing robot versus human Theory of Mind is also a testament to a change in perception towards robotics and artificial intelligence. Instead of treating robots as servants which must predict and respond to the actions of humans, they are slowly becoming more complex entities which humans must learn to understand for routine life. With black-box machine learning models, there may come a day when a human understanding the ToM of a robot is on par with the difficulty of a robot understanding the ToM of a human. It will be interesting to see how this research area grows as the world becomes more automated.
To address the growing gap between AI usage and AI understanding in daily life, researchers proposed a two-part workshop which could be given to middle school aged students as part of K-12 education. The proposed workshop comprises two distinct sections designed to give students with little to no technical background both a theoretical foundation in AI and a technical understanding of how it works. The entire workshop spans about three hours, with the sections split into 90-minute intervals. In creating the workshop, the authors considered the "Five Big Ideas in AI" from the AI for Kids initiative as well as research on how to best promote neurological success through both course content and environment. Researchers put a large emphasis on building technical activities which would allow students to collaborate with one another, be challenged by the material, and be assessed in a continuous and non-threatening way. The first part of the workshop was broken into five sections: 1) Introduction to AI, 2) Examples of ML applications, 3) Supervised Learning, 4) Unsupervised Learning, and 5) Deep Learning. The technical portion of the workshop involved using Google's Teachable Machine and Google Colab notebooks. Along with suggesting the integration of workshops into K-12 education, the authors also describe the need for changes in the relationship between industry use of AI and its presence in academic environments. More specifically, industries which utilize AI should strengthen their bonds with educational institutions. Facilitating cooperation for AI education between multiple disciplines and industries may require research and funding; however, it will be to the greater benefit of all students.
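To picture the technical half of the workshop, here is a hypothetical Colab-style cell of the kind the supervised learning section might use (my own sketch, not the authors' actual notebook):

```python
# Train a simple classifier on labeled examples, then test it on unseen data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)  # "supervised": learn from labeled examples
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```

An exercise like this pairs naturally with Teachable Machine: students first train a model by example in the browser, then see the same idea expressed in a few lines of code.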
Communication is one of the ideas every child can relate to. In this project, students would learn about and train a model -- the level of technical detail involved in training would change depending on grade level -- which can recognize the numbers and letters used in American Sign Language. Students would see a live camera view of what the computer sees as well as a box indicating the model's prediction. This could spark conversations with kids about how AI is able to help individuals who may be deaf. This project relates both to how machines perceive the world and to the possibility for AI to make society better.
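A rough sketch of what the live-prediction loop could look like, using OpenCV for the camera view (the model file, input size, and labels are placeholders, not a real trained model):

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("asl_letters.h5")  # hypothetical trained classifier
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

cap = cv2.VideoCapture(0)             # the live camera view
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Crop a fixed region where the student signs, then resize for the model.
    roi = frame[100:400, 100:400]
    x = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[None, ...], verbose=0)[0]
    letter = labels[int(np.argmax(probs))]
    # Draw the box and prediction so students see what the computer "sees".
    cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```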
There's a common programming technique called "Pair Programming" where two developers work together on a programming task. One developer, the 'navigator', gives guiding ideas and suggestions while the other developer, the 'driver', actively modifies the code; the developers switch roles on a timed interval. To build an engaging and meaningful interaction between student and AI, I'm imagining a project where students can have a similar back-and-forth experience with a model like Stable Diffusion using img2img-type technology. There is also the possibility of using something close to outpainting with DALL-E, which takes some source image and attempts to use its style to expand the image borders. By the end of the exercise, students would be able to save the image they created in collaboration with AI. While this would likely be a fun and creative exercise for K-12 students, there should also be material which covers how these generative models actually produce their images.
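A minimal sketch of the img2img back-and-forth using the Hugging Face diffusers library (the model checkpoint, prompts, and strength value are assumptions for illustration):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from the student's own drawing.
image = Image.open("student_sketch.png").convert("RGB").resize((512, 512))

# The student "navigates" with prompts; the model "drives" by redrawing.
for prompt in ["a castle on a hill", "a castle on a hill at sunset"]:
    image = pipe(prompt=prompt, image=image, strength=0.6).images[0]

image.save("collaboration.png")  # the final student-AI artwork
```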
In this project, I would like to have students first critically assess where they interact with AI in their daily lives. Many kids now use apps like TikTok and YouTube on a daily basis, and it would benefit their understanding to classify the technologies around them as AI or not. Branching from this, students can build a basic recommender system using supervised learning techniques to see how the systems they interact with work on a more abstract level. This would give students a chance to recreate one of the applications they interact with on a daily basis.
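As one possible shape for the exercise, here is a toy nearest-neighbor recommender (the users, videos, and watch counts are invented for illustration):

```python
import numpy as np

users = ["Ava", "Ben", "Cam"]
videos = ["cats", "soccer", "space", "cooking"]

# Rows are users, columns are videos, values are watch counts.
watches = np.array([
    [5, 0, 3, 0],   # Ava
    [4, 1, 3, 0],   # Ben (similar taste to Ava)
    [0, 5, 0, 4],   # Cam
])

def recommend(user_idx):
    # Find the most similar other user by cosine similarity...
    target = watches[user_idx]
    sims = watches @ target / (
        np.linalg.norm(watches, axis=1) * np.linalg.norm(target) + 1e-9)
    sims[user_idx] = -1           # don't compare a user with themselves
    neighbor = int(np.argmax(sims))
    # ...then suggest videos that neighbor watched but this user hasn't.
    unseen = (target == 0) & (watches[neighbor] > 0)
    return [v for v, pick in zip(videos, unseen) if pick]

print(recommend(0))  # Ava's suggestions, drawn from Ben's history
```

Even at this tiny scale, students can see the core idea behind their feeds: the system guesses what you will like based on people who behave like you.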
For week 3, I did research about how bias is a common, yet difficult to eradicate, problem within AI systems. In the previous week, my peers selected papers revolving around teaching kids about AI, and I'll be comparing and contrasting two of their paper choices with my own.
Bias has been a longstanding issue in the field of AI and will likely be something AI researchers must cope with in the coming decades. The underlying problem comes from the fact that models are trained on real-world data to complete a variety of tasks, but the real world is filled with different kinds of bias. In Joy Buolamwini's research, she highlighted a performance disparity in facial recognition models when the user of the model had light versus dark skin. Let's look at another example: an image recognition model trained to view images of stove tops and estimate the owner's monthly income in USD. While stovetops are commonly used all around the world, there is more high-quality training data of stovetops from wealthier countries than from poorer countries, which in turn causes a performance disparity linked to the wealth of an individual. This is a real-world example, and there are groups working to make AI training data more diverse. A recently released dataset called "Dollar Street Commons" is a collection of images of everyday household items from homes around the world that visually captures the socioeconomic diversity of traditionally underrepresented populations. Someone reading about the dataset might not think bias is all that important in this case, since recognizing household items doesn't affect people, but it does. The same data inequalities can be seen in models trained to detect valid resumes, models for autonomous driving, and much more. If society is going to continue integrating AI into daily life, bias needs to be a serious consideration for the type of behavior expected from our models.
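To make "performance disparity" concrete, here is a toy calculation of per-group accuracy (the predictions below are invented for the example; real audits like Gender Shades do this over large labeled test sets):

```python
from collections import defaultdict

# (group, true_label, predicted_label) for a handful of test images
results = [
    ("wealthy", "stove", "stove"), ("wealthy", "stove", "stove"),
    ("wealthy", "stove", "stove"), ("wealthy", "stove", "not_stove"),
    ("poorer",  "stove", "stove"), ("poorer",  "stove", "not_stove"),
    ("poorer",  "stove", "not_stove"), ("poorer", "stove", "not_stove"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += (truth == pred)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy")
# The gap between the two groups is the disparity described above.
```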
For my paper analysis, I will be examining "DoodleIt: A Novel Tool and Approach for Teaching How CNNs Perform Image Recognition" by Vaishali Mahipal et al., "Evaluation of an Online Intervention to Teach Artificial Intelligence with LearningML to 10-16-Year-Old Students" by Juan David Rodríguez-García and others, and "Introducing the Fundamentals of Artificial Intelligence to K-12 Classrooms According to Educational Neuroscience Principles" by Thannassis Drivas and Spyridon Doukakis. All of the papers focus on developing AI education in one form or another. The paper I chose used neuroscience principles to develop a two-part, three-hour workshop to teach kids abstract and technical details about AI. While the methodology for building a good workshop was there, no novel technology was used, and the workshop was only proposed, never carried out on students to get insights into its efficacy. The "DoodleIt" paper, on the other hand, similarly proposes a curriculum, except it uses novel techniques and is 18 hours long. The paper uses DoodleIt to teach kids about how machines can use image recognition and the ethics surrounding the topic. The third paper of interest developed "LearningML", which enabled young learners to design their own accessible ML models. Similar to DoodleIt, LearningML was a novel tool which allowed students to interact with machine learning models in a way they might not have had the opportunity to otherwise. In addition to engaging students, LearningML also acted as an easy-to-use tool for teachers in educational settings. Each of these papers conducts research in different ways, and all have their own respective merits. The main difference between DoodleIt and LearningML is the settings they would be used in, DoodleIt being more of a child-AI interaction research tool and LearningML falling more towards a tool for educators. In all three cases, emphasis was placed on maximizing the expected benefit to the students who might be interacting with the workshop/tool. The neuroscience-focused paper made special considerations towards which environments are the most engaging and enriching for students, just as LearningML aimed to be both simple and powerful; traits also displayed by DoodleIt. It was interesting to see how the different papers went about collecting metrics on how their tool/workshop might be used. For instance, the neuroscience-based AI workshop didn't really collect any metrics on the three-hour experience because no testing was done on students. Quantitative and qualitative data was collected from students interacting with the DoodleIt application in the form of paper assessments and interviews. Finally, much of the data collection done with LearningML was in the form of question and answer.
🧠
For week 4, I researched different project directions and completed additional research trainings.