This paper is surprisingly relevant to our task at hand and provides useful design considerations for an AI project involving middle schoolers. It correctly identifies the growing importance of AI pedagogy among younger children, and I was surprised by the quoted statistic that “students who do not express STEM-related aspirations by the age of 10 are unlikely to develop them by the age of 14.” This reminds us how important it is to introduce STEM-related concepts, like AI, to children at a much younger age, and thus to make our AI tools accessible and engaging for children.
Its program DAILy uniquely incorporates ethical considerations into every unit the participants went through, and it provides insight into the reactions and takeaways of this 10-14 age range. I find it especially fascinating that most of the participants really liked the activities regarding ethics: “10 out of the 11 students who volunteered to be interviewed after the workshop thought [the ethics activities] were their favorites.” It leaves me wondering why so many of the children loved the ethics parts of the course. In building our AI tool, it will be important to keep this finding in mind, and I will likely try to incorporate some form of ethics considerations into our tool. We could even center our tool on this, by putting child participants in some sort of ethical dilemma which they have to use AI to solve.
The most immediately surprising thing about this paper is the sheer scope of the literature review behind its framework's considerations and steps: 150 documents. This volume of reviewed material, including many pieces of gray literature (work outside traditional publishing channels), shows that the paper's considerations rest on a significant research basis. One of the things I appreciated most about the paper is its set of design considerations, which are very useful to keep in mind when creating any sort of project related to AI literacy, especially when developing AI tools for children. Many of these design considerations, as well as the basic competencies, will greatly aid us in planning our AI tool.
When applied to children, the paper’s most useful design consideration is its first: “including graphical visualizations, simulations, explanations of agent decision-making processes, or interactive demonstrations.” Without any idea of what is actually going on behind the scenes of an AI tool, curious children are naturally left to believe that some truly intelligent thinker, or even magic, lies behind the AI’s workings. The paper takes this into account, acknowledging that children who have no idea how AI works often overestimate its intelligence. I also find interesting the importance the paper places on transparency with regard to AI ethics.
The most surprising competency that the paper considers is that computers learn from data. In a world of increasing data collection and shrinking privacy, I think it is especially important for people to realize that their data is helping AI become more powerful. This is a significant ethical consideration, and I very much appreciate its inclusion in the paper.
(This was performed with GPT-4)
Prompt: Pretend you are an elementary school teacher that has to create a syllabus for a class that teaches artificial intelligence to young students. How would you structure the class, and what would you focus on teaching the students?
Response:
Creating a syllabus for teaching artificial intelligence (AI) to elementary school students involves balancing between imparting foundational knowledge and ensuring the content is engaging and age-appropriate. The course should be structured to foster curiosity, encourage creative thinking, and provide hands-on experiences. Here's a proposed structure:
Grade Level: 4th to 5th Grade
Duration: 10 Weeks (1 Hour per Week)
Course Overview:
This course introduces young learners to the basics of artificial intelligence. Through interactive activities, students will learn how AI impacts our daily lives, explore basic concepts, and engage in creative projects that illustrate how AI works.
Week 1: Introduction to AI
Objective: Understand what AI is and identify its presence in everyday life.
Activities: Group discussion on AI in daily life, introduction to AI through storytelling.
Week 2: Understanding Intelligence in Humans and Machines
Objective: Differentiate between human intelligence and machine intelligence.
Activities: Interactive exercises comparing human and AI capabilities.
Week 3: Learning How Machines Learn
Objective: Grasp the basics of machine learning.
Activities: Simple demonstrations of machine learning (e.g., sorting games).
Week 4: Exploring AI in Games
Objective: See how AI works in computer games.
Activities: Play and analyze simple AI-based games.
Week 5: AI and Creativity
Objective: Discover how AI can be used in creative processes.
Activities: Experiment with AI tools for drawing or music composition.
Week 6: Robots and AI
Objective: Learn about the role of AI in robotics.
Activities: Demonstrations with a simple programmable robot.
Week 7: AI for Problem Solving
Objective: Understand how AI helps solve real-world problems.
Activities: Case studies of AI in weather prediction, healthcare, etc.
Week 8: Ethical Considerations in AI
Objective: Introduce basic ethical considerations of AI.
Activities: Group discussions on fairness, privacy, and the role of AI.
Week 9: Project Week
Objective: Apply learned concepts in a creative project.
Activities: Students start working on a simple AI-related project (like creating a story about an AI, drawing an AI concept, etc.).
Week 10: Project Presentation and Course Recap
Objective: Showcase projects and review key learnings.
Activities: Project presentations, course recap, and feedback session.
Assessment:
Participation in discussions and activities.
Completion of a simple project demonstrating understanding of AI concepts.
Materials:
Age-appropriate books and videos on AI.
Access to simple AI tools (safe for children).
Basic programmable robots or AI-based game software.
This syllabus aims to demystify AI for young students, making it accessible and interesting, while also laying a foundation for future learning in this rapidly evolving field.
end of response
This paper approaches AI literacy from a different perspective, considering it in philosophical terms. It was written in 2005, before AI was the developed field it is today, so its authors were thinking more about robots than about neural-network/machine-learning AI; they considered AI to be anything that could perform a humanoid task. The paper emphasizes the creation of concept maps to help students better understand AI tasks. Some staple concepts that tie into the concept map are:
Perception - the robot’s ability to perceive its environment and turn it into meaningful data, such as sensing a light beam, a waved hand, or the click of a mouse
Processing - the AI’s ability to perform calculations on that data, representing it in various formats
Response - the AI’s ability to produce an accurate response, be it an accurate classification/regression or a robotic movement
These concepts can be discussed with an older (8-12) class and their philosophical implications dissected. Part of an extensive AI literacy repertoire is AI ethics, and approaching AI ethics from a philosophical perspective may prove effective, although barely any research has been done on it.
Idea 1: Text generation is an interesting concept, and I have seen scant research on how children interact with it. For example, a participant would enter some writing of theirs, or writing from several different people, teachers, or students, and then try to guess which person the AI model is imitating. We could monitor the child’s inputs and have them try to predict how the AI model would respond. A questionnaire would then be handed out, gauging students’ ability to understand the AI model’s text-generation capabilities and asking whether other generative models, like ChatGPT, are then also intelligent.
This tool would reinforce Big Idea #3 - showing that computers learn from data. Simply given an example of someone’s writing, the model can learn to imitate that person, hopefully well enough that the participant can deduce who it is imitating. However, this application will likely also demonstrate Big Idea #4 - the model will probably not produce fully believable or even human-like text (if we create a model from scratch instead of using ChatGPT). If students have an imperfect experience with the AI tool, they will likely conclude that the model is sub-human in intelligence and that its ability depends entirely on how it is developed.
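A from-scratch imitation model like this could start very simply. The sketch below uses a toy character-level Markov chain: it learns which characters tend to follow which short contexts in one person's writing sample, then generates new text in a loosely similar style. All names and the sample text are hypothetical; a real tool would train one model per teacher or student.

```python
import random
from collections import defaultdict

def train_markov(text, order=3):
    """Build a character-level Markov model: map each context of
    `order` characters to the characters observed to follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def imitate(model, seed, order=3, length=80):
    """Generate text in the style of the training sample, starting from a seed."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # context never seen in training: stop generating
            break
        out += random.choice(choices)
    return out

# Hypothetical writing sample from one "teacher"
sample_a = "the cat sat on the mat and the cat saw the rat on the mat " * 5
model_a = train_markov(sample_a)
print(imitate(model_a, "the"))
```

Even this crude model produces recognizably "in-style" output for distinctive writers, which is exactly the imperfect, clearly sub-human behavior that could help students calibrate their sense of the model's intelligence.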
Idea 2:
Teaching an AI character, such as a miniature avatar, how to perform movements such as walking, running, or crawling would be a fascinating experience for the user. The user could first program individual limb movements, or give a text-based input describing the type of movement the character should perform. Although I do not yet know how to implement this technically, the AI tool could then use a pretrained model of video-game character movement to carry out the user’s requests. For young children this would be an especially engaging activity: it relates to the video games they enjoy, and they would be immersed in the process of programming a character that moves however they want.
Again, Big Idea #3 ties into this activity: the data about how the student wants the character to move will directly affect how the character actually moves, as the AI takes in movement data and outputs corresponding actions. We could also incorporate Big Idea #1 by immersing the character in an environment where it needs “vision” of the surrounding terrain in order to traverse it with the assigned movement, with an on-screen visualization of what the AI is “seeing”. Users watching the AI character traverse the terrain would realize that it senses the virtual world around it and makes decisions accordingly.
Idea 3:
We can also expose students to an activity where they face ethical challenges and explore the possible dangers of a poorly trained AI model. We would develop a tool in which the participant trains an AI model to discriminate between “good guys” and “bad guys”, perhaps based on their attire or any property the participant chooses. We essentially give the AI model a supervised classification task in which the student user has the power to shape its decision-making.
Students may feel extra pressure to train the model correctly, fearing they might “harm” any characters they sympathize with.
This application explores Big Idea #5 - AI applications can sometimes affect others negatively. In this case the application does not hurt anyone real, but it simulates a possible real-world situation in which an AI system must identify threats among a population and make decisions based on those identifications. Children using this technology will be given a post-usage questionnaire gauging their understanding of the ethical implications, and we will test the effectiveness of an ethics game like this at introducing future child learners to technology with ethical stakes.
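The core of this tool could be as simple as a nearest-neighbor classifier over whatever character features the child chooses to label. The sketch below is a minimal version under assumed details: the feature names ("hat", "cape") and labels are hypothetical. The point it demonstrates is that the model's judgments come entirely from the child's labels, so biased labeling produces biased predictions.

```python
class SimpleCharacterClassifier:
    """A toy 1-nearest-neighbor classifier over character features."""

    def __init__(self):
        self.examples = []  # list of (features, label) pairs the child provides

    def teach(self, features, label):
        self.examples.append((features, label))

    def predict(self, features):
        # Pick the label of the taught example sharing the most feature values
        def similarity(example):
            taught_features, _ = example
            return sum(1 for k in features if taught_features.get(k) == features[k])
        best_features, best_label = max(self.examples, key=similarity)
        return best_label

clf = SimpleCharacterClassifier()
# The child labels a few characters; the "attire" features are made up
clf.teach({"hat": "dark", "cape": True}, "bad guy")
clf.teach({"hat": "light", "cape": False}, "good guy")
# A new character the child never labeled is judged only by resemblance
print(clf.predict({"hat": "dark", "cape": True}))
```

A debrief could then show the child that a character was condemned purely for resembling earlier examples - the simulated analogue of the real-world harms Big Idea #5 describes.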
The researchers in this study framed their entire work around children's “Theory of Mind” - an understanding of the cognitive behavior of a person or inanimate object, and the ability to recognize its capabilities in an applied setting. By splitting the assessment of this Theory of Mind into three separate areas - Knowledge-Based Systems, Supervised Machine Learning, and Generative AI - the child participants were able to explore the vast range of things AI can do, all through a very simple interface: little LEGO PopBots and Scratch block programming, which facilitated the children's engagement with the tool. The researchers considered relevant questions about the Theory of Mind such as “How did children's perceptions of ‘thinking machines’ change after they engaged in educational AI activities?”
This research also shed light on many misconceptions children have about AI. Naturally, children who were more successful in training the robot tended to rate the robot as smarter than themselves, and vice versa for those who performed poorly. This divide in opinions is illustrated perfectly in the dialogue between two students who disagree on whether the robot is like a toy or like a person: one believed the robot had both humanoid and toy-like properties, while the other believed these robots are just toys and possess no real humanoid characteristics. Overall, this study demonstrated that elementary students do not tend to possess a full understanding of other beings’ intellectual capabilities, and that many of their beliefs about robots may be influenced by pop culture or by their parents’ beliefs.
The tool created for this paper immediately reminded me of Google’s “Quick, Draw!” The researchers report that it was less effective than Google’s, most likely because they lacked access to training datasets of the size Google commands. However, its flaws were the interesting part to me. Students were quick to pick up on the convolutional neural network’s shortcomings, noting specific mistakes such as frequently guessing “cake” when the drawing was not actually a cake. It seems the students involved in the activity were old and mature enough to realize that the AI tool was not actually intelligent like a human, and was imperfect and manipulable.
Unlike other neural network tools, DoodleIt uniquely showed the convolutional kernel layers to the user, demonstrating what the tool is actually doing. This takes away the black-box aspect of the tool and lets users realize that the AI is simply an algorithm, just like any other computer program they have coded before. I find it interesting that students made predictions about why the AI tool made certain decisions: for example, one participant guessed that the network tended to guess “cat” frequently because of the way its layers were set up.
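The intermediate activations DoodleIt exposes come from plain 2D convolution, which can be shown in a few lines. The sketch below (my own illustration, not DoodleIt's code) slides a tiny vertical-edge kernel over a 4x4 "drawing" and prints the resulting activation map - the kind of layer output a tool could display to demystify the network.

```python
def convolve2d(image, kernel):
    """Slide a kernel over a 2D image and return the activation map,
    the kind of intermediate layer a tool like DoodleIt can display."""
    kh, kw = len(kernel), len(kernel[0])
    activation = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Dot product of the kernel with the image patch under it
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        activation.append(row)
    return activation

# A tiny 4x4 "drawing" with a vertical edge down the middle
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1]]  # fires where brightness increases left-to-right
print(convolve2d(image, kernel))  # high values mark the edge location
```

Seeing that each "layer" is just arithmetic like this is precisely what lets students reason, as the participant above did, about why the network favors certain guesses.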
Bias in AI
Bias in AI is still widespread, and one clear cause of it can be seen right at my school and in computer science departments in general: men go into the field of computer science much more often than women do. This may be due to a variety of factors, including the preexisting lack of women in the field, which discourages women from joining a field they believe is dominated by men. Another explanation may be a lack of encouragement before higher education, especially in high school, for young women to pursue computer science as a college major. Preexisting stereotypes that women shouldn’t do STEM may also affect their decisions about pursuing computer science.
This overall lack of women in CS fields, including AI, could explain many of the biases that Dr. Buolamwini explored. Developers may fail to put equal emphasis on women in AI face detection because the developers are themselves men and lack diverse perspectives from women. Similarly, because the field of CS is dominated by White and Asian developers, they may fail to build technology that accounts for other races, leading to its common failure to detect darker faces.
Project Plan Draft
This project will attempt to introduce the ethics of AI to younger children by showing them how an AI-powered character can interact with a virtual "nature environment", where children can change factors in the natural environment, affecting the way the AI behaves.
It will especially focus on the "computer vision" Big Idea in AI, where the child user will be able to see what the AI sees, such as the AI-vision models sometimes shown in ads for self-driving cars. As this is a fairly complex system, the AI will likely make some mistakes, and the child will be able to see how those mistakes affect the AI's interaction with the environment.
Children will interact with the system by altering a simulated natural environment in various ways - changing the terrain, amount of resources, or by putting limitations or roadblocks on the AI bot itself. They will be able to observe the effects that these actions have on the AI's decision making.
We will gather questionnaire data on what the children think of the AI's decision-making abilities, as well as how well it can "see" its environment. We hope to solidify our understanding of what children think of computer vision models, especially in applications such as navigating a simulation of a natural environment.
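A first prototype of the "seeing" mechanic could be a grid world in which the agent only acts on cells within its vision radius, and that same visible region is what gets drawn for the child. The sketch below is a minimal version under assumed details: the cell types ("grass", "rock", "food"), the vision radius, and the greedy move policy are all placeholders for whatever the real simulation uses.

```python
def perceive(grid, pos, radius=1):
    """Return what the agent 'sees': the cells within `radius` of its position.
    Rendering exactly this view on screen shows children what the AI sees."""
    r, c = pos
    view = {}
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                view[(rr, cc)] = grid[rr][cc]
    return view

def choose_move(grid, pos):
    """Greedy policy: step onto an adjacent 'food' cell if one is visible."""
    view = perceive(grid, pos)
    for (rr, cc), cell in sorted(view.items()):
        if cell == "food" and abs(rr - pos[0]) + abs(cc - pos[1]) == 1:
            return (rr, cc)
    return pos  # nothing useful seen: stay put

# A tiny nature grid the child could edit ("rock" blocks, "food" attracts)
grid = [
    ["grass", "rock",  "grass"],
    ["grass", "grass", "food"],
    ["grass", "grass", "grass"],
]
print(choose_move(grid, (1, 1)))
```

Because the agent's decisions depend only on `perceive`, a child who edits the terrain immediately changes both what the AI sees and what it does - the cause-and-effect loop the project plan aims for.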
One thing I find interesting about Shuchi Grover’s paper is its emphasis on computational thinking, or CT as the paper abbreviates it. The paper discusses aspects of CT that transcend programming, including thinking through problems while designing or using an AI tool, which will be an important consideration in developing our own. I also find it interesting that the paper mentions integrating this framework into other fields of study such as data science and cybersecurity; I like that it goes beyond AI literacy alone. I am curious to see what possibilities AI holds for cybersecurity, as it could lead to many future security problems as well as fixes.
I also appreciate the various existing projects the study mentions, which could serve as inspiration for our upcoming project. The Find the Minimum game appeals to me in its simplicity of design, as well as its effectiveness in showing young learners how gradient descent works. I hope to develop a tool with the same creativity as this one.
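The idea the game visualizes - repeatedly stepping downhill - fits in a few lines of code. This is my own minimal sketch of gradient descent (the example function and learning rate are arbitrary choices), not the game's implementation.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step downhill along the gradient - the motion the
    Find the Minimum game lets learners perform by hand."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3
grad = lambda x: 2 * (x - 3)
print(round(gradient_descent(grad, x0=0.0), 3))  # converges to 3.0
```

The game's insight is that children can act out this loop themselves - guess, check the slope, step downhill - before ever seeing the formula.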
The paper’s mention of the speed at which the field of AI is developing is especially relatable to me as an amateur AI educator - with new AI products being introduced every single day, it can prove very difficult to keep a class about artificial intelligence up to date.
The Ask Me Anything chatbot was developed to gauge students’ trust in AI’s ability to hold a conversation and answer their questions. The study measured many different aspects of the students’ interaction with the chatbot, including how many questions they asked, how useful they think the bot could be in daily life, and how much they trust it. The bot’s sanitized responses ensured that it did not output anything inappropriate, and the robust GPT-3.5 backbone made grossly incorrect responses unlikely. That said, slightly incorrect responses remained possible, given the imperfect nature of any AI chatbot, which could lower student users’ trust or lead them to evaluate the bot as unintelligent.
I appreciate the accommodations made for younger users (first graders), who may have trouble typing inputs or understanding post-survey questions: the researchers helped the students manually type in their questions and simplified the post-survey questions so the students could answer them more easily. These are all accommodations we need to take into account when designing a bot of our own.
The method adopted by the Italian researchers is a simple example of a two-group study: one group serves as the control, while the other tests the actual efficacy of the AI teaching method. The exclusion of some children was necessary, as their circumstances prevented them from contributing meaningfully to the research and could have introduced unwanted bias - a consideration we also need to make when running our studies. Since our research period with each child test subject will be only about 5 minutes, the set of two-hour lessons may not apply here as much.
The researchers collected precise data on the impact the AI activities had on the participating children using different tests, such as the computational thinking test and the AMOS beliefs questionnaire. They gathered the children’s basic beliefs about what intelligence is and what they think AI can do, and tested their ability to apply the skills they had learned to solve problems in the field of AI. I think this style of research is very straightforward, and while the scale of our research will be smaller, we should aim to mirror it as closely as possible.