Although this study was conducted with much younger kids (4-6 years old), many of its concepts and procedures can be useful for us in developing our tools.
The very simple games they created could easily be adapted and expanded for an older audience. Although they teach relatively simple concepts in very simple ways, even that small amount of knowledge can change the way a child thinks about the devices and programs they interact with. I especially enjoyed the supervised machine learning activity, and I like the way it simplified the AI down to only two possible categories. I also appreciate that in each activity the AI explains the reasoning behind its actions.
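To make that concrete for myself, here is a minimal sketch of what the logic behind a two-category supervised learner that explains its choices might look like. The fruit measurements are entirely made up, and this is my own toy, not anything from the paper:

```python
# Toy two-category supervised learner that explains its reasoning.
# Hypothetical training data: (length_cm, weight_g) -> category.
training_data = {
    "apple":  [(7, 150), (8, 170), (7, 160)],
    "banana": [(18, 120), (20, 130), (19, 125)],
}

def centroid(samples):
    """Average each feature across the samples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

centroids = {label: centroid(samples) for label, samples in training_data.items()}

def classify(sample):
    """Pick the category whose average example is closest, and say why."""
    distances = {
        label: sum((a - b) ** 2 for a, b in zip(sample, c)) ** 0.5
        for label, c in centroids.items()
    }
    best = min(distances, key=distances.get)
    print(f"{sample} -> {best}: it is closest to the average {best} "
          f"(distance {distances[best]:.1f})")
    return best

classify((17, 128))  # -> banana
```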
The assessments the paper presents, for both the theory-of-mind scores and the AI-understanding scores, also seem very useful. They give us a good framework to reference when we eventually write our own assessment questions, though of course ours will have to be written for an older and more advanced audience.
One interesting thing about the results is that the Pre-K children (4-5 years old) on average agreed with the statements that robots were more like toys than people and that robots were smarter than themselves. The paper goes on to say that the "children did not see a contradiction in a toy being more intelligent than themselves."
One thing that stood out to me about this paper is that the methodology is much more in-depth than other papers I have read. I actually feel like if I had enough time I could create a tool very similar to this one.
Also, I found the lengthy student interview sections very helpful. I think that it is important to talk with the students and think through their reasoning in order to improve the tool and better connect with students.
I also really liked the idea of asking students open-ended questions with no real "right" answer, like they did in this paper. If you just tell students the right answer to a question, they will not have the practice of working through it and are therefore less likely to remember it and less likely to be able to reason through a slightly different but related question in the future.
Another thing that stood out to me was the kernel paper activity. Although the concepts being taught exist almost entirely inside computers, being able to manipulate two physical objects and see how they interact is likely to leave a more lasting impression than even an on-screen visualization. The kernel paper activity also let learners put themselves in the shoes of the AI and see some of the logic behind how and why it gives the results it does. This exercise could be expanded into an entire study of its own, in which students semi-manually construct a simple CNN using filters they understand.
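As a note for later, here is roughly what one of those "filters they understand" computes. The tiny image and the vertical-edge kernel are my own toy example, not taken from the paper:

```python
# Apply a hand-written 3x3 edge-detection kernel to a tiny grayscale
# image with plain loops -- the same sliding-window motion as the
# paper activity, just in code.
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

# Vertical-edge kernel: responds where brightness changes left-to-right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(image, kernel):
    """Slide the kernel over the image and sum the element-wise products."""
    out = []
    for r in range(len(image) - 2):
        row = []
        for c in range(len(image[0]) - 2):
            total = sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(3) for j in range(3)
            )
            row.append(total)
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)  # e.g. [0, 27, 27, 0]
```

The large values in the output mark exactly where the image changes from dark to bright, which is the kind of conclusion students could reach by hand with the paper version.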
This paper describes and analyzes the results of three online workshops designed by the research team and co-taught by middle school teachers around the US. The activities in the workshops were designed according to three principles: active learning, embedded ethics, and low barriers to access.
Active learning is a method of instruction in which "students drive the learning process." It has been shown to yield better results than passive learning, especially in STEM. The exercises in this study were designed to use active learning "through hands-on activities, often discovery activities," and then to build on the ideas students encountered with further discussion and questions. For example, students were put in the role of a given algorithm and left to draw their own conclusions about its reasoning.
Embedded ethics is simply teaching technical and ethical concepts at the same time, rather than separating them. This has the benefit of contextualizing applications of technologies and connecting them to current real-world issues.
Keeping barriers to access low is important in technology education because the goal is to teach everyone: students who do not have a STEM background, students who do not have a computer to work on, and even students who are simply not interested in computing. Children will continue to interact with these technologies more and more regardless, so they all need to be taught.
Another thing I noticed that may be helpful for the development of our project is the inclusion, in appendices 4, 5, and 6, of the questionnaires used in the studies. Since we will need to create our own questionnaires, these can serve as a good starting point.
The paper also provides its activities in appendices 1, 2, and 3, many of which could be adapted for use in our context. Many take 30-60 minutes, so they would have to be shortened to fit within the constraints of this project, but there are a lot of good ideas here.
You can read the paper here: Williams et al. 2022
This week I was tasked to "think of three concepts in AI/ML/DS you might be interested in developing a project around," and to "write ~250 words describing each of these three AI concepts and situating each of them in the 'Five Big Ideas' themes developed by Touretzky et al."
One idea I had was a game to demonstrate how content recommendation algorithms on social media sites operate.
The student would act as the recommendation algorithm, serving videos to users and attempting to maximize given metrics such as watch-time, likes, and comments. Each video would have different attributes, such as "educational," "funny," or "sad," and each user would have different interests, depending at least partially on their demographics.
After the player spends some time serving content to users themselves, they will teach a simple goal-oriented AI to do it for them. They will set priorities for metrics (e.g., comments > watch-time > likes) and watch as the AI learns from the users' actions and changes its behavior to maximize those metrics.
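To sketch what that hand-off might look like under the hood (every title, tag, and number below is an invented placeholder, not a design decision):

```python
# Toy serving loop: score each video for a user, serve the best one,
# then learn from how the user reacted.
videos = [
    {"title": "Science lecture", "tags": {"educational"}},
    {"title": "Cat clip",        "tags": {"funny"}},
    {"title": "Tearjerker",      "tags": {"sad"}},
]

# The AI's current guess at how much this user likes each tag.
interest = {"educational": 0.1, "funny": 0.5, "sad": 0.4}

def score(video):
    """Predicted engagement: sum of learned interest in the video's tags."""
    return sum(interest.get(tag, 0.0) for tag in video["tags"])

def learn(video, watch_fraction, liked):
    """Nudge interests toward tags that earned watch-time and likes."""
    reward = watch_fraction + (1.0 if liked else 0.0)  # crude metric mix
    for tag in video["tags"]:
        interest[tag] = 0.8 * interest[tag] + 0.2 * reward

# One round: serve the top-scoring video, then update from the reaction.
best = max(videos, key=score)
print("Serving:", best["title"])
learn(best, watch_fraction=0.9, liked=True)
print("Updated interests:", interest)
```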
At the end, we might prompt the user to think about how the goals of the algorithm affect the content any given user is served, and about the ethical and privacy ramifications of these types of systems.
This fits well within Touretzky et al.'s Big Idea #3: Computers can learn from data.
My second idea involves kids sorting data into categories for an AI and then testing its accuracy with a test set.
The student would spend a few minutes sorting the data and feeding it to the AI, and the AI would show its model at every step. After a certain number of samples, the student would be able to test the AI, perhaps with their own handwriting, and see the AI's decision and its confidence.
Using this, we could demonstrate how the AI builds its model and how it learns from training data. We could also demonstrate how the size and quality of a dataset impacts the model's accuracy.
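As a rough sketch of that demonstration, assuming we used something like scikit-learn's bundled handwritten-digit set as a stand-in for the samples the kids would sort:

```python
# Show how accuracy grows with the size of the training set, then
# report a single prediction with its confidence.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

for n in (10, 50, 200, len(X_train)):  # growing training sets
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} samples -> test accuracy {model.score(X_test, y_test):.2f}")

# Confidence for one test sample: the fraction of neighbors that agree.
probs = model.predict_proba(X_test[:1])[0]
print("Prediction:", model.classes_[probs.argmax()],
      "confidence:", round(probs.max(), 2))
```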
We could use this activity to bring attention to real-world parallels such as AI's use in medical imaging, and we could also make a point about bias (e.g. what if all the samples were from right-handed writers?).
This idea would also fit into Big Idea #3: Computers can learn from data, though it could also fit within Big Idea #2: Agents maintain models/representations of the world and use them for reasoning.
Another idea is to have the kids interact with and learn from multiple, increasingly sophisticated AI programs.
For example, we could first define the term "artificial intelligence," then give some examples of rule-based programs such as graph search algorithms. We could then introduce more sophisticated programs and show that AI is a very broad term, with many kinds of programs fitting within its bounds in some way. Finally, we could introduce machine learning as a subset of AI and teach the students the basic workings of common machine learning algorithms, such as convolutional neural networks and generative adversarial networks.
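To illustrate the rule-based end of that spectrum, something as small as breadth-first search makes the point: the program below behaves "intelligently" purely by following fixed rules, with no learning anywhere. The map is a made-up example:

```python
# Breadth-first search: a rule-based program that finds a shortest path
# by exploring outward level by level.
from collections import deque

graph = {  # a small invented map: node -> neighbors
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def bfs_path(start, goal):
    """Return a shortest path from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(bfs_path("A", "E"))  # ['A', 'C', 'E']
```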