I was surprised to see that the authors were able to find a way to teach children CNNs in an understandable way. I’m not familiar with CNNs myself, but they seem to be a complex topic. This software could even be helpful for older individuals who are interested in learning the basics of CNNs.
Although the software wasn’t perfect, as Students 3 and 4 expressed, it was nice that it showed an accurate representation of what was going on behind the scenes. I also liked the idea of supplementing the project with a pen-and-paper exercise (the kernel paper activity). This seemed like a good way to get the children to understand what they were doing in Doodlet. Even better, no knowledge of matrix multiplication or complicated computation was required to complete the worksheet. Keeping the material abstract and less technical is, I think, a great way to introduce children to these complicated concepts. Allowing the students to draw for themselves surely increased their curiosity and engagement with the application.
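The kernel paper activity boils down to sliding a small grid of numbers over a drawing and summing element-wise products. A minimal sketch of that idea in code (the grid and kernel values below are invented for illustration, not taken from the paper):

```python
def convolve(grid, kernel):
    """Valid (no-padding) 2D convolution: slide the kernel over the grid
    and sum the element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(grid) - kh + 1):
        row = []
        for c in range(len(grid[0]) - kw + 1):
            total = 0
            for i in range(kh):
                for j in range(kw):
                    total += grid[r + i][c + j] * kernel[i][j]
            row.append(total)
        out.append(row)
    return out

# A tiny 4x4 "drawing" with a vertical stroke in the second column.
drawing = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
]
# A vertical-line kernel: it responds strongly where the stroke lines up.
kernel = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
print(convolve(drawing, kernel))  # [[3, 0], [3, 0]]
```

This is exactly the multiply-and-add a child can do on paper, cell by cell, with no matrix-multiplication machinery involved.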
This study consisted of four students; as mentioned toward the end of the paper, this wasn’t enough to draw any statistical conclusions. I would be curious to see the results of this project on a larger group of students and how they react to it. Including students from all backgrounds, non-STEM children among them, could, I think, be very beneficial for a study.
A total of three different assessments were given to the children during this research. A Perception of Robots questionnaire assessed their overall understanding of robots. The questions were simple and straight to the point. This test was given before and after they completed all the activities, in order to see how their answers changed after going through PopBots.
Since the children who used PopBots were in Pre-K and Kindergarten, the authors had to keep in mind that some children had not yet developed Theory of Mind skills. They adapted Wellman and Liu’s Theory of Mind assessment to match questions related to the PopBots activities (the Theory of Mind Assessment). The researchers included this test because Theory of Mind affects how the children perceive the PopBots activities, as well as the questions from the other exams.
The children were also given a set of multiple-choice questions (the PopBots Assessments), which tested their knowledge of the specific topic they were learning. Each of these questions had an underlying purpose that would show the researchers how well the children understood each concept. This exam was given after they completed the activities. I particularly liked this assessment because the questions are simple and fun, while also giving the researchers important data on how well the children understand the AI topics.
Overall, I believe this paper is a good reference for conducting research and collecting valuable data. There are many important takeaways, including the assessments themselves, the order in which they were given, the style of assessment (graphical and verbal), the analysis of the data (Chi-square goodness-of-fit test, Wilcoxon), and the collection of supplemental data (recording the learners’ reactions throughout the activities). It’s also important to keep in mind the number of students involved (80 students). I hope that when we do our own research, we have enough kids to draw statistical conclusions.
This paper explores whether children (ages 10-13) are able to understand machine learning concepts. There are many black-boxed concepts in ML, some more complex than others, some more crucial than others. The researchers decided to open the black box on two of those concepts: Data Labeling and Evaluation. They explored whether children are able to understand ML through only Data Labeling, through only Evaluation, or whether it requires both. To conduct this experiment, the authors used a system that recognized hand gestures through an input device, which sent data to their software, paired with an interface for interaction. A total of 30 children participated in this project. For Data Labeling, a participant would perform a motion with their hand while holding the input device and train the system on what the motion was (circle, circle and square, no motion). For Evaluation, the children would give the previously trained model new data and see how it reacted. As mentioned, the participant group was split so that some children performed only Data Labeling, some performed only Evaluation, and some performed both. After a series of questions and interviews (pre, during, and post), the researchers found significant understanding of ML concepts only when the children participated in both Data Labeling and Evaluation. The authors also attribute much of this success to the hands-on, dynamic, interactive nature of the project. The children were given the opportunity to tinker, debug, and play around with the model, which I also think is extremely important to the learning process. The authors leave this as the final sentence: “... we encourage educators to provide children with more opportunities for direct experience with ML building blocks in an iterative process of trial-and-error.”
I really like the idea of a sandbox environment for learning. For example, in MIT’s Scratch, the user has the freedom to build whatever they want within a friendly interface. This gives them the ability to tinker, experiment, and express creativity as they please. I would like to incorporate the same idea, but applied to machine learning and data science. The idea is that the user creates a blank robot, then picks from a wide range of data sets available in the application. Keep in mind, this should all be high-level and graphical to make it simple and understandable for a young learner. Once they choose a data set, they feed it to the robot (here they can learn to split data into training and testing sets); next, they can train their robot (supervised learning, reinforcement learning, unsupervised learning) to achieve their goal. Whether they want their robot to predict future weather forecasts or identify their favorite cartoon from a picture, the idea is to leave that up to the user. I really believe this type of freedom, in contrast to software that guides the learner through a set of activities, can be very beneficial to a young learner. Not to say the latter is ineffective, not at all; but I know that if I were in middle school again, I would love to tinker and play around with something like this; even now I would. This idea incorporates the following ideas from the Five Big Ideas in AI: computers can learn from data, social impact, representation and reasoning.
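The train/test split step the sandbox would surface could be sketched as follows. Everything here is hypothetical: the toy weather records and the 25% hold-out fraction are invented purely to illustrate the concept the learner would see graphically.

```python
import random

def train_test_split(examples, test_fraction=0.25, seed=0):
    """Shuffle the labeled examples and hold out a fraction for testing."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# Toy weather records: (temperature, humidity) -> "rain" or "sun".
weather = [
    ((18, 90), "rain"), ((30, 20), "sun"), ((15, 85), "rain"),
    ((28, 30), "sun"), ((17, 80), "rain"), ((32, 25), "sun"),
    ((16, 95), "rain"), ((29, 35), "sun"),
]
train, test = train_test_split(weather)
print(len(train), len(test))  # 6 2
```

The application would hide this code behind a drag-and-drop interface, but the split itself — keep most examples to teach the robot, hold a few back to quiz it — is the idea the child would absorb.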
The idea here is to teach children a little bit about computer vision. First, the user would turn on their webcam so that their face is visible. Then, the user would program a blank system (not really blank, just abstracted) and show the model how to tell whether their eye is open, closed, or winking. Perhaps this could be done using convolutions, convolutional neural networks, Viola-Jones face detection, or any other algorithm that fits well. The user would program this model graphically, in a way that is intuitive and easy for a child. Perhaps the user could choose different kernels depending on whether they want the system to detect a closed eye, an open eye, or a winking eye. The main idea is to give the learner a set of multiple-choice questions, where each choice would result in the system being programmed in a different way. A set of correct choices would then result in the system being correctly programmed and successfully distinguishing between the different eye states. As mentioned previously, many of the technical details of this system would be black-boxed, but the goal is to open some black boxes to the user and give them an idea of how a computer can derive meaning from a photograph or video. In regard to the implementation, I am not sure; this is still a very high-level concept. But I do like the idea of the child programming, debugging, and figuring it out on their own. Perhaps this could be supplemented by a pen-and-paper worksheet that the learner would complete beforehand, to give them a simple understanding of the concepts they will see in this application. This idea incorporates the following ideas from the Five Big Ideas in AI: computers can learn from data, social impact, representation and reasoning, computers perceive the world using sensors.
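The multiple-choice kernel idea could work something like the sketch below. The 3x3 “eye” patches and the pupil kernel are entirely made up for illustration; the point is only that a well-chosen kernel responds more strongly to one eye state than another.

```python
def response(patch, kernel):
    """Sum of element-wise products of a patch and a same-size kernel."""
    return sum(
        patch[i][j] * kernel[i][j]
        for i in range(len(kernel))
        for j in range(len(kernel[0]))
    )

open_eye = [      # dark pupil in the middle (hypothetical toy patch)
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
closed_eye = [    # thin horizontal lid line (hypothetical toy patch)
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
pupil_kernel = [  # "choice A": looks for an isolated central blob
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]
print(response(open_eye, pupil_kernel))    # 8: strong response
print(response(closed_eye, pupil_kernel))  # 6: weaker response
```

Picking the pupil kernel gives a higher score for the open eye than the closed one, so the system “answers correctly”; a wrong multiple-choice pick would be a kernel whose responses don’t separate the two states.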
The idea here is to teach children about the basics of machine learning. The user would be introduced to Data Labeling and Evaluation, as these two concepts showed success in Hitron et al.’s “Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes.” First, the user would draw whatever they wanted on the canvas, label it, then give it to the ML model (Data Labeling). After this, they could move on to the next stage, where they draw the same thing once again and use it to see whether the trained model can recognize their drawing (Evaluation). In theory, the user would go back and forth to test, debug, and try new things until the model successfully recognizes their drawings. The user could learn the importance of large sample sizes (drawing the same thing multiple times) and learn about negative examples (drawing something that is not what they want recognized and labeling it as such), along with the bigger topics of Data Labeling and Evaluation. While we would uncover some black boxes, the ML model would have far more technical implementation behind it than just the user’s work. Now, there is a huge problem with this idea: since the user has the freedom to draw whatever they want, we would not have the sample size required to properly predict their drawing, since a few of their drawings are obviously not enough for an ML model to make predictions. However, there are a few potential solutions: restricting what they can draw and having the model already trained on those drawings, or keeping their artistic freedom and, with each drawing, randomly generating a sufficient number of similar drawings to create the sample size. Each of these solutions would need further thought, but they at least give this overall project idea some hope of becoming reality. This idea incorporates the following ideas from the Five Big Ideas in AI: computers can learn from data, social impact, representation and reasoning.
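The Data Labeling / Evaluation loop described above could be sketched as below. Everything here is hypothetical: drawings are reduced to made-up feature tuples (stroke count, roundness), and a 1-nearest-neighbor rule stands in for the real model, which in practice would be far more sophisticated.

```python
def label(model, features, name):
    """Data Labeling: store a (features, label) example in the model."""
    model.append((features, name))

def evaluate(model, features):
    """Evaluation: predict using the closest stored example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], features))[1]

model = []
# Hypothetical features: (number of strokes, roundness score in [0, 1]).
label(model, (1, 0.9), "circle")
label(model, (4, 0.1), "square")
label(model, (1, 0.8), "circle")   # larger sample sizes help

print(evaluate(model, (1, 0.85)))  # circle
print(evaluate(model, (4, 0.2)))   # square
```

The back-and-forth the child experiences is exactly this loop: call `label` a few more times when `evaluate` gets a drawing wrong, then try again.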