Safinah Ali, Daniella DiPaola, Irene Lee, Jenna Hong, and Cynthia Breazeal. Exploring Generative Models with Middle School Students.
Tom Hitron, Yoav Orlev, Iddo Wald, Ariel Shamir, Hadas Erel, and Oren Zuckerman. Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes.
Georgios Karalekas, Stavros Vologiannidis, and John Kalomiros. Teaching Machine Learning in K-12 Using Robotics.
The paper I selected last week was Exploring Generative Models with Middle School Students (Ali et al.). The other two papers I read, selected by my peers, were Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes (Hitron et al.) and Teaching Machine Learning in K-12 Using Robotics (Karalekas et al.). The latter two papers focused on ways to teach machine learning to children, whereas the first focused on generative AI.
All three papers discussed design principles the authors kept in mind to make their educational tools effective and widely accessible. Ali et al. and Hitron et al. both mentioned creating tools that require no prior background knowledge, as well as the necessity of simplifying certain concepts to avoid overwhelming students. While Ali et al. place a heavy emphasis on simplifying concepts to make them more understandable, Hitron et al. explored ways to uncover some black boxes and build a stronger, more detailed level of understanding among students. Nonetheless, Hitron et al. still emphasized the need to keep certain black boxes hidden to prevent overwhelming the children. Ali et al. and Karalekas et al. also discussed the importance of making their tools more accessible. Karalekas et al. focused on using robotics to teach machine learning, and even though robots can pose practical issues in schools, they discussed ways to make robots more usable in classrooms, for example by using smaller robots with fewer parts and by eliminating the need for a wireless connection.
Unlike Karalekas et al., Ali et al. and Hitron et al. created educational tools, tested them with children, and outlined the methods they used to evaluate student learning. Both used pretests and posttests to measure whether students understood the specific concepts being taught. In addition to the pretests and posttests, Ali et al. embedded assessments within the different activities they did with the children and conducted interviews to verify the posttest results. The Hitron et al. study also made use of interviews, but only for a portion of the participating students. The pretests and posttests in the Ali et al. paper used true/false questions, whereas those in the Hitron et al. paper used more open-ended questions. In both cases, the questions were carefully constructed to measure whether understanding of specific objectives was achieved.
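To make the pretest/posttest logic concrete, here is a minimal Python sketch of how paired scores might be compared. The numbers are invented for illustration; neither paper publishes its raw data, so this shows only the general technique of computing per-student gains from matched before/after scores.

```python
# Hypothetical sketch of a pretest/posttest comparison, in the spirit of
# the evaluations described by Ali et al. and Hitron et al.
# All scores below are invented for illustration.

pretest = [3, 5, 2, 4, 6, 3, 5]   # per-student scores before the activity
posttest = [6, 7, 4, 7, 8, 5, 9]  # the same students' scores afterward

# Per-student learning gain: posttest minus pretest for each matched pair.
gains = [post - pre for pre, post in zip(pretest, posttest)]

mean_gain = sum(gains) / len(gains)
improved = sum(1 for g in gains if g > 0)

print(f"Mean gain: {mean_gain:.2f} points")
print(f"Students who improved: {improved}/{len(gains)}")
```

The key design point is that the scores are paired per student, so the comparison measures individual learning gains rather than just a shift in the group average.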
The Ali et al. paper and the Karalekas et al. paper both discussed the importance of letting students be creative and apply what they have learned. Karalekas et al. discussed the value of making robotics systems more open source and allowing students to experiment with them in additional ways. The Ali et al. paper described how Bloom's Taxonomy guided the design of their activities, progressing from 'remember' and 'understand' to 'apply' and 'analyze', and finally to 'evaluate' and 'create'. I think it's important to note that Ali et al. and Karalekas et al. both highlight the significance of the 'create' tasks, giving students a chance to be creative in order to deepen their understanding of the topics covered.
Dr. Buolamwini's Gender Shades research showed that commercial facial analysis systems misclassified gender far more often for darker-skinned women than for lighter-skinned men, largely due to unrepresentative training data, and it provoked a broader conversation about AI bias. While some systems have improved since then, AI bias is still prevalent today in various circumstances.

For example, healthcare applications of AI that aid in medical diagnosis often have lower accuracy for patients of color. A specific example is skin cancer detection: because training datasets consist predominantly of images of lighter-skinned patients, the algorithms struggle to accurately diagnose patients of color. Unless training data becomes more representative of the total population, AI will not serve all people equally.

In addition, AI algorithms that filter prospective candidates for a job position and make hiring recommendations are susceptible to gender and racial bias because they learn from past employment patterns, which can reinforce existing inequalities against women and minority groups. A specific example was an AI tool Amazon used to filter candidates; it was found to be biased against women because it was trained on a large set of resumes, more of which belonged to men than to women.

Furthermore, AI is also susceptible to ageism. For example, generative AI asked to produce images of people working in certain professions is much less likely to generate an image of an older person; it will instead generate an image of someone younger and usually male.
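The kind of disparity described above can be made concrete with a small audit sketch. The following Python example, using invented records, shows the core technique behind audits like Gender Shades: computing a model's accuracy separately for each demographic subgroup and comparing the results. Real audits use benchmark datasets labeled by attributes such as skin type and gender; nothing here reflects actual model outputs.

```python
# Hypothetical sketch of a Gender Shades-style audit: compare a
# classifier's accuracy across demographic subgroups.
# All records below are invented for illustration.

from collections import defaultdict

# Each record: (subgroup label, true label, predicted label)
records = [
    ("lighter-skinned", "melanoma", "melanoma"),
    ("lighter-skinned", "benign", "benign"),
    ("lighter-skinned", "melanoma", "melanoma"),
    ("darker-skinned", "melanoma", "benign"),
    ("darker-skinned", "benign", "benign"),
    ("darker-skinned", "melanoma", "benign"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    if truth == prediction:
        correct[group] += 1

# Report per-group accuracy; a large gap between groups signals bias.
for group in total:
    accuracy = correct[group] / total[group]
    print(f"{group}: {accuracy:.0%} accuracy ({correct[group]}/{total[group]})")
```

Disaggregating accuracy this way is exactly what made the Gender Shades findings visible: a single overall accuracy number can hide very poor performance on an underrepresented subgroup.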
After reading about AI bias, I can see that it appears in many AI applications because of biases in the training data. Going forward, we need to be careful when collecting training data and ensure that it is representative of the total population in order to mitigate AI bias.
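One simple way to operationalize "representative" is to compare each group's share of a training set against its share of the population. The sketch below uses invented counts and baselines purely for illustration; real checks would draw baselines from census or domain-specific statistics.

```python
# Hypothetical sketch: check whether a training set's demographic makeup
# matches population baselines. All numbers below are invented.

dataset_counts = {"group A": 800, "group B": 150, "group C": 50}
population_share = {"group A": 0.60, "group B": 0.25, "group C": 0.15}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    dataset_share = count / total
    gap = dataset_share - population_share[group]
    # A positive gap means the group is overrepresented in the data.
    print(f"{group}: {dataset_share:.0%} of dataset vs "
          f"{population_share[group]:.0%} of population ({gap:+.0%})")
```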