One area that stood out to me a year after reading "The Alignment Problem" by Brian Christian was the widespread use of biased algorithms in high-stakes situations. For example, the author discusses algorithms used to guide decisions such as where to deploy police or whether to grant parole. Historical records show that Black Americans are policed and incarcerated more frequently than white Americans, and a machine learning algorithm that only learns patterns in data reinforces those disparities, creating a positive feedback loop: biased predictions direct more policing to the same communities, which produces more arrest records, which the model then treats as further evidence.
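To make that loop concrete, here is a minimal toy simulation (my own illustration, not a system described in the book) in which two neighborhoods have identical underlying crime rates but different historical arrest counts, and patrols are allocated in proportion to past arrests:

```python
import random

# Toy simulation of a predictive-policing feedback loop. Both neighborhoods
# have the SAME true crime rate; only their historical arrest counts differ.
true_crime_rate = 0.1
arrests = [50, 100]        # neighborhood 1 starts with more recorded arrests
patrols_per_round = 30

random.seed(0)
for step in range(20):
    total = sum(arrests)
    for hood in range(2):
        # "Model": send patrols in proportion to past arrest counts.
        patrols = round(patrols_per_round * arrests[hood] / total)
        # More patrols observe more of the same underlying crime, so the
        # record grows fastest wherever patrols were already concentrated.
        arrests[hood] += sum(
            1 for _ in range(patrols) if random.random() < true_crime_rate
        )

print(arrests)  # the initial gap widens despite equal underlying crime rates
```

Because the model only ever sees arrest counts, not true crime rates, the initial disparity compounds on itself each round.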
Artificial intelligence in healthcare applications has also shown bias against minority communities. Many of these algorithms effectively require that an ethnic or racial minority patient be significantly more ill than a comparable white patient to receive the same diagnosis or level of care. Bias has also appeared in algorithms meant for healthcare professionals. In a paper published in 2023, a model was developed to assess skill in various surgical activities (Kiyasseh). The model determined skill accurately overall; however, it exhibited bias, underskilling or overskilling at different rates across groups of surgeons.
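As a rough illustration of what "different rates across groups" means in practice, the snippet below (with invented labels, not data from the Kiyasseh paper) computes a per-group underskilling rate, i.e. how often truly high-skill surgeons are rated low-skill:

```python
# Hypothetical labels and predictions for eight surgeons in two groups.
true_skill = ["high", "high", "high", "high", "low", "high", "high", "low"]
predicted  = ["high", "low",  "high", "high", "low", "low",  "high", "low"]
group      = ["A",    "A",    "A",    "A",    "B",   "B",    "B",    "B"]

for g in ("A", "B"):
    # Underskilling rate: P(predicted low | truly high, group g)
    truly_high = [i for i in range(len(group))
                  if group[i] == g and true_skill[i] == "high"]
    under = sum(1 for i in truly_high if predicted[i] == "low")
    print(g, under / len(truly_high))  # A: 0.25, B: 0.5 in this toy data
```

Even a model with high overall accuracy can make this kind of error twice as often for one group as for another.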
Financial services use artificial intelligence to determine credit scores and loan approvals. These algorithms use historical data to predict whether an applicant is likely to pay back a loan, or to decide whether they receive a loan at all. Because that historical data is heavily skewed, two otherwise equal applicants may not receive the same result, differing only by race.
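A minimal sketch of how this happens, assuming a hypothetical lender that trains a logistic regression on skewed historical approvals where ZIP code acts as a proxy for race (all features and numbers are invented for illustration):

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: (income, minority_zip_flag) -> historically approved?
# The ZIP flag correlates with race because of past discriminatory lending.
X = [
    [50, 0], [60, 0], [55, 0], [65, 0],  # majority-ZIP applicants, all approved
    [50, 1], [60, 1], [55, 1], [65, 1],  # minority-ZIP applicants, mostly denied
]
y = [1, 1, 1, 1, 0, 0, 1, 0]  # biased historical outcomes

model = LogisticRegression().fit(X, y)

# Two applicants identical in every respect except ZIP code:
applicant_a = [[60, 0]]
applicant_b = [[60, 1]]
print(model.predict_proba(applicant_a)[0][1])  # higher approval probability
print(model.predict_proba(applicant_b)[0][1])  # lower, purely from the proxy
```

The model never sees race directly; it learns the disparity from a correlated feature, which is why simply dropping a "race" column does not remove the bias.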
Bias in machine learning algorithms is everywhere, from facial recognition to the criminal justice system to healthcare and beyond. Addressing this bias requires extensive effort from developers and implementers, from diversifying training data, as in the healthcare example, to making the algorithms transparent, as in the policing example. If AI algorithms are going to decide important factors in someone's life, they must be as equitable and fair as possible.
Kiyasseh et al., 2023: https://www.nature.com/articles/s41551-023-01010-8
I compared the paper I previously read, Teaching Machine Learning in K-12 Using Robotics, to two new papers: Fostering Secondary School Students' AI Literacy through Making AI-Driven Recycling Bins and Children as Creators, Thinkers and Citizens in an AI-driven future (hereafter the deepfake paper, for its focus on deepfake media). The three papers are all relatively recent: the deepfake paper is from 2021, and the other two are from 2023. All three focus on improving AI literacy and understanding through interactive means; however, only the two new papers involved studies with children. As such, the comparison of the new papers against the first paper comes first, followed by a discussion of only the two new papers.
The robotics paper posits that hands-on learning via robotics is important for children because they can watch the machine learning algorithms they are training unfold in the real world before them. This requires an in-person space with enough room for a robot, which the authors suggest should realistically be stationary to save space. The recycling bin paper places the children in a maker-space environment, aiming to foster creativity, collaboration, and critical thinking. It found that designing and creating something in the physical world was a positive experience for the students and aided their understanding of machine learning. The research in the deepfake paper was conducted entirely online via a virtual conference, and its subject matter was media, which is difficult to manifest in a physical environment. However, the deepfake paper did give the children a variety of media to interact with, such as music, images, and videos. The deepfake paper's study ran over five days, while the recycling bin paper's consisted of twelve forty-minute lessons over the course of two months.
The first obvious comparison between the recycling bin paper and the deepfake paper lies in the participants. Both papers involved roughly 35 middle-school-aged children. The recycling bin paper's study was conducted in Korea, but there was no mention of how the students were selected or what backgrounds they may have had. In the deepfake paper, there were almost equal numbers of female and male students from different states in the United States. Almost all of the students attended Title I public schools, and almost 70% were from demographic groups that are underrepresented in STEM. One possible flaw in the selection of students is that the research was part of a virtual workshop run through the "Amazon Future Engineers program". This may introduce self-selection bias into the results, as students who enroll in such a workshop may already be more inclined to learn about these subjects outside of it.
Both papers employed interactivity to engage the students and foster a better learning environment, having the students interact with the idea and then discuss it after each activity. Both utilized qualitative and quantitative metrics, giving knowledge assessments before and after the activities and holding interviews or discussions on the material after the final activity concluded. The learning metrics are difficult to compare because the concepts being taught differed between the two. The students in the recycling bin paper demonstrated a greater understanding of machine learning after the activities. The students in the deepfake paper gained a better understanding of deepfakes and how they spread misinformation; however, they were no more successful at detecting deepfakes after the activity than they were at the beginning.