Joy Buolamwini's Gender Shades project brought attention to racial and gender biases in several commercially available facial recognition technologies. Although the specific biases identified in those programs have largely been addressed, many others persist and still need to be dealt with. These biases stem mainly from the models' training datasets, which contain disproportionately white and male data.
One example was found in the image-generation AI Midjourney: when prompted with certain job titles, it was much more likely to generate images of men than of women. It also favored younger-looking people for certain non-specialized job titles.
Predictive policing tools, which are meant to anticipate crime hot spots and allocate police patrols accordingly, have been shown to disproportionately flag Black neighborhoods as potential high-crime areas. This reinforces existing biases and feeds a feedback loop: people in heavily policed areas are arrested more often, which leads to those areas being policed even more heavily, and so on. Many of these tools have started incorporating victim reports to counteract this, but research has shown that, for various reasons, this data is also skewed. Without significantly more data, it is unlikely that these tools will become unbiased enough to be used without fear of making things worse.
In all, though there is still a long way to go, significant work has been done toward eliminating bias in AI technologies across the field.
For this assignment, I chose to compare my chosen paper with those chosen by Cesar and Durga.
AI+Ethics Curricula for Middle School Youth: Lessons Learned from Three Project‑Based Curricula
(Randi Williams, Safinah Ali, Nisha Devasia, Daniella DiPaola, Jenna Hong, Stephen P. Kaputsos, Brian Jordan, Cynthia Breazeal)
You can read this paper here: Williams et al. 2022
Exploring Generative Models with Middle School Students
(Safinah Ali, Daniella DiPaola, Irene Lee, Jenna Hong, Cynthia Breazeal)
You can read this paper here: Ali et al. 2021
Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes
(Tom Hitron, Yoav Orlev, Iddo Wald, Ariel Shamir, Hadas Erel, Oren Zuckerman)
You can read this paper here: Hitron et al. 2019