Dr. Joy Buolamwini's Gender Shades project
Dr. Joy Buolamwini is a computer scientist well known for her work on bias in facial recognition technology, particularly concerning racial and gender disparities. One of her most influential works is the Gender Shades project, in which she and her team evaluated commercial facial analysis systems from IBM, Microsoft, and Face++ on their accuracy in classifying gender across different skin types. The key findings revealed significant disparities in accuracy based on both gender and skin color:
Gender Bias - The study found that the systems classified the gender of men more accurately than that of women. In particular, error rates were by far the highest for darker-skinned women, indicating a compounded gender and skin color bias in these systems.
Skin Type Bias - The research also highlighted biases with respect to skin type. Error rates were lower for individuals with lighter skin and substantially higher for individuals with darker skin, raising concerns about the equitable performance of facial recognition technology across diverse populations (a small sketch of this kind of disaggregated evaluation follows below).
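To make the evaluation approach concrete, the following sketch computes classification error rates disaggregated by intersectional subgroup (skin type and gender combined), in the spirit of the Gender Shades audit. The records, field names, and groupings are hypothetical placeholders, not the actual benchmark data or vendor APIs.

# Minimal sketch (Python): disaggregated error-rate audit.
# The records are invented; a real audit would use a labeled, balanced face
# benchmark and the predictions returned by the system under test.
from collections import defaultdict

# Each record: (true_gender, predicted_gender, skin_type_group)
records = [
    ("female", "female", "darker"),
    ("female", "male",   "darker"),   # misclassification
    ("male",   "male",   "darker"),
    ("female", "female", "lighter"),
    ("male",   "male",   "lighter"),
    ("male",   "female", "lighter"),  # misclassification
]

totals = defaultdict(int)
errors = defaultdict(int)
for true_gender, predicted_gender, skin_group in records:
    key = (skin_group, true_gender)              # intersectional subgroup
    totals[key] += 1
    if predicted_gender != true_gender:
        errors[key] += 1

for key in sorted(totals):
    rate = errors[key] / totals[key]
    print(f"{key[0]:>7} {key[1]:>6}: error rate = {rate:.1%} ({errors[key]}/{totals[key]})")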
Dr. Buolamwini's work played a crucial role in raising awareness about the potential biases embedded in facial recognition technology. By exposing the limitations and biases present in widely used commercial systems, it prompted discussions about the need for more inclusive and representative datasets during the development of such systems. Her research has contributed to a growing movement advocating for ethical AI practices and more rigorous scrutiny of the societal impacts of emerging technologies. As a result of her advocacy and the work of others in the field, there has been increased attention to addressing biases in facial recognition systems, with some companies and organizations working to improve the fairness and accuracy of these technologies. Nevertheless, challenges persist, and ongoing effort is required to ensure that facial recognition systems are developed and deployed in a manner that respects diversity and avoids reinforcing societal biases.
Despite this progress, there are several areas where algorithmic biases persist, impacting various aspects of society.
Facial Recognition Technology: Some systems are still found to perform less accurately for people with darker skin tones and for women. This bias stems largely from the lack of diversity in the datasets used to train these systems, which leads to inaccurate and unfair identification results.
Criminal Justice and Predictive Policing: AI algorithms used in criminal justice systems for risk assessment and predictive policing have faced scrutiny for perpetuating racial biases. If historical arrest data reflects systemic biases, the algorithms may recommend harsher sentences or increased police presence in already over-policed communities, exacerbating existing inequalities.
Healthcare Diagnostics: Bias can be present in AI systems used for healthcare diagnostics. If the training data is not representative of diverse populations, the diagnostic accuracy may vary across different demographic groups. This can lead to disparities in healthcare outcomes, with some groups receiving less accurate diagnoses or treatment recommendations.
Recruitment and Hiring Algorithms: AI systems involved in recruitment processes may inadvertently reinforce gender or racial biases present in historical hiring data. If the training data includes patterns of discrimination, the algorithms may learn and perpetuate those biases, affecting who gets selected for job opportunities (a small illustration of this feedback loop appears after this list).
Financial Services: AI systems used in assessing creditworthiness for loans can inadvertently discriminate against certain demographic groups. If historical lending practices have been biased, the algorithms may replicate these biases, leading to unequal access to financial services for specific populations.
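As referenced in the hiring example above, the sketch below illustrates how a model trained on biased historical labels reproduces that bias. It assumes NumPy and scikit-learn are available; the groups, qualification scores, and historical decisions are entirely synthetic and invented for illustration.

# Sketch: a classifier trained on biased historical hiring labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
qualification = rng.normal(0.0, 1.0, size=n)     # identical distribution for both groups

# Historical decisions: qualified candidates were hired, but group B was
# systematically penalized, so the recorded labels encode discrimination.
hired = (qualification - 0.8 * group + rng.normal(0.0, 0.3, size=n)) > 0

X = np.column_stack([qualification, group])      # group is (unwisely) used as a feature
model = LogisticRegression().fit(X, hired)
selected = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: selection rate = {selected[group == g].mean():.1%}")
# Although both groups have identical qualification distributions, the model
# selects group B far less often, because it learned the historical penalty.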
Addressing these biases requires a focused effort by AI researchers and practitioners to ensure representative and unbiased datasets, transparency in algorithmic decision-making, and ongoing evaluation of deployed systems to identify and mitigate unintended biases. The ethical development and deployment of AI technologies must prioritize fairness and accountability to avoid reinforcing existing social inequalities.
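One concrete form such ongoing evaluation can take is a simple fairness audit. The sketch below, using only made-up predictions and group labels, computes the per-group selection rate, true positive rate, and false positive rate, the quantities behind common criteria such as demographic parity and equalized odds.

# Sketch of a simple fairness audit: per-group selection rate, TPR, and FPR.
# The labels, predictions, and group assignments are illustrative only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def group_metrics(y_true, y_pred, groups, target):
    yt = [t for t, g in zip(y_true, groups) if g == target]
    yp = [p for p, g in zip(y_pred, groups) if g == target]
    positives = sum(yt)
    negatives = len(yt) - positives
    tp = sum(1 for t, p in zip(yt, yp) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(yt, yp) if t == 0 and p == 1)
    return {
        "selection_rate": sum(yp) / len(yp),
        "tpr": tp / positives if positives else float("nan"),
        "fpr": fp / negatives if negatives else float("nan"),
    }

for g in ("A", "B"):
    print(g, group_metrics(y_true, y_pred, groups, g))
# Demographic parity compares selection rates across groups;
# equalized odds compares TPR and FPR across groups.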
Paper 1: Children as creators, thinkers and citizens in an AI-driven future
Safinah Ali, Daniella DiPaola, Irene Lee, Victor Sindato, Grace Kim, Ryan Blumofe, Cynthia Breazeal
Paper 2: Exploring Generative Models with Middle School Students
Safinah Ali, Daniella DiPaola, Irene Lee, Cynthia Breazeal, Jenna Hong
Similarities:
Paper 1: Presents generative AI techniques as tools for creation, while also fostering critical discussion of their societal and ethical implications and encouraging students to become proactive, responsible consumers, creators, and stakeholders of technology.
Paper 2: Introduces a generative model learning trajectory (LT), educational materials, and interactive activities for young learners, focusing on GANs, the creation and application of machine-generated media, and their ethical implications.
Differences:
Paper 1: Learning Activities: Introduces students to generative modeling techniques through examples of generative media, including Deepfakes. Students receive a brief introduction to the two neural networks that make up a GAN, a commonly used generative algorithm, and how they are used to create Deepfakes (a minimal sketch of this generator-discriminator structure follows the comparison below). They also practice techniques for detecting Deepfakes. Then, through a news-sharing simulation application, students witness what misinformation is, how it spreads, and how Deepfakes can fuel that spread. Finally, students voice their opinions on what policies should be in place to regulate the presence of Deepfakes on social media platforms.
Paper 2: Learning Activities: Conducted five learning activities, each with targeted cognitive skill (BTL) and generative modeling learning trajectory (LT) learning goals.
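For reference, the sketch below illustrates the generator-and-discriminator structure mentioned in the Paper 1 activities. It assumes PyTorch is installed; the toy 1-D data, architecture, and hyperparameters are invented for illustration and are not taken from either paper.

# Minimal GAN sketch: a generator and a discriminator trained adversarially
# to imitate a toy 1-D Gaussian distribution.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0        # "real" data drawn from N(4, 1.5)
    fake = generator(torch.randn(64, 4))         # generator maps noise to samples

    # Discriminator learns to label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to make the discriminator label its samples 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 4))
print(f"generated mean ~ {samples.mean().item():.2f}, std ~ {samples.std().item():.2f} (target: 4.0, 1.5)")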