The paper discusses the significance of involving groups historically underrepresented in STEM and computing, such as women and minorities, in the development of AI technologies. Because these groups have long been excluded from these fields, their participation can help ensure that the AI industries of the future are inclusive and equitable. A cooperative approach between underrepresented communities and their allies can reduce the likelihood of bias in AI technologies that would otherwise negatively impact marginalized communities.
The paper also describes a 30-hour curriculum designed to educate students about AI and its implications. The program covers a variety of topics, including an introduction to artificial intelligence, logic systems, supervised learning, neural networks, and GANs, among others. These units investigate algorithmic bias and its causes in order to show how it impacts society and how it can be mitigated. The curriculum also emphasizes the importance of technical skill development and adaptability in today's job market, and it is designed to increase students' awareness of AI-related careers and to help them identify their own strengths and interests.
I looked into the full curriculum they implemented at: https://raise.mit.edu/daily/index.html
The researchers also pointed out that their results are consistent with literature suggesting that women and underrepresented minorities are often interested in social and ethical issues. Because the curriculum engages directly with those issues, it may make careers in artificial intelligence more appealing to students from these groups.
A few points that I found interesting about this paper:
This paper discusses various tactics that can be used to promote critical engagement with data and machine learning (ML) in order to help people better understand the limitations and origins of data and the potential biases that can be present in ML systems.
Hautea et al. suggest that young learners should be encouraged to creatively engage with data that is collected about them online, whereas D'Ignazio and Sulmont et al. suggest that educators should carefully select the datasets they use in class, favoring datasets that are low-dimensional when introducing concepts, "messy" when demonstrating issues of bias, and personally relevant so that learners can easily relate to and understand them. They also suggest writing "data biographies" as a way of helping learners better understand the limitations and origins of data.
The paper also addresses concerns and issues surrounding the use of AI, such as the idea of "the singularity", a hypothetical point at which machine intelligence surpasses human intelligence, and the potential harm that may cause to people. The paper also highlights the lack of diversity in the CS workforce and of gender diversity in AI, and how this can affect whom systems are developed for and the potential biases that can be present in AI systems.
AI systems that make decisions based on the data they are fed may encode skewed human decisions or reflect historical and social inequities. Amazon, one of the world's tech giants, struggled with a hiring algorithm in 2015 that was found to be biased against women: because the résumés it was trained on came overwhelmingly from men, it learned to favor male candidates. A similar bias was observed in Google Search. While the search box "completes" users' queries automatically, Google has failed to remove sexist and racist autocomplete text. Not only that, in 2017 a Facebook algorithm designed to remove online hate speech was found to advantage white men over Black children when assessing objectionable content, according to internal Facebook documents. Aren't these examples mind-boggling? They are. Even the leading tech companies show bias.
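The mechanism behind the Amazon example is easy to reproduce in miniature. The sketch below is not Amazon's actual system; it is a toy bag-of-words scorer over a hypothetical, deliberately skewed training set, showing how a model trained on biased labels learns to penalize a gender-correlated word even when everything else about two résumés is identical.

```python
from collections import Counter

# Hypothetical toy training set: résumés labeled "hire" came mostly from
# men, so a word correlated with women ends up correlated with "reject".
training = [
    ("captain chess club", "hire"),
    ("captain debate team", "hire"),
    ("chess club member", "hire"),
    ("debate team captain", "hire"),
    ("captain women's chess club", "reject"),
    ("women's debate team member", "reject"),
]

# Count how often each word co-occurs with each label.
word_counts = {"hire": Counter(), "reject": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def score(text):
    """Naive bag-of-words score: higher means more likely to 'hire'."""
    return sum(word_counts["hire"][w] - word_counts["reject"][w]
               for w in text.split())

# Two résumés differing only in a gender-correlated word get different scores.
print(score("chess club captain"))          # → 4
print(score("women's chess club captain"))  # → 2
```

The model never "sees" gender explicitly; the bias enters purely through the skewed label distribution in the training data, which is exactly the failure mode the paper describes.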
Additionally, the authors discuss the "Eliza Effect", a phenomenon in which simple techniques produce effects that appear complex, leading humans to attribute more intelligence to these systems than they actually possess.
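To make the Eliza Effect concrete, here is a minimal sketch in the spirit of Weizenbaum's original ELIZA (the rules and responses below are my own invention, not the historical script). The "conversation partner" is nothing but a handful of regular-expression rules that reflect the user's own words back, yet the replies can feel attentive and empathetic:

```python
import re

# A few ELIZA-style rules: match a keyword pattern, then reflect part of
# the user's input back inside a canned template. There is no model of
# meaning here at all, only pattern matching and string substitution.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    """Return the first matching rule's reflection, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(respond("I feel anxious about my exams"))
# → Why do you feel anxious about my exams?
```

A user reading "Why do you feel anxious about my exams?" may infer understanding and concern, even though the program merely echoed their sentence through a template, which is precisely the gap between apparent and actual intelligence the authors describe.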
The paper also addresses how factors such as the framing of an explanation given by an AI agent, as well as early experiences with technology, can shape how humans interpret explanations and can improve or inhibit children's ability to accurately assess what types of problems a computer can solve. Overall, the paper aims to raise awareness of the potential issues and concerns surrounding AI and to promote critical engagement with data and ML so that people can better understand and use these technologies.