AI systems that make decisions based on the data they are fed may encode skewed human judgments or reflect historical and social inequities. In 2015, Amazon, one of the largest technology companies in the world, discovered that its hiring algorithm was biased against women: because the résumés it was trained on came overwhelmingly from men, the system learned to favor male candidates over female ones. A similar bias was observed in Google Search. Although search queries are "completed" automatically as users type, Google failed to remove sexist and racist autocomplete suggestions. Likewise, according to internal Facebook documents, a Facebook algorithm designed to remove online hate speech was found in 2017 to advantage white men over black children when assessing objectionable content.
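To make this mechanism concrete, the following minimal sketch uses an entirely synthetic, hypothetical dataset (not Amazon's actual system or data) to show how a classifier trained on hiring decisions that historically favored men reproduces that preference for otherwise identical candidates:

```python
# Toy illustration of bias inherited from training data (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one "skill" score and a binary gender indicator.
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical hiring labels: driven mostly by skill, but with an extra
# boost for male applicants -- the encoded human bias.
hired = (skill + 0.8 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, differing only in the gender flag.
candidates = np.array([[1.0, 1.0], [1.0, 0.0]])
print(model.predict_proba(candidates)[:, 1])
# The male candidate receives a clearly higher predicted "hire" probability,
# even though skill is identical: the model has learned the historical bias.
```

The point of the sketch is simply that nothing in the learning procedure needs to be malicious; a model that faithfully fits skewed historical decisions will faithfully repeat them.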
Another example is predictive policing algorithms, which law enforcement agencies use to predict crime hot spots and allocate resources accordingly. These algorithms have been found to perpetuate existing racial biases and reinforce racial stereotypes, leading to over-policing of communities of color. The article "Predictive policing algorithms are racist. They need to be dismantled." from MIT Technology Review highlights this problem and the efforts to dismantle bias in these systems, arguing that the biases in predictive policing algorithms must be addressed through unbiased data and transparent algorithms.
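To see how such a system can sustain its own bias, here is a minimal, purely hypothetical simulation (not modeled on any real predictive-policing product). Patrols are allocated in proportion to each area's recorded incidents, and recorded incidents in turn depend on how many patrols are present to observe them:

```python
# Toy feedback-loop simulation: two areas with identical true crime rates,
# but historically skewed records.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([10.0, 10.0])   # identical underlying crime in areas A and B
recorded = np.array([12.0, 8.0])     # historical records start slightly skewed
patrols_total = 10

for _ in range(20):
    # Allocate patrols proportionally to recorded incidents (the "prediction").
    share = recorded / recorded.sum()
    patrols = patrols_total * share
    # Recorded crime depends on both true crime and how many patrols observe it.
    observed = rng.poisson(true_rate * patrols / patrols_total)
    recorded = recorded + observed

print("share of records (and patrols) per area:", recorded / recorded.sum())
# The initial skew toward area A persists across every iteration, even though
# the true crime rates were set to be identical.
```

Because the allocation is driven entirely by the system's own past records, the original disparity never washes out: the system keeps looking hardest where it has already looked, which is exactly the feedback the article warns about.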
The article from the Brookings Institution discusses the potential for bias in AI systems used in education. It highlights that as AI systems are increasingly adopted in schools, there is a risk that they will perpetuate and even amplify existing biases in the education system. The article notes that AI systems used for student assessment and placement, such as those used to identify students at risk of falling behind in their studies, are particularly prone to bias. Because these systems are trained on historical data that reflects existing disparities in the education system, they flag certain students, such as low-income students and students of color, for interventions at higher rates than their peers.
The final article reports a study exploring how a workshop on AI, combined with the creation of programming projects on a platform called LearningML, affected the AI knowledge of students aged 10 to 16. The study was conducted online due to the COVID-19 pandemic and involved 135 participants who completed all phases of the learning experience. The authors created a test to assess the participants' AI knowledge and found that the initiative had a positive impact, particularly for those who were less familiar with the topic. LearningML can therefore be seen as a promising platform for teaching AI in K-12 environments, and researchers and educators can use the assessment instrument to evaluate future educational interventions. The article concludes that ML fundamentals can be taught to children aged 10 to 16 through hands-on activities with LearningML.