Looking for recent work on AI bias, I found a paper published last year that discusses the issues and root causes of AI bias in healthcare systems, specifically in radiology. According to the article, there are many documented failures of AI algorithms in radiology, and in chest radiography in particular. This, in turn, leads to distrust from the humans who use these systems. A big theme in the paper is that bias does not always stem from technical issues: systemic bias outside the control of the AI systems themselves has a large effect, from the racial imbalance among people who seek medical care to the bias already present in data from different institutions.

So how should we go about solving AI bias in this particular domain? The answer is not technically straightforward, as this quote from the paper makes clear: “We urge caution on relying on only mathematical approaches of fairness evaluation (e.g. relying solely on fairness through unawareness, demographic parity, or equalized odds or opportunity), as these approaches may overlook nuanced biases and fail to address systemic issues.” On top of this, bias occurs at every stage of designing AI (see Figure 1 in the linked paper): data collection, data preparation, model development, model deployment, and model evaluation. With this in mind, I think it is safe to say that there is no easy fix for AI bias; it will take work from everyone, from the individuals collecting data to the developers building the models and the domain experts who use and evaluate these systems.
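To make that quote concrete, here is a minimal sketch of what two of those mathematical fairness checks actually compute. This is my own illustration, not code from the paper; it assumes binary predictions and a binary group attribute, and all the data is made up. Tellingly, the toy data below satisfies demographic parity while still failing equalized odds, which shows how leaning on any single metric can hide bias.

```python
# A minimal sketch (not from the paper) of two of the fairness metrics the
# quote mentions, assuming binary predictions and a binary group attribute.
# All data here is invented for illustration.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    """Max difference in true-positive and false-positive rates across groups."""
    gaps = []
    for label in (0, 1):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Toy example: predictions for 8 patients split across two groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))      # 0.0 -- parity holds...
print(equalized_odds_gap(y_true, y_pred, group))  # ~0.33 -- ...odds still differ
```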
For this synthesis, I am comparing the following three papers:
Developing Machine Learning Algorithm Literacy with Novel Plugged and Unplugged Approaches
Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes
Organizing a K-12 AI Curriculum using Philosophy of the Mind
All of these papers share the common goal of AI education but differ in the research methodologies they use to achieve it.
Starting with the first paper, the authors expose children aged 12 to 13 to two popular machine learning algorithms: decision trees and k-nearest neighbors. Students participated in a series of activities across eleven sessions. The activities were dynamic and interactive, using props to engage the students, and the researchers used familiar examples throughout the curriculum (pasta and penguins), which they believe made the students more engaged. While the students were given a set of questions to evaluate their understanding after the activities, the pilot study consisted of only four students, so no statistical conclusions could be drawn. The authors do leave us with some of the children's responses, which support the paper's claim that teaching machine learning algorithms to middle schoolers was successful. The paper concludes with the intent of recruiting more participants in a future study to arrive at quantitative results.
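For readers unfamiliar with the second of those algorithms, here is a rough sketch of the idea behind k-nearest neighbors, written in the spirit of the paper's penguin example. The study itself used unplugged props rather than code, and the feature values below are invented for illustration.

```python
# A toy k-nearest-neighbors classifier in the spirit of the paper's penguin
# example (my own illustration; the study used unplugged activities, not code).
import math

# (flipper length in mm, body mass in kg) -> species; values are made up.
training_data = [
    ((180, 3.7), "Adelie"),
    ((185, 3.9), "Adelie"),
    ((217, 5.2), "Gentoo"),
    ((220, 5.4), "Gentoo"),
]

def classify(point, k=3):
    """Label a new penguin by majority vote among its k closest neighbors."""
    by_distance = sorted(training_data,
                         key=lambda item: math.dist(point, item[0]))
    votes = [label for _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

print(classify((210, 5.0)))  # two of the three nearest are Gentoo -> "Gentoo"
```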
In Hitron et al. we see similarities to Ma et al.: the objective is to expose children to machine learning concepts. Hitron et al. do this by opening up the black boxes of Data Labeling and Evaluation. Like the first paper, Hitron et al. use an interactive, hands-on approach to learning; in fact, they credit much of their success to it. Using an input sensor device, students would draw shapes with the device and label them in the system accordingly (Data Labeling). Students would then make the same motions again and see whether the model could correctly predict the shape based on the training the participants had just performed (Evaluation). The children in this study were about the same age as those in the first paper. However, in contrast to Ma et al., a total of 30 kids participated in this research, and statistical data was presented. As with Ma et al., the paper found success in teaching these machine learning concepts to children.
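To picture those two black boxes, here is a hypothetical sketch of the workflow the children went through. Hitron et al.'s actual system used a physical gesture sensor, so the feature vectors and the simple nearest-neighbor matching below are my own stand-ins, not their implementation.

```python
# A hypothetical sketch of the two "black boxes" Hitron et al. open up:
# Data Labeling (collecting labeled examples) and Evaluation (checking
# predictions on fresh input). Their system used a physical gesture sensor;
# here a gesture is just a made-up feature vector.
import math

labeled_examples = []  # the Data Labeling phase fills this list

def label_gesture(features, label):
    """Data Labeling: a child performs a gesture and names it."""
    labeled_examples.append((features, label))

def predict(features):
    """Evaluation: guess the label of a new gesture via its nearest neighbor."""
    nearest = min(labeled_examples,
                  key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

# Children label a few gestures...
label_gesture([0.9, 0.1], "circle")
label_gesture([0.1, 0.8], "square")

# ...then repeat the motion and see whether the model gets it right.
print(predict([0.85, 0.15]))  # -> "circle"
```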
Finally, we take a look at Ellis et al., who explore merging philosophy into AI education. This approach is obviously quite different from that of the first two papers. According to the paper, introducing philosophical concepts into AI education would lead to more student engagement. Intuitively, this makes sense, as the philosophical side can attract a wider variety of students, perhaps some who are non-STEM. Although this paper does not include human-subject research with children, I thought it would be appropriate to study an approach different from the methodologies of the previous two papers. The authors develop a concept map along with several activities that can act as a guide for introducing AI concepts to high schoolers. Despite this being aimed at an older audience, after reading the paper I think that exposing even younger students to some philosophical and ethical questions can be quite useful, if only to introduce non-STEM children to the subject.
Hitron et al. and Ma et al. both took similar approaches to teaching machine learning concepts, with some key differences. Ellis et al. introduce a pedagogy for AI education that is very different from that of the first two papers. Perhaps combining the theory of Ellis et al. with the work of the first two papers could lead to some interesting results.