By Meera Vinod
As AI-based solutions become more pervasive across industries, it becomes ever more imperative to ensure that the algorithms behind them are fair and just to the people who face the consequences of their decisions. This is especially important in healthcare, where algorithms are increasingly used to make important medical care decisions for patients. In this article, we look at how algorithm design can lead to biased outputs and briefly go over some potential solutions.
Faulty algorithms at work:
At the heart of AI solutions are algorithms trained to judge an input scenario and reach an appropriate decision without human intervention. Using training datasets as a reference, the machine learning model learns to identify the right solution to the problem. But herein lies the problem: many datasets do not accurately represent the user base they will eventually serve. For example, chest X-ray models trained mostly on images from male patients have been found to underperform when reading X-rays of female patients. Similarly, skin cancer detection algorithms trained on fair-skinned people were found to have lower accuracy for people of color.
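To make this failure mode concrete, here is a minimal, hypothetical sketch in Python (not taken from any of the studies above). One group is heavily under-represented in the training data, and the label depends on different features for each group; overall accuracy then looks acceptable while per-group accuracy exposes the gap:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical setup: a group attribute (e.g. sex) with one group
# under-represented, and a label that depends on different features
# for each group, so a model fit mostly on one group transfers poorly.
n = 5000
group = rng.choice(["male", "female"], size=n, p=[0.9, 0.1])  # skewed sample
X = rng.normal(size=(n, 5))
w_male = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
w_female = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
logits = np.where(group == "male", X @ w_male, X @ w_female)
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X[:4000], y[:4000])

# Overall accuracy hides the disparity; per-group accuracy exposes it.
X_test, y_test, g_test = X[4000:], y[4000:], group[4000:]
pred = model.predict(X_test)
print("overall:", accuracy_score(y_test, pred))
for g in ["male", "female"]:
    mask = g_test == g
    print(g, accuracy_score(y_test[mask], pred[mask]))
```

Evaluating only the overall score is exactly how such gaps slip through; the per-group breakdown is what catches them.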
Sometimes biases and inequalities from the real world carry over into the datasets. The ML model learns these inequalities and feeds them back into its decision-making process. One such algorithm was studied by Dr. Ziad Obermeyer and his team at UC Berkeley. They examined an algorithm widely used by hospitals and insurance companies to identify high-risk patients for a tailored care program. The program was intended to single out high-risk patients so they could see trained staff and use lab facilities more frequently, thereby reducing unexpected and expensive visits to emergency care. To train the algorithm, the machine learning engineers used a patient's past healthcare costs to predict candidates for the program, on the assumption that patients with higher medical costs were likely sicker to begin with. But this design choice made the algorithm biased against Black patients: because less money was spent on the care of Black patients than on white patients with the same conditions, Black patients with the same level of critical health problems were predicted to have lower care needs. As a result, the algorithm deemed fewer critically sick Black patients eligible for the program than their white counterparts.

There are multiple reasons for this gap in spending. For one, Black patients are disproportionately economically disadvantaged, face more barriers to accessing quality healthcare, and tend to have fewer follow-up visits even with insurance. The prevalence of distrust of the healthcare system in African American communities (tracing back to the Tuskegee study) is another factor deterring people from accessing healthcare in proportion to their needs. The study shows how a biased algorithm can produce worse outcomes for patients of color.
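The mechanism is easy to reproduce in a toy simulation. The sketch below is hypothetical and is not the study's actual model or data: true illness burden ("need") drives costs, but one group's costs are systematically deflated, so ranking patients by cost under-selects that group's sickest members.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation of the proxy-label problem (not the study's data).
# "need" is a patient's true illness burden; observed "cost" is deflated
# for Black patients at the same level of need, reflecting unequal access.
n = 100_000
race = rng.choice(["black", "white"], size=n, p=[0.2, 0.8])
need = rng.gamma(shape=2.0, scale=1.0, size=n)           # true illness burden
access = np.where(race == "black", 0.7, 1.0)             # unequal spending
cost = need * access * rng.lognormal(0.0, 0.2, size=n)   # the proxy label

# A cost-trained model, at best, recovers cost itself. Rank by cost and
# enroll the top 3% in the high-risk program.
enrolled = cost >= np.quantile(cost, 0.97)

# Black patients are enrolled at a lower rate, and those who are enrolled
# had to be sicker to cross the same cost threshold.
for r in ["black", "white"]:
    in_group = race == r
    rate = enrolled[in_group].mean()
    sickness = need[in_group & enrolled].mean()
    print(f"{r}: enrollment rate {rate:.4f}, "
          f"mean need of enrolled {sickness:.2f}")
```

The lesson is that the model did exactly what it was asked to do; the bias entered when cost was chosen as a stand-in for health need.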
Correcting for biases:
Inculcating a sense of data empathy during data collection is one way to reduce bias in AI solutions. Data empathy means gaining a full understanding of the context surrounding the data being collected and the collection processes involved. This alerts data scientists to possible flaws and biases in the data and helps them take corrective steps before training the model. Interdisciplinary collaboration between the machine learning engineers who build the model and experts with deep knowledge of the domain helps achieve this faster. Companies should also hire people from diverse backgrounds as data scientists, machine learning engineers, and leaders; the breadth of their life experiences can bring more empathy into the data collection process.
One of the biggest public concerns about AI is its extensive need for data collection and monitoring of users' activities. Companies can improve public confidence by making the workings of their algorithms transparent, but private companies are unlikely to publicize their algorithms voluntarily. Governments can play a role here: by mandating bias assessments similar to corporate audits, they can regulate the unfair use of AI. A bias assessment can start with something as simple as reporting a model's error rates broken down by demographic group, as sketched below.
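As an illustration of what such an assessment could compute, here is a small, hypothetical helper; the metric shown (the gap in true-positive rate between groups, sometimes called an equal-opportunity gap) is one common choice, not a regulatory standard:

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Per-group true-positive rate and the largest gap between groups,
    one simple metric a bias assessment might report."""
    tprs = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)   # positives in this group
        tprs[g] = y_pred[mask].mean()         # fraction correctly flagged
    vals = list(tprs.values())
    return tprs, max(vals) - min(vals)

# Toy usage with made-up labels and predictions:
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(equal_opportunity_gap(y_true, y_pred, group))
```

An auditor would run a battery of such metrics on held-out data and flag models whose gaps exceed an agreed threshold.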
It is impossible to do away with AI given how deeply it has become integrated into human workflows. But we can take corrective measures to make algorithms fairer and a force for good in our society.
References:
Dissecting racial bias in an algorithm used to manage the health of populations
https://www.science.org/doi/full/10.1126/science.aax2342
Responsible AI practices - Google AI
https://ai.google/responsibilities/responsible-ai-practices/
Understanding AI ethics and safety
Health care AI systems are biased - Scientific American
https://www.scientificamerican.com/article/health-care-ai-systems-are-biased/
Insights into AI bias in healthcare - Booz Allen Hamilton
https://www.boozallen.com/c/insight/blog/ai-bias-in-healthcare.html