By Jakob Kraiger
Artificial intelligence (AI) is a popular topic, and healthcare is no exception. Our healthcare system faces multiple challenges, one of the biggest being the aging population. In addition, it is dealing with increasingly complex patients. Given these challenges, current healthcare spending (in the US already 18% of GDP) (1) might not be able to sustain the quality of care in the future. Furthermore, a tremendous global shortfall of doctors, nurses and midwives is projected through the year 2030. (2) AI is one way of meeting these challenges. AI and the automation of processes in daily healthcare delivery can transform medicine and help sustain or even improve current standards by making medicine better, more accessible, more predictive and more effective for patients. (3,4) It might help us improve medical systems around the world, especially where resources are scarce. (5) However, there are understandable concerns, dangers and important aspects to keep in mind while adopting AI as a new and promising technology in our healthcare system. This article focuses on two paramount issues, privacy concerns and algorithmic bias, and on how to overcome them through human-centered design in the adoption of AI in healthcare.
Humans are biased, and so are the systems that collect hospital data: “Data embeds historical bias and historical practice.” (6) The result of systematic biases in healthcare, amplified by algorithmic decision support, can be devastating and can widen healthcare inequalities (7): In 2019, a study led by UC Berkeley’s Prof. Ziad Obermeyer revealed that a widely used algorithm (the investigated software was applied to around 70 million patients per year in the US alone) discriminated against Black patients. The software was used to identify patients who needed extra help, such as follow-up visits with their doctors, due to the complexity of their disease. The bias occurred because the algorithm used healthcare cost as a proxy for health needs. Because less money is spent on Black patients than on equally sick white patients, the algorithm falsely concluded that the Black population was healthier. (8) As a result, white patients received on average more healthcare. This tremendous failure was only revealed because of the attentiveness of one scientific research group. “Algorithms can reproduce and even scale up the racial biases in our society and in our data sets.” (9) One methodology for avoiding these pitfalls is human-centered design: Human-centered design in healthcare AI requires us to “struggle with the implications of taking our racist, sexist, homophobic, ageist, and ‘othering’ past and automating inequality into the future, which is what an AI system using historical data to predict the future will do.” (7)
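The proxy mechanism behind this failure can be made concrete with a minimal, entirely synthetic sketch (all numbers and group labels are invented for illustration, not taken from the study): two groups have identical health needs, but one group historically generates lower costs at the same level of illness, so an algorithm that flags the costliest patients for extra care systematically under-selects that group.

```python
import random

random.seed(0)

# Synthetic sketch: groups "A" and "B" have identical true health needs,
# but group B historically generates lower costs at the same level of illness.
def simulate_patient(group):
    need = random.uniform(0, 10)                 # true health need (hidden)
    cost_factor = 1.0 if group == "A" else 0.6   # group B is under-spent on
    cost = max(need * cost_factor + random.uniform(-0.5, 0.5), 0.0)
    return need, cost

patients = [(g, *simulate_patient(g)) for g in ["A", "B"] * 5000]

# The flawed logic: flag the costliest 20% of patients for extra care,
# using cost as a proxy for need.
threshold = sorted(c for _, _, c in patients)[int(len(patients) * 0.8)]
flagged = [(g, need) for g, need, c in patients if c > threshold]

share_b = sum(1 for g, _ in flagged if g == "B") / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%}")  # far below 50%
```

Although both groups are equally sick by construction, almost all flagged patients come from group A, because the proxy (cost) encodes the historical spending gap rather than actual need.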
These topics matter not only in the scientific world but also in the healthcare industry: Many firms are integrating AI tools into their organizations right now. Therefore, conversations about potential algorithmic harm need to take place in every firm, not only in academia. “The healthcare AI product development cycle should require explicitly evaluating every product for justice, equity, diversity, and inclusion before it can advance to market.” (7) Before deploying a new algorithm, we have to ask “for whom does this fail?” or “for whom does this deliver algorithmic harm?” (10) “We need checks in our system that make us pause to consider if the biases are going to cause unintended harm.” (7) These evaluations put the human at the center of development. This is what human-centered design is all about.
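One concrete form such a pre-deployment check could take is a per-group audit: before release, compare how often the model catches truly high-need patients in each group. The function and data below are a hypothetical sketch of that idea, not an established audit tool.

```python
from collections import defaultdict

# Hypothetical pre-deployment audit: for each group, what fraction of
# truly high-need patients does the model actually flag for extra care?
# Records are (group, flagged_by_model, truly_high_need) tuples.
def disparity_report(records):
    stats = defaultdict(lambda: {"need": 0, "caught": 0})
    for group, flagged, high_need in records:
        if high_need:
            stats[group]["need"] += 1
            stats[group]["caught"] += int(flagged)
    return {g: s["caught"] / s["need"] for g, s in stats.items() if s["need"]}

# Invented example records for illustration.
records = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]
report = disparity_report(records)
print(report)  # group A's high-need patients are caught twice as often as B's
```

A gap between the groups' rates is exactly the kind of signal that should make a team pause and ask "for whom does this fail?" before the product advances to market.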
The algorithm mentioned above (Obermeyer et al.) was widely used in the US, which shows that these systems are already widespread: algorithmic decision support is already part of our healthcare, and the unpleasant truth is that all of this happens without informing the population. Technology has been outpacing regulation for decades. This problem is part of our daily life and becomes apparent whenever news of the next big data leak breaks. However, there is an important difference between privacy concerns about your text-messaging provider or your movie-streaming platform and those about your health data. Privacy concerns about your electronic health records are more than justified, since your health data can be a terrible weapon against you in the wrong hands: Imagine your insurance company learning about your increased likelihood of developing an incurable disease, due to a data leak around your sequenced genome (or due to a partnership between the genome-sequencing start-up and your insurance company). The insurance company might decide that the treatment for your condition is too expensive and cancel your membership. Human-centered design is a way to mitigate these risks: Researchers in Germany and London have developed a technology that puts the human first: their algorithm protects patients’ data while being trained on health information. This should not be the exception but the default. Co-author of the study Daniel Rueckert says: “Guaranteeing the privacy and security of healthcare data is crucial for the development and deployment of large-scale machine learning models.” (11)
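To give a flavor of how training can protect individual patients, here is a minimal sketch of one ingredient used in privacy-preserving machine learning generally: clipping each patient's gradient contribution and adding noise before aggregation, the core idea behind differentially private training. This is an illustrative toy on made-up numbers, not the specific method of the cited study.

```python
import random

random.seed(1)

# Sketch of differentially private aggregation: bound each patient's
# influence (clipping), then add noise so no single record can be
# reconstructed from the aggregate.
def private_gradient(per_patient_grads, clip=1.0, noise_scale=0.1):
    clipped = []
    for g in per_patient_grads:
        norm = abs(g)
        clipped.append(g * min(1.0, clip / norm) if norm > 0 else g)
    noisy_sum = sum(clipped) + random.gauss(0, noise_scale * clip)
    return noisy_sum / len(per_patient_grads)

grads = [0.3, -2.5, 0.8, 4.0]  # one outlier patient dominates the raw mean
result = private_gradient(grads)
print(result)  # each patient's influence is now bounded by the clip value
```

Because every individual contribution is capped at the clip value and masked by noise, an observer of the aggregate cannot infer much about any single patient, which is the human-centered property the quoted researchers argue should be the default.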
Given its power and the variety of possible use cases, we have to ask what AI should actually be used for in the healthcare space. Artificial intelligence in healthcare should not be designed solely around what is technically feasible. It should be centered on the human need to actually improve patient care and the working conditions of healthcare providers, while guaranteeing the privacy and security of healthcare data. Currently, unrealistic expectations, the hype around machine learning and AI, and the effective product marketing of companies bring tools into the clinic before they are ready. (12) Human-centered design in AI should help avoid these premature releases and prompt questions like: “Hang on here. What are we building? And should we use it this way?” (13) Therefore, “now is the time for conversations and start focusing on humans of different backgrounds, lifestyles, experiences because those who are most vulnerable are being left behind even faster as technology and digital transformation processes accelerate.” (13) Any AI model should be developed to solve meaningful, human-centric problems. It is therefore crucial that healthcare providers, medical doctors and computer scientists work together on the most pressing problems.
Once we have protected our privacy and secured justice, equity, diversity, and inclusion with the help of human-centered design, we can wholeheartedly enjoy the advantages of artificial intelligence in healthcare.
References
1) NHE Fact Sheet. www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NHE-Fact-Sheet. Accessed 13 October 2021.
2) Global Strategy on human resources for health: Workforce 2030, World Health Organization, 2016, https://www.who.int/hrh/resources/pub_globstrathrh-2030/en/. Accessed 13 October 2021.
3) Jonathan Tyler, Sung Won Choi, Muneesh Tewari, Real-time, personalized medicine through wearable sensors and dynamic predictive modeling: A new paradigm for clinical medicine. Current Opinion in Systems Biology. 2020
4) Transforming healthcare with AI: The impact on the workforce and organizations. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/transforming-healthcare-with-ai?cid=eml-web. Accessed 13 October 2021.
5) Yu, KH., Beam, A.L. & Kohane, I.S. Artificial intelligence in healthcare. Nat Biomed Eng 2. 2018
6) Cathy O’Neil, https://mitsloan.mit.edu/ideas-made-to-matter/how-can-human-centered-ai-fight-bias-machines-and-people. Feb 2021. Accessed 25 November 2021.
7) Katharine Miller, https://hai.stanford.edu/news/ai-health-how-prioritize-humans. Nov 2021. Accessed 19 November 2021
8) Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Oct 2019
9) Dissecting Algorithmic Bias, Ziad Obermeyer, AI for Good Discovery, https://www.youtube.com/watch?v=U5MlyFsMi-E. Accessed 14 November 2021
11) Georgios Kaissis et al., "End-to-end privacy preserving deep learning on multi-institutional medical imaging". Nature Machine Intelligence. May 2021
12) Will Douglas Heaven, https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/. July 2021. Accessed 16 November 2021.
13) Dr. Renee Richardson Gosline, https://mitsloan.mit.edu/ideas-made-to-matter/how-can-human-centered-ai-fight-bias-machines-and-people. Feb 2021. Accessed 25 November 2021.