Artificially intelligent technologies continue to grow in popularity as their capabilities revolutionize numerous sectors worldwide, from entertainment to law enforcement to healthcare. Within each area, AI plays unique roles while increasing the efficacy of human work. Simultaneously, however, implications arise as the technology designed to create ease becomes a potential source of complications. The complexity of applying AI deserves deliberate attention in healthcare, where it entangles itself in one of the most important parts of a person’s life: their health. While AI in healthcare can aid in the development of a more successful health experience for doctors, nurses, and patients alike, it may do more damage than expected as it compromises the privacy and rights of the patient.
Artificial intelligence was introduced to healthcare as early as the 1980s with the emergence of robotic-assisted surgical systems (RASS), used to administer minimally invasive surgical procedures (Väänänen et al. 4). The introduction of such technology revolutionized the surgical world and increased the efficacy and safety of many medical procedures. Since then, artificially intelligent technology has only continued to be imagined, designed, and integrated into the world of health, and the degree to which the public experiences the entanglement between AI and healthcare will only continue to grow.
The projected increase of AI in health is a genuine result of the necessity to ensure the sustainability of care. With “aging populations, congested hospitals, and shortages of health workers,” artificial intelligence offers solutions to these troublesome challenges (Thomason 1). AI allows for delegation that can result in faster processing, multitasking, and more time to dedicate to patient care. The applications of this technology are nearly endless: virtual nursing assistance, medical consultation, administration and workflow, automatic and preliminary diagnosis, health monitoring, medical imaging analysis, and more (Väänänen et al. 2). From this perspective, artificial intelligence completes the ultimate dream team for the efficacy of healthcare. By supporting medical practices with AI, hospitals, clinics, and doctors can reduce costs, provide preventative treatment, offer more accurate diagnoses, and ease the burden on workers (Väänänen et al. 11). These technologies are undeniably revolutionizing how health treatment is approached and have been designed with the intent of benefiting the patient.
Whether or not artificial intelligence benefits the doctor-patient relationship is debated in the field and across disciplines. Of course, those who support the integration of AI in healthcare believe that it can improve this relationship. By alleviating the “burden of performing the numerous tedious, repetitive, and often difficult tasks that physicians face,” AI gives healthcare providers the ability to devote more time to interactions with their patients (Aminololama-Shakeri and López 309). As patients develop stronger relationships with their doctors, their trust in the treatment they are receiving is strengthened, leaving them healthier and more hopeful. However, there is another side to this coin. While providers may have more time for their patients, some claims attest to an actual diminution of trust between doctor and patient. For example, because the algorithms used in these technologies continuously adapt, physicians may never be able to understand, let alone explain to their patients, how the decision-making of AI functions (Reddy et al. 492). A doctor working independently can walk a patient through a medical conclusion or diagnosis; if an algorithm determines it, however, there is very little ability to illustrate the reasoning. This “lack of transparency” is often referred to as the “black box issue” (Reddy et al. 492).
Tarleton Gillespie, in “The Relevance of Algorithms,” describes this problem as the hidden nature of the criteria algorithms use to make decisions (Gillespie qtd. in Kish). The vulnerability of being a patient demands a foundation of trust with one’s doctor and the expectation that the doctor will provide the most appropriate and effective solution based on reasonable, scientific conclusions. The inability to elaborate on a diagnosis made by an AI algorithm undermines this credibility: if a physician cannot explain why they have decided it is tuberculosis, how is a patient to trust the diagnosis?
As AI continues to be used and the relationship between medical conclusions and doctors becomes strained, healthcare providers may become over-reliant on this technology and evolve into mere translators of information from the machine to the patient (Reddy et al. 492). This, in turn, would diminish the doctor-patient relationship immensely. Therefore, while doctors may find themselves with more time to devote to patient communication, they may also experience a loss of trust and connection with the patient, compromising the doctor-patient relationship altogether.
Additionally, the work conducted by artificial intelligence does not stay within the bounds of the examination room. To function, algorithms require large amounts of data to process, analyze, and compute. Therefore, the information collected from a patient must be stored in databases that feed the operation of artificial intelligence. This is a vast amount of information; in fact, “30% of the world’s data is generated by the healthcare industry” (Thomason 2). More recently, hospitals and healthcare facilities have realized that this data represents untapped profit. Once monetized, patient data becomes the new “healthcare currency,” sold to researchers, medical manufacturers, pharmaceutical companies, governments, public organizations, private companies, and more (Thomason 2).
This marketplace of data brings in revenue for the institutions responsible for collecting it and provides valuable information for those who purchase it. The exchange can improve healthcare management and create a tailored, personal patient experience, whether through hospital visits, medical diagnoses, or pharmaceutical prescriptions (Thomason 2). Additionally, as healthcare grows increasingly data-intensive, these vast amounts of data contribute greatly to medical advancement. Many companies have already monetized their access to health data: Facebook has developed a search tool for local affordable care, the Apple Watch provides health metrics that can be sent to a provider, Google created a visual AI that diagnoses skin conditions, and Amazon offers AmazonCare, AmazonPharmacy, and AmazonDx (Thomason 2). This private-company involvement in health will only continue to grow as more data becomes available.
Even more importantly, the lines of ethics begin to blur as the patient gets lost in this marketplace of medical data. In this context, the patient becomes a source of revenue rather than the subject of care and attention. Their personal medical information is now treated as “free raw material for the translation into behavioral data,” a dynamic Zuboff calls surveillance capitalism (Zuboff 8). This commodification of human information drives industries to push for profit.
The implications of data extraction can already be seen in the pharmaceutical industry, where doctors prescribe and diagnose under the influence of the latest drug sold to them by a sales representative. By allowing patient information to be profitable, institutions become more susceptible to “nudging” or “modifying” the interpretations of their patients’ behaviors to increase revenue (Zuboff qtd. in Kish). This becomes very dangerous, as it can lead to misdiagnosis or maltreatment. It calls into question the ethical grounds of artificial intelligence in healthcare, the inevitable commodification of data, and the patient’s right to privacy.
Finally, not only do patients become extraction wells for data, they also are not receiving the “perfect, new-age” care they think they are. AI algorithms are tricky in the public sphere because of their ability to disguise themselves as altruistic and incapable of error. When approaching such technology, there is a “promise of algorithmic objectivity” in which users or participants believe the results they are receiving are “fair, accurate, and free from subjectivity” (Gillespie 179). However, quite the opposite is true. As mentioned previously, algorithms rely entirely on the data they receive: no data, no algorithmic operation. Whatever information is fed into an artificially intelligent technology is the foundation on which its conclusions will be made. Data that is lacking or incomplete will result in algorithmic answers that are lacking or incomplete.
This algorithmic bias is a serious issue in healthcare and can be dangerous. It is often the result of societal discrimination, such as poor access to health care, and of insufficient samples of minority groups, and it can “entrench or exacerbate health disparities” (Reddy et al. 492).
Inequalities already exist in healthcare, and data extracted from this environment risks overestimating or underestimating differences across geography, gender, race, and class.
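This data dependence can be made concrete with a small, purely illustrative sketch (the groups, biomarker values, and threshold rule are invented for demonstration and imply no real clinical model): a “diagnostic algorithm” that simply learns a decision threshold from its training data. Because one patient group is barely represented, the learned threshold reflects the majority group and misses cases in the minority group.

```python
# Hypothetical example: a toy diagnostic rule that learns a biomarker
# threshold from training data dominated by Group A. In this invented
# scenario, the condition presents at lower biomarker values in the
# underrepresented Group B, so the learned rule misses Group B cases.

# Synthetic readings: (biomarker value, has_condition)
group_a = [(70, True), (72, True), (75, True), (40, False), (45, False), (50, False)]
group_b = [(55, True)]  # underrepresented: a single sample

training_data = group_a + group_b

def learn_threshold(data):
    """Learn a cutoff as the midpoint between mean positive and mean negative readings."""
    pos = [v for v, sick in data if sick]
    neg = [v for v, sick in data if not sick]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

threshold = learn_threshold(training_data)  # 56.5, shaped almost entirely by Group A

def diagnose(value):
    return value >= threshold

# New Group B patients who all actually have the condition:
new_group_b_cases = [52, 55, 57]
missed = [v for v in new_group_b_cases if not diagnose(v)]
print(f"threshold={threshold:.1f}; missed {len(missed)} of {len(new_group_b_cases)} Group B cases")
# threshold=56.5; missed 2 of 3 Group B cases
```

The algorithm is not malicious; it faithfully summarizes the data it was given, and the data simply did not contain enough of Group B for the summary to serve them.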
Artificial intelligence can very well deepen the problems the health sector faces, all under the guise of mending them.
It cannot be denied that artificial intelligence lends a very helpful hand in healthcare by promoting the success of health institutions. While this research has shed light on the serious negative implications of the technology, the conclusion that AI should be eliminated from healthcare should not be drawn. There is great potential in this new revolution in health, one that can lead to expansive, life-changing research, more accurate care, and a better distribution of responsibilities. However, the issues intertwined with AI cannot be ignored either. It is absolutely necessary that preexisting and developing healthcare AI be closely analyzed and revised to account for the potential biases, the privacy infringements, and the delicacy of the doctor-patient relationship that come with this technological revolution. This will require assembling multidisciplinary professionals who approach healthcare AI from all directions to mitigate as many of these problems as possible.
Works Cited
Aminololama-Shakeri, Shadi, and Javier E. López. “The Doctor-Patient Relationship with Artificial Intelligence.” American Journal of Roentgenology, vol. 212, no. 2, 2019, pp. 308–310, https://doi.org/10.2214/ajr.18.20509.
Gillespie, Tarleton. “The Relevance of Algorithms.” Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot, MIT Press, 2014, pp. 167–193.
Kish, Zenia. “Gillespie’s Relevance of Algorithms.” Lecture, University of Tulsa, 22 Sept. 2021.
Kish, Zenia. “Zuboff’s Surveillance Capitalism.” Lecture, University of Tulsa, 4 Oct. 2021.
Reddy, Sandeep, et al. “A Governance Model for the Application of AI in Health Care.” Journal of the American Medical Informatics Association, vol. 27, no. 3, 2019, pp. 491–497, https://doi.org/10.1093/jamia/ocz192.
Thomason, Jane. “Big Tech, Big Data and the New World of Digital Health.” Global Health Journal, 2021, https://doi.org/10.1016/j.glohj.2021.11.003.
Väänänen, Antti, et al. “AI in Healthcare: A Narrative Review.” F1000Research, vol. 10, 2021, https://doi.org/10.12688/f1000research.26997.2.
Zuboff, Shoshana. “Introduction.” The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, PublicAffairs, 2019, pp. 3–24.