ENGL1180Z Fall 2024

AI and the Future of Medicine

Decades ago, Dr. Antony Chu was called in to conduct an ‘interrogation’—an investigation of the data collected from the machines monitoring a toddler who had just died. The baby had worn a pacemaker, a battery-powered chest implant prescribed earlier to keep the heart from beating too slowly. The family had requested the interrogation to understand why the pacemaker had failed to keep their child alive.

Chu discovered that the cause of death was the opposite of the original problem: The baby’s heart had begun beating too rapidly, and the child had died suddenly.

“We’re very fragile bioelectric organisms,” says Chu, a cardiologist and researcher who specializes in the heart’s electrical system. 

He recounts this story from his early clinical practice to explain the motivation behind his current work. The child was given a pacemaker because the heart was not beating quickly enough. But had the pacemaker been a defibrillator—a device that applies electric shocks to the heart to restore a normal rhythm—the child would have survived when the rapid heartbeats began. At the time, Chu was disturbed, and wondered why existing medical care had not addressed the cause of this preventable death.


Today, Chu has turned to artificial intelligence for new medical interventions that might enable better diagnoses and risk assessments, which could provide preventative, lifesaving care. And he is not the only one. Doctors and patients alike have seen potential in AI, pointing to the technology’s unparalleled capabilities to monitor, interpret, calculate, and scale as the solution to many of the problems and bottlenecks that pain healthcare in America.

Recent research points in a similar direction. A 2024 study showed that large language models (LLMs) like ChatGPT achieved groundbreaking scores in ‘diagnostic reasoning’—thinking through medical diagnoses and producing thorough, correct answers. ChatGPT outperformed all of the doctors in the study, even those who had access to the LLM for support. In 2023, a young boy received a correct diagnosis from ChatGPT after seeing 17 doctors with no improvement; he eventually underwent surgery for the condition and recovered well. But not all AI systems are LLMs, and most AI with medical uses does not generate clinical decisions. Non-LLM AI has been present in medicine since at least the 1970s. Yet physicians currently hold a wide spectrum of views on whether AI can enhance, or even replace, the traditional model of medical care.

KardioStatus Team | Left to right: Emily Wang, digital health analyst; Antony Chu, CEO; Anshul Parulkar, COO | Source: KardioStatus

Chu founded his startup KardioStatus in Providence, RI. The company strives for “better living through biometrics,” the slogan on its punchy red, white, and yellow website. It is developing AI-based software that collects biometric data from individual patients through wearable devices and feeds it to an AI model that predicts certain health outcomes.

“It’s like predicting weather,” Chu says. Appropriately, Chu calls the strategy forecasting: For each patient, the technology will produce a likelihood of future health issues. If a patient has too little oxygen in their blood, for example, the device will alert the patient’s doctor. 

This risk analysis is crucial for preventative medical care. If the young child who died had been monitored—and been predicted at risk for an increased heart rate—perhaps their caretakers and physicians would have had more time and knowledge to take action, Chu explains.

But unlike some other AI projects, KardioStatus aims to aid doctors in decision making, not to replace them. Chu’s tone softened as he reflected on stories of his mentors. These doctors, he said, would give him insights every day that “you could not validate for decades” without extensive research, yet the doctors were always right. Chu is committed to prioritizing physicians’ expertise and their sovereignty in making clinical decisions. He wants KardioStatus to facilitate the clinical intuition that he believes distinguishes the best doctors he has seen.

Yet some doctors believe AI is a more powerful tool on its own. Adam Rodman, an internal medicine doctor and medical historian, researches LLMs and recently designed an experiment that found ChatGPT-4 to outperform doctors in diagnostic reasoning. Doctors who were given resources such as ChatGPT improved only slightly over doctors who did not have access, but ChatGPT alone achieved higher scores than both groups.

“Diagnosis is probably a task that is going to be performed better by computers than humans,” Rodman concluded. 

While replacing doctors seems like an impossible scenario — unlikely in the near future, since it would require significant changes in laws and regulations — Rodman challenges the idea that AI cannot match or even surpass doctors’ intuition. For him, LLMs have “intuition” too. And where AI models have disadvantages, doctors are not perfect either. “Humans get our intuition by all the flawed ways that we operate. We have heuristics. We are very biased… and we never get any feedback,” Rodman says.

If AI can diagnose patients more accurately than humans, Rodman believes, then the most rational way to maximize patient outcomes is to let the AI diagnose. “We [doctors] have a very high opinion of ourselves, and we're reaching a point where I don't think we can make that assumption. We actually need to test that now.”


Scientists and physicians have been trying to automate medicine for decades. One example is the electrocardiogram machine, the heart monitor familiar from movies and TV shows. These machines automatically provide an interpretation of the heart measurements they record, the result of an “algorithm that is based on some version of AI,” says Dr. Gaurav Choudhary, the director of research in the cardiology department at Brown Health.

For decades, doctors have reviewed these automated electrocardiogram interpretations and chosen when to agree with the findings. Though less noticeable and less cutting-edge than current AI developments, basic AI algorithms and principles have been making a crucial impact in clinical medicine, especially cardiology, for many years. In another example, a computer system called AAPHelp was created in the 1970s to diagnose appendicitis. While it achieved great results in randomized controlled trials, where it helped decrease error and mortality rates, it never gained traction: it was effective only for a specific use case, acute abdominal pain, and could not generalize the way a doctor can.

The current landscape of AI applications in medicine is broad, ranging from LLMs that provide medical expertise to devices that act more like conventional medical tools. KardioStatus is one example of how non-diagnostic AI is being integrated into medicine. And while KardioStatus is just starting up, medical institutions in Rhode Island have already begun applying AI directly in clinical work, especially in cardiology.

Dr. Daniel Philbin is very familiar with these developments as the Director of Clinical Cardiac Electrophysiology at the Brown University Health Cardiovascular Institute. In 2022, he was one of the physicians who participated in an international clinical trial of Volta Medical’s VX1, the first AI-based interventional cardiac electrophysiology tool approved by the FDA.

Volta Medical VX1's user interface
Source: Volta Medical

VX1's mapping system
Source: Volta Medical

The VX1 is used specifically to assist ablation treatments for atrial fibrillation (AF), a heart condition marked by an irregular and rapid heart rhythm. AF is the most common heart rhythm disorder, affecting 10.5 million adults in the United States. Ablations are minimally invasive procedures that use small, targeted burns to scar heart tissue and disrupt the causes of irregular heartbeats.

When treating AF, there are many signals and factors to observe and analyze — an amount of information that is very challenging for a doctor to assess in real time. The VX1 helps by identifying tissue that is likely causing AF, so the operator can more easily recognize and target these problem areas during treatment.

This past May, Volta Medical released performance details from its clinical trial, Tailored-AF, at a conference Dr. Philbin attended. Around 370 patients, 26 treatment centers, and 51 physicians worldwide were involved in the study. “The suppression of AF in the Volta-treated group was strikingly better,” says Philbin: 66% of patients who received ablations guided by the VX1 experienced no further acute AF, compared to 15% of patients in the control group, which did not use the VX1.

“It’s early – in my world, 300 patients is a pretty small trial,” says Dr. Philbin. But the result is still promising, and his team already uses the technology to treat patients. He believes he has seen the device make a significant difference in patients’ health outcomes.

“I never surrender judgment to the VX1,” Dr. Philbin stresses. “It’s giving me information, but I’m never letting the VX1 guide the procedure.”

Robotic systems have been built in the past to conduct AF operations, but Dr. Philbin notes that they have mostly “fallen out of favor” due to limited use cases and high cost. “No matter how you tell [AI] to judge contact [during an operation]… it’s just not as good as I am, at least right now.”

Philbin explains that the medical field is always hungry for solutions, which is why he is exploring AI in his clinical practice. “It's not like this is the only attempt for this problem that's been made,” he says.


“Even with good insurance, it takes people months to see doctors, and with a lot of diagnostic errors, and long hospital stays,” says Rodman. He believes that medicine needs to take AI seriously to open doors for a deep, impactful transformation of the current medical care model. “The system we have now is fragmented,” he says. “It’s the most expensive healthcare system in the world, that gets worse outcomes, where people no longer have primary care doctors, and they can't get access to care.” He warns that the healthcare field needs to be realistic about this grim status quo, and use it as the baseline when thinking about what the goal of using AI should be.

There are many possibilities for what a world with more integrated diagnostic AI could look like. Rodman paints a picture: “I imagine a world where you've got back pain, and your first stop isn't necessarily the doctor. It's an AI system that's offered by your hospital system,” he says. The AI could triage the patient and recommend next steps, such as physical therapy. It could also determine that the issue is more concerning and quickly direct the patient to the emergency room, or perform check-ups between visits.

“Rethinking the role of the human could lead to better care for all of our patients,” Rodman says. 

Dave deBronkart, a patient advocate and activist, supports the rise of powerful LLMs that can help patients take charge of their treatment plans. The idea of empowering patients through the difficult health trials they endure is personal to deBronkart. In 2007, doctors found tumors in his lungs, bones, and muscle tissue. He was diagnosed with stage IV kidney cancer and given a median survival time of 24 weeks.

Dave deBronkart | Source: Wikipedia

In his TEDx talk from 2011, deBronkart recounted how his doctor "prescribed" him a website where cancer patients gather and converse. It was through this online forum that peers quickly told deBronkart to seek specialist care: there was a treatment, high-dosage interleukin, which sometimes helps kidney cancer. “Most hospitals don’t offer it, so they won’t even tell you it exists,” deBronkart recalls his peers saying. The online patients gave him the contacts of several doctors near him, and the interleukin treatment eventually saved his life. “How amazing is that?” deBronkart exclaims in the talk, as the crowd roars with applause.

He survived his stage IV cancer diagnosis. In an interview, deBronkart stressed that, as his oncologist put it, “an important part of how that happened is that [he] was proactively engaged and involved” instead of being passive. deBronkart calls himself e-Patient Dave, the ‘e’ standing for an array of descriptors: engaged, empowered, equipped, and enabled.

While deBronkart was able to receive lifesaving care through the information provided by patient networks, he emphasizes that this critical knowledge is still difficult for patients to access. He believes that AI can alleviate this issue. 

Recently, deBronkart pasted the visit notes from a doctor’s appointment into ChatGPT and asked for a summary and a list of action items for him as a patient. The note had “lots of information to digest” and was difficult to read because of the “archaic” formatting capabilities (no boldface, for example) of the note-taking systems that doctors use. ChatGPT immediately produced a relevant to-do list, with details and descriptions of the health issues he and his doctor were working on. For deBronkart, this is only one example of how ChatGPT is producing an “unprecedented amount of patient autonomy.” AI can also equip patients by teaching them how to treat minor ailments, such as a cold or a cut finger. In many ways, AI provides assistance that mirrors the support deBronkart received from online patient communities during his battle with cancer.

But while deBronkart willingly offered his personal health information to ChatGPT, some worry about what putting that data in the hands of tech corporations could mean in the long term.

“That's the most personal data you will ever have,” Choudhary says, so it must be treated carefully. This is a policy issue, too: “There are laws around HIPAA,” a strictly enforced patient privacy law that governs the sharing of patient medical data. ChatGPT is not a HIPAA-compliant tool.

Even if AI models are not directly trained on a patient’s profile, an individual’s medical history can still be revealing. “If you have enough data points on somebody, you really don't need to know their name, or where they live, or their date of birth,” says Choudhary. “You can identify that person based on the driving from this place to this place every day.” The same principle applies to medical data: even without identifying information, there may still be enough to say, “This is the same person.”

“I think privacy is number one,” says Choudhary. “But you know, the generations are changing, people who are growing up now are much more liberal about sharing than the people that were there before. I think society is changing. Expectations are changing, culture is changing. So I don't know where it'll go.” Still, his concerns remain unwavering.

deBronkart echoes Choudhary’s concerns. “It's something we [the patient community] have been talking about forever,” he says, pausing at the thought. He knows it is difficult to trust what will be done with your data once you give it away. His advice: never give the internet any data you don’t want your ‘enemy’ to see.


This conversation also raises questions about the role of physicians. After his speeches, deBronkart often gets to interact with doctors in his audience. When he spoke about AI equipping patients, one physician approached him and called the idea of patients getting any useful information from AI “ridiculous.” deBronkart remembers another saying, “I was trained that my value to society is that I know things that patients don't know. If a patient knows something that I don't know, then who am I?”

Rodman recognizes that a future with integrated AI decision-making systems would be a tremendous change, but he has hope that his job will transform in a way that maximizes the impact of both the AI and the provider. Many aspects of the physician’s role remain crucial, even in a world where AI can diagnose accurately: “helping to coordinate care,” Rodman says, or helping patients understand how their treatments might fit with their lifestyle and life circumstances.

“Those are all very important things, in some ways more important, that doctors do.”

Commentary

I started writing this piece from my own skepticism of AI's efficacy in medicine, especially at the point of clinical care and diagnosis. While I wanted to hear that AI can be useful in medicine somehow, I doubted that current AI is complex and refined enough. Because of this, I questioned all my interviewees about the cumbersome task of incorporating novel AI tools into an already-busy physician workflow, and about whether AI that is only effective in one specialty could be worth integrating into a complex medical system. My skepticism was proven both right and wrong, and my understanding of the challenges and successes of AI in medicine has expanded beyond what I could have imagined. Researching local and university news about AI applications in medicine led me to many inspirational scientists and physicians who gave me invaluable insights across a wide scope of current medical AI developments. Because of this, it was difficult to synthesize all that I learned into one cohesive story. I was also unprepared to respond to insights that contradicted my initial skepticism, which limited my ability to ask follow-up questions. I realized that there's a lot to say on this topic, and more nuance than I can explain for now. I am deeply indebted to all of the people who gave me their time, expertise, and wisdom. I'm grateful for their meaningful stories and for their curiosity and integrity when facing the possibilities and consequences of AI.

Audio Piece 

Access a short story about a Brown University student's experience using ChatGPT for medical advice here

Interview Sources

  1.  9/26/2024: Dr. Antony Chu

  2.  9/27/2024: Dr. Gaurav Choudhary

  3. 10/5/2024: Emily Wang

  4. 10/10/2024: Dr. Daniel Philbin

  5. 10/21/2024: Dr. Carsten Eickhoff

  6. 12/3/2024: Dr. Adam Rodman

  7. 12/7/2024: Dave deBronkart

References


TED. “Dave DeBronkart: Meet E-Patient Dave.” YouTube, 1 July 2011, www.youtube.com/watch?v=oTxvic-NnAM. Accessed 29 Feb. 2020.


Goh, Ethan, et al. “Large Language Model Influence on Diagnostic Reasoning.” JAMA Network Open, vol. 7, no. 10, 28 Oct. 2024, p. e2440969, https://doi.org/10.1001/jamanetworkopen.2024.40969.


Hofmann, Courtney Morales. “A Letter from the Founder.” Pcmedfolio.com, 16 Aug. 2024, www.pcmedfolio.com/blog/a-letter-from-the-founder. Accessed 16 Dec. 2024.


Kaul, Vivek, et al. “History of Artificial Intelligence in Medicine.” Gastrointestinal Endoscopy, vol. 92, no. 4, 1 Oct. 2020, pp. 807–812, www.giejournal.org/article/S0016-5107(20)34466-7/fulltext, https://doi.org/10.1016/j.gie.2020.06.040.


Rodman, Adam. “Can AI Make Medicine More Human?” Harvard.edu, 16 Oct. 2024, magazine.hms.harvard.edu/articles/can-ai-make-medicine-more-human.


Liu, Rong, et al. “A Review of Medical Artificial Intelligence.” Global Health Journal, vol. 4, no. 2, May 2020, https://doi.org/10.1016/j.glohj.2020.04.002.

“How Many People Have A-Fib? Three Times More than We Thought.” UC San Francisco, 12 Sept. 2024, www.ucsf.edu/news/2024/09/428416/how-many-people-have-fib-three-times-more-we-thought.


“Volta Medical Presents Results from the First Transatlantic Randomized Controlled Trial Comparing AI-Assisted Ablation Procedure with the Conventional Treatment for Persistent Atrial Fibrillation Patients.” Volta-Medical.eu, 2023, www.volta-medical.eu/press-releases/volta-medical-presents-results-from-the-first-transatlantic-randomized-controlled-trial. Accessed 16 Dec. 2024.


Alder, Steve. “Is ChatGPT HIPAA Compliant?” The HIPAA Journal, 15 Dec. 2023, www.hipaajournal.com/is-chatgpt-hipaa-compliant/.


deBronkart, Dave. “Include Patient Users in Co-Creation of AI and Related Policy. The Need Is Urgent! #PatientsUseAI.” E-Patient Dave, 2 Apr. 2024, www.epatientdave.com/2024/04/02/include-patient-users-in-governing-ai-the-need-is-urgent-patientsuseai/. Accessed 16 Dec. 2024.

