Generative AI has been making a splash across industries for the past year, and healthcare providers and investors are asking: how big an impact will AI have in our sector, and how quickly?
On my DigitalHealth InvestorTalk Show, I discussed this with Ron Razmi, MD, a cardiologist-turned-AI investor and author of the new book AI Doctor. My takeaway: because healthcare is a special and highly regulated industry, GenAI’s impact will likely play out differently, and more slowly, than in other industries.
A benefit of GenAI is that it often provides better, more direct answers to questions than the search engines it is replacing. However, GenAI’s mechanism is often a “black box,” which is to say that the engineers outside the black box can’t explain why the box produces the content that it does. This is because GenAI’s work product emerges from an optimization over millions of parameters fitted to the source content, and no single person can completely understand how the process works. Furthermore, GenAI can produce “hallucinations,” the term for output in which the AI agent makes up facts and conclusions.
In healthcare, where lives are on the line, the “black box” characteristic of GenAI is concerning, and the “hallucination” characteristic is unacceptable.
Dr. Razmi proposes a spectrum for predicting GenAI’s near-term impact on the healthcare delivery process, based on a single criterion: risk to human life.
Opportunities where there is a low risk to human life could see low product-development costs and the rapid development of GenAI solutions. On the low-risk side of the spectrum would be opportunities to automate billing issues, such as prior authorization and payment disputes. A little riskier would be a low-acuity diabetic patient getting in-the-moment AI coaching based on their own fresh personal data.
However, opportunities with a high risk to human life would face regulatory barriers, high development costs, and slow development of solutions. A high-risk scenario would involve changing the diagnosis and care plan of a patient with heart failure.
Across the whole spectrum of risk to life in the US healthcare system, GenAI holds the promise of more care, better care, and more affordable care. We get hints of this when GenAI models routinely score higher than most medical students on the MCAT and outscore specialty physicians on board exams. In spite of this, in the US system the doctor, not an AI agent, is still responsible for the patient, and this is unlikely to change.
GenAI remains simply a tool for the doctor, and the doctor remains responsible and liable for the whole picture: the care of the patient and the care outcomes. When an AI agent makes a recommendation that is sub-optimal or mistaken, the doctor is responsible for having followed its advice. Doctors are accountable for patient outcomes and for the support tools they choose to rely on (such as a blood pressure monitor or a GenAI decision-support tool).
For more discussion of these issues, check out our episode “The Good, The Bad, and the Ugly of AI in Healthcare” on the DigitalHealth InvestorTalk Show podcast, available on Apple Podcasts and Spotify.
About Steven Wardell
Steven Wardell is a former Wall Street digital health analyst and the managing partner of Wardell Advisors, which advises digital health companies on fundraising, growth, business development, and strategic alternatives. Follow him at X.com/StevenWardell.