
AI is poised to transform health care. How do we get it right?
When it comes to using artificial intelligence in health care, the question is no longer if it will happen — or even when. AI is here. The relevant questions now are the degree to which AI will influence medicine, what forms that influence will take, how doctors can maintain oversight of patient care, and what kinds of guardrails are needed to minimize the potential for harm.
Getting the answers right will require a new approach to medicine throughout the health care system. “AI is going to do wonderful things in health care, but the human needs to change with it,” said Pratap Khedkar, CEO at the management consulting and technology firm ZS, during STAT’s recent Breakthrough Summit in San Francisco. “How does the human change in order to let AI play its role?”
Why AI is needed now
When Khedkar began working with AI over 20 years ago, his primary focus was adapting it for B2B applications. But it didn’t take him long to recognize AI’s enormous potential in health care.
“The data around a patient — what medications they take, what the results are, and so on — didn’t exist in the ’90s,” Khedkar said. “But by the early 2000s, there was a pivot, and it became apparent that data and analytics were going to change the way health care works. ZS leaned into this very strongly. We don’t deliver health care, but our vision is to improve health outcomes for all.”
For that to happen, health care professionals must change their approach. “Patients are looking for three things from the care system: competence, compassion, and coordination,” Khedkar said. “Doctors over-index on competence. The entire system is set up to be that way, starting in medical school. But it’s becoming really hard to keep up with that. Medical knowledge used to double every 10 years. Now it doubles every 73 days. You can’t expect even the best, most competent human in the world to keep up. We’re losing the competence war when it comes to information.”
Compassion is diminishing along with competence
The inability to keep pace with data overload is having a devastating impact on the compassion component of health care. Making matters worse, most doctors are unaware. In a survey of more than 9,500 health care consumers across six global economies, ZS posed a simple question: “Do you really feel cared for?” Across the board, consumers were about half as likely to say yes as doctors presumed.

“There’s a care gap,” Khedkar said. “All the health care stakeholders have the best intentions. They think they’re caring for patients, but the patients are not getting that. They have a lousy experience, and a quarter of them in the U.S. avoid the health care system as long as possible. Compassion feels like a soft thing, but it’s really, really important.”
Paradoxically, a greater reliance on AI could help the medical community offer more compassionate care. “What AI is doing is stepping up to be the helper on the competence dimension because the human can’t keep up anyway,” Khedkar said. And while AI can analyze data to provide a quicker, more accurate diagnosis, “patients still want the news to come from their human doctor,” he added.
Restoring the capacity for competence and compassion — not to mention alleviating the swelling administrative burden — could help doctors and their staff feel more human and foster the person-to-person connection. This, in turn, has the potential to dramatically reduce feelings of burnout that are affecting over half the doctors in the United States.
An 80/20 vision of the future
One of the biggest challenges with implementing AI in health care is shaping — or reshaping — public perception. Recent headlines about ChatGPT and its potential for misuse have sounded alarms in a variety of professions, from medicine to academia to cybersecurity. Experts in the medical profession need to change the narrative.
Something like this has happened before. In 2012, Sun Microsystems cofounder Vinod Khosla caused a stir in the medical community when he published a blog post that appeared to suggest AI would soon make 80% of doctors obsolete.
What Khosla actually meant, however, was that AI could eventually perform 80% of current patient interactions, including routine consultations regarding things like diet. That would allow doctors to focus on the patient interactions that are critical to making more timely, comprehensive, and accurate diagnoses of serious illnesses. “AI,” Khosla summarized, “should be an assistant to the physician.”
As ZS’s Khedkar noted, “The focus should be on making specific connections. Right now it’s a broken process, and we all know that. The only real data that matters is at the patient level.” The importance of tackling the right data with the right protections in place — all while keeping public perception in mind — is critical to fine-tuning the approach of organizations like ZS as they work at the intersection of health care and technology.
Building those guardrails
A first step toward building adequate safeguards for AI in health care is forming an accurate picture of the actual threats. Again, the outcry across both traditional and social media over the potential abuse of ChatGPT has created a distorted sense of the risk it poses.
As Suchi Saria, associate professor of machine learning and health care at Johns Hopkins University, pointed out, ChatGPT is built around language models. In health care, by contrast, “You need the ability to not just learn from words, but also from large-scale, high-dimensional, multimodal data,” she said.
News reports also raised alarms about AI’s propensity for error. That fear, said Nigam Shah, professor of medicine at Stanford University and chief data scientist for Stanford Health Care, “presumes that right now we’re error-free, which is not the case.” He noted that scribes can make mistakes and doctors can misinterpret patient input, among other fallibilities in the current system.
Building trust is the real issue. “Trust doesn’t come from explainability,” Khedkar said. “The patient, or even the doctor, doesn’t need to understand every last bit of an algorithm. You just have to be able to trust the larger edifice — the organization.”
Khedkar said the ultimate goal should be for the medical profession to use AI to create an infrastructure that’s as trustworthy as the Federal Aviation Administration. “As long as all of that is set and working,” he said, “I don’t need to know how the plane works.”
Applying old standards to new technology
Of course, reaching that goal will require rigorous validation of AI applications — hardly a revolutionary idea in the medical field. It starts with carefully curating the sources that AI draws from. Many of the ChatGPT errors cited in news reports have resulted from unreliable sources. Typically, a reporter simply asked a question, and ChatGPT based its answer on whatever it had absorbed from the open internet during training. That’s not how the medical profession works.
Drawing solely from properly vetted medical literature, on the other hand, could not only minimize the possibility of errors but also enable generative AI to organize and summarize vast bodies of research much faster and more efficiently than a human could.
“Much of the problem right now is that 80% of the data in health care is unstructured,” Khedkar said. “Before, we didn’t have a way of getting beyond that structured 20%, which contained all the useful numerical data.”
Khedkar said the ultimate breakthrough will come from combining generative AI with predictive AI. “Generative AI can use your EMR — 20 years of notes — to pull out the 10 numbers that matter to your patient,” he said. Combining those 10 numbers with the 15 numbers that matter from the overall medical database “gives you a better prediction,” Khedkar said. “Now you’re able to help the patient in a way that is much more efficient but also more effective.”
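The pipeline Khedkar describes can be sketched as two stages: an extraction step that pulls a handful of structured numbers out of free-text notes, followed by a predictive step that combines them with population-level reference values. The sketch below is a toy illustration of that idea under stated assumptions — the function names, feature names, and the trivial regex "extraction" and linear "prediction" are placeholders, not ZS's actual system; a real pipeline would use a generative model for extraction and a trained predictive model for scoring.

```python
import re

def extract_patient_numbers(notes: str) -> dict[str, float]:
    """Toy stand-in for the generative-AI step: pull labeled numeric
    values (e.g. "a1c: 7.2") out of years of unstructured notes."""
    pattern = re.compile(r"([a-z0-9_]+):\s*([0-9]+(?:\.[0-9]+)?)")
    return {name: float(val) for name, val in pattern.findall(notes.lower())}

def combine_features(patient: dict[str, float],
                     population: dict[str, float]) -> dict[str, float]:
    """Merge the patient-level numbers with population-level reference
    numbers from the broader medical database; patient values win."""
    merged = dict(population)
    merged.update(patient)
    return merged

def risk_score(features: dict[str, float],
               weights: dict[str, float]) -> float:
    """Placeholder for the predictive-AI step: a linear score over
    whichever features have an assigned weight."""
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

# Illustrative run: extract the patient's numbers, merge with
# population defaults, and score.
notes = "2021 visit. a1c: 7.2. systolic_bp: 148."
patient = extract_patient_numbers(notes)
combined = combine_features(patient, {"a1c": 5.6, "ldl": 100.0})
score = risk_score(combined, {"a1c": 0.5, "systolic_bp": 0.01})
```

The design point is the division of labor: the generative step turns 80% unstructured text into a few structured numbers, and only then does conventional predictive modeling take over.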
By enabling doctors to draw meaningful connections between their own observations and interactions with a patient and the vast body of medical research relevant to that patient’s history, AI makes good doctors better — an ideal cohesion of powerful dual forces, human and machine.