
Are Health Systems Mature Enough to Use AI Properly?

Analysis  |  By Eric Wicklund  
   December 06, 2023

As healthcare leaders rush to implement AI tools, some are questioning whether they’re equipped, both technically and organizationally, to use the technology.

Healthcare organizations are rushing to launch AI programs, often to ease administrative workflows or address care management gaps, but are they really ready to use the technology effectively and responsibly?

“There seems to be a proliferation of AI across every industry,” says Shane Thielman, CHCIO, FACHE, corporate senior vice president and chief information officer at Scripps Health, which has launched several AI initiatives recently. “But healthcare is different. You need to have corridors for testing with safeguards in place. And things are moving so fast” that some health systems aren’t planning properly.

Shane Thielman, CHCIO, FACHE, corporate senior vice president and chief information officer, Scripps Health. Photo courtesy Scripps Health.

“We’re being careful and cautious,” he adds. “We’re not using AI today [on anything] that doesn’t have a human in the loop.”

Understanding what AI can and can’t do is tricky, even for the experts. In healthcare, that means not only understanding what an organization needs to have in place before using the technology, but also measuring the organization’s AI maturity. Executives need to know what they know and what they don’t know.

[See also: Healthcare Moves Forward With AI Pilots, Partnerships.]

Among those developing maturity models is MI10, a for-profit consultancy launched by Anthony Chang, MD, MBA, MPH, MS, chief intelligence and innovation officer at Children’s Hospital of Orange County and founder of the AIMed conference. The company’s model, called MIQ, uses 11 factors, both technological and human (along with one factor called ‘intangibles’), to measure a health system’s readiness and maturity, producing a score on a scale of 1 to 100.
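MI10 hasn’t published the internals of MIQ, but a multi-factor readiness score of this kind is typically a weighted aggregate of sub-scores. A minimal sketch in Python, with hypothetical factor names, weights, and 0–10 sub-scores (none of these are MI10’s actual criteria):

```python
# Hypothetical sketch of an 11-factor AI-maturity score on a 1-100 scale.
# Factor names, weights, and sub-scores are illustrative assumptions,
# not MI10's actual MIQ methodology.

FACTORS = {  # factor: weight (weights sum to 1.0)
    "data_infrastructure": 0.15,
    "clinical_governance": 0.15,
    "workforce_training": 0.10,
    "model_validation": 0.10,
    "security_and_privacy": 0.10,
    "ethics_oversight": 0.10,
    "integration_with_ehr": 0.10,
    "vendor_management": 0.05,
    "executive_sponsorship": 0.05,
    "outcome_measurement": 0.05,
    "intangibles": 0.05,  # MIQ does include a factor it calls 'intangibles'
}

def maturity_score(subscores: dict[str, float]) -> float:
    """Combine 0-10 sub-scores into a single score on a 1-100 scale."""
    weighted = sum(FACTORS[f] * subscores[f] for f in FACTORS)  # 0-10 range
    return max(1.0, weighted * 10)  # rescale to 1-100

# Example: strong infrastructure, weak governance, average elsewhere.
example = {f: 5.0 for f in FACTORS}
example["data_infrastructure"] = 8.0
example["clinical_governance"] = 2.0
print(round(maturity_score(example)))  # prints 50, a mid-range score
```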

According to Arlen Meyers, president and CEO of the Society of Physician Entrepreneurs, a professor emeritus at the University of Colorado School of Medicine and Colorado School of Public Health, and a strategy advisor to MI10, the MIQ tool has been used to evaluate dozens of health systems across the country and found that many hadn’t yet met readiness standards. Those systems scored between 26 and 88, with a median of just 56.

“Our understanding and intelligence is that most hospitals don’t even know how to start,” he says. “And many don’t know where they are now” on AI maturity.

Meyers says healthcare organizations across the country are developing their own AI innovation centers. Some, like Vanderbilt University, have established AI advisory boards; others, like Duke Health, are collaborating with Microsoft to launch centers of excellence that include a deep dive into AI ethics. Still others, he says, rely on maturity models created by advisory firms and think tanks that sit outside the healthcare ecosystem.

“There are several descriptions of what have been referred to as maturity models,” he says. “I don’t think anybody has been able to validate the assumption that any of these models are accurate.”

Putting AI to Work

At Scripps Health, Thielman and David Wetherhold, MD, the San Diego-based health system’s chief medical information officer, say they’re taking a slow and methodical approach to developing and using AI. They’ve created a team of executives, clinical leaders, and experts from the legal, IT, audit and compliance, and security departments to focus on governance.

“One of the first questions we ask is whether this is actually fixing a problem or is this just technology for technology’s sake,” says Wetherhold.

Wetherhold says many current healthcare applications of AI rely on deterministic models, tools that summarize large volumes of existing data. That’s great for improving back-office and administrative tasks, he says. But AI tools are evolving toward probabilistic computing, in which the technology maps possible outcomes and estimates how likely each one is.
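The distinction can be made concrete: a deterministic tool restates data that already exists, while a probabilistic one estimates the likelihood of something that hasn’t happened yet. A minimal sketch, with made-up record fields and model coefficients for illustration:

```python
# Illustrative contrast between the two kinds of tools Wetherhold describes.
# The record fields and model coefficients below are invented for this sketch.
import math

record = {"age": 67, "recent_admissions": 2, "open_referrals": 3}

# Deterministic: summarizing data that already exists. Same input, same output.
def summarize(rec: dict) -> str:
    return (f"{rec['age']}-year-old patient, {rec['recent_admissions']} "
            f"admissions in the last year, {rec['open_referrals']} open referrals.")

# Probabilistic: mapping inputs to the likelihood of a future outcome,
# here a toy logistic model for 30-day readmission risk.
def readmission_risk(rec: dict) -> float:
    z = -4.0 + 0.03 * rec["age"] + 0.8 * rec["recent_admissions"]
    return 1 / (1 + math.exp(-z))  # a probability, not a fact

print(summarize(record))
print(f"Estimated readmission risk: {readmission_risk(record):.0%}")
```

The summary can be checked line by line against the chart; the risk estimate can only be judged across many patients, which is why clinical uses demand more guardrails.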

And that’s where things get tricky. Large language models can hallucinate, generating patterns or objects that don’t exist and presenting them as fact. As healthcare organizations move toward using AI in clinical settings, that could be dangerous.

With that in mind, Wetherhold and Thielman say healthcare organizations have to understand how to design prompts, the instructions that tell an AI tool what data to gather and how to present its output. Health systems that fail to pay attention to prompt engineering risk deploying faulty AI tools that can cause harm.

“There really is no such thing as a clean data model,” Wetherhold points out. “It all comes down to how you ask the questions.”
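A concrete illustration of what “how you ask the questions” means: the same note can yield very different output depending on the constraints a prompt imposes. Below is a minimal sketch of a guarded prompt template; the wording and rules are illustrative assumptions, not Scripps’ actual prompts:

```python
# Illustrative prompt template showing the kinds of constraints prompt
# engineering adds. The wording is an assumption, not Scripps' actual prompt.

def build_summary_prompt(note_text: str) -> str:
    return (
        "You are assisting a clinician. Summarize the visit note below.\n"
        "Rules:\n"
        "- Use only facts stated in the note; do not infer or add anything.\n"
        "- If information is missing or ambiguous, write 'not documented'.\n"
        "- Flag any medication or dosage mention for clinician review.\n\n"
        f"Visit note:\n{note_text}\n"
    )

print(build_summary_prompt("Patient reports improved sleep. BP 128/82."))
```

A loose prompt like “summarize this note” leaves the model free to fill gaps on its own, which is exactly where hallucinations slip in; explicit rules narrow that space, and the output still goes to a human reviewer before it reaches the chart.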

Curb Your Enthusiasm

On the opposite coast, Atlanta-based Emory Healthcare is deep into AI development, thanks in part to a partnership with its EHR vendor, Epic. Alistair Erskine, MD, MBA, the health system’s chief information and digital health officer, says a generative AI tool developed by Abridge is being used by more than 100 doctors across more than 25 specialties in any given week, with more than 500 enrolled to use it.

Erskine says enthusiasm for the technology is high, so much so that he’s “having to hold people back.” But he feels the hype around the technology is overblown, and the health system is creating its own guardrails to make sure AI is used properly.

[See also: Will Policy, Regulations Issues Stifle AI's Advances in Healthcare?] 

That includes the Emory Empathetic AI for Health Institute (AI Health), an initiative launched in early 2023 to guide the health system and “shape the artificial intelligence revolution to better human health, generate economic value, and promote social justice.”

Erskine says he’s working with AI Health to make sure AI readiness and ethics are part of the game plan. For example, if someone using the technology is asked to defend the results and replies, ‘It’s what the AI told me,’ that signals more work is needed on enforcing the tenet that AI augments, but doesn’t replace, the human.

“We do tell the doctors to review everything,” he points out.

Alistair Erskine, MD, MBA, chief information and digital health officer, Emory Healthcare. Photo courtesy Emory Healthcare.

Erskine says the health system is very much attuned to both technical and organizational readiness for AI. There’s a clear understanding of the technology’s inherent bias, he says, but the technology is also evolving and maturing. Doctors understand, he says, that the more work they put into the tools, the better the results.

“More comprehensive notes support higher levels of billing,” he says.

And that’s where Erskine sees the most benefits right now. Doctors spend too much time going over their notes and working in the EMR, time that should be spent with patients. AI tools can do that work faster and better, building a more complete patient record, reducing workflow pressures and stress, and creating more opportunities for care management, care coordination, and reimbursement.

“The chance to shave two hours a day for a clinician is vital,” he says. “That’s a huge amount of time.”


Eric Wicklund is the associate content manager and senior editor for Innovation, Technology, and Pharma for HealthLeaders.


KEY TAKEAWAYS

Health systems are developing and launching AI programs at a fast pace, aiming to address back office and administrative issues and fill care gaps.

Some experts within the healthcare industry say many organizations don’t understand the challenges of using AI or have the maturity to manage how it’s used.

Healthcare organizations need to carefully plan how they develop and use AI, with a management strategy that addresses who uses the technology, how, and why, as well as how errors and misuse will be handled.

