The Cleveland-based health system does not shy away from taking time to ensure that an AI tool is a good investment and is the right solution for its patient population.
When it comes to AI tool adoption, The MetroHealth System has a robust validation process. Health system executives are willing to take the time to review whether a tool will work, regardless of its track record.
Faced with a plethora of available AI tools, executives should be cautious when adopting these solutions, according to Yasir Tarabichi, MD, chief health AI officer at MetroHealth. He is among nearly a dozen executives participating in the HealthLeaders AI in Clinical Care Mastermind program.
"How we implement our AI models is unique because we are a little slower in validating them than others and are extremely careful in validating them," Tarabichi says.
As a clinical informaticist, he focuses on the concept of the learning health system when it comes to AI tool adoption.
"You are constantly developing a huge repository of data both in terms of patients and their conditions, as well as the things we are doing in our health system," Tarabichi says. "For example, how are we communicating with patients, what are the protocols we are activating, what are the clinical pathways we are leveraging, and what are the medications we are using?"
The key is taking the learning health system concept and actualizing it, according to Tarabichi.
"There is often a gap in this area," he says. "A lot of organizations talk about being a learning health system and learning from their data. They do research. They look back and they say this worked or that did not work."
A learning health system conducts tests in real time in its patient population to identify whether a change in practice makes a meaningful impact, Tarabichi explains.
"We need to be able to do that in an agile fashion," he says. "We need to understand whether something is working."
An example is the process that the health system used to adopt a predictive tool for sepsis.
"When we took on our AI sepsis model from a vendor, it was being used by several organizations, and everybody said it worked," Tarabichi says. "When we evaluated how others were using this AI model, we approached it with a grain of salt. We were not entirely sure that this predictive model was going to work for us."
With quality oversight and a multidisciplinary group, MetroHealth developed a quality improvement process in which patients who came into the emergency room either received the AI tool's sepsis score or did not.
"We set up a response team for sepsis," Tarabichi says. "We made sure everybody knew their cues and what they needed to do in the standard practice, using clinical pharmacists as the main driver. We ran the model, and we compared the data. We wanted to know how patients who got the score did and how patients who did not get the score did."
The validation process found the AI sepsis model was effective for MetroHealth's patient population.
"By the end of the study, which was a couple of months, we found that the patients who got a score got antibiotics faster than patients who did not get a score, which is important in the treatment of sepsis," Tarabichi says. "We even showed decreased mortality in the hospital associated with that outcome."
This validation process showed that people and process are at least as important as the technology, according to Tarabichi.
"The technology was a catalyst that drove the process, but what really mattered was getting the team to think about how they would use this new information and how it would drive what they do at the point of care," he says.
The AI sepsis model is designed with clinical care teams in mind, Tarabichi says.
"Our sepsis predictive algorithm provides information about the patient's risk for sepsis in a place on the chart where emergency room providers typically look to see how a patient is doing overall," he says. "It sends an interruptive alert only to the clinical pharmacists who actually want that information. They want to be stopped in their tracks when a patient comes in who could have sepsis."
Yasir Tarabichi, MD, is chief health AI officer at The MetroHealth System. Photo courtesy of The MetroHealth System.
Understanding the AI tool life cycle
Paying attention to the life cycle of AI tool implementation is another hallmark of MetroHealth's approach to AI.
Tarabichi encourages his counterparts at other health systems and hospitals to look at the frameworks for the AI life cycle that have been set out by the Coalition for Health AI (CHAI) and the Health AI Partnership.
"Thinking about the life cycle of the solution means by the time you have launched the solution you have already figured out whether it works, whether it is biased, whether it is fair, and how you are going to use it," he says.
This includes knowing when a solution should be terminated.
"What are the criteria for success and when do you need to sunset an AI tool?" Tarabichi says. "The big thing we have found in the informatics and change management world is we have done a good job of turning things on, but we do not do a good job of turning things off."
Understanding the life cycle of an AI tool is critical, Tarabichi says.
"You need to have an offramp and to understand how you are monitoring an AI tool," he says. "Do not implement a solution if you have not thought about the life cycle."
The HealthLeaders Mastermind program is an exclusive series of calls and events with healthcare executives. This Mastermind series features ideas, solutions, and insights on excelling in your AI programs.
To inquire about participating in an upcoming Mastermind series or attending a HealthLeaders Exchange event, email us at exchange@healthleadersmedia.com.
Christopher Cheney is the CMO editor at HealthLeaders.
KEY TAKEAWAYS
Just because an AI tool works at one health system does not mean that it will work at another health system.
The MetroHealth System has used quality oversight and multidisciplinary teams to validate the performance of AI tools.
Health systems should be aware of the life cycle of an AI tool, including being prepared to terminate use of a solution if necessary.