At Froedtert and the Medical College of Wisconsin, executives are making sure their AI strategy aligns with governance and clinical expectations.
At the end of the day, AI is all about using mathematics to improve healthcare.
It’s not a foreign concept, but many healthcare executives are still trying to grasp the strategy. Numbers gathered, assessed and used properly can improve clinical care. AI just pushes that to a new level.
“A lot of the same rules still apply to this newer set of tools and technologies,” says Anai Kothari, MD, MS, an assistant professor in the Division of Surgical Oncology at the Medical College of Wisconsin (MCW) and AI Lead at Inception Health, part of the Froedtert and MCW network. “It's not that we have to reinvent the wheel necessarily. … We have a foundation to build on.”
The issue with math is that it needs to be methodical, and that’s a challenge given the broad adoption of AI in healthcare and other industries. In many cases AI adoption is running side-by-side with governance, forcing executives to develop guidelines on the fly and make sure they’re followed.
“Once it's deployed, how do you assure that the model outputs align with what you initially intended them to do?” asks Kothari, whose health system is taking part in HealthLeaders’ AI in Clinical Care Mastermind program. “And then taking it a step forward, before you put it in front of patients and clinicians, how do you best assess and measure the performance? Those are things that we're thinking about every day.”
Kothari notes that health systems have gathered and used data every day for decades. By integrating AI into that process, more data is assessed at a faster rate, and clinicians are freed from the tasks of analyzing data and can focus more on the results. But clinicians aren’t mathematicians, and they have to understand the steps required to get from data to results.
Anai Kothari, assistant professor in the Division of Surgical Oncology at the Medical College of Wisconsin and AI lead at Inception Health, part of the Froedtert and MCW network. Photo courtesy MCW.
“When you put a tool into production, we go through a lot of steps to make sure that it's ready for that moment,” he adds. “But at the end of the day, these are still models. And I joke, it's math. The underlying piece of this is it's mathematics.”
Kothari says the game plan for AI early on is to start small and specific to prove value. The advent of generative AI, he says, will allow healthcare leaders to expand the platform.
“You train a model, you figure out what's the right use case, and then you target that very specific use case,” he says. “What generative AI tools allow for us to do is take these models and maybe apply them to multiple, different use cases, [though] not always the ones that they were initially trained to do. We’re starting to see some of those applications in the health system.”
The challenge, he says, lies in being precise with the data. AI programs must be reliable, in that the same data going in produces the same desired result. Any variance can lead to incorrect outputs, even hallucinations.
And that’s where governance comes into play. Froedtert ThedaCare Health has a separate AI governance committee, composed of a wide range of health system stakeholders. It’s their job to review every AI program, including those in the planning stages, and set up steps to move from evaluation to deployment to ongoing governance.
One of the biggest challenges, Kothari says, is creating monitoring strategies that will catch hallucinations and other data errors in AI programs before they cause problems. This, he says, requires continuous monitoring, a departure from the old practice of checking the technology once and then letting it do its thing. Aside from acclimating staff to the new routines, it also takes time to develop the right protocols.
Another responsibility is ensuring the data used in AI programs is up to date. During those regular data updates, he says, there will be an emphasis on ensuring the privacy and security of PHI and on aligning those updates with clinical use. This isn’t always easy for clinicians who’ve become accustomed to taking the tools and running with them.
“We’re surprised more often than not in terms of how an AI tool is going to look once we have it in the hands of users,” Kothari notes.
Finally, there’s the rapid development of AI in the public space, including AI-enhanced health and wellness tools. Patients are embracing AI as quickly as their care teams, and in some cases are asking or even demanding that their doctors use the technology as well. Kothari says doctors should be talking to their patients about AI.
“I feel like that’s something that can be encouraged,” he says. “As long as we educate patients and their families that it’s OK to use those tools, at least as a first step, and to talk through those things together [with their doctor]. Patients should be empowered in their own healthcare, and AI may provide an opportunity to work together with patients and their families.”
Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.
KEY TAKEAWAYS
Froedtert and the Medical College of Wisconsin are taking part in HealthLeaders' AI in Clinical Care Mastermind program, which digs into how health systems and hospitals are integrating and managing AI tools in clinical programs.
AI must be managed carefully and methodically, says Anai Kothari, AI lead at Inception Health, part of the Froedtert and MCW network, with an understanding that the technology is constantly evolving and needs constant monitoring.
Oftentimes the perception of a new AI tool or program changes dramatically once it's installed, especially in generative AI, where one idea may blossom into several potential uses.