If you haven't given much thought to how to harness the potential of AI at your healthcare organization, you'd better start. You ignore this budding technology at your own peril.
This article first appeared in the September 2017 issue of HealthLeaders magazine.
Promises and prognostications about the potential of artificial intelligence are being made in Silicon Valley, the Boston area, and other high-tech hot spots.
Most hospitals and other clinical sites, however, do not exist in these cutting-edge environs. For many of the clinicians and hospital executives in this country who are busy enough grappling with a complex mix of challenges in a changing industry, AI in healthcare represents the latest technological buzzword, the hyped-up, futuristic stuff of drawing boards and tech magazines.
The skepticism toward another technological innovation is understandable in an industry that is still struggling to identify an exact return on investment for the massive spending on the electronic health records mandate over the past decade.
While experts in this budding era of AI and machine learning in healthcare warn against falling victim to the hype, they also caution against ignoring the inevitable and profound changes that the new technology will bring to every corner of healthcare.
From diagnosing individual patients to monitoring population health, from staff scheduling to financial projections to patient throughput, AI is coming, it's going to disrupt the way you do business, and it's going to happen faster than you think. If you haven't given much thought to how your healthcare business will enter this new world, you'd better start.
Mercy's journey into AI
Todd Stewart, MD, an internist and vice president of clinical integrated solutions at St. Louis–based Mercy, a Catholic health system with 43 acute care and specialty hospitals and more than 700 physician practices and outpatient facilities, says Mercy began its AI journey three years ago, with a focus on nettlesome procedural challenges around standardizing care pathways.
"We started with some of the highest-volume, high-dollar procedures, like total hip/total knee. Across Mercy we do thousands of those," Stewart says. "The idea was that if we standardize the process, limit care variation, there is strong evidence that we could extract financial value and also patient value in terms of lower mortality, lower length of stay, better outcomes."
Like most health systems since the advent of electronic health records, Mercy had at its disposal mountains of data around outcomes, use, and supply chain. It was time to put that data to work.
"Can we use machine learning algorithms to lower length of stay? What cohort of patients had really good length of stay compared with others? Can we use our existing data to help guide some of these best practices?" Stewart says. "That type of approach with machine learning is precisely what you can get. It can show you relationships that are almost impossible to find except by luck with just humans. What we found was that it absolutely added to the value process."
On a second front, Mercy tapped AI to improve patient throughput, analyzing patient logistics and flow data compiled throughout the health system. "Where do we have bottlenecks of patient flow and how can we improve those?" Stewart says. "We started in the emergency departments, and it's moved into the other care locations in the hospital, from the ED to the ICU and inpatient and outpatient beds and the OR. We're applying supervised algorithms to predict when we are going to have problems and then being able to know how best to respond to those to prevent and manage them."
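To make that approach concrete, here is a minimal sketch of the kind of supervised model Stewart describes, trained on historical throughput data to flag shifts at risk of an ED-to-inpatient bottleneck. The feature names and synthetic data are hypothetical illustrations, not a description of Mercy's actual tooling.

```python
# Minimal sketch: a supervised model that flags likely ED-to-inpatient
# bottlenecks from historical throughput data. All feature names and the
# synthetic data below are hypothetical illustrations, not Mercy's system.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "ed_census": rng.integers(10, 80, n),            # patients currently in the ED
    "inpatient_occupancy": rng.uniform(0.6, 1.0, n), # share of staffed beds filled
    "pending_discharges": rng.integers(0, 25, n),    # discharges expected this shift
    "hour_of_day": rng.integers(0, 24, n),
    "is_weekend": rng.integers(0, 2, n),
})
# Hypothetical label: did ED boarding time exceed the target threshold?
risk = 0.04 * df["ed_census"] + 3.0 * df["inpatient_occupancy"] - 0.1 * df["pending_discharges"]
df["boarding_delay"] = (risk + rng.normal(0, 0.5, n) > 4.0).astype(int)

X = df.drop(columns="boarding_delay")
y = df["boarding_delay"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A model like this can be rerun each shift, with the highest-risk predictions routed to the people who can open beds or pull discharges forward.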
The patient flow insights sifted out by AI are now being applied systemwide at Mercy under a three-year project called CaRevolution.
"The idea there is to apply the supervised machine learning algorithms so we can identify where patients are unnecessarily having to wait between care locations and trying to eliminate that so patients don't have to wait six hours in the ER," Stewart says. "Standardizing the process of caring for people, that plays into the flow, that goes back to the care pathways work."
AI algorithms have begun tracking invoices at Mercy, and forecasting monthly inpatient and outpatient claims and collections. "What proportion of those invoices will we collect and what do we need to book into what category? We've gone from 70% forecasting accuracy to greater than 95% accuracy in a few weeks," Stewart says.
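One rough way such a forecast can work is to score each open invoice with a predicted probability of collection, then roll those probabilities up into expected dollars. The sketch below uses hypothetical fields and synthetic data; Mercy's actual models are not described here.

```python
# Minimal sketch of a collections forecast: estimate the probability each
# open invoice will be collected, then sum balance-weighted probabilities.
# Field names and synthetic data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 2000
invoices = pd.DataFrame({
    "payer_class": rng.choice(["commercial", "medicare", "medicaid", "self_pay"], n),
    "balance": rng.gamma(2.0, 1500.0, n),          # dollars outstanding
    "days_outstanding": rng.integers(0, 180, n),
})
# Hypothetical historical outcome: 1 if the invoice was ultimately paid.
p = 0.9 - 0.002 * invoices["days_outstanding"] - 0.3 * (invoices["payer_class"] == "self_pay")
invoices["collected"] = rng.binomial(1, p.clip(0.05, 0.95))

features = ["payer_class", "days_outstanding"]
pipe = make_pipeline(
    make_column_transformer((OneHotEncoder(), ["payer_class"]), remainder="passthrough"),
    LogisticRegression(max_iter=1000),
)
pipe.fit(invoices[features], invoices["collected"])

# Expected dollars collected = sum of balance * predicted collection probability.
probs = pipe.predict_proba(invoices[features])[:, 1]
print("Forecast collections: $%.0f" % (invoices["balance"] * probs).sum())
```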
He says the improved insights from AI have produced savings of between $14 million and $17 million tied to Mercy's clinical pathways in fiscal year 2016 (roughly $800 in savings per case for patients treated on a pathway), and in a pilot, Mercy saw a 24-minute reduction in emergency room wait time and a 20% reduction in length of stay for patients treated in the ED.
"Mercy covers four states, so we have a lot of variety in the payer landscape. How do we think about all of those payers and groups of patients as a portfolio? Which ones do we need to take other action on from a financial standpoint? Are there more complex patients in this particular insurance group? Do we need to do more case management? We have a couple of large ACOs with some financial risk. How do we quantitate that risk across the portfolio of people we are supposed to take care of? It's a number of variables that you have to analyze to try to find trends and meaning, and it is just exponentially difficult to do."
Mercy is also using AI to target sepsis. "This is one of the most difficult problems, and any clinician will tell you there are definitely some unsolved areas in sepsis that we just don't fundamentally understand," Stewart says. "This is another area where machine learning can be highly beneficial in finding patterns that have eluded the best and the brightest of human analysts."
Going forward, Mercy will use AI to anticipate the financial fallout from the churn in the insurance markets, owing to proposed changes to Medicaid, the health insurance exchanges created under the Patient Protection and Affordable Care Act, and the potential for upheaval if reforms proposed in Congress become law.
While there is always a risk in being an early adopter, Stewart says Mercy is careful to avoid overreach.
"There is a way to do this without excessive risk," he says. "Dedicate a small group of people and a limited amount of capital and a time frame to explore. Even if it fails miserably you've contained it and then you're dedicated to learning why it failed. Use very straightforward, well-contained use cases. Don't overgeneralize and think we're going to be experts in machine learning. Take very small use cases that have a high probability of machine learning getting you to areas that human intelligence can't do. Understand that it may be an R&D thing. It may be a 100% loss financially, but you will start to learn what this technology is and the promise it holds and do better your second iteration."
Healthy skeptics
Those on the leading edge of the AI movement see its vast potential for medicine within the decade and sooner, but they take their dose of wonder and hope with a sprinkling of skepticism. AI may someday play a role in curing cancer, but that's not going to happen next week.
"It's appropriate to be cautious. I certainly am," says Isaac Kohane, MD, PhD, Marion V. Nelson professor of biomedical informatics and chairman of the department of biomedical informatics at Harvard Medical School. "Almost 30 years ago, my PhD in computer science focused on the topic of medical applications for artificial intelligence. Back in the day we used to call it expert systems. Those were very clearly overhyped, and they're not being widely used now."
Kohane warns that the promise of AI could be overwhelmed by the hype in the suddenly crowded vendor space. "The loudest talkers may not be the best performers," he says. "If the loudest talkers who are not the best performers get the limelight and they fail, it is going to put the hopes that a lot of us have for this technology at risk—not because the technologies are bad, but because people will lose interest and optimism and a willingness to invest."
Michael Blum, MD, professor of medicine and cardiology and associate vice chancellor for informatics at the University of California–San Francisco (UCSF), says he sees great promise in AI, but that more than 20 years of practicing medicine, and training as an engineer before that, have kept him grounded.
"I have seen many silver bullets that were going to revolutionize medicine, and there have been many well-known, well-hyped technologies that have come before this," he says. "These are all tools that go into the tool kit, and when they are used appropriately with available assets they can sometimes be very effective. But whenever something is getting to be incredibly popular and talked about in the lay press all the time, the likelihood of it truly transforming healthcare probably goes down."
Blum, who also serves as director for the Center for Diagnostic Health Innovation at UCSF, views machine learning at this stage in its development as another source to help clinicians improve outcomes.
"There was a lot of talk that big data was going to transform healthcare as it did other industries, but it turns out that big data is just another tool. Big data will power artificial intelligence development, but in and of itself it is not going to transform healthcare," he says. "Having said that, I am much more optimistic about the capabilities of these technologies than I have been in quite some time in terms of how they are going to transform the way we work. They have the ability to allow mundane and limited complexity tasks to be done by machines already, which allows providers to go to the human side of care, spend more time with patients, and deliver better care without having to worry about a lot of the minutia that the computer can take care of."
Farzad Mostashari, MD, the former national coordinator for health information technology at the Department of Health and Human Services, warns that the prodigious output churned out by machine learning algorithms is only as good as the data that goes in. For AI to work, Mostashari says, the algorithms must be given specific tasks that rely upon accurate data. "You have to be able to set up the problem really well and clean the data in such a way that it most neatly answers the question that you are trying to answer," he says.
"The more you turn over the wheel, as it were, to an autonomous driver, the more important it is to tell the driver clearly where you're going, and what the question is you want answered, and for that machine to have really clear maps and data about the world," says Mostashari, who is the founding CEO of Aledade, Inc., which operates ACOs in 16 states.
"One of the challenges for healthcare is to be careful not to just blindly throw these methods at problems without having done that prework," he says. "It's tempting—and I have seen this for less-trained people who don't really understand the healthcare context—to find a bunch of data somewhere and throw the machine at it. And then what? Then you get some answers, but you have no idea if there was some aberration in the data or how you defined your outcomes that led you to that false conclusion, and no way of really interrogating the black box to say ‘Why did you come up with this answer?' "
Clinicians don't want to be given a specific list of recommendations or a care regimen with no context, Mostashari says; they want to know why.
"Don't just tell me that this patient probably has this diagnosis. I also have a processor. It is called a brain, and I also want to process and learn why you think this patient is high risk," he says. "So there is a need, I believe, in healthcare where you are not asking the AI to just do it—just be an autonomous car and take me there. We want humans and machines to be greater than either machine or human alone. In those situations, the human and the machine have to be able to understand and trust one another. It requires more transparency than some of the traditional AI methods."
Dip your toe or dive in?
Whatever the potential of AI for healthcare delivery, and however soon it arrives, most experts warn against a two-footed leap into the new technology.
"Anyone who tells you that's the way to go is trying to sell you something you don't need to buy," Harvard's Kohane says. "You want to test, experiment, but you don't say, ‘It's going to work on the first time around so I want to implement a whole system.' That is super high risk and often a failure. Starting small is definitely the best advice."
Anil Jain, MD, a part-time internist with Cleveland Clinic, and vice president/CMO of IBM Watson Health, says a Big Bang implementation of AI likely will not work for most healthcare providers. "With things like AI, you want to do it in phases, and with pilots where those who would benefit the most and those who are going to be able to give you an honest assessment of what is good and what is bad and what is working and what is not are able to do that," Jain says.
"I would start by looking at some of those products that are on the market today, where I can help my oncologist or my primary care doctors do a better job immediately by looking at the analytics and cognitive insights that can be brought in," he says. "Either you wait for the broader EHR market to deliver something meaningful, or you get into these pilots and procure these solutions that do these things, knowing that as the solution evolves, you will too. The nice thing about cloud-based solutions is that as these solutions get enhanced, you are not having to reinstall things and the training cycle is not going to be significant."
Kohane says a good place to begin the clinical AI journey would be around image interpretation, analysis, and diagnoses in oncology, pathology, and ophthalmology, which have undergone rapid improvement over the past five years, and which he believes will create disruptive change within the next two or three years.
"This really unexpectedly strong performance in image recognition is only going to improve the productivity of doctors," Kohane says. "I don't think ophthalmologists would mind if they had to spend less time screening retinas and spend more time productively engaged in the operating room or with a patient. If I am a surgeon and I want to have a fast, high-quality read on an x-ray of a patient who I am seeing in the operating room right now, maybe I don't need to have a radiologist read that anymore, or wait for the pathologist to read it. They can run it through a program."
The beauty of these highly complex, cloud-based algorithms, Kohane says, is that they can function at the clinical site on commodity-level hardware that costs a few thousand dollars. "In some sense, we are heading in this direction anyway, using humans instead of computers," he says. "A lot of hospitals use offshore radiologists in another part of the world, like India or Australia, so that overnight the x-rays are read by doctors. But what if you didn't even have to wait for it to be done overnight? You could have it done for much less money, much faster."
Comparison to EHR
With respect to the potential impact on care delivery, the advent of machine learning in medicine can be compared, to some extent, to the rollout of electronic health records over the past decade. Harvard's Kohane says he believes the process will go more smoothly this time.
"Although we were thrilled that the HITECH Act invested in the process, it was pretty clear at the time that the available shovel-ready technology was state of the art for the 1980s and it was not going to be comparable to what our kids were using for video games. That was a predicable outcome," Kohane says. "This looks different. This will be adopted because it gives productivity and financial gain and accuracy right away when you implement it, as opposed to the promissory note around EHRs, which has not yet really shown itself to be robust."
IBM Watson Health's Jain was involved in the EHR rollout at Cleveland Clinic, and he says "absolute lessons" from that experience can apply to any kind of technology innovation.
"There's the hype cycle aspect that we all deal with, whether it's a new smartphone or a new car, or even a marriage," he says. "Initially there are high expectations, then you sort of ramp down to what it really means to be doing this, and you get into a groove where you begin to understand the reality of how people need to use this tool in their daily work as a hospital or provider. What we have to do as an industry is make sure we don't get stuck in the trough of disillusionment or on the peak of great expectations, but that we get our patients to the plateau of productivity."
Jain also sees important distinctions.
"Whereas EHR was just an electronic form of the paper medical record, at least originally, to help documentation and billing, AI is a fundamental change in the way we do things. We don't want to lose sight of the fact that you can't just plop something like AI in and keep everything else the same. You have to transform the other parts," he says. "If all you do is put a new solution in place without understanding the impact on the other moving parts, we as an industry will lose. We have to think about transformation as more than technology. It is also people and process, perhaps even politics and governance. Where technology becomes an enabler, AI is the glue that connects all those things. It cannot be thought of in a vacuum.
"An example would be looking at the role that AI may play in assisting clinicians in interacting with patients and documenting their care. Today, studies show that for every hour of direct patient care, two hours are spent in desk and EMR work. With AI, clinicians should be able to spend more time interacting with patients, with insights being presented to clinicians in a contextually aware manner rather than having clinicians hunting and gathering data from complex EHRs to find patterns. That will change the way that medical assistants, clinicians, and care managers interact in the workflow—this newfound direct patient-facing time could be an opportunity for building relationships and reinforcing needed behavior change in some patients."
Mercy's Stewart says the EHR rollout called for a big, up-front investment and a flip-of-the-switch implementation that won't be necessary for AI.
"There was no way to piecemeal an entry into the EHR space. That type of approach to technology can be very painful. The lesson learned is that if you can avoid that type of faith-based investment, try to avoid it," he says. "Approaching machine learning as finding the biggest, baddest single vendor out there and betting the farm—that would make me very uncomfortable. The alternative is more cautiously step by step, with a smaller financial outlay, the focus being on early results and learning."
While acknowledging the snares and pitfalls of the HITECH Act, Mostashari says it's important to remember that today's promise of machine learning would not be possible without data created by EHRs. "Ten years ago the stuff of healthcare was not electronic. It was trapped in dead trees. What we were trying to do was really jump-start the transition that might have taken decades and compressed it into a four- to five-year period," he says. "To this day, I believe that you couldn't have had that without a strong role for government. In the case of AI, I don't see that same parallel. The private sector is fully capable of using these tools on this data platform that has already been built to solve market problems."
Blum says the EHR rollout demonstrated a need to anticipate what new workflows would look like, and that's an important lesson for AI. "You can't build the EHR as a stand-alone that doesn't talk to anyone else. There needs to be data moving in and out of the EHR to other applications, and the AI algorithms are a class of those applications," he says. "You can't think that the whole process is done once you've implemented the EHR because there are many pieces that aren't touched. Advanced analytics like AI are not going to be intrinsically delivered by an EHR vendor. They're going to be powered by the EHRs' data, and they need to interact with the providers who are using the EHR. So you have to think carefully about how that is going to work out."
Metrics & ROI
For all the hope and promise of AI, at some point there has to be a return on investment, and so far that's been difficult to ascertain. How do you project costs and potential savings around an unproven technology? For that matter, how do you measure results to ensure you're on the right track?
"It's a tough question," Stewart concedes. "I think about ROI in two ways: qualitative and quantitative. Most people want to focus on quantitative ROI."
As noted earlier, Mercy uses machine learning algorithms as part of a systemwide initiative to identify the total value of standardizing the care process, but AI was only one component of a large project with many moving parts. "We set a goal for three years to save $50 million. Fiscal '18 will be the third year of the project, and we are on track to hit that goal," Stewart says. "Now, out of that $50 million in savings, what proportion of that comes from our machine learning work versus what comes from standardizing the processes and operational things? That is the extreme difficulty in knowing how much the machine learning contributed to that overall savings. That's where it gets almost impossible to really know. Honestly, we just don't really attempt to do that."
Stewart says Mercy does not look at machine learning in isolation. "It's a tool. It is qualitative. You look at the process, you talk to the people who are using it, it definitely has a value," he says. "We aren't going to try to go down to the penny or dollar for what proportion of that was from machine learning. When they reduced the level of ‘not seens' in the ED by X percent, how many dollars did that translate to relative to the spend on the machine learning side? I don't know.
"If you add up all the spend on the machine learning side for that one use case it may be a negative in terms of the return," he says. "But knowing that we can apply that same protocol going forward to many other use cases that are highly beneficial, it's more of qualitative. We know there is value there, and this is stuff we have to commit to organizationally if we're going to have that ability."
As for metrics, Stewart says he believes that the overall success of initiatives in which AI played a role—such as standardizing care regimens systemwide—can provide a good sense that the new technology is on the right track.
"It's a lot like the concept of what is the ROI for an EHR. It's difficult to measure. You can measure dollars spent, but it is much harder to quantitate dollars on the back side, where it puts you in a competitive position; we have to have that EHR data," he says. "More or less we are viewing machine learning as similar."
Blum says the uncertainty around ROI and appropriate metrics in AI provides a good reason for a slow, incremental approach to implementation.
"For instance, with a collapsed lung, you can do fairly straightforward measurements before and after the use of the algorithm to see both how accurate the algorithm is in its recognition, how many times it is finding things more quickly, and how many times it is alerting providers to true findings versus false positives or false negatives," he says. "Then, you can look at how much sooner on average those findings are getting communicated to the providers and treatments are getting put in place than what the historical data looks like. There is a fair amount of historical data from emergency departments and ICUs on how long it takes x-rays to be interpreted after they're shot. Those are more straightforward examples."
Obviously, improving the speed and accuracy of diagnostics is critical in the care of every individual patient. When AI can extrapolate those findings to a patient population, the potential for improving care outcomes and cost savings becomes enormous. Blum says that day is not here yet, but it could arrive within three years, perhaps sooner.
"When you look at improving diagnostic accuracy, complex decision-making for populations is going to require more sophisticated and larger looks at outcomes and process measures along the way in order to determine how accurate they are," he says.
Because providers will want a better understanding of ROI before they invest in these algorithms, Blum says vendors will have to do a better job providing evidence supporting their claims.
"Each time someone comes out with a new algorithm, it will have to come with ‘Here's how it was validated,' so you're sure it works. That validation will show that not only does it work well, but here is the measure that shows how much it improved care from how things were done previously," he says. "The Food and Drug Administration is very interested in this. They will be playing a role in how these algorithms are developed, how they are validated, and when they pass a threshold that they need to be regulated or don't need to be regulated, and when they do pass that threshold how they are going to be validated and can demonstrate that they're effective in improving care."
Stewart says providers who wait for competitors to prove the ROI airtight or the technology iron-clad before they wade into AI will find themselves at a disadvantage.
"It's something you have to realize organizationally," he says. "It is something absolutely critical that your competition and the rest of the world is going to do, and it does shift the curve so significantly that if you are not adept in this space you are not going to be competitive in the very near future."
When to take the AI plunge
Harvard's Kohane says we'll know that AI has gone mainstream in healthcare when AI companies from outside of traditional care delivery begin poaching patients. For example, Kohane says 23andMe, the personal genomics and biotechnology company, was at first justifiably derided by the genomics community, which challenged the accuracy of the data used by the company. Then they got better, fast.
"They've matured. Now they have large population data sets and they have improved their algorithms, and they have FDA approval to move ahead and provide more clinical advice," Kohane says. "That is clearly happening outside of healthcare and geneticists, and consumers are driving that forward. You're seeing AI/machine learning applied to genomic data sets more and more, and if you start seeing interesting combinations of genomics and wearable and clinical data being directly marketed to patients, that is a good sign that it's ripe to get into it before you're disrupted."
Mercy's Stewart says the decision on when to jump into AI will be "highly individualized" for every provider.
"You'll have to assess your internal capabilities to do this," he says. "Right now, it would not be my recommendation to buy the hype and the buzz of the hundred startups in the past 12 months who will come in and say, ‘We are going to come in and revolutionize your operations with machine learning and AI.' It's still early, so I would be very conservative against those claims."
Skepticism and caution are not a pass to sit on your hands and do nothing.
"It seems as far-fetched as quantum mechanics or astrophysics, but it's really not. If you've got some internal capabilities to start thinking about some of these problems in your facility, there are some machine learning algorithms that you can initiate with Amazon cloud. Just send it and they give you an answer back," Stewart says.
"The accessibility of the technology now is unbelievably different than it was 18 months ago, and that is going to change. The rate of change is not linear; it is exponential. This is going to get easier and easier and more accessible for these small-focus use cases."
While AI has yet to transform medicine, Blum says those who ignore the new technology do so at their own peril.
"It is easy to get skeptical and say AI has been around for 50 years, but it hasn't changed much," he says. "Realize that there are autonomous driving cars. There is Siri and Alexa. While it is easy to poke fun and say they're imperfect and we have to be perfect in healthcare, the reality is these technologies are rapidly improving."
"There have been several technological breakthroughs over the past five years that are powering that. Storage is very cheap and computing power continues to get cheaper very quickly, so we are very much at the cusp of seeing a transition here," Blum says. "The same way that iPhones were a novelty one year and the next year everyone had them, you are going to see the same thing with AI. These will be a novelty. They'll be at the academic medical centers, and then fairly quickly they are going to be deployed as general purpose applications via commercially available cloud services."
Leap of faith
At some point, clinical and hospital leaders will have to make the leap to AI. For Blum, the leap is a necessary step for every provider, but one that must be carefully balanced.
"In the situation we are in, where there is so much financial pressure on healthcare organizations and every advantage can make a difference, the ability to deliver care more
cost-effectively and quickly makes a huge difference," he says. "At the same time, you don't want to make large investments that don't pay back quickly. I wouldn't recommend building a multimillion dollar program to start developing AI. In a smaller hospital, I would be paying attention when you see these things being offered by the well-established vendors, and they look like they can be integrated into your workflows and technology environment—that is going to be the time to jump in quickly and not lose too much of the advantage."
Stewart says that to some extent a leap of faith is necessary, "but you can inform your faith."
"How do you walk on ice?" he says. "Well, you don't sprint out like a dog and hope everything works out. You start at the thickest edge, and you're careful with your next step and you're paying attention, and if something cracks you step back. There is a way to do this with caution rather than reckless abandon. We don't want to play poker and take $200 million and push all the chips in and say we are going to have that kind of faith and hope it works out."
Stewart says look for "small chunks of specific problems" and speak to vendors who can address "$100,000 problems for a few thousand dollars."
"I'd do it in a well-controlled fashion that gives me the early learning and a comfort level with it," he says. "I might be willing to tiptoe on that ice."
John Commins is the news editor for HealthLeaders.