
3 Questions You Must Ask Before Investing in AI

Analysis | By Mandy Roth | July 03, 2018

Eager to experience the advantages artificial intelligence promises to deliver, healthcare executives may leap before looking into issues that could create future liabilities.

As artificial intelligence (AI) makes deeper inroads into healthcare, health systems may embrace innovation without knowing what questions to ask to protect against potential liability and patient care issues that may occur down the road.

A new report from Accenture indicates, "As AI continues to play a greater role in decision-making, four-fifths (81%) of health executives said they are not prepared to face the societal and liability issues that will require them to explain their AI-based actions and decisions." In addition, 86% "have not yet invested in capabilities to verify data sources across their most critical systems."

At the same time, momentum for these solutions is mounting. A report from ABI Research indicates that in 2021, AI-based predictive analytics models will save North American hospitals $21 billion.

As health leaders explore AI solutions, here are three essential questions executives should ask vendors—and themselves—to better protect the enterprise and its patients from unintended consequences.

1. What assumptions were made when this AI solution was built?

AI algorithms contain built-in assumptions that influence data output.

For example, Kaveh Safavi, MD, JD, head of Accenture’s global health practice, says, "If you're trying to help somebody make a decision about what kind of care to get, it matters whether you're going to apply a risk-benefit analysis or a cost-benefit analysis."

The recommendation the system produces could vary, depending on whether the assumption focuses on a health benefit versus the economic cost associated with it.
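To make the distinction concrete, here is a minimal sketch in Python, with hypothetical treatment names and figures, of how the objective an AI system is built to optimize can flip its recommendation for the same patient:

```python
# Minimal sketch (hypothetical names and figures) of how the objective an AI
# system optimizes can change its recommendation for the same patient.

# Each candidate treatment: expected health benefit (quality-adjusted life
# years gained), clinical risk score (0-1), and cost in dollars.
treatments = {
    "Option A": {"benefit_qaly": 2.0, "risk": 0.30, "cost": 80_000},
    "Option B": {"benefit_qaly": 1.5, "risk": 0.10, "cost": 30_000},
}

def risk_benefit_score(t):
    # Favor the option whose benefit most outweighs its clinical risk.
    return t["benefit_qaly"] * (1 - t["risk"])

def cost_benefit_score(t):
    # Favor the option that delivers the most benefit per dollar spent.
    return t["benefit_qaly"] / t["cost"]

for name, score in [("risk-benefit", risk_benefit_score),
                    ("cost-benefit", cost_benefit_score)]:
    best = max(treatments, key=lambda k: score(treatments[k]))
    print(f"{name} objective recommends: {best}")

# With these numbers, the risk-benefit objective recommends Option A,
# while the cost-benefit objective recommends Option B.
```

Same patient, same data, two defensible recommendations; which one the system produces depends entirely on the assumption its builders baked in.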

Understanding these assumptions leads to another consideration. Is your organization committed to transparency? Are you willing to explain these assumptions, including the objectives of decisions using AI, as well as how patient data will be used?

Building trust is paramount to gaining acceptance of new technology. Privacy concerns about personal and health data usage are at an all-time high, even hindering consumer adoption of mobile and digital health tools, according to the 8th annual Industry Pulse survey by Change Healthcare and the HealthCare Executive Group (HCEG). Many people are uncomfortable with technology they don't yet understand.

While Accenture reports that 94% of health executives believe that treating customers as partners is important or very important to gaining consumer trust, Dr. Safavi says that few healthcare organizations have made public disclosures about their use of AI and related data usage.

"It really goes to the issue of responsible and explainable AI," says Dr. Safavi.

"Being able to explain the process used to arrive at a decision can be critical to trust, safety, and compliance," according to the Accenture report. In addition, it cautions that "healthcare organizations must raise AI systems to act responsibly as AI represents the business in every action that it takes."

2. What biases does the data contain?

GIGO—garbage in, garbage out—was once a popular way to describe the consequences of poor data input. With AI, that flaw is magnified. 

"Inaccurate data leads to corrupted insights and skewed decisions," according to the Accenture report. "In healthcare, these vulnerabilities can do great harm because data underpins medical decisions, treatment plans, and even whether an insurance claim is accepted or denied." For example, erroneous data in an EMR could lead to misdiagnosis or mistreatment of a patient.

Defects are often related to biases, which may not be apparent at first glance. It is crucial to not only explore the source of data, but to probe deeply into the parameters that define it. 

Assess the completeness of a data set, says Dr. Safavi. For example, perhaps your data only encompasses patients aged 65 and older, but your population spans all ages. The data therefore contains a bias, and results cannot be accurately projected for patients younger than the original threshold of 65, he says.
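A minimal sketch of that kind of completeness check, with hypothetical column names and ages, might look like this:

```python
# Minimal sketch (hypothetical column names and data) of a pre-deployment
# check: does the training data cover the population the model will score?
import pandas as pd

def check_age_coverage(train: pd.DataFrame, target: pd.DataFrame,
                       age_col: str = "age") -> None:
    train_min, train_max = train[age_col].min(), train[age_col].max()
    out_of_range = target[(target[age_col] < train_min) |
                          (target[age_col] > train_max)]
    pct = 100 * len(out_of_range) / len(target)
    print(f"Training data covers ages {train_min}-{train_max}.")
    print(f"{pct:.1f}% of the target population falls outside that range; "
          "predictions for those patients are extrapolations.")

# Example: a model trained only on patients 65 and older,
# applied to a population that spans all ages.
train = pd.DataFrame({"age": [65, 70, 78, 84, 91]})
target = pd.DataFrame({"age": [8, 23, 41, 57, 66, 72, 88]})
check_age_coverage(train, target)
```

The same pattern extends to any parameter that defines the data, such as race, payer mix, or geography: compare the training set's distribution against the population the model will actually serve.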

Other biases could inadvertently be built into the data, perhaps not controlling for racial or socioeconomic factors.

AI will perpetuate and possibly exacerbate the issue because "that bias [is] baked into the computer; that's how it got trained," says Dr. Safavi. "Because the people using it don't know the data might be biased, they would have no way of knowing if that conclusion is right or wrong."

It's essential to understand the source of data that will be used in your system and question its accuracy, appropriateness, and biases up front.

3. How will we apply this technology and ensure that it does no harm?

One important distinction in healthcare systems is whether AI will be used for decisions impacting patient care or hospital operations. Both require careful oversight, but the impact is different.   

When AI is used for clinical applications, "In the end, the person held accountable is the person using that technology," cautions Dr. Safavi.

"Therefore, the clinician is on the hook for the decision. If there's any implementation of [AI] technology to directly make a recommendation for a patient—a direct diagnosis or treatment recommendation—we're going to have to go through the same kind of vetting that a drug goes through, where there are actually experiments and trials done. The efficacy is tested and validated. I don't think you're going to see a lot of people taking an AI agent that is used for diagnosis or treatment quickly into [practice]," he says.

John Couris, MS, president and CEO of Tampa General Hospital, has experience overseeing AI used in clinical decisions as well as operational processes. During his previous tenure at Jupiter Medical Center, that hospital used an oncology product developed by IBM Watson Health that employed AI to produce recommended treatment plans.

At Tampa General, Couris recently invested in a command center solution from GE Healthcare that will anchor a new care coordination center intended to enhance patient safety, quality, and efficiency; he expects it to save the system $62 million over five years.

Before the hospital institutes a change, its data is processed through a "digital twin," which uses AI to examine the impact of potential decisions as specific variables are changed.

"In my opinion," says Couris, "that's the safest environment you can create because you're building a virtual experience and looking at the results before putting it into operation. That's pretty darn powerful. That does not happen broadly across our industry right now. What better way to safeguard somebody or something than be able to anticipate through AI what the potential result will be before you actually implement it?"

Reengineering teams, whose members are intimately familiar with hospital processes and personnel, will provide oversight of every aspect of the transformation.

When using AI for either purpose, it is crucial to quiz suppliers about checks and balances built into their systems.

Algorithms in GE Healthcare's command center solution, for example, "have to be precise and simple," says Jeff Terry, MBA, FACHE, CEO of healthcare command centers for GE Healthcare Partners. "Accuracy is essential, and GE’s platform constantly monitors it. Machine learning helps with this; training algorithms are constantly adjusting to be more accurate by comparing the prediction to the actual." 
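The specifics of GE's platform are not detailed here, but the general pattern Terry describes, continuously comparing predictions to actuals and flagging drift, can be sketched roughly as follows (thresholds and data hypothetical):

```python
# Minimal sketch (hypothetical thresholds and data) of ongoing accuracy
# monitoring: compare each prediction to the actual outcome and flag drift.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 100, max_mae: float = 2.0):
        self.errors = deque(maxlen=window)  # rolling window of recent errors
        self.max_mae = max_mae              # alert threshold

    def record(self, predicted: float, actual: float) -> None:
        self.errors.append(abs(predicted - actual))

    def mean_abs_error(self) -> float:
        return sum(self.errors) / len(self.errors)

    def needs_review(self) -> bool:
        # Flag the model for review or retraining when recent error drifts up.
        return self.mean_abs_error() > self.max_mae

monitor = AccuracyMonitor()
for predicted, actual in [(4.0, 5.0), (6.0, 9.5), (3.0, 7.0)]:
    monitor.record(predicted, actual)
print(f"rolling MAE: {monitor.mean_abs_error():.2f}, "
      f"needs review: {monitor.needs_review()}")
```

Whatever the vendor's implementation, executives can ask to see exactly this kind of check: how the system measures its own accuracy in production, and what happens when accuracy degrades.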

Meanwhile, the promise of AI is spurring unprecedented growth.

"The number of patient monitoring devices using the data to train AI models for predictive analytics in North America will rise from 23,000 at the end of 2017 to 1.2 million in 2021 with a compound annual growth rate of 172%," says Pierce Owen, principal analyst, end markets for ABI Research. The United States, he says, accounts for about 90% of those devices.

As we catapult into the future, decisions health leaders make now will determine whether those applications are ultimately beneficial or result in issues they didn’t anticipate.

Mandy Roth is the innovations editor at HealthLeaders.
