
Bias-Free AI and Algorithms in Healthcare Remain Elusive Goal

Analysis  |  By Scott Mace  
   June 30, 2021

Datasets still contain bias and hold back the ability of machine learning to improve healthcare.

Artificial intelligence (AI)–driven healthcare is widely expected to transform medical decision-making and treatment, but AI algorithms must be thoroughly tested and continuously monitored to avoid unintended consequences to patients, including bias.

In a commentary published in JAMA Network Open, Regenstrief Institute President Peter Embí, MD, calls for algorithmovigilance—a term he coined for scientific methods and activities relating to evaluation, monitoring, understanding, and prevention of adverse effects of algorithms in healthcare—to address inherent biases in healthcare algorithms and their deployment.

HealthLeaders spoke with Dr. Embí to understand this new concept. This interview has been edited for brevity and clarity.

HealthLeaders: Is this just about detecting bias in algorithms?

Peter Embí, MD: Algorithms learn from existing data, which have inherent biases in them, because they come from a biased system, where we have disparities in care. There's a risk that you can build into these algorithms that same bias, and then you essentially operationalize something that is skewed, because it was based on skewed data. When we start to have predictions made by these algorithms that guide therapy, guide diagnosis, or provide advice, we have to make sure that they're having the right effects, or the effects that we expect.

One of the key elements of algorithmovigilance is to say, much like we do with drugs, that just because we've done some initial studies doesn't mean it's going to perform the same way when we put it out into the world. And we need to be monitoring [predictions made by algorithms] to make sure that they're going to have the intended effect. It's already been demonstrated that if you don't do that, you can have unintended consequences, like worse care for certain vulnerable populations. The analogy I made between algorithmovigilance and pharmacovigilance, which has existed for quite some time, is apropos: in pharmacovigilance, we get FDA approval for a particular drug and put it out into the marketplace, but once it starts to be used much more widely, we often find effects that we weren't expecting, and the only way to know about them is to look for them. So that's a core piece of it.
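To make the monitoring idea concrete, here is a minimal sketch of what a post-deployment subgroup check could look like. It assumes hypothetical monitoring data with "subgroup", "predicted_risk", and "outcome" columns and an arbitrary flagging tolerance; it is an illustration of the concept, not drawn from any actual algorithmovigilance toolkit.

```python
# Illustrative sketch only: compare a deployed model's error rate across
# patient subgroups and flag subgroups that deviate from the overall rate.
# The column names, threshold, and tolerance are hypothetical assumptions.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Misclassification rate of a binary risk prediction, per subgroup."""
    predicted = (df["predicted_risk"] >= threshold).astype(int)
    errors = (predicted != df["outcome"]).astype(int)
    return errors.groupby(df["subgroup"]).mean()

def flag_disparities(rates: pd.Series, tolerance: float = 0.05) -> pd.Series:
    """Subgroups whose error rate exceeds the mean rate by more than `tolerance`."""
    return rates[rates > rates.mean() + tolerance]

if __name__ == "__main__":
    # Made-up monitoring data for illustration
    df = pd.DataFrame({
        "subgroup": ["A", "A", "B", "B", "B", "C", "C", "C"],
        "predicted_risk": [0.9, 0.2, 0.8, 0.7, 0.1, 0.6, 0.4, 0.3],
        "outcome": [1, 0, 0, 0, 0, 1, 1, 0],
    })
    rates = subgroup_error_rates(df)
    print(rates)                 # per-subgroup error rates
    print(flag_disparities(rates))  # subgroups performing notably worse
```

In a real deployment, such a check would run on an ongoing basis against live outcomes rather than a static sample, which is the "looking for unexpected effects" step Embí describes.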

Peter Embí, MD, is the President of Regenstrief Institute. Courtesy of Regenstrief Institute.

HealthLeaders: The pharma world is filled with trade secrets, yet we know a lot of the science behind how some drugs are produced. Science tests them and reports the results, and all of that is public. But there are concerns that AI is all trade secrets, in the sense that we have all these black boxes, and a problem with oversight is that we're just not working with an innovation system that offers any incentive for somebody to tell you how their black box works.

Embí: Even those who develop them don't really know why they're giving the output they're giving, because these are machine learning approaches. Oftentimes, when you do dig in and look at the parameters that are leading to the recommendations, they don't always make logical sense. Why should the fact that someone has a certain characteristic lead to a prediction that they're going to have a worse outcome and should be treated a certain way? It doesn't always make sense, except there's some multi-step correlation that we don't fully understand. And yet, that is the parameter that is predictive. As we continue to use these algorithms in practice, … I think it would be important to continue to call for that level of transparency, at least for those who are regulating them, if not for the rest of us. Whether it's a black box or not, you're determining what's happening in the real world.

And I think that's the most important piece of this. We can't presume that it's going to have the effect we expect it to have. We have to check for that. It's an ethical imperative that we do. And then I think, to your point, regulations need to follow. Even as the FDA and others grapple with how they're going to monitor and evaluate and certify these things, that ongoing piece of post-market surveillance is going to be critical.

HealthLeaders: So, what do I do as a technology provider to take the corrective action needed?

Embí: It's going to become increasingly important that a symbiotic relationship between the technology company and the end user or customer exists, with ongoing monitoring to ensure how it's deployed, how it's used, what the outcomes are, and whether those outcomes fit within the expected parameters of what we anticipate would happen. I am not advocating that we don't use algorithms. In fact, I want the use of algorithms to be advanced for good. But if we aren't aware that they can have unintended consequences, we may not identify the downsides. So, I think [technology providers] have to build that in.

HealthLeaders: What's the road to bias-free data sets? How do we get there?

Embí: I don't know that I have the full answer to that, except to say that our current datasets, and probably our datasets for some time, are going to have inherent biases in them. Pulling from other experiences, it may not be that we can achieve the nirvana of completely bias-free datasets. But until we get there, we should work hard to include a more representative sample of individuals and experiences in those datasets. The first step is to understand that we have a problem, that we do have bias, and to characterize what those biases are. Because the better we understand and can characterize the existing biases in our datasets, the better we're going to be able to understand what we need to be correcting.
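One way to picture the "characterize what those biases are" step is a simple representation audit. The following is a minimal sketch, assuming a hypothetical training dataset with a "subgroup" column and made-up reference population shares; the column name and numbers are illustrative assumptions, not from any specific dataset.

```python
# Illustrative sketch only: measure how far each subgroup's share of a
# training dataset departs from its share of a reference population.
# The "subgroup" column and reference shares are hypothetical assumptions.
import pandas as pd

def representation_gap(df: pd.DataFrame, reference_shares: dict) -> pd.DataFrame:
    """Compare each subgroup's dataset share to its reference-population share."""
    dataset_shares = df["subgroup"].value_counts(normalize=True)
    rows = []
    for group, ref in reference_shares.items():
        observed = float(dataset_shares.get(group, 0.0))
        rows.append({"subgroup": group,
                     "dataset_share": observed,
                     "reference_share": ref,
                     "gap": observed - ref})
    return pd.DataFrame(rows)

# Made-up example: subgroup C is underrepresented relative to the reference
df = pd.DataFrame({"subgroup": ["A"] * 50 + ["B"] * 40 + ["C"] * 10})
print(representation_gap(df, {"A": 0.4, "B": 0.4, "C": 0.2}))
```

Representation is only one dimension of dataset bias, but quantifying gaps like these is a first step toward knowing what a correction would need to address.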

“A symbiotic relationship between the technology company and the end user or customer needs to exist, with ongoing monitoring to ensure how it's deployed, how it's used, what the outcomes are, and whether those outcomes fit within the expected parameters of what we anticipate would happen.”

Scott Mace is a contributing writer for HealthLeaders.


KEY TAKEAWAYS

Data used by algorithms and machine learning to improve healthcare often contains inherent bias.

The existing FDA pharmacovigilance process provides an example of how similar vigilance should apply to AI and algorithms used in healthcare.

More representative samples of individuals and their experiences in healthcare can help reduce or eliminate bias.

