Artificial intelligence should be used to support, not supplant, the healthcare provider.
Editor's Note: Jeremy VanderKnyff, PhD, is the chief integration and informatics officer for Proactive MD, a South Carolina-based advanced primary care provider.
Artificial intelligence (AI) and emerging chatbot technologies are revolutionizing the way healthcare is being delivered. Designed to mimic human intelligence to improve upon and perform standard operational tasks, AI is used by numerous healthcare organizations worldwide, with new products entering the marketplace daily.
Integrating AI in healthcare has a number of advantages, including accelerating the discovery of insights, making instantaneous clinical decisions, minimizing a provider’s time spent on administrative tasks, and supporting training and education—all of which have the potential to generate optimal health outcomes for patients.
With such cutting-edge advancements emerging at a rapid rate, there’s often an unidentified and wholly unexplored type of risk involved. The almost daily introduction of new healthcare AI capabilities significantly outpaces our ability to measure and diligently analyze their accuracy and performance. It’s crucial to try to identify the shortcomings of AI in healthcare in order to best use its advantages, and that means we have to ask: what are providers willing to sacrifice, and what do patients lose?
Considering the Unconscious Biases of Artificial Intelligence in Healthcare
There are few facets of healthcare unaffected by the unconscious biases of AI, including risk identification, healthcare claims, and data security. Research has found that some of the algorithms underpinning these technologies severely underestimate patient risk. This flaw exacerbates a problem that underserved populations already face: underdiagnosis. Historically, underdiagnosis has disproportionately affected people of color, playing out in conditions such as back pain in black women and prostate cancer in black men.
Many healthcare organizations implement AI technologies to help identify patients who may need closer management during care; however, because these tools have been trained on fundamentally flawed datasets, they often underestimate patient risks. As this overwhelmingly impacts people of color, it deepens the barriers to proper care that already afflict underserved populations.
Additionally, one lurking variable to consider is how access to care further skews the datasets AI references. A significant portion of the population lacks proper access to healthcare and therefore generates fewer claims. To an 'uninformed' AI model, these individuals would present as healthy—which could be far from the truth. In fact, research tends to show that lack of access to care leads to poorer health outcomes over time.
The Threats of Artificial Intelligence Technologies on Patients
Innovative technology, like a chatbot, can simulate human interaction. It can joke with you, educate you, and give you advice like another human would.
However, as the use of chatbot technology in healthcare increases, the personal touch and impact that a provider can have on patients and their health begins to be sacrificed in the name of advancement. The personal element of healthcare interactions is a costly thing to render obsolete, and removing it entirely is a choice that will continue to reveal negative repercussions the more widely it’s implemented.
We are seeing a surge in healthcare companies rebranding as 'technology companies,' despite the fact that healthcare is fundamentally human-centric. Healthcare is one of the most sensitive, personal, and important things in our lives, and it’s crucial for us to consider the impact that these tools will have on patients.
It’s easy to take our own health access and health literacy for granted; we make appointments, we understand diagnoses, and if we don’t know what a certain diagnosis means, we have the tools to ask those meaningful follow-up questions. Artificial intelligence is a convenient tool for the healthcare-informed.
For underserved, underrepresented populations, however, artificial intelligence is a health barrier at best and a severe health risk at worst. As healthcare leaders, how much are we willing to make human-centered healthcare 'a thing of the past' in order to incorporate new AI technologies?
Artificial Intelligence Tools as Decision Support, Not Decision Makers
The foundation of our problem is that we are asking, 'How do we bypass the human altogether?' We should be asking, 'How do we find the right balance?' There has to be a middle ground where the best of human touch and advanced technology can thrive together.
No human is infallible. That becomes unmistakable when we recognize that our own shortcomings shape, and ultimately 'flaw,' the AI systems we create. As the healthcare landscape continues to change rapidly, we should treat AI tools as clinical decision support, rather than as decision makers. Artificial intelligence technology is meant to be an extension of the human provider, augmenting decisions rather than replacing them outright.
Nothing can replace the compassionate care of a provider. Providers can see you, hear your stories, grieve with you, rejoice with you, and offer a human experience that can never be replicated—even with all the knowledge in the world at their fingertips. Human providers have healthcare stories of their own, and their ability to empathize with you is not based on programming. It ultimately comes down to us, as healthcare leaders, understanding that we should not rush into new technologies, no matter how efficient or lucrative they may seem.
We have a responsibility to ensure that any new technologies only enhance our abilities and 'do no harm,' and that seems possible only by retaining the human touch in healthcare that providers alone can deliver.
Care to share your view? HealthLeaders accepts original thought leadership articles from healthcare industry leaders in active executive roles at provider and payer organizations. These may include case studies, research, and guest editorials. We neither accept payment nor offer compensation for contributed content. Send questions and submissions to Erika Randall, content director, firstname.lastname@example.org.