With new generative artificial intelligence (AI) solutions exploding onto the market, healthcare administrators face a dilemma: capitalize on near-term efficiency gains, or wait for validation of safety and effectiveness?
We believe the answer lies somewhere between punting safely and throwing the long pass: kick the field goal of responsible early adoption while working to understand the complexity over the long term.
To get a sense of what the adoption cycle for emerging AI-powered healthcare solutions might look like, one need only consider other significant tech innovations that seemed far-fetched just a few years ago. One provocative comparison is the self-driving car, which, like AI healthcare technology, can have serious, even life-threatening consequences if things go awry.
Trusting the technology
There is no doubt that both healthcare AI and self-driving cars are going to revolutionize our lives. As these innovations become normalized, society will come to trust the technology more than humans, much like pilots trust their instruments during low-visibility weather conditions.
Consider, for instance, a recent experience in San Francisco, taking a Waymo driverless taxi to a restaurant. It was shocking, quite frankly, how well the autonomous vehicle worked. While it felt odd not having a human driver, at no point was there concern about getting into an accident. However, as the self-driving taxi navigated through the busy streets, something interesting occurred.
Arriving near the destination, the self-driving taxi stopped short and would not pull into an alley. Presumably, the technology sensed something blocking the road, requiring its human passengers to exit the car and walk the rest of the way.
Another recent Waymo faux pas occurred when the driverless car was unable to navigate a detour because a human police officer was using hand signals directing traffic to go left. The car stopped and the officer peered into the back seat to get the passenger’s reaction. Tossing up his hands, all the helpless passenger could say was, “I’m not in control.” Instead of taking the left turn, the car ultimately went around the police officer to the right, because it couldn’t recognize hand signals.
Two steps forward, one step back
These are great examples of how far driverless vehicles have come, and of where limitations still exist, both in functionality and in public trust. The point is that it takes time and many iterations before a new innovation is ready for widespread adoption.
Indeed, Google revealed its driverless car project in 2010 and conducted the first public test in Phoenix, Arizona, in November 2017. In 2021, Waymo began testing its robotaxi service in San Francisco, and it is now on its sixth generation of hardware and software, which promises greater resolution, range, and computing power. However, despite nearly 15 years of development and testing, autonomous vehicles need many more real-life encounters to resolve edge cases, such as random road blockages, manual hand signals, and errant honking, before they are ready for prime time. The same should hold true for AI healthcare solutions: they should be carefully monitored to identify and resolve the edge cases that will invariably crop up.
Adopting AI in healthcare
Just as we have seen with self-driving cars, the unintended consequences of AI in healthcare are not unintended if you know they are going to happen. This is why humans should be involved in AI testing early on, troubleshooting solutions before they are adopted across the healthcare ecosystem. Early autonomous test vehicles had humans in the driver’s seat, ready to take over if the car made a mistake. Enlisting qualified humans to monitor non-critical applications is a way to minimize risk and disruption for patients and providers. User acceptance then follows the predictable stages of the technology adoption lifecycle: Innovators (2.5%), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%), and Laggards (16%).
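The lifecycle stages above are cumulative: each segment builds on the ones before it, so an innovation does not reach half the market until the Early Majority has adopted. A minimal sketch, using the standard diffusion-of-innovations percentages quoted above, makes the running total explicit:

```python
# Diffusion-of-innovations segments (percent of total eventual adopters),
# as cited in the text.
SEGMENTS = [
    ("Innovators", 2.5),
    ("Early Adopters", 13.5),
    ("Early Majority", 34.0),
    ("Late Majority", 34.0),
    ("Laggards", 16.0),
]

def cumulative_adoption(segments):
    """Return (segment name, cumulative %) pairs showing how much of the
    market has adopted once each segment is on board."""
    total, out = 0.0, []
    for name, share in segments:
        total += share
        out.append((name, round(total, 1)))
    return out

for name, pct in cumulative_adoption(SEGMENTS):
    print(f"{name}: {pct}% of the market reached")
```

The running total crosses 50% only at the Early Majority stage, which is why rushing past the early stages, as the next paragraph warns, means deploying before most of the market trusts the technology.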
One of the pitfalls, of course, is trying to move through the lifecycle stages too quickly. We predict a cultural shift as different age groups increasingly accept having a non-sentient robot do something for them. It is easy to assume that AI and chatbots might take over the “translation” part of language solutions in healthcare, but avoiding costly mistakes, or worse, will take robots and humans working together. While low-risk, informationally focused AI applications for language solutions shouldn’t take as long to develop as driverless cars, only time will tell how quickly the healthcare industry adapts to AI interpreting. In these critical provider-patient conversations, the interpreter’s ability to accurately comprehend and relay medical terms and practices, and to handle sensitive situations with grace, awareness, and cultural competence, all come into play, helping to create an equitable and trusting environment.
Ultimately, trust is more important than technology. While an AI co-pilot might make translation more seamless, it will need a human partner to ensure that the communication is optimal between the provider and the patient. We foresee a shift in the role of translators and interpreters to support more complex cases, with the fundamental question being, “Will AI ever be able to read between the lines like a human?” A really good translator can understand the cultural implications of why someone is saying something. AI may never be able to do that, certainly not on the first iteration.
In a future perfect sense, the ultimate end state is singularity, defined as a hypothetical moment in time when AI becomes so advanced that humanity undergoes a dramatic and irreversible change. Like driverless cars, this could be 15-plus years in the making, if ever. For now, the industry should be asking: “How do we get humans to be better humans versus robots?” with AI doing the heavy lifting on repetitive, non-analytical requirements.
Countering skepticism
Where are we today in the adoption of AI for language translation? Clearly, a fair amount of skepticism that AI is over-hyped still exists among healthcare system CEOs and administrators (who arguably fall into the initial Innovators stage). Overcoming this resistance is the biggest challenge, and it will take unrelenting testing that iteratively verifies safety, accountability, and effectiveness against research data.
GLOBO’s current testing model starts with evaluating the quality of large language models (LLMs), using certified interpreters to listen and scrutinize AI transcription, translation, and speech output for accuracy, empathy, and latency. In a pilot testing scenario, these live resources can intervene if the technology gets something wrong, or the patient or provider needs further clarification. Near- and long-term, AI for language solutions should strengthen and not interfere with the patient-provider relationship.
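GLOBO’s internal rubric is not spelled out here, but the review loop described above, with certified interpreters scoring AI output for accuracy, empathy, and latency and intervening when needed, can be sketched as a simple pilot log. All field names, score scales, and thresholds below are illustrative assumptions, not GLOBO’s actual criteria:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One interpreter-reviewed AI exchange (all fields are assumed,
    not GLOBO's actual rubric)."""
    accuracy: int      # 1-5: were medical terms relayed correctly?
    empathy: int       # 1-5: was the tone right for a sensitive encounter?
    latency_ms: int    # delay before the AI rendition began
    intervened: bool   # did the certified interpreter have to step in?

def clean_pass_rate(records, min_accuracy=4, latency_budget_ms=2000):
    """Share of exchanges that passed without human intervention,
    met the accuracy bar, and stayed within an assumed latency budget."""
    passed = [
        r for r in records
        if not r.intervened
        and r.accuracy >= min_accuracy
        and r.latency_ms <= latency_budget_ms
    ]
    return len(passed) / len(records)

pilot_log = [
    ReviewRecord(accuracy=5, empathy=5, latency_ms=900, intervened=False),
    ReviewRecord(accuracy=4, empathy=3, latency_ms=1500, intervened=False),
    ReviewRecord(accuracy=2, empathy=4, latency_ms=800, intervened=True),
]
print(f"clean pass rate: {clean_pass_rate(pilot_log):.0%}")
```

The design point is the one the article makes: the human interpreter is not an afterthought but the measurement instrument, and every intervention is a data point that feeds the next iteration of testing.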
Collaboration is key
To responsibly assess and integrate AI into different care settings, we urge healthcare leaders to collaborate with a trusted partner. This type of engagement assures AI technologies are tested and configured to meet patient needs at various points along their health journey. Our team of experts is dedicated to designing the right AI-enabled tools to help your hospital, health system, or medical practice communicate with multilingual patients when it matters most.
Don’t allow the complexities of AI to hinder your goals for enhancing and expanding linguistic services to your clinicians, staff, and patients. Read our newly published research paper, “AI-Powered Medical Interpretation Study: Insights for Health Leaders,” to learn how we can help your organization leverage AI interpretation tools to better serve your non-English-speaking patients.
Stephen Klasko, M.D., is an executive in residence at General Catalyst and former president and CEO of Thomas Jefferson University and Jefferson Health. He is recognized as a transformative leader and advocate for innovation in our systems of health care and higher education. After nine years in Philadelphia re-imagining the future of Jefferson, he is now pursuing his vision of creative re-construction of healthcare to address health inequities.
Dipak Patel is CEO of GLOBO Language Solutions, a B2B provider of translation, interpretation, and technology services for multiple industries. Prior to GLOBO, Patel spent 20-plus years in corporate healthcare leadership roles. The son of immigrants, he understands the significance of eliminating language barriers to improve healthcare equity.