Other threats on the annual list include 'digital darkness,' substandard medical products and technology implementations that prompt unsafe clinical workflows.
The growing use of AI chatbots for dispensing medical advice is raising red flags in the healthcare industry. Simply put, you don't know where that data has been.
While healthcare leaders are embracing the technology in areas like call center operations and patient engagement, many worry that those chatbots could be harmful if not properly designed and managed. Several states are even moving to govern the technology in light of concerns that chatbots could give people potentially dangerous mental health advice.
That's why misuse of AI chatbots in healthcare has secured the top spot in ECRI's Top 10 Health Technology Hazards of 2026.
Chatbots built on large language models (LLMs), including ChatGPT, Grok, Copilot, Claude and Gemini, are designed to crunch data and provide answers through a human-sounding interface, but that interface could lull users into putting too much faith in those answers. Hallucinations, data drift and other problems could affect the technology, leading to incorrect diagnoses, unnecessary or harmful recommendations and even the promotion of unsafe practices.
"Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals," Marcus Schabacker, MD, PhD, president and chief executive officer of the Pennsylvania-based non-profit, said in a press release accompanying the report. "Realizing AI's promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI's limitations."
"AI models reflect the knowledge and beliefs on which they are trained, biases and all," he added. "If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems."
This year's list also illustrates how quickly AI is advancing in healthcare. Last year's ECRI list was topped by "Risks with AI-enabled health technologies," which had moved up from fifth place the year before. The specificity of this year's top entry might pave the way for more than one AI concern in future lists, especially as the technology moves further into different parts of the healthcare enterprise.
Second on this year's list is a long-standing concern brought on by the industry's reliance on technology. Healthcare leadership, from the CEO on down, is concerned about what would happen if a health system or hospital went dark, suddenly losing access to the EHR and other tech platforms.
This "digital darkness" concern is even making its way into pop culture. The current season of the HBO/MAX series The Pitt, chronicling a 15-hour shift in a Pittsburgh hospital ED, is set to include a sudden loss of access to the EHR, prompting clinicians to "go analog."
The digital darkness scenario didn't even make last year's list, while cybersecurity, always a top concern among healthcare leadership, dropped from third place last year to eighth place this year, where it focuses on safety risks from legacy medical devices.
Interestingly, last year's second-place threat, "Unmet technology support needs for home care patients," dropped off this year's list entirely, even as healthcare organizations move more services from the hospital to the home and smart devices and platforms take over more of the consumer technology space.
The rest of this year's list consists of a wide variety of technology hazards:
- Substandard and falsified medical products
- Recall communication failures for home diabetes management technologies
- Misconnections of syringes or tubing to patient lines, particularly amid slow ENFit and NRFit adoption
- Underutilizing medication safety technologies in perioperative settings
- Inadequate device cleaning instructions
- Cybersecurity risks from legacy medical devices
- Health technology implementations that prompt unsafe clinical workflows
- Poor water quality during instrument sterilization
Eric Wicklund is the Associate Content Manager and Senior Editor for Innovation and Technology at HealthLeaders.
KEY TAKEAWAYS
- Healthcare organizations are using AI chatbots to improve efficiency in areas like call centers, but the technology could be dangerous to consumers who ask for advice or recommendations.
- Healthcare leaders need to continuously monitor the technology and make sure all output is reviewed.
- Experts are also concerned about the impacts of a 'digital darkness' event, where a health system or hospital suddenly loses access to its technology platforms, like EHRs.