Artificial intelligence (AI) is rapidly becoming a staple in healthcare.
New solutions are entering the market at a lightning-fast pace. In fact, the United States has over 4,500 startups that focus on AI in healthcare, 47 of which were founded since the start of 2025.
The emergence of these tools stands to create operational efficiencies, assist with diagnoses, and improve patient care. But beneath this excitement lies a more sobering reality: AI is still far from infallible. In some critical areas, it’s falling short, sometimes with life-threatening consequences.
Trust is a key component of healthcare, and for AI to succeed, we need reliable solutions. Accenture's Technology Vision 2025 report emphasizes the critical need for trust in AI systems, a trust built on foundations of accuracy, predictability, consistency, and traceability.
What happens when AI fails our providers? And more importantly—how can we use those lessons to build better systems?
Here are three examples that should make all of us in healthcare approach implementing AI with extra care and consideration:
1. Prediction Gaps: AI Predictive Tools Missing Critical Patient Risks
A March 2025 study published in Communications Medicine tested machine learning models developed to predict patient mortality risk in hospitals. These systems were meant to flag early signs of patient deterioration—a tool that could, in theory, save lives. But the results were far from reassuring.
The study found that these models missed 66% of cases where a patient was at risk of death due to injuries and complications. In other words, AI systems failed to identify two-thirds of patients needing urgent attention.
For clinicians, this represents a serious concern. Predictive models are only as good as the data they are trained on, and when they fail, they don’t just cost money; they cost lives. If organizations are going to integrate AI into major care decisions, they need rigorous, continual validation in real-world environments, not just promising test data.
2. Automation Bias: How AI Can Undermine Clinical Confidence
What happens when AI causes highly skilled healthcare professionals to second-guess their own expertise? This phenomenon, known as “automation bias,” was the focus of a 2024 study that examined pathologists working under time constraints.
The research focused on how AI-assisted decision-making could unintentionally lead doctors to trust the machine’s output over their own better judgment. Surprisingly, pathologists who initially made correct diagnostic assessments were 7% more likely to override their decision in favor of an incorrect AI recommendation when under time pressure.
This is a troubling outcome: it suggests that when AI tools are not properly contextualized or explained, they can erode clinical judgment rather than support it. In high-stakes environments, doctors need support, not distractions. To prevent this, we need to design AI systems for explainability and transparency, not just predictive power.
3. Built-In Bias: When AI Learns Our Flaws
Finally, we can’t discuss AI concerns without addressing the issue of bias. Several reports in recent years have noted that AI algorithms can inherit and amplify systemic inequities, depending on how the tools are developed and trained.
In a notable study of algorithmic bias, researchers found that an AI tool used across major U.S. health systems recommended additional care for healthier white patients while underserving sicker Black patients. Why? The model used healthcare cost as a proxy for healthcare need, and historically, Black patients have received less care, not because they needed less, but because of longstanding disparities.
This is a prime example of why equity must be built into AI development from the ground up. Diverse training datasets, continuous auditing, and ethical review boards are critical for creating equitable patient experiences and outcomes.
Human-Centered AI: Designing for Better Healthcare Outcomes
According to Julie Sweet, Accenture chair and CEO, “... unlocking the benefits of AI will only be possible if leaders seize the opportunity to inject and develop trust in its performance and outcomes in a systematic manner so businesses and people can unlock AI’s incredible possibilities.”
AI is not the downfall of healthcare—it’s one of the most promising innovations to hit the industry in recent years. But we have to be honest about where it’s currently underperforming. The path forward isn’t blind adoption of new tools. Instead, it’s critical integration, driven by real data, cross-disciplinary collaboration, and an unwavering focus on positive patient outcomes.
Healthcare leaders and providers need to stop asking, “Can AI solve this?” and start asking, “Should it?” And if so, how can we do it better?
It’s critical for hospitals and health systems to partner with companies that have a proven track record in the areas their solutions aim to address. These tools should be grounded in research and co-developed with healthcare systems.
At GLOBO, we’re on a mission to transform patient communication. As a leader in the language services industry, we have over 15 years of experience reducing friction for healthcare providers serving multilingual patients. We envision a future for language access that leverages both human and technology-enabled solutions to improve experiences for healthcare organizations and their diverse communities.
Want to learn more about GLOBO's AI-enabled innovations? Request a briefing session with one of our language access experts here.
Dipak Patel is CEO of GLOBO Language Solutions, a B2B provider of translation, interpretation, and technology services for multiple industries. Prior to GLOBO, Patel spent 20-plus years in corporate healthcare leadership roles. The son of immigrants, he understands the significance of eliminating language barriers to improve healthcare equity.