Survey Highlights Disturbing Use of 'Shadow AI' by Hospital Execs and Clinicians

Analysis  |  By Eric Wicklund  
   January 28, 2026

More than 40% of healthcare execs and clinicians surveyed say they know of a colleague who has used unapproved AI. And 17% have done it themselves.

A top concern of healthcare executives is that AI adoption among clinicians is moving faster than governance. A new survey backs up that concern.

Some 41% of clinicians and administrators taking part in a Wolters Kluwer survey say they are aware of colleagues who are using AI tools that haven’t been approved by the health system or hospital. And 17% say they’ve done it themselves.

Experts call this “shadow AI,” and point out that it could be dangerous.

“Doctors and administrators are choosing AI tools for speed and workflow optimization, and when approved options aren’t available, they may be taking risks,” Yaw Fellin, SVP and General Manager of Clinical Decision Support and Provider Solutions for Wolters Kluwer Health, said in a press release. “Shadow AI isn’t just a technical issue; it’s a governance issue that may raise patient safety concerns. Leaders must act now to close the policy gap around AI use, develop clear compliance guidelines, and ensure that only validated, secure, enterprise-ready AI tools are used in clinical care.”  

To address the use of shadow AI, a white paper accompanying the survey offers six steps:

  • Develop clear policies on AI use
  • Foster collaboration between policy decision-makers and users
  • Identify purpose-built AI tools that support enterprise-wide security and goals
  • Clearly communicate AI policies and provide training sessions
  • Provide broader training on AI literacy
  • Continue to monitor for uses of shadow AI and gather feedback

The reasons for skirting protocol and using unauthorized AI? Some 51% of administrators and 45% of providers say the tools create faster workflows, while 39% and 27%, respectively, say the tools either offer better functionality or that management isn’t offering those tools yet. And 10% of administrators and 26% of providers say they’re simply curious or want to experiment with the technology.

But getting ahead of governance puts the health system at risk if something goes wrong.

Administrators are far more likely than providers to be involved in formulating AI policy (30% vs. 9%), yet neither group seems fully aware of what their organization’s policies say. Only 41% of administrators and 35% of providers said they’re very familiar with their organization’s AI policy and follow the rules closely.

In addition, 42% of administrators and 30% of providers strongly agreed that their organization’s AI policies are clearly communicated.

“These differences show that AI policies need to be clearly communicated in multiple locations, not only by email or enterprise communications, but also in point-of-care locations such as the EHR,” the Wolters Kluwer report concluded. “Training sessions are even more critical as AI is an emerging and constantly evolving technology, and even the most technology-savvy employees may not understand the latest risks and opportunities. Training sessions can also support active learning and policy reinforcement among providers as enterprise tools are established.”

Other highlights of the survey:

  • One in every 10 respondents said they had used an unapproved AI tool for a direct patient care use case.
  • Both providers and administrators listed patient safety as the top AI risk. Providers ranked inaccurate outputs second and privacy third, while administrators ranked privacy second and data breaches third.
  • 60% of administrators and 37% of providers said they frequently use AI tools to improve efficiency.
  • 34% of providers and 15% of administrators said they use AI tools only occasionally for specific tasks.
  • 94% of administrators and 80% of providers either agree or strongly agree that AI will significantly improve healthcare in the next five years.

Eric Wicklund is the Associate Content Manager and Senior Editor for Innovation and Technology at HealthLeaders.


KEY TAKEAWAYS

Healthcare leaders have expressed concern that they can’t keep up with AI adoption, and governance is falling behind.

A new Wolters Kluwer survey finds that administrators and clinicians are using AI tools that haven’t been approved by leadership.

In addition, only 35% of clinicians – and 41% of administrators – say they're very familiar with their organization’s rules governing the use of AI and follow those rules closely.
