A Wolters Kluwer survey finds that some people are using AI tools that haven’t been approved by management, potentially exposing the organization to liability and perhaps endangering patients.
A recent survey by Wolters Kluwer found that a surprising number of healthcare administrators and clinicians know of a colleague who has used shadow AI, meaning AI tools that have not been approved by management. And some have used the unsanctioned technology themselves.
The survey found that a significant number of administrators and clinicians are eager to test out new AI tools, even before their organization has given them the green light. Others say the value of those new tools, particularly in creating faster workflows, outweighs waiting for management to catch up.
Some feel that experimenting with innovative new technology should be supported, but using that tech before it has been fully vetted and approved carries real risk. In fact, 10% of the survey's respondents said they'd used shadow AI for a direct patient care use case.
Management bears some of the responsibility for keeping governance up to date, but it's also up to administrators and clinicians to know their organization's AI policies and follow them.
Read the story here. And check out the infographic below for some key statistics from that survey.
Shadow AI Chart by Eric Wicklund
Eric Wicklund is the Associate Content Manager and Senior Editor for Innovation and Technology at HealthLeaders.