Doctors under pressure to work more efficiently are turning to “shadow AI” – artificial intelligence applications adopted outside a formal hospital approval process. A new survey of U.S. healthcare personnel found that many administrators have encountered unauthorized AI tools in their organizations, including some used for direct patient care.
U.S. healthcare providers are struggling under rising patient volumes in the midst of an ongoing workforce shortage, a situation that’s leading to burnout among clinicians.
- AI is often touted as a possible solution by enabling providers to do more with less, but the jury is still out on whether this works in the real world.
The new survey was conducted by Wolters Kluwer Health to assess usage of what the report described as “shadow AI,” or AI that’s adopted without proper hospital authorization processes.
- Shadow AI introduces data, security, and privacy risks, underscoring the need for an enterprise approach to AI with appropriate controls.
It’s worth noting that the report’s use of the term “authorization” refers primarily to an institution’s internal approval and governance processes for AI, not formal FDA regulatory authorization.
- AI algorithms that aren’t used for direct patient care don’t require FDA authorization, as the agency pointed out in a guidance just a few weeks ago.
Researchers surveyed 518 health professionals, finding…
- 41% were aware of colleagues using unauthorized AI tools.
- 17% said they had personally used an unauthorized tool.
- 10% said they had used an unauthorized AI tool for direct patient care.
While the report’s recommendation for stronger AI governance is valid, there could be a competitive subtext to the findings. Wolters Kluwer offers healthcare clinical decision support solutions, and the company is currently locked in a fierce battle with OpenEvidence for dominance in the CDS space.
- OpenEvidence’s CDS solution is wildly popular with clinicians, many of whom install and consult the software on their own, outside enterprise-level governance – exactly the kind of “unauthorized” use the new report criticizes.
The Takeaway
The Wolters Kluwer report could be shedding light on a concerning new trend, or it could represent an effort by an established player to shut out a competitive threat. Either way, its warning on the need for appropriate enterprise-level AI governance should not be ignored.