Mount Sinai study reveals AI’s 47% error rate with fake docs' notes

By Reuters / South China Morning Post  
   February 11, 2026

A new study has found that AI tools are more likely to give incorrect medical advice when the misinformation originates from what the software perceives as an authoritative source. In tests of 20 open-source and proprietary large language models, researchers reported in The Lancet Digital Health that the software was more easily misled by errors embedded in realistic-looking doctors' discharge notes than by mistakes appearing in social media conversations.
