A new study has found that AI tools are more likely to give incorrect medical advice when the misinformation comes from what they perceive as an authoritative source. Testing 20 open-source and proprietary large language models, researchers reported in The Lancet Digital Health that the models were more easily misled by errors in realistic-looking doctors’ discharge notes than by mistakes in social media conversations.
...