The American Medical Association has adopted a new policy calling for clinical AI tools that can explain their answers, and for AI vendors to provide safety and efficacy data. For an AI tool to be explainable, it should be able to cite sources or back up its decisions with data clinicians can review. The AMA adopted the policy at its annual House of Delegates meeting in Chicago this week, and the policy also calls for an independent third party, such as a regulatory agency or other certifying body, to verify that AI tools are actually explainable.
In a social media landscape shaped by hashtags, algorithms, and viral posts, nurse leaders must decide: Will they let the narrative spiral, or can they adapt and join the conversation?
...