A group of healthcare organizations that have joined together to advance AI adoption has released a set of guidelines designed to help providers use the technology responsibly.
The Coalition for Health AI (CHAI), which includes the Mayo Clinic, Johns Hopkins University, Stanford Medicine, Google, and Microsoft, this week unveiled its Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare. The 24-page document offers guidance on using AI in healthcare in ways that meet clinical and quality standards.
“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” Brian Anderson, MD, a co-founder of the coalition and chief digital health physician at MITRE, said in a press release. “The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care.”
The guidelines, which build upon the White House Office of Science and Technology Policy's (OSTP) Blueprint for an AI Bill of Rights and the AI Risk Management Framework (AI RMF 1.0) developed by the US Commerce Department's National Institute of Standards and Technology (NIST), come at a crucial time for the development of AI in healthcare. The technology has been praised as an exciting new tool and criticized as a dangerous trend.
"In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology," John Halamka, MD, MS, president of the Mayo Clinic Platform and a co-founder of the coalition, said in the press release. "Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry."
Launched roughly one year ago, CHAI also includes Berkeley, Duke Health, UCSF, Vanderbilt University Medical Center, Change Healthcare, MITRE, and SAS, and counts several federal organizations, including the Centers for Medicare & Medicaid Services (CMS), US Food & Drug Administration (FDA), and Office of the National Coordinator for Health IT (ONC), as observers.
The group is also collaborating with the National Academy of Medicine (NAM) on separate guidelines for the responsible development and adoption of AI in healthcare delivery.
“We have a rare window of opportunity in this early phase of AI development and deployment to act in harmony—honoring, reinforcing, and aligning our efforts nationwide to assure responsible AI," NAM Senior Advisor Laura Adams said in the press release. "The challenge is so formidable and the potential so unprecedented. Nothing less will do."
Eric Wicklund is the associate content manager and senior editor for Innovation, Technology, Telehealth, Supply Chain and Pharma for HealthLeaders.