Healthcare organizations and tech companies say the President's Executive Order is a good first step toward establishing AI policy, but will actions support the words?
For healthcare executives waiting for AI guidance from the federal government, Monday's executive order from President Joe Biden offered some comfort. After all, it's the thought that counts.
The order "directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems," a fact sheet accompanying the order states. But there's little detail about how it would affect healthcare organizations or technology companies working with them.
Across all industries, Biden's order seeks to balance regulation with encouragement, establishing guidelines for controlling risk while giving the green light to grants and other funding models to support research.
Specifically for healthcare, the President is giving the U.S. Department of Health and Human Services (HHS) six months to draft a strategy for determining whether AI tools meet the standards for delivering healthcare, and he is asking HHS to convene a task force within the year to develop a plan for responsible AI use.
In addition, the order calls on companies developing generative AI tools to notify the government when they are testing those tools and to share the results of those tests.
Through HHS, Biden's order envisions grants that would cover, among other things:
- Developing AI-enabled tools that create personalized immune-response treatments.
- Improving the quality of data used in AI tools.
- Addressing administrative tasks that hamper efficiency and contribute to stress and burnout.
- Improving care for veterans and developing nationwide "AI Tech Sprint" competitions to support start-ups and small healthcare tech companies.
Reaction to the announcement was, for the most part, positive, with health system executives and industry analysts praising the administration for taking action. At the same time, many noted that the nation is lagging behind Europe in developing AI policy, and that many healthcare organizations are already testing and even using AI tools.
In a press release issued on October 25 in advance of Biden's order, Mike Thompson, vice president of enterprise data intelligence at Cedars-Sinai, said it's incumbent upon the healthcare community to develop its own standards for ethical and responsible AI use.
"While many general principles of AI ethics apply across industries, the healthcare sector has its own set of unique ethical considerations," he noted. "This is due to the high stakes involved in patient care, the sensitive nature of health data, and the critical impact on individuals and public health."
"It is critical that AI in healthcare benefit all sectors of the population, as AI could worsen existing inequalities if not carefully designed and implemented," he added. "It's also critical that we ensure AI systems in healthcare are both accurate and reliable. Ethical concerns arise when AI is used for diagnosis or treatment without robust validation, as errors can lead to incorrect medical decisions."
To that end, many of the tech giants developing AI tools—sometimes in partnership with health systems—pledged at a July meeting at the White House to adhere to a set of voluntary guidelines, including allowing independent experts to assess tools before their public debut, researching societal risks related to AI, and allowing third parties to test for system vulnerabilities.
Among health systems, Duke Health announced the launch of an AI and Innovation Lab and Center of Excellence in a five-year partnership with Microsoft "aimed at responsibly and ethically harnessing the potential of generative artificial intelligence (AI) and cloud technology to redefine the healthcare landscape."
Among those weighing in on the President's order was Bill Gassen, president and CEO of Sanford Health, who attended Biden's press conference at the White House.
“We believe that emerging AI technologies have the potential to positively transform the future of care delivery [and] advance rural health equity and are key to the industry's long-term sustainability," he said. "While we need to be appropriately cautious in our adoption of these technologies, we are supportive of the development of frameworks and guidelines that would enable healthcare providers to responsibly and safely use AI-enabled technology so that we can address some of the most pressing challenges in healthcare today related to our workforce, access to care and quality improvement for all."
"It will be critical that the guidance strikes the right balance between setting appropriate guardrails but is not unnecessarily stifling so we do not impede innovation and progress where it’s needed most," he added. "We look forward to collaborating with industry leaders, elected officials and the Administration on these efforts to ensure we can harness these technologies in the best interest of our patients and caregivers.”
One organization taking advantage of the hype surrounding Biden's Executive Order was the American Telemedicine Association, which published its guidelines for responsible use of AI in telemedicine programs.
"With today's Executive Order issued by President Biden to ensure the safe, secure, and trustworthy use of artificial intelligence, our timely release of the ATA's AI Principles can help chart the way forward as the administration works to create new standards for AI's potentially game-changing capabilities," Kyle Zebley, the ATA's senior vice president of public policy, said in a release. "AI is already being used in telehealth and its future potential is endless, especially to harness the reams of data that our healthcare system produces, including data collected from virtual care technologies, to improve healthcare delivery."
Several AI companies also chimed in, praising the administration for starting the conversation.
"Overall, the executive order is a great start and begins to lay a policy foundation to harness the benefits of AI, while addressing key challenges," Rajeev Ronanki, CEO of Lyric, said in a message to HealthLeaders. "By integrating it with ethical frameworks, liability, training/upskilling, AI literacy, oversight & auditing, data interoperability, and transparency, policy-makers can create a more comprehensive framework for AI."
But the question is whether HHS or another government entity has a plan—or the muscle—to go after a vendor or health system that runs afoul of the guidelines, and whether they can step in before serious damage is done. How will healthcare executives feel about HHS oversight? And will there be enough guidance from the government to help health systems map out and execute reliable and effective AI strategies?
Eric Wicklund is the associate content manager and senior editor for Innovation, Technology, and Pharma for HealthLeaders.
President Joe Biden's executive order on AI this week sets up the U.S. Health and Human Services Department as the lead enforcement agency on AI policy and promotion.
Healthcare organizations and tech companies, many of which are already testing and using AI tools, say the announcement is a good first step toward establishing AI governance.
Critics have said the U.S. is behind other nations in establishing policy, and some wonder whether the President's words will be supported by meaningful actions.