As federal efforts to limit AI oversight falter, healthcare leaders must prepare for a new regulatory landscape. Here's what hospital and health system executives need to know.
The U.S. Senate has voted 99–1 to strip a 10-year federal moratorium on state-level AI regulation from a sweeping legislative package.
The provision, part of President Trump’s “One Big Beautiful Bill,” would have blocked states from crafting their own rules for AI, handing regulatory authority solely to the federal government.
Why should we care? For hospitals and health systems, the provision's removal could mean a new era of vigilance, adaptation, and strategy as states move forward with their own approaches to AI governance.
Let’s dig into what this could mean.
The Senate's decision ensures that individual states retain the authority to regulate AI technologies, rather than deferring exclusively to federal oversight. This outcome is particularly significant for healthcare organizations that operate across multiple states.
Without a national regulatory framework, hospitals and health systems will now need to track a growing patchwork of rules, each with unique compliance requirements, privacy implications, and risk considerations. For large integrated systems, the burden of managing this complexity will increase.
At the same time, the vote may empower some states to respond more quickly, and more locally, to the evolving risks and opportunities of AI in healthcare. State legislatures could, for example, address regional concerns around algorithmic bias, data privacy, and workforce displacement.
But the benefits of responsiveness come with new responsibilities for healthcare leaders.
They must proactively monitor state-by-state developments, engage with policymakers where appropriate, and tailor their AI deployment strategies accordingly.
AI tools are rapidly being integrated into diagnostic workflows, administrative operations, and patient engagement platforms. Yet their effectiveness and safety depend heavily on how they're implemented.
Eric Poon, MD, MPH, Chief Health Information Officer at Duke Health, underscored this responsibility in a recent interview with HealthLeaders: “We openly tell our clinicians that AI is not perfect and AI is capable of making mistakes, so we as clinicians need to be responsible for everything that we take advantage of out of these products.”
In other words, AI is a tool, not a decision-maker. Human oversight remains essential.
Now more than ever, hospitals and health systems should continue to invest in building their AI governance structures. This includes formalizing internal review boards, establishing clear accountability structures, and providing targeted education to the frontline clinicians who will interact with AI tools daily.
The Senate's decision has opened the door to a more decentralized regulatory environment. To succeed in this new reality, hospital and health system executives will need to be nimble, informed, and proactive while ensuring that innovation proceeds responsibly.
Amanda Norris is the Director of Content for HealthLeaders.