
The Feds Want to Roll Back AI Transparency Rules. That Will Raise the Stakes for Health System Governance, Says AG

Analysis  |  By HealthLeaders Editorial Team
March 04, 2026

California AG Rob Bonta is urging federal health officials to reconsider a proposed rule that would eliminate key transparency requirements for AI tools used in healthcare.

For CIOs and health system executives accelerating AI deployment, the debate signals a pivotal moment in the future of AI governance, regulatory risk, and enterprise accountability.

In a letter to the Department of Health and Human Services, Mr. Bonta opposed a proposal that would remove federal certification criteria requiring “model cards” for AI-enabled healthcare products. The proposed rule, titled “Health Data, Technology, and Interoperability: ASTP/ONC Deregulatory Actions To Unleash Prosperity,” would scale back regulations tied to transparency and oversight of healthcare technology.

“I oppose the Trump Administration’s proposed rollback of regulations that require clarity about how AI tools used in healthcare were developed and tested. Delivering safe, effective and equitable access to healthcare services must be at the forefront of any attempt to integrate AI and healthcare,” Mr. Bonta said in a March 2 news release.

Model cards are structured disclosures that document how AI systems are trained, validated, and evaluated, including performance limitations and potential bias. Under current certification requirements introduced during the Biden administration, developers seeking federal certification must disclose training data characteristics and whether systems were assessed for fairness and bias.
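To make the concept concrete, the kinds of disclosures a model card captures can be sketched as a simple data structure. This is an illustrative sketch only; the field names below are hypothetical and do not reflect the actual federal certification schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the disclosure categories a model card covers.
# Field names are illustrative, not the actual certification criteria.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data_characteristics: str = ""  # e.g. population, time span, sites
    validation_summary: str = ""             # how performance was measured
    known_limitations: str = ""              # settings where the model may fail
    fairness_assessment: str = ""            # whether/how bias was evaluated

def missing_disclosures(card: ModelCard) -> list[str]:
    """Return the names of disclosure fields a vendor left blank."""
    checks = {
        "training_data_characteristics": card.training_data_characteristics,
        "validation_summary": card.validation_summary,
        "known_limitations": card.known_limitations,
        "fairness_assessment": card.fairness_assessment,
    }
    return [name for name, value in checks.items() if not value.strip()]

# A hypothetical vendor submission with incomplete disclosures:
card = ModelCard(
    model_name="SepsisRiskModel",
    intended_use="Early sepsis risk flagging in inpatient settings",
    training_data_characteristics="Adult inpatients, 2018-2023, three sites",
)
print(missing_disclosures(card))
```

A due-diligence team could use a checklist like `missing_disclosures` to flag gaps in vendor documentation before deployment, regardless of whether federal rules mandate the disclosures.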

The proposed rollback would eliminate that requirement.

Mr. Bonta characterized the model card provision as “one of the most significant guardrails currently in place on a federal level” governing AI use in healthcare.

For CIOs and digital health leaders, the implications extend well beyond documentation. AI tools are now embedded in clinical decision support, specialist referral pathways, disease screening, and risk stratification. These systems increasingly influence care prioritization, resource allocation, and patient outcomes. Governance frameworks that clarify how algorithms function are becoming foundational to risk management.

The California Attorney General’s office cited longstanding concerns that algorithms trained on incomplete or skewed datasets may perpetuate disparities. It referenced a 2019 study showing that a widely used hospital risk algorithm exhibited racial bias. In his letter, Mr. Bonta argued that eliminating the model card requirement would make it more difficult for providers to comply with federal and state nondiscrimination laws, including Affordable Care Act protections.

From an executive standpoint, this creates a widening regulatory gap. If federal transparency requirements are relaxed while states such as California strengthen oversight and issue their own guidance, health systems operating across multiple jurisdictions may face a fragmented compliance landscape. Last year, Mr. Bonta issued guidance clarifying how California law applies to AI systems used in care delivery, signaling that state-level enforcement may intensify regardless of federal posture.

The central governance question for health systems is not whether model cards are mandated, but whether internal AI oversight structures are mature enough to withstand scrutiny. Without standardized federal disclosures, the burden shifts more heavily onto providers to conduct due diligence on vendor tools, document algorithmic performance, and validate equity safeguards.

Removing certification requirements may reduce vendor reporting obligations, but it does not eliminate legal exposure. If an AI system contributes to discriminatory outcomes, clinical harm, or biased resource allocation, liability risk will likely fall on the deploying organization as much as the developer.

For boards and executive teams, the story underscores a broader reality. AI governance in healthcare is moving from a compliance exercise to a core enterprise risk function. Transparency artifacts such as model cards are becoming proxies for deeper institutional capabilities: algorithm review committees, multidisciplinary validation processes, audit trails, bias testing protocols, and ongoing performance monitoring.

If federal oversight loosens, leading systems will need to decide whether to maintain higher internal standards regardless of regulatory minimums. In a landscape where AI adoption is accelerating and public scrutiny is intensifying, governance maturity may become a competitive differentiator.

The outcome of this rulemaking process will shape the regulatory baseline. However, the long-term trajectory is clear. As AI becomes embedded in clinical and operational decision-making, governance frameworks will determine not only compliance posture, but also trust, equity, and strategic resilience.

This report was written and reviewed by multiple HealthLeaders editors.
