
Opinion: How to regulate generative AI in healthcare

By Harvard Business Review  
September 06, 2024

The usual approach to protecting the public and helping doctors and hospitals manage new healthcare technologies won't work for generative AI. To realize the full clinical benefits of this technology while minimizing its risks, we will need a regulatory approach as innovative as generative AI itself.

The reasons lie in the nature of the FDA's regulatory process and of this remarkable new technology. Generally speaking, the FDA requires producers of new drugs and devices to demonstrate that they are safe and effective for very specific clinical purposes.

Why won't this well-established framework work for generative AI? The large language models (LLMs) that power products like ChatGPT, Gemini, and Claude are capable of responding to almost any type of question, in healthcare and beyond. In other words, they have not one specific healthcare use but tens of thousands, and subjecting them to traditional pre-market assessments of safety and efficacy for each of those potential applications would require untold numbers of expensive and time-consuming studies.
