Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near "human level robustness and accuracy." But Whisper has a major flaw: it is prone to making up chunks of text or even entire sentences. These hallucinations can include racial commentary, violent rhetoric and even imagined medical treatments. Such fabrications are problematic because Whisper is used across a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
The healthcare industry stands at a transformative point where AI has the potential to address key challenges, including rising costs, inefficiencies, and the need for personalized care. To realize AI's full potential for more efficient, equitable, and patient-centered care, healthcare leaders must focus on infrastructure, ethics, and regulatory alignment.
Replacing this historically judgment-intensive work with LLMs is quickly becoming one of the most effective wedge strategies when building AI-native applications. Why is that the case?
Data recently posted on a federal website shows the cyberattack earlier this year at a UnitedHealth Group subsidiary affected 100 million patients — apparently a record in the U.S. The tally roughly matches the scope previously described by company CEO Andrew Witty, who suggested during congressional testimony in May that data for 1 in 3 Americans could be affected by the hack.
The FDA has named Dr. Michelle Tarver, an agency veteran, the new director of its medical device division. Dr. Tarver will face a slate of pressing tasks, including addressing calls to strengthen standards that protect the public from issues like racial bias in AI software and hastily authorized, faulty cardiac devices such as external defibrillators.