It is a surprising revitalization for a company many in the tech industry had dismissed as a dinosaur of a bygone, pre-cloud era. Oracle appears to be successfully making the case to investors that it has become a strong fourth-place player in a cloud market surging thanks to AI.
The usual approach to protecting the public and helping doctors and hospitals manage new healthcare technologies won't work for generative AI. To realize the full clinical benefits of this technology while minimizing its risks, we will need a regulatory approach as innovative as generative AI itself. The reasons lie in the nature of the FDA's regulatory process and of this remarkable new technology.

Generally speaking, the FDA requires producers of new drugs and devices to demonstrate that they are safe and effective for very specific clinical purposes. Why won't this well-established framework work for generative AI? The large language models (LLMs) that power products like ChatGPT, Gemini, and Claude are capable of responding to almost any type of question, in healthcare and beyond. In other words, they have not one specific healthcare use but tens of thousands, and subjecting them to traditional pre-market assessments of safety and efficacy for each of those potential applications would require untold numbers of expensive and time-consuming studies.
A proximity resilience graph offers a more accurate representation of risk than heat maps and risk registers, and allows CISOs to tell a complex story in a single visualization.
A systematic review of the potential health effects of radio wave exposure has found that mobile phones are not linked to brain cancer. The review was commissioned by the WHO.
Amid declining patient visits and a 66% drop in its stock value, telehealth giant Teladoc has appointed new leadership and withdrawn its previously stated financial outlook for 2024, signaling uncertainty about its future trajectory.