AI NOW panelists from Scripps Health and Providence say a health system needs to be up-front and transparent about how AI will be used, while making sure the resources are in place for educating staff and clinicians
Healthcare organizations need to establish a clear and transparent process for enterprise-wide AI governance. The first step is knowing whether you’re mature enough to use the technology.
That’s the opinion of executives from Scripps Health and Providence who participated in the recent HealthLeaders AI NOW summit. Both agreed that health system leadership needs to look at both culture and infrastructure before moving on to developing programs.
“This is truly a team sport,” said Shane Thielman, FACHE, CHCIO, corporate SVP and chief information officer at San Diego-based Scripps Health, who noted that leadership has to commit to open dialogue and transparency to not only educate staff and clinicians but keep patients and the public aware of how AI will be used.
And that conversation will be ongoing. It may even mean ending a program if the results just aren’t there yet and waiting for the technology to improve.
“This is not something that you turn on and then walk away to the next opportunity,” he said.
Sara Vaezy, EVP and chief strategy and digital officer at Seattle-based Providence, says health systems often have to start by surveying their staff and clinicians about what they want to know about AI, then creating specific resources to address those concerns. That might include creating a resource hub, work groups, and videos.
“Everyone’s getting their hands dirty,” she said, referencing the evolving nature of AI. “It’s a constant undertaking because so much is happening out in the market.”
She also noted that the hype around AI has taken on a life of its own, in some cases obscuring what health system leadership should be focusing on with the technology. Some AI models can drift away from what they were designed to accomplish, and leadership needs to “lean in with your hands on the wheel and make sure you’ve got the right processes and the right technology in place.”
Thielman said the analysis and decision-making need to be multidisciplinary, as AI extends into and affects many departments within the enterprise, from administrative to clinical to IT to security. All departments, he said, need to “understand what the lift will be to do that successfully.”
“There’s a significant element of change management that goes into introducing any AI solution,” he pointed out.
Aside from creating a culture around AI readiness, Thielman and Vaezy said health systems need to assess their infrastructure. Do they have the technology and capacity in place to support AI programs, including data storage, data quality, and analysis?
“Many organizations don’t necessarily have the capability or the capacity from a capital perspective to make investments,” Vaezy said. “Finding the right partners to build that out is a great way to extend that.”
Providence, for example, has partnered with Microsoft for more than eight years.
“If you don’t have a cloud structure, it’s going to be difficult,” she noted.
Thielman said data quality is an often-overlooked part of AI governance, especially with generative AI programs that require continuing oversight.
“If you have garbage data, it isn’t going to help you do much,” Vaezy added.
A key component of assessing AI maturity is understanding where the technology will be used. Too many organizations jump at what’s being called the “low-hanging fruit,” or programs that involve minimal effort and produce quick results, without planning ahead. Those early wins may be great for establishing a base and building morale, but a forward-thinking organization should be planning several steps ahead from the outset.
Thielman pointed out that early programs are tied to back-office and administrative gains and focus on financial improvement. But clinical outcomes need to be considered as well, even though they usually take longer to prove value. Taking them into consideration at the start enables leadership to map out costs and outcomes over time.
“What’s the return on investment?” he asked. “That can have a financial element and it can also have a value element. As we explore AI further it is not only about a direct financial benefit … particularly if there is an up-front financial investment that is necessary. There are some other really intractable challenges … that we’re all interested in addressing.”
Vaezy noted that many technology projects “are worse before they get better,” and need time to settle in and show value. That’s especially true of AI.
“In some cases … with generative AI, frankly the solutions aren’t really ready for enterprise-grade adoption,” she said. “You don’t want some mission-critical function [to] rely on something that’s a flash in the pan.”
Finally, Thielman noted that health system leadership needs to pay attention to the ongoing debate over who should govern AI. The Biden Administration has unveiled its own strategy, with an emphasis on collaboration, but many within healthcare feel the reins should be in their hands.
“It is important that the autonomy continues to reside with healthcare systems relevant to AI that is not currently regulated today,” he said. “We don’t want to have an unintended consequence [or] a negative clinical outcome. We don’t want to place more burden on our clinical workforce. … We don’t want to introduce more inefficiency in our operations through the introduction of AI. … That level of decision-making should continue to be retained within the individual healthcare system.”
A new bill before Congress would create a pilot program to examine whether the Hospital at Home care model could be extended to other patient populations
Congress is wading into the question of whether the Hospital at Home model of care should be made permanent.
U.S. Senators Marco Rubio (R-Florida) and Tom Carper (D-Delaware) have introduced a bill that would set up a pilot program to test whether the Acute Hospital Care at Home initiative could be expanded to new populations beyond those needing acute care services.
The At Home Observation and Medical Evaluation (HOME) Services Act, if passed, would give the strategy some life beyond the planned December 31, 2024 expiration date for the Medicare waiver that supports the program.
“Addressing our healthcare challenges requires innovative solutions,” Rubio said in a joint press release issued yesterday. “The At HOME Services Act builds on the success of the hospital-at-home program to lower costs and burdens and improve patient outcomes and satisfaction.”
The bill adds a new wrinkle in the ongoing debate over whether the program should continue after this year.
The model targets patients who would otherwise be admitted to the hospital, creating a home-based care management plan that often includes multiple daily visits by care teams, virtual care services, and remote patient monitoring. Some programs have added ancillary services to address social determinants of health, imaging and tests, and pharmacy and rehab needs.
While acute care at home programs have been in existence in some form for several years, the Hospital at Home concept took off during the pandemic, when health systems and hospitals were struggling to keep up with the wave of new patients and were looking for ways to treat certain patients at home. The Centers for Medicare & Medicaid Services (CMS) established a waiver that enables Medicare reimbursement for health systems following the CMS model, which sets rigid guidelines for in-person visits and digital health and telehealth use. The waiver was due to end with the Public Health Emergency (PHE), but has been extended until the end of this year.
Supporters, including an advocacy group formed out of several of the more than 300 hospitals using the CMS model, are lobbying both CMS and Congress to make the Medicare waiver permanent, arguing that many programs would struggle or even shut down without Medicare support. They also point to a recent nationwide study that shows positive clinical outcomes in the Hospital at Home model.
On a related front, New Jersey recently passed a law that enables Garden State health systems to expand the Hospital at Home program to residents on Medicaid or NJ Family Care programs, as well as those on private insurance.
“We are excited to see Hospital at Home expand in New Jersey through this legislation, and we believe our state can serve as a template for the rest of the country,” Michael Capriotti, MBA, senior vice president of integration and strategic operations for New Jersey-based Virtua Health, told local media after the law went into effect. “It is important that we continually innovate to create the best possible experiences and outcomes for our patients.”
Health systems and hospitals are using more and more new technology to address clinical care gaps. That puts a strain on execs charged with making sure they’re safe and secure
Healthcare cybersecurity standards need to be strict for a reason: Compromised technology could lead to patient harm, even death. But when a health system uses technology from a vendor, sometimes those standards aren’t the same.
“That’s a challenge,” says Adam Zoller, chief information security officer at Providence.
“As a large hospital system, we are relying on hundreds of third parties,” he says. “And when some of those devices are 100% vendor-managed, they often won’t modify anything,” making it much harder for the health system to ensure that technology can be used safely and securely.
“Cyber incidents affecting hospitals and health systems have led to extended care disruptions caused by multi-week outages; patient diversion to other facilities; and strain on acute care provisioning and capacity, causing cancelled medical appointments, non-rendered services, and delayed medical procedures (particularly elective procedures),” an HHS report issued in December 2023 noted. “More importantly, they put patients’ safety at risk and impact local and surrounding communities that depend on the availability of the local emergency department, radiology unit, or cancer center for life-saving care.”
Zoller has nothing but good things to say about the federal government’s efforts to improve cybersecurity, particularly in elevating the responsibilities of the National Institute of Standards and Technology (NIST). And while many vendors in the clinical space are taking steps to better secure their technology, the rapid advance of AI and digital health is prompting health systems and hospitals to partner with companies outside the healthcare industry—companies with different philosophies around security.
“There needs to be more accountability,” he says.
Health systems like Providence spend a lot of time addressing cybersecurity in these devices—even when the vendor isn’t responsive to making changes on its end. Those are time- and labor-intensive projects that a smaller hospital or health system might struggle to accomplish, and which could be avoided if the organization and vendor could just work together.
This is an issue that has plagued healthcare for years. The gradual advance to consumer-facing care and the introduction of consumer-facing technologies and strategies has created a gap between those devices and clinically validated technology. In other words, health systems and hospitals have been looking at the consumer tech space with an eye toward expanding healthcare opportunities, but they’re wary of the value of the data coming from these devices as well as the safeguards in place to protect that data.
For Zoller, those gaps exist in any technology using commercial operating systems. Clinical technology, he says, is designed for a longer lifecycle, while commercial tech operates on shorter lifecycles and relies more heavily on updates and patches (which also add to the revenue stream). But each of those updates and patches represents another security risk that healthcare organizations have to address before those changes can go live.
“If I’m still having to educate the vendors who produce these devices about security [every time there’s an update], that’s a real problem,” says Zoller.
Now multiply that by the number of vendors a large hospital system like Providence works with, and the problems become even bigger.
“We are very dependent on those third parties,” Zoller says, “so the biggest challenge for me is in managing third party risk at scale.”
To be clear, this is an industry issue, not just a Providence issue. The American Hospital Association has been advocating for better cybersecurity safeguards for third-party vendors for years, and large health systems like Providence are a part of that effort. But Zoller notes his voice is one of many, and while the big players have the resources to manage multiple third-party partnerships, smaller health systems and hospitals are stretched thin and apt to have more issues.
Likewise, with the evolution of smart devices and the smart home and an increase in remote patient monitoring and acute care at home programs, “the complexity you’re introducing to a healthcare ecosystem increases the risks,” Zoller adds.
He says healthcare organizations “are on the receiving end” of more and more technologies that don’t meet clinical cybersecurity standards because the industry is embracing new tools and concepts that have proven themselves in other markets, like retail. What might be a great new platform that boosts clinical care in the home setting might also be a security nightmare.
Zoller wants the federal government to extend its cybersecurity guidelines to vendors in the healthcare space who manage their products on commercial operating systems, to bring them to the table to discuss with healthcare organizations how their technology can better adhere to clinical cybersecurity standards. He says the new HHS cybersecurity guidelines set a good baseline that health systems and hospitals can use when working with vendors.
“We need to look at where the equities are aligned,” he says. “It is great that we’re beginning to see more of these conversations around security … but more needs to be done.”
The introduction of disruptors into the healthcare industry could have an effect as well. Companies like Amazon, Google, Apple and Microsoft are introducing healthcare services and products that aim to give consumers a choice as to where and how they get their healthcare. Given those options, consumers could look for services and platforms that better protect their data.
“The disruptors in this space could see security as a differentiator,” he says. “That could certainly make a difference.”
A recent HealthLeaders AI NOW panel discussed how the technology is being applied to clinical care
Health systems and hospitals are seeing specific benefits from deploying AI technology in clinical care, according to executives taking part in a panel at the recent HealthLeaders AI NOW virtual summit.
While much of the so-called “low-hanging fruit” has so far been tied to back-end and administrative tasks, AI tools have been used with considerable success in radiology, where the technology can pick up details in images that can improve diagnoses. And AI is also being used in places like the Emergency Department, inpatient care, and population health programs.
“Financial ROI is important, but it’s not the only factor that health systems should be focusing on,” said Jared Antczak, chief digital officer at Sanford Health.
The rapid pace of development for AI tools in healthcare is tied to the potential for the technology to solve a wide variety of healthcare’s biggest problems, but without a good enterprise-wide strategy in place, some organizations are launching projects with an uncertain ROI and putting pressure on executives to find value after the fact. Advocates suggest launching small AI programs at first with a defined ROI, especially in areas where the value is clear.
In other words, think before you act.
“AI is not and should not be a strategy in and of itself,” Antczak said. “It’s a potential tool that can be used to solve a problem. But really tools are enablers of strategies, not strategies by themselves. We need to avoid the trap of doing technology for the sake of technology and really leverage technology to create value in people’s lives.”
“Knowledge is expanding faster than our ability to assimilate it and apply it effectively,” he added. AI is “a powerful tool that can sift through the noise and the information overload and really help clinicians by lifting up the things that matter.”
Albert Karam, vice president of data strategy and analytics at the Parkland Center for Clinical Innovation, a research institute allied with Dallas-based Parkland Health, said the health system is using a predictive AI tool in the Emergency Departments at Parkland Hospital and University of Texas Southwestern Medical Center to assess the mortality risk, over the next 12 to 72 hours, of patients scheduled for surgery.
“The idea here is that the orthopedic surgeons … use that information to decide whether to take [those patients] into surgery,” he said. “If things are looking a little bit grim … they might … try and get some of those metrics better before taking them in.”
Karam said the score developed by the AI tool draws on many data sources and is updated hourly.
“It’s one of the life-and-death models [that is] a little bit morbid but incredibly useful,” he said. “They were literally having yelling matches in the hallway between the orthopedic surgeons and some of the other surgeons to decide whether or not to take them into surgery, and that has completely gone away.”
Another AI tool, focused on evaluating a patient for sepsis risk, was introduced in the inpatient setting, Karam said. It worked so well that executives in oncology and OB-GYN asked to have it reconfigured for their departments as well, and just recently the tool was reconfigured again to address whether sepsis is present in a patient on admission in the ED.
Antczak said Sanford Health has several predictive AI tools in use with clinical applications, addressing such issues as risk of colon cancer and chronic kidney disease.
“We’ve developed a number of different algorithms around disease state progression and anticipation to really enable our clinicians and our patients to potentially intervene sooner,” he said.
“Sometimes that word ‘healthcare’ is a bit of a misnomer,” Antczak added. “Really, we’re in the business of sick care. We wait until people are sick, and then we react, and we treat them and try to keep them well. We try to keep disease from progressing. But really if we want to become healthcare providers, we need to get further upstream. We need to look at ways to prevent disease from progressing to begin with, and that’s really where I think … AI can help us.”
Karam noted that programs focused on clinical outcomes often take longer to show ROI, which can be a challenge for a health system looking to contain costs.
“Some of the ROI analysis that we do is in lives impacted and lives saved even though we know that this is going to cost more dollars and cents up front,” he pointed out.
In addition, both he and Antczak noted, it takes a while to properly plan and develop an AI program.
“I don’t think people realize to successfully launch and do appropriate quality assurance on these models, it does take a significant amount of time,” Karam said.
It takes “about a year from ideation to starting a pilot,” he said. “And then we’ll pilot that model in one or two departments for another 4-6 months or so before rolling it out to the whole hospital. So the minimum amount of time from ideation to implementation, even at a pilot level, is anywhere from a year to a year and a half. Which is not a fast turnaround, but there are so many checks and balances, so much with that data governance.”
And finally, the value of using AI in clinical care has to be measured against the risk. Many healthcare organizations are still trying to figure out how to use AI correctly, with the understanding that bad data or prompts can create bad outcomes—including, potentially, patient harm.
“Everyone is trying to identify where the guardrails are,” said Antczak, who noted that Sanford Health uses a tiered structure to identify risk in AI programs. Both he and Karam said it’s essential to balance any risky AI programs with human review. In any case where AI impacts a patient, they said, someone other than the technology has to make the final decision.
“It’s absolutely the final decision of the clinician or the nurse,” Karam said.
The Los Angeles health system has launched XAIA, an AI-enhanced VR app designed for use with the new Apple Vision Pro headset
A health system pioneer in the use of AR and VR technology is launching a new VR app for mental health—to be used with the new Apple Vision Pro headset.
Cedars-Sinai, which has been using AR and VR for several years for a variety of treatments, last week unveiled the XAIA (eXtended-reality Artificially Intelligent Ally) app, giving users what the Los Angeles-based health system calls an “immersive therapy session led by a trained digital avatar, programmed to simulate a human therapist.”
Healthcare organizations have long experimented with AR and VR in areas like labor and delivery, pain management, pediatric care, neurological care (including concussion diagnosis and treatment), and behavioral health. The form factor holds promise for both inpatient and home use, and as an educational tool as well as a clinical tool.
“Apple Vision Pro offers a gateway into Xaia’s world of immersive, interactive behavioral health support—making strides that I can only describe as a quantum leap beyond previous technologies,” XAIA co-founder Brennan Spiegel, MD, MSHS, a professor of medicine, director of health services research at Cedars-Sinai and a pioneer in researching and using the technology, said in a press release. “With XAIA and the stunning display in Apple Vision Pro, we are able to leverage every pixel of that remarkable resolution and the full spectrum of vivid colors to craft a form of immersive therapy that’s engaging and deeply personal.”
Cedars-Sinai’s strategy here is to connect its new app with Apple’s latest consumer-facing technology, marrying consumer marketing with clinical use cases. XAIA was created by Spiegel and Omer Liran, MD, a psychiatrist at Cedars-Sinai, and is licensed by the health system for commercial sale through a spinoff company created by Spiegel and Liran called VRx Health.
The app is designed to take the user into a “spatial environment,” such as a beach or meadow, where an AI-enhanced avatar programmed to simulate a human therapist guides the user through a variety of treatments, including meditation and deep breathing exercises.
Last year, Spiegel and his team tested XAIA on 14 patients living with moderate anxiety or depression. The results of the study, published in the online journal Nature, indicated patients “described the digital avatar as empathic, understanding, and conducive to a therapeutic alliance,” though some still preferred a human therapist.
“Virtual reality (VR) employs spatial computing to create meaningful psychological experiences, promoting a sense of presence,” Spiegel and his team explained in the study’s abstract. “VR’s versatility enables users to experience serene natural settings or meditative landscapes, supporting treatments for conditions like anxiety and depression when integrated with cognitive behavioral therapy (CBT). However, personalizing CBT in VR remains a challenge, historically relying on real-time therapist interaction or pre-scripted content.”
“Advancements in artificial intelligence (AI), particularly Large Language Models (LLMs), provide an opportunity to enhance VR’s therapeutic potential,” they added. “These models can simulate naturalistic conversations, paving the way for AI-driven digital therapists.”
The research is still a work in progress, and the researchers said the app should be used to augment human counselors rather than replace them. The study noted that XAIA sometimes questioned a patient too much, as a less experienced therapist might do, or reverted to explaining coping mechanisms rather than probing further into why a patient was struggling. In addition, the app occasionally recommended a treatment without explaining why it would work.
“These results provide initial evidence that VR and AI therapy has the potential to provide automated mental health support within immersive environments,” Spiegel said in a separate press release supporting the study. “By harnessing the potential of technology in an evidence-based and safe manner, we can build a more accessible mental healthcare system.”
“The prevalence of mental health disorders is rising, yet there is a shortage of psychotherapists and a shortage of access for lower income, rural communities,” he said. “While this technology is not intended to replace psychologists—but rather augment them—we created XAIA with access in mind, ensuring the technology can provide meaningful mental health support across communities.”
With healthcare organizations embracing AI at a frantic pace, health system leaders need to get in front of adoption and make sure new programs are carefully reviewed and vetted
Healthcare organizations need to plan carefully when setting up a review committee for AI strategy, even incorporating a few skeptics to make sure they’re getting the full picture of how the technology should and shouldn’t be used.
That’s the takeaway from the recent HealthLeaders AI NOW virtual summit panel. The panel, Plotting an AI Strategy: Who Sits at the Table?, featured executives from Northwell Holdings, Ochsner Health, and UPMC and offered advice on how to manage AI within the healthcare enterprise.
Jason Hill, MD, MMM, Ochsner Health’s chief innovation officer, said a review committee should ideally consist of between seven and 12 members. It should include the CFO or someone within that department “who understands what ROI is,” someone representing the legal and compliance teams, a medical ethicist or bioethicist, a behavioral science expert, and clinicians and technology experts.
“We don’t really want to get just ‘new shiny things syndrome’ … and so be very sure that you’ve got someone who’s a little bit of a contrarian,” he said.
Marc Paradis, vice president of data strategy at Northwell Holdings, expanded on that idea, saying a committee should have a rotating “10th person,” who would look at an AI program or project from the opposite angle.
“In any given meeting,” he said, “it’s someone’s turn to be the contrarian. It’s someone’s turn to be the alternative thinker. It’s someone’s turn to be asking ‘What if’ or ‘Why not’ or to be kind of trying to poke those holes in the group think that can very easily occur. … It helps everyone begin to develop some of those critical thinking skills ... to get a more robust conversation going.”
“The people who sit at the table need to establish a good sense of transparency and communication,” added Chris Carmody, chief technology officer and senior VP of the IT division at UPMC. “We have to make sure we’re communicating about what’s happening and how people can effectively use the tools that are available to them.”
AI governance within the healthcare organization is a crucial topic, especially with the fast pace of AI development in the industry. Many hospitals are struggling to understand whether they’re ready to test and use the technology, along with what steps they need to take to make sure their clinicians and staff know how to use AI and their programs are monitored to prevent misuse or errors.
That extends to vendor partnerships as well. All three panelists warned that many companies claim to have AI tools, or AI embedded in their technology, because that’s the big thing now, even when what they’re offering doesn’t really address a care gap or concern. Executives need to make sure a new product isn’t creating problems where none existed before (especially in security) and isn’t duplicating something the health system is already doing on its own.
“How does this new LEGO piece fit into our technology ecosystem?” Carmody asked.
He also noted that UPMC has what he calls “our Avengers or our Justice League,” a group of skilled architects who review technology before the health system decides whether to buy it.
Paradis pointed out that health systems have to rethink how they govern the technology, balancing the benefits against the possibility of mistakes being made.
“My personal take on this is I think we have to recognize that this is a brand new technology, [and] we don’t know what we don’t know,” he said. “It’s going to make mistakes. It will do strange things. It will surprise us in ways that we did not expect, both in a very good way and in a very bad way.”
“The appropriate thing to do from a leadership standpoint is to step up and say to the community at large: These are the guiding principles, this is what we believe, this is how we are rolling it out, [and] these are the guardrails,” he said. “Something will inevitably go wrong somewhere along the way and what we commit to you is when something goes wrong, we will bring everyone who is affected by that to the table at that time to … figure out how that never happens again and to improve the system overall.”
Paradis noted there is always a certain amount of danger in launching a new tool or technology, but there can also be harm in holding back a technology that has the potential to improve healthcare and save lives.
“We have to remember that we are on the very shallow part of this growth curve in terms of … what these tools can do, and what we don’t want to do is—and I’m very worried this is going to happen from a regulatory standpoint—what we don’t want to do is be so concerned that we completely shut down and stop AI,” he said. “We just have to be open and honest about it.”
Healthcare organizations that embrace AI need to first decide who is in charge.
On this week's episode of HL Shorts, we hear from Jason Hill, chief innovation officer at Ochsner Health, one of our expert panelists during the recent HealthLeaders AI NOW Virtual Summit. In the session "Plotting an AI Strategy: Who Sits at the Table?," Hill explains the three types of AI now being used in healthcare—and why each type of technology requires a different type of governance.
The new rule, announced today, enables healthcare providers to use audio-visual telemedicine platforms to evaluate new patients for methadone treatment programs
Healthcare organizations looking to get a handle on the opioid abuse epidemic can now use telemedicine to extend opioid treatment programs (OTPs) to the home.
The announcement marks the first time in 20 years that HHS has revised its rules to expand treatment options. Healthcare organizations have long been restricted in how they use telemedicine and digital health tools for substance abuse treatment, which has often required in-person services that hinder patients facing barriers to access.
“This final rule represents a historic modernization of OTP regulations to help connect more Americans with effective treatment for opioid use disorders,” Miriam E. Delphin-Rittmon, PhD, the HHS Assistant Secretary for Mental Health and Substance Use and the leader of SAMHSA, said in an accompanying press release. “While this rule change will help anyone needing treatment, it will be particularly impactful for those in rural areas or with low income for whom reliable transportation can be a challenge, if not impossible. In short, this update will help those most in need.”
Other aspects of the final rule that aid in treatment expansion include making permanent a pandemic-era waiver that allows providers to prescribe take-home doses of methadone; allowing nurse practitioners and physician assistants to order medications for treatment programs (where states allow); removing the requirement that a patient have a history of addiction for at least a year before entering a program; expanding access to interim treatment; and “promoting patient-centered models of care that are aligned with management approaches for other chronic conditions.”
The federal rule continues a nationwide effort to address substance abuse—and, in a larger context, behavioral health issues—through new programs that take into account both the nationwide shortage of qualified providers and barriers to access, including social determinants of health.
“At HHS, we believe there should be no wrong door for people who are seeking support and care to manage their behavioral health challenges, including when it comes to getting treatment for substance use disorder,” HHS Deputy Secretary Andrea Palm said in the press release. “The easier we make it for people to access the treatments they need, the more lives we can save. With these announcements, we are dramatically expanding access to life-saving medications and continuing our efforts to meet people where they are in their recovery journeys.”
The rule doesn’t make all the restrictions disappear. It specifies that providers can use telemedicine to evaluate a new patient for entering methadone treatment but not for prescribing methadone.
Prescribing rules are still very tricky in substance abuse treatment. The Ryan Haight Online Pharmacy Consumer Protection Act of 2008 prohibited the online prescription of scheduled drugs, though it did call for a process by which providers could register with the US Drug Enforcement Administration (DEA) to prescribe some controlled drugs via telemedicine without first needing an in-person evaluation. The DEA never set up that process, despite intense lobbying from the American Telemedicine Association and others to do so.
With the pandemic, HHS established a number of waivers aimed at expanding access to telehealth and digital health, including allowing for virtual prescriptions. Those waivers ended last year with the federal Public Health Emergency, but Congress voted to extend many of them until the end of 2024. The DEA has extended its waiver until the end of the year as well, as it works to develop new, permanent rules for prescribing via telemedicine.
A Kaiser Permanente study of ambient AI scribes used to capture doctors' notes and enter data into the EHR finds that they are improving the doctor-patient experience, but doctors still need to edit their notes
Ambient AI scribes designed to transcribe patient-physician encounters into the EHR may hold promise in reducing clinician workloads, but they aren’t there yet.
That’s the conclusion drawn from a recent study of more than 3,000 clinicians at the northern California-based Permanente Medical Group (TPMG) who used the technology in late 2023. The study, appearing online today in NEJM Catalyst Innovations in Care Delivery, finds that the AI tool did accurately represent the conversation between doctor and patient, but there was still a significant amount of editing that had to be done.
“Ongoing enhancements of the technology are needed and are focused on direct EHR integration, improved capabilities for incorporating medical interpretation, and enhanced workflow personalization options for individual users,” the study team, composed of eight Kaiser Permanente researchers and executives, concluded. “Despite this technology’s early promise, careful and ongoing attention must be paid to ensure that the technology supports clinicians while also optimizing ambient AI scribe output for accuracy, relevance, and alignment in the physician–patient relationship.”
While automation and AI technology have been around for several years, the rapid advances of new forms of the technology have created a stir in several industries, including healthcare. AI and large language model (LLM) tools have the potential to not only handle administrative and back-office processes, but reduce workloads and stress for clinicians and staff by handling time-consuming and computer-driven tasks. Ambient AI scribes, for example, are designed to capture conversations and input data into the EHR, giving clinicians and staff the opportunity to interact with patients more freely instead of typing words into a laptop or trying to recall the gist of the conversation later.
While not the first, the Kaiser Permanente study is one of the largest to test the technology in a clinical setting. It gives healthcare executives valuable insight into where the technology stands now, and what needs to be done to make it more effective.
According to the study, some 6,000 Kaiser Permanente clinicians have been using software-based medical dictation technology for at least two years. In August 2023, TPMG launched a two-week pilot with 47 physicians using an AI scribe; based on positive reactions from the physicians, the organization then secured licenses for 10,000 physicians and staff across several settings.
According to researchers, 3,442 physicians used that tool in the first 10 weeks of implementation for 303,266 encounters, with almost 100 physicians using the tool more than 100 times and one doctor using the tool for 1,210 encounters. Overall, the tool was used more than 19,000 times a week in seven of the 10 weeks studied.
In studying how clinicians and their staff used the technology, the research team identified four aspects of ambient AI scribes that would facilitate effective use:
Facilitate engagement by demonstrating growing and sustained adoption of ambient AI by number of clinicians and percentage of patient encounters across diverse specialties and settings.
Aim for effectiveness by reducing the burden of documentation within and outside of direct patient encounters.
Enhance the physician–patient relationship by increasing the amount of time physicians spend interacting with patients by improving engagement and reducing time spent interacting with a computer.
Maintain documentation quality by developing approaches to assess and safely use ambient AI technology capabilities in transcription and summarization.
And at the end of the study, the team listed four takeaways:
Ambient AI scribes “show early promise” in reducing the burden on clinicians to take notes and spend extra time entering that data into the EHR.
Both clinicians and patients said the technology improved the care experience, and some clinicians called the technology “transformational.”
While a review of AI-generated transcripts resulted in an average score of 48 out of 50 in 10 key factors, that doesn’t mean they can replace clinicians. There were inconsistencies, and clinicians still had to review the notes and make corrections “to ensure that they remain aligned with the physician-patient relationship.”
“Given the incredible pace of change, building a dynamic evaluation framework is essential to assess the performance of AI scribes across domains including engagement, effectiveness, quality, and safety.”
The research team also noted that AI technology is evolving quickly.
“The approaches to robustly evaluate the quality and safety of AI technologies, including tools such as large language models, remain incompletely defined,” they said. “The underlying algorithms and relevant regulations are also continuing to evolve rapidly, which will necessitate ongoing benchmarking, evaluation, and monitoring as the technology improves and vendors bring new software to market. Adoption rates and usage patterns are also expected to change as new user groups and application domains are identified and tested.”
With that in mind, the study offered advice for other healthcare organizations aiming to evaluate ambient AI scribes.
Find clinical champions to overcome barriers to adoption and create a culture that embraces innovative ideas.
Start with a limited pilot involving a small number of clinicians, then scale up to a regional or larger-scale pilot with “opportunities for clinician and patient feedback that result in ongoing improvement that is tangible to stakeholders.”
Develop monitoring and benchmarking processes “that offer proactive assessment of the tools and their impact on meaningful goals.”
The Tennessee-based health system has migrated its data to a FHIR-based platform and now plans to use AI to address administrative and clinical efficiencies
Community Health Systems has announced a collaboration to develop generative AI programs on Google Cloud.
The Tennessee-based health system, comprising 71 hospitals and more than 1,000 healthcare sites across 15 states, announced today that it has completed migration to a FHIR-based clinical data platform on Google Cloud.
“The goal of this migration extends well beyond modernizing our data infrastructure,” Miguel Benet, MD, MPH, FACHE, CHS’ senior vice president of clinical operations, said in a press release. “By building a secure foundation to take advantage of new innovations in AI, we’re able to streamline our clinical providers’ workflow and advance the way we deliver patient care.”
Tech giants like Google, Microsoft, and Amazon are partnering with health systems and hospitals to develop enterprise-level AI programs, combining the data storage and analysis capabilities of the former with the clinical and administrative expertise of the latter. In December, Google unveiled a new suite of healthcare AI models called MedLM, built off the Med-PaLM 2 large language model introduced earlier in the year, as well as an early iteration of its next-gen generative AI model called Gemini.
One of Google’s biggest partners is HCA Healthcare, also based in Tennessee, which has been piloting AI technology in emergency departments (through smartglasses) and to help nurses with documenting patient encounters.
“We’re on a mission to redesign the way care is delivered, letting clinicians focus on patient care and using technology where it can best support doctors and nurses,” Michael J. Schlosser, MD, MBA, FAANS, HCA’s senior vice president of care transformation and innovation, said in a press release. “Generative AI and other new technologies are helping us transform the ways teams interact, create better workflows, and have the right team, at the right time, empowered with the information they need for our patients.”
CHS is looking to build off its centralized data repository on Google Cloud’s health data platform to improve interoperability and drive real-time data analysis. The health system also plans on using Vertex AI and other large language models to target both administrative and clinical efficiencies, even pairing AI with Google Maps to give patients personalized resources in their communities.