At the HealthLeaders Revenue Cycle Technology Mastermind forum this week in Savannah, RCM executives talked of how the technology is giving them a new profile — and compelling them to work more closely with clinical and administrative leaders.
Revenue cycle executives who are integrating the latest AI tools are seeing challenges that have less to do with technology and more to do with workflows.
In short, AI is creating a culture change. And that needs to be addressed well before anyone starts talking about deliverables and ROI.
That was one of the big takeaways from the HealthLeaders Revenue Cycle Technology Mastermind forum, an in-person event held Wednesday in Savannah just before the HealthLeaders Revenue Cycle Technology Exchange. The forum brought together executives from nine health systems to discuss the latest strategies and roadblocks, as well as debate new ideas like ambient AI integration, clinical and administrative collaboration and patient engagement.
The consensus was that AI is changing RCM and will be instrumental in boosting efficiency and outcomes, but making sure everyone is on the same page is a much harder task. That ranges from training (and in some cases retraining) RCM staff to manage new workflows and expectations, to working with clinical, financial and administrative leaders to define the ROI of a new tool for more than one department.
"It's about changing the old idea that revenue cycle management is just about the money," noted Allyson Keller, VP of the Patient Connection Center at Piedmont Healthcare.
In many cases, AI has raised the profile of the RCM department by highlighting how it can improve the health system's bottom line. But as RCM leaders integrate the technology into traditional workflows to improve operations, they're finding new challenges in marrying that potential with the health system's overall strategy. For example, the pace of change has been accelerated, so that five-year plans are unrealistic and are being replaced by one- to, at most, three-year plans.
That shortened time frame means executives need to be nimble in evaluating new technology. Short-term ROI is crucial, and that often means looking for rev cycle value in a clinical tool or talking with clinical leaders about how a tool that benefits the rev cycle can also add value to clinical operations.
That's a challenge. Joann Ferguson, the Henry Ford Health System's VP of Revenue Cycle, and Beth Carlson, VP of the Revenue Cycle and Chief Revenue Cycle Officer at WVU Medicine, echoed more than one executive in saying it's important to have clinical advisors as part of the RCM team. Steve Kos, Senior Director of Revenue Cycle Application Support at Jacksonville, Florida-based Baptist Health, and others spoke of the value of revenue cycle informaticists (and how they differ from clinical informaticists). Without those connections, they said, it's often difficult to get in front of clinical leaders.
Part of the problem is that healthcare delivery has long been siloed. Finance and revenue cycle executives focus on the money, IT sticks to technology, and clinicians focus on what they do best. Lynn Ansley, VP of Revenue Cycle Management at the Moffitt Cancer Center, said clinicians often don't understand how to navigate the financial or administrative spaces, just as RCM execs don't necessarily understand clinical workflows.
But AI is changing that paradigm. It's enabling – some might say forcing – disparate parts of the hospital enterprise to work with each other. Clinical leaders need to know not only how the latest tech tool or platform improves clinical workflows and outcomes, but also how it impacts IT and RCM. To that end, an ambient AI tool that can also capture coding opportunities and improve patient handoffs would have value to multiple departments and catch the attention of the CFO and CEO.
And as AI takes over the traditional tasks associated with RCM, it's prompting RCM leaders to develop new job descriptions for their staff, including taking on more patient-facing tasks. The idea that AI can help clinicians reduce their time in front of the computer and spend more time with their patients also applies, in part, to RCM leaders and their staff. And that opens up opportunities to work not only with clinicians, but with payers to reduce denials and improve prior authorizations, and even with patients to help them understand and manage their financial obligations.
RCM executives at the Mastermind forum were quick to point out that while AI can and does help with data-crunching and problem-solving, it doesn't replace the need to audit the technology or its outcomes. That "human in the loop" is just as important in RCM as it is in the clinical space.
But it does redefine RCM. Evan Martin, VP of Revenue Cycle Management at ZoomCare, pointed out that AI is taking what has always been a transactional process and is giving it a human face to the patient. No longer is RCM confined to the back offices of the hospital, and no longer are RCM leaders and staff relegated to number-crunching.
At the end of the day, one attendee noted, this is all about making better decisions for patient care.
Long considered the IT chief, CIOs are taking on new responsibilities and participating in long-term strategy. But can they earn the CEO's trust to set those priorities?
Today's CIOs are being asked to handle more than just IT, yet many feel they're not on the same page as their CEOs in developing strategic priorities.
That's the gist of a new report from digital security company Netskope, which calls on CIOs and CEOs to align their innovation and technology priorities to make sure the health system is headed in the right direction.
"The CIO role is evolving faster than many organizations are prepared for," Netskope's chief digital and information officer, Mike Anderson, said in a press release accompanying the survey. "CIOs are expanding their remit to own operations and business functions in a way that was not the case even a few years ago. Yet many don't feel fully aligned with their CEOs or empowered to make long-term decisions."
According to the survey, 34% of CIOs say they're significantly more involved with long-term priorities that expand beyond IT, and one of every three executives is being asked to lead the health system on initiatives like AI development, including human capital planning, digital innovation and operational resilience.
Yet at the same time, 34% of CIOs surveyed say they don't feel empowered by the CEO to make long-term IT strategy decisions, and 39% say they and the CEO aren't aligned on those strategies.
Solving that disconnect is the focus of HealthLeaders' newest exchange, the Chief Digital Executive Exchange (CDEX), taking place December 3-4 in Washington D.C. The invitation-only event will bring together CIOs from leading health systems and hospitals across the country to discuss collaboration to advance IT and innovation strategies, and it will feature panel sessions with CEOs, CFOs and other C-Suite executives.
The idea that the CIO's role in healthcare is evolving isn't a new concept. Top executives have been talking about this evolution for the past few years, driven in large part by the advances in AI and the need to embrace new ideas, like digital health and virtual care, to effect healthcare transformation.
To improve alignment between CIOs and their CEOs, the Netskope study suggests six topics of conversation:
Cost. CEOs are often unsure of the cost behind IT and innovation upgrades, and while the CFO should play a part in this discussion, it's up to the CIO to explain the rationale behind these improvements. At the same time, the CIO has to be clear that digital investment and transformation isn't just a fad or a search for the next shiny thing, and that investment in technology leads to improved operations and outcomes.
Risk. In today's uncertain healthcare environment, CEOs want to make sure they're plotting the right course for the organization. They need clear and sensible advice from their CIOs.
Innovation. Here's the sweet spot for CIOs, and where CEOs often get confused. Some executives mistake innovation for complexity, and immediately think in terms of increased cost and risk. It's important for CIOs to get in front of this, explaining where technology can be used and how workflows will be affected.
People. A popular topic in these workforce-challenged times, and a key area where AI can make a difference. CIOs need to set the stage for their CEOs, explaining how strategic use of AI can address staffing pain points, improve efficiency and workplace morale and even raise the bar on recruitment.
Measurement. CIOs are the technology experts, and CEOs need to look to them to understand how that technology is evaluated. Is the organization getting all it can from the EHR? Are digital health and virtual care platforms providing the right ROI? And more importantly, is technology supporting or standing in the way of clinical care?
The IT Estate. This dates back to the idea that different parts of the healthcare enterprise are siloed, with a C-Suite executive in charge of just one part of the whole and no one getting in anyone else's way. That philosophy is changing: True healthcare transformation depends on collaboration across the C-Suite, and that means the CIO has to let the CEO into what's often called the "black box" of IT.
Ambient AI tools are proving their value in reducing clinician stress and documentation burden, but there are risks to using them. A new study offers some tips on how to make sure they're governed and used effectively and safely.
Ambient AI tools may be all the rage in healthcare these days, but that rapid adoption may be exposing healthcare providers to risk.
A new study out of Columbia University finds that AI scribes are proving their value in reducing clinician stress and burnout by easing documentation burdens. But that potential should be weighed against the risk of documentation errors, privacy concerns and a lack of transparency.
"Moving forward, we must balance innovation with safeguards through rigorous validation, transparency, clear regulations, and thoughtful implementation to protect patient safety and uphold clinical integrity," the study, conducted by Maxim Topaz and Zhihong Zhang of Columbia University and Laura Maria Peltonan of the University of Eastern Finland and posted in Nature, concludes. "The key question is not whether to adopt these tools but how to do so responsibly, ensuring they enhance care without eroding trust."
The concern isn't new to healthcare. The rapid embrace of AI by healthcare providers has come side-by-side with concerns that governance is falling behind, and that providers are putting their organizations at risk by using new tools without first establishing the proper guardrails.
That's especially true of AI scribes, which capture PHI during the doctor-patient visit and could be putting that data at risk.
Aside from HIPAA concerns, healthcare executives need to make sure patient consent is baked into the process, along with transparency about how the technology works, why it's being used and what measures are taken to protect patients and data.
4 Key Concerns
The study cites four concerns related to scribes:
Hallucinations. AI tools can generate inaccurate or even fictitious content, such as creating non-existent diagnoses or case studies. That's especially true if a scribe isn't trained on the language of a particular specialty.
Omissions. A scribe might not be able to track all of the conversation, especially if there are multiple people speaking at once, and might miss vital information.
Misinterpretations. Some AI scribes may not be trained to understand medical jargon, or to understand context related to a specialty like pediatrics or mental health. They also can't track non-verbal communications, including gestures and visual signs of discomfort or stress. Finally, they might not be trained to pick up on social determinants of health, yet another key element in care management.
Misidentifying speakers. If there are several people in the room (such as during a pediatric exam), the scribe might not be able to keep up with who's talking. And according to the study, the speech recognition systems underlying AI scribes may have difficulty with African American speakers, resulting in higher error rates.
One key concern is that ambient scribes aren't equipped to differentiate what should go into the medical record and what can be left out. The study cited research that indicated roughly half of patient problems and 21% of care interventions discussed by home healthcare nurses and patients don't make it into the EHR.
"These gaps occurred for various reasons, including problems being outside the scope of practice for the conversing clinicians or issues not deemed severe enough to warrant documentation," the researchers noted. "This raises critical questions about how AI scribes might change documentation patterns. Will these systems document everything discussed, potentially creating information overload? Or will they selectively filter information based on unclear criteria? Either approach presents challenges. Comprehensive documentation might capture previously missed information but could also clutter the medical record with less clinically relevant details."
"Conversely, if AI scribes apply filtering algorithms, they might perpetuate or even exacerbate existing documentation gaps without the contextual understanding that human clinicians possess," they continued. "These risks may disproportionately affect vulnerable populations who are less able to engage in effective self-advocacy."
Other Issues
There are other concerns as well.
"Compounding the issue is the ‘black box' nature of these systems," Topaz and his colleagues reported. "The underlying neural network algorithms are not constrained by established medical knowledge, making it difficult to understand how they arrive at specific conclusions or predict when errors might occur. This lack of transparency makes it challenging to identify potential biases within the system and ultimately ensure the reliability of generated documentation."
"Emerging explainability techniques, such as attention visualization (which highlights which parts of conversations most influenced specific documentation decisions) and SHapley Additive exPlanations (SHAP) frameworks (which identify key linguistic features that trigger certain AI outputs), offer promising approaches to enhance AI transparency; however, their effectiveness and practical implementation for clinical documentation systems require further validation."
Finally, AI tools may be raising expectations faster than they deliver, as seen in several health systems where physicians weren't finding significant benefits in using them. According to the study, "healthcare organizations may respond by increasing patient volume expectations based on promised efficiency gains, creating a workload paradox where modest time savings are offset by greater demands and the cognitive burden of reviewing AI-generated errors."
As well, the study points out that clinicians might also become too dependent on scribes, "potentially compromising their professional judgment and independence in clinical decision-making."
Making Sure Governance Is Front and Center
The key, then, is to make sure the guardrails are in place for the use of AI scribes in clinical care. To that end, the study offers five recommendations:
Establish rigorous validation standards. Use independent, standardized metrics for accuracy, completeness and time saved.
Mandate transparency. Make sure vendors are disclosing how these tools work, what data they're using, and their limitations, including biases, and insist on regular reporting of error rates.
Develop clear regulatory frameworks. Define responsibility and accountability when errors are found, and set clear expectations for correcting errors.
Implement thoughtful clinical protocols. Establish robust training programs, quality assurance processes and patient consent protocols for the use of AI scribes. Training programs should include how clinicians audit their content, monitor for errors, verify clinical accuracy and edit while maintaining accuracy.
Invest in research. Set aside funding to support independent research around the long-term impacts of AI scribes on quality, clinical decision-making and communication, including discipline and specialty-specific evaluations.
With pandemic-era telehealth and Hospital at Home waivers expiring, executives are curtailing or ending some virtual care services. Supporters, meanwhile, are lobbying to extend those flexibilities or make them permanent.
Health system and hospital leaders are cutting telehealth and Hospital at Home programs following the expiration of pandemic-era CMS waivers, but that doesn't necessarily mean those programs are gone for good.
Healthcare leaders say they're looking only at short-term strategies as advocates, led by the American Telemedicine Association and the Alliance for Connected Care, lobby Congress to reinstate the waivers in any spending bill to end the federal shutdown. And while aiming to restore the waivers for the time being, the ultimate goal of advocates is to make those freedoms permanent.
For now, however, telehealth policy reverts to pre-COVID rules.
Originating site. The waiver allowed for telehealth services to be delivered at any U.S. location, including the patient's home. Restrictions are now back in place, limiting telehealth to certain locations, including the provider's office, hospital, SNF, and home if the patient is receiving home dialysis for end-stage renal disease (ESRD), treatment for substance use disorder (SUD) or diagnosis, evaluation and treatment for a mental health disorder (provided the in-person visit requirement is met).
Geographic restrictions. The waiver eliminated those restrictions. Now, telehealth services are limited to a rural health professional shortage area or a county not included in a Metropolitan Statistical Area. Exceptions are made for patients with ESRD receiving dialysis at home or at a hospital or critical-access hospital-based renal dialysis facility, as well as patients receiving diagnosis, evaluation or treatment for an acute stroke, those receiving treatment for SUD or a co-occurring mental health disorder, and those receiving diagnosis, evaluation and treatment for a mental health disorder (provided the in-person visit requirement is met).
Audio-only visits (such as via telephone). These were allowed through the waiver for any clinically appropriate telehealth services. Now, they're limited only to patients receiving telehealth services at home if the care provider has the technology to conduct an audio-visual visit but the patient can't or won't use video.
Provider types. The waiver expanded the list of healthcare providers able to bill Medicare for the use of telehealth to include occupational therapists, physical therapists, speech and language pathologists and audiologists, among others. Now that list is limited to physicians, PAs, NPs, clinical nurse specialists, nurse-midwives, clinical psychologists, clinical social workers, registered dietitians or nutrition professionals, certified registered nurse anesthetists, marriage or family therapists and mental health counselors.
Federally Qualified Health Centers (FQHCs) and Rural Health Centers (RHCs) as a distant site. The waiver enabled FQHCs and RHCs to bill Medicare for telehealth services as eligible distant sites. This is no longer allowed.
In-Person Mental Health Treatment. The waiver enabled mental healthcare providers to use telehealth without first conducting an in-person assessment with the patient. Now, an in-person visit is required for patients receiving diagnosis, evaluation or treatment for a mental health disorder before telehealth can be used. That visit must take place within six months prior to the first telehealth visit and every 12 months thereafter while the patient is receiving treatment. An exception can be made if the provider and patient agree that the risks of an in-person visit outweigh the benefits and the provider documents that decision in the patient's medical record.
CMS has also updated its guidance on Medicare telehealth claims during the ongoing shutdown.
According to the Center for Connected Health Policy (CCHP), the guidance, released on October 1, directs Medicare Administrative Contractors (MACs) to implement a temporary claims hold, which can last as long as 10 business days.
"The hold is meant to prevent a large reprocessing of claims if Congress acts after the statutory expiration date, which was September 30, 2025," CCHP reports. "CMS also suggested that without further Congressional action, providers that deliver telehealth services and are now not eligible for Medicare payment as of October 1, 2025, may want to provide patients with an Advance Beneficiary Notice of Noncoverage."
Why Were the Waivers Enacted?
The waivers were launched during the height of the COVID-19 pandemic in 2020 to boost telehealth coverage and access by giving providers more opportunities to use virtual care. That increase, in part, helped fuel a surge in telehealth programs that carried over into the post-pandemic era.
Advocates say the waivers are crucial to enabling health systems and hospitals, especially those in rural and underserved regions, to improve access and clinical outcomes. Without them, it's expected that many providers will scale back their telehealth services or even end them altogether.
In letters to Congressional leaders and President Trump, ATA Action, the ATA's lobbying arm, called for both the telehealth waivers and the Acute Hospital Care at Home (AHCaH) waiver to be reinstated, saying healthcare providers have been building strong programs since they were enacted in 2020.
"Most providers and hospital systems are taking calculated risks to continue care during this time, but long-term continuity depends on action by our telehealth champions in Washington to restore these flexibilities and ensure retroactive reimbursement," said Kyle Zebley, ATA Action's executive director and the ATA's senior vice president of public policy. "Medicare patients woke up this morning without telehealth coverage for the first time since the pandemic, five years ago. Our healthcare services are regressing, falling woefully short for millions of patients in need."
The Hospital at Home Waiver
Roughly 400 health systems and hospitals were participating in CMS' AHCaH model as of September 30. Some have shut down their programs or let them lie dormant after patients transitioned out.
"Like many other healthcare organizations across the country, all of our patients who were receiving hospital at home care have been either appropriately discharged to outpatient status or transferred to brick-and-mortar inpatient care," the health system said in a statement to HealthLeaders. "Extension of the waiver, or even better a permanent authorization, is essential to allow our patients to continue to have access to this program that has improved patient outcomes, expanded access for rural communities and enabled greater flexibility in how care is delivered."
Others, especially larger organizations, say they'll continue with a model that they feel is a key element to the future of healthcare.
Executives at Mass General Brigham, one of the front-runners in the acute care at home movement, have also altered their program.
"While there continues to be strong bipartisan support for the Acute Hospital Care at Home Waiver extension, it is unfortunate that the timing of its expiration was tied to the broader government funding debate," a spokesperson said in an e-mail to HealthLeaders. "Fortunately, the steps we have taken over the last year have enabled us to pivot our operations to provide advanced care at home for patients after a hospital stay during this pause. This framework enables us to support patients outside of the inpatient waiver while maintaining the structure we need to provide exceptional acute care in the home."
"The future of healthcare is in the home and we are invested in our efforts to see this through," the spokesperson continued. "We will continue to advocate for a multi-year waiver extension to reduce the capacity strain on our brick-and-mortar hospitals and ensure our patients receive this safe, effective and exemplary care where they want it, surrounded by their family and loved ones."
The concept has a lot of supporters, and studies are showing the value in delivering acute-level care at home. MGB, for instance, has published reports noting clinical and financial benefits in caring for patients in their homes instead of a hospital, and the health system's announcement that it will continue the program points to an opportunity to show that this strategy can survive beyond the waivers.
But one executive also noted that the CMS model isn't perfect.
"I don't think that the way it's structured now is necessarily that way it will be structured forever," he said. "We need more of a critical mass of information" to prove what works and what doesn't.
"Research shows that hospital at home models yield positive health outcomes," the Bipartisan Policy Center stated in an August 2024 report calling for continued support for the program. That report cited a small study which found that the program led to shorter hospital stays, lower readmission rates, fewer diagnostic tests, and lower costs compared to patients admitted to the hospital for the same health concerns.
"Initial data show promise, including the potential for cost savings," the report added. "But more research is needed on patient and caregiver experiences, access and patient selection, the cost impact on Medicare and Medicaid, hospital expenses, and service delivery across diverse populations. Research is also needed on whether the relatively small number of hospitals participating is nonrepresentative and unique. … Congress needs more clarity about the likely financial effects of the model if it were to move from a model with low uptake, which is the case today, to something that would be implemented on a larger scale."
Health systems and hospitals conduct extensive reviews of any new AI tool before putting it to use. Through at least one and sometimes several different committees, they ask questions about everything from value to workflow effects.
Healthcare leaders are putting new AI tools through a rigorous review process before they’re tested in the healthcare setting. In many settings those tools pass through a series of committees, who evaluate the technology for cost, usability, workflow effects and, of course, ROI.
James Blum, MD, CDH-E, Chief Health Information Officer at University of Iowa Health Care, takes this a step further. Any pilot using AI, he says, should be good enough to publish as a study and stand up to peer review.
Here’s what healthcare leaders are asking of their AI tools:
This week’s The Winning Edge webinar focused on the backbone of the healthcare ecosystem, and how AI is being asked to make data management easier for healthcare leaders.
Healthcare organizations are dealing with massive amounts of data, clinical and financial, structured and unstructured. And while AI may be the tool to manage that data, there’s a lot of work that goes into setting up the rules and guardrails.
HealthLeaders’ The Winning Edge took on that hot-button issue this week with an in-depth discussion about data management from two renowned experts. Roopa Foulger, VP of Digital and Innovation Development at OSF HealthCare, and Sarah Pletcher, MD, MHCDS, Chief Digital Health Officer and SVP and Executive Medical Director of Strategic Innovation at Houston Methodist, laid out how their health systems are managing data and governing data usage.
It’s not an easy task. Ironically, while AI may eventually take away all the heavy lifting for data management, right now there’s a lot of complexity.
“Where the data was acted on or how the data was inputted has a meaning depending on where it is in the clinical workflow,” Foulger notes. “We're just scratching the tip of the iceberg in terms of how we're going to leverage that data clinically and financially.”
CommonSpirit Health CIO Daniel Barchi says AI development over the next five years won't focus on better technology, but on healthcare leaders finding the right way to use the tools to make healthcare more efficient.
The CIO of the nation's second largest nonprofit health system says AI governance isn't a revolutionary concept. It's a strategy built on how the industry has embraced new ideas and technologies in the past.
"It's important to remember that clinicians, hospitals and health systems have been caring for patients for many, many years through many advances in technology and many changes in clinical care," says Daniel Barchi, EVP and CIO of CommonSpirit Health, the Chicago-based network of 142 hospitals and more than 700 care sites spread across 21 states. "And if we hew to the guiding principles that clinicians have and that we as caregivers should aspire to have, we can apply those same ground rules, guiding principles and vision to AI in the same way that we've used other advanced tools safely."
Daniel Barchi, EVP and CIO of CommonSpirit Health. Photo courtesy CommonSpirit Health.
That's not to say that AI isn't causing problems with its rapid adoption, but Barchi says healthcare executives need to temper their concerns with a little common sense. They've been down this road before.
"Our goal is to make sure that we use it in the most efficient way for how fully it's developed at this point," he says. "And [we] use it as broadly as possible, with the proviso that we always have a clinician between the AI and the patient."
Reviewing AI Use Cases
CommonSpirit Health has a three-tiered approach to AI governance. Barchi says the health system runs any AI project first through data management and patient advisory councils, then through the enterprise data and AI governance committee (EDAG), before it finally goes before the IT executive steering committee. Some 200 tools have made it through review and are now being used within the health system.
But there have been some that didn't make the cut.
"We've rejected 15 use cases for different reasons," he says. "Whether we didn't feel that they were clinically efficacious, [or] we were concerned about the ways that a third-party company might be using data, [or] whether or they were concerns about algorithmic bias."
Most, if not all, of those rejections come out of the EDAG committee, which meets every two weeks and is comprised of roughly 30 members, including ethicists, medical informaticists and representatives from legal, innovation, finance, clinical (including nursing), IT and cybersecurity departments.
"These people come together and evaluate every AI initiative that we have and determine if there are risks, if we're using data appropriately, if there are risks of algorithmic bias, what the upside is and whether we should approve it for use clinically and operationally in our health system," Barchi says.
Analyzing Agentic AI
He says he's particularly interested in how agentic AI evolves.
"We're creating the capability for tools to surface information, analyze it, make decisions and interact with others in ways that are very similar to what many of our colleagues do with data today," he says. "And management of these AI tools in agentic AI is almost akin to managing a team of workers. Thinking of this not as a technical process, but as a management challenge and a way to use operational efficiency safely is the next frontier for us as health system leaders."
"I anticipate over the next five years many of the advances and adoption of AI are not contingent on AI getting better," he adds. "It's health leaders thinking more intuitively about how AI can make our processes more efficient."
That ties back to the idea that AI will replace doctors and nurses. Barchi says that isn't about to happen, as healthcare is still based on human interactions. But he does believe that healthcare organizations using AI will replace those that aren't on the bandwagon, and that those using AI will become better at delivering healthcare.
"Physicians, nurses and other clinicians are more likely to be more thoughtful caregivers, because they can focus on the patient in front of them and allow AI to do more of the work behind them," he says. "And I've seen clinicians be very open-minded [about] what AI can provide them, whether it's data or insight, because they know at the end of the day, their overarching objective is improving the health of the patient in front of them."
That concept will also apply to patients using AI.
"We are better patients when we're better informed about our own health conditions," Barchi says. "We'll never be as educated as the neurosurgeon who's caring for us, but we have better insights to what he or she shares with us by being more educated and using AI and other tools to gain insight about the ways we might help with our own caregiving."
Charting a Future for AI
Looking ahead, Barchi sees AI evolving in two directions. He expects new tools and programs to be integrated directly into existing technology platforms like the EHR, so that workflows aren't negatively affected. He also believes that healthcare organizations will develop the capabilities to create their own AI tools.
"We'll simply adopt them in a way that nobody would go off and buy aftermarket parts for a car if you can buy them with the car itself," he points out. "And so yes, there will always be point solutions, but I think those are going to be fewer and more far between and developed internally. And then much of what we get from AI is simply going to be embedded in the tools that we buy and it's going to be harder for standalone companies to try to sell point solutions that are not embedded in our core platforms."
And he sees AI helping healthcare move out of the hospital, clinic and doctor's office and into the home.
"AI will begin to assimilate data and make inferences about our healthcare long before we as patients think about a clinical condition," he says. "It might monitor the number of times we open the refrigerator, the number of steps that we take, the way that we sleep, our online patterns, and look for patterns in a way that might inform an emerging condition long before you even begin to feel it physically."
In this week’s The Winning Edge webinar, Roopa Foulger of OSF HealthCare and Sarah Pletcher of Houston Methodist discussed the complexities around gathering, assessing and using data – and how those strategies can improve patient care.
Healthcare executives are dealing with more data than ever before, and new technologies like AI and concepts like patient engagement are muddying the data management waters. How can leadership stay on top?
For Roopa Foulger, VP of Digital and Innovation Development at OSF HealthCare, the key is establishing a data governance chain of command and protocols. The Illinois-based health system has a data stewardship group in place to watch not only how data is used within the network, but also how it’s managed with vendors and payers.
“It’s just been crazy, how much we have to work with,” she says.
“Everyone is trying to get their kayaks onto the whitewater river that is data flow,” adds Sarah Pletcher, MD, MHCDS, Chief Digital Health Officer and SVP and Executive Medical Director of Strategic Innovation at Houston Methodist.
Foulger and Pletcher were participants in HealthLeaders’ The Winning Edge panel this week, during which they discussed the complex landscape of data management. In particular, they talked about how AI is changing the data management paradigm and creating a challenge: While the technology promises to better gather, assess and use data, it’s making life more complicated for those in charge.
“There's an irony in that the very thing that is creating a lot more work is also there to help us do that work,” Pletcher noted.
Staying On Top of a Dynamic Landscape
Both agree that AI can manage data much better than humans, but humans have to be on top of governance. And that means understanding how to manage transactional and analytics data, as well as unstructured and structured data, and working with vendors and payers over who has access to what and who owns the data.
“We can't just care about how we, the health system, store and manage the data,” Pletcher says. “We really have to be involved in how our vendor partners store and manage data, how our employees store and manage data as they're communicating in everyday life, doing their jobs, how contractors have access to data, supply chain, insurers and our patients. Nowadays, a patient can click on, agree, agree and integrate, and have data exchange right through the EMR. It has become a very complicated world and no longer can you just focus on your own organization.”
Foulger says healthcare leaders also have to understand that data management means more than just keeping track of data. Thanks to AI, there are “dynamic new landscapes” in healthcare to govern, including metadata, continuous monitoring for drift and decay, and tools that can parse data not only for clinical but also financial value.
“Where the data was acted on or how the data was inputted has a meaning depending on where it is in the clinical workflow,” she says. “We're just scratching the tip of the iceberg in terms of how we're going to leverage that data clinically and financially.”
Pletcher says that while health systems and hospitals are often forming committees to specifically assess and monitor AI programs, data management goes back a lot further. Instead of reinventing the wheel, she says, it’s important to integrate AI protocols into long-standing data rules and standards.
Keep the “philosophy of the original protocol … in place,” she says, and “evolve and refresh the language to better capture the modern landscape.”
Where challenges may arise is with generative AI, for many healthcare organizations a new and evolving landscape. Foulger says continuous data governance is a new concept, and it requires input from all departments as well as new arrangements with vendors.
Using Data to Help Patients
One of the more intriguing possibilities with AI and data management is turning that data around to help patients. With the move toward patient-centered and value-based care, health systems and hospitals are learning how to empower patients to better understand their care journey, from improving care management to managing health and wellness and understanding financial responsibilities.
“There's still a long way to go to help patients really understand the insights from their data,” says Pletcher.
That includes using AI to improve healthcare literacy.
“We are looking at how we can translate some of the complex clinical data into plain language or visualization,” Foulger says. Aside from using AI to translate medical jargon, this includes adding experts in human-centered and experience design to make data tools more convenient for patients.
Pletcher says the rapid development of the Internet of Things means there are countless opportunities to capture data in the healthcare ecosystem. And that also means helping patients understand not only when and how they're accessing their data, but also when they're creating opportunities for unintended uses or misuse.
“We have to hold ourselves accountable … and be aware of how much data exchange is happening, and how little control we sometimes have over how those permissions are being given away,” she adds.
That will be important as healthcare evolves and more services move from the hospital, clinic or doctor's office to the home. Both Pletcher and Foulger say they can see a future where data from outside the enterprise is just as important for care management as data from inside the hospital. Concepts like remote patient monitoring, virtual care and acute care at home will require new protocols and standards to ensure that information being pulled in is entered into the medical record and put in front of the right clinicians at the right time.
Foulger says it will be important to track data for many kinds of value, and remember that clinical data can have financial value as well. The ROI can be as varied as the effect on variations in care, improved billing and reimbursement opportunities, improved treatment and outcomes, patient engagement and satisfaction, and operational efficiency.
Both agree that “archaic” policies around reimbursement and governance also have to be updated. Healthcare organizations are often held back from using data because of the rules around data gathering and use outside the hospital, and innovative programs are held back because of reimbursement barriers.
Pletcher, meanwhile, says she sometimes wonders if healthcare is making data management more complex than it needs to be. Technology should not get in the way of someone accessing the healthcare they need.
“The patient just wants to live their life as well as they can, feel as good as they can, have as much convenience and simplicity as they can,” she says. “And that's kind of the trade-off they're making. Sometimes I'm surprised that we don't get out of our own way to deliver those experiences to patients.”
“I guess I'm surprised that there's still such an abyss between what patients want and the trade-offs they're willing to make for it and what we're delivering to them, in terms of a truly integrated convenient experience because of our worries about their privacy,” she adds. “I think that we may be making things much more complicated than they need to be.”
From enhanced EHRs to AI initiatives to new programs like RPM and virtual care, healthcare leaders are dealing with more data, structured and unstructured, than they’ve ever had before. This week’s panel takes a look at how to manage that data, separating value from noise and putting information in the right hands at the right time.
Before AI or any other new technology, there is data. And if healthcare executives don’t know how to manage their data, those new ideas won’t work.
This week’s The Winning Edge takes a look at how health systems and hospitals collect, store, assess and eventually use data, the backbone to all clinical and financial activities in the healthcare enterprise. Alongside information stored in the EHR and other technology platforms, healthcare leaders need to know how to gather and manage unstructured data coming into the system, separating value from noise and giving clinicians and others in the healthcare enterprise what they need.
And as AI and other new ideas like remote patient monitoring (RPM), digital health and patient engagement take hold, data management is becoming more important. Health systems and hospitals are dealing with more data than they’ve ever had in the past, and they’re being asked to do more with it.
HealthLeaders’ The Winning Edge will take place on Tuesday, September 30, at 1 p.m. ET. This week’s panel features distinguished experts from two large health systems: Sarah Pletcher, MD, MHCDS, Chief Digital Health Officer & System VP and Executive Medical Director of Innovation at Houston Methodist; and Roopa Foulger, Vice President of Digital & Innovation Development at OSF HealthCare.
Please join us for an informative hour on a topic that every healthcare innovation and technology executive should value.
At the HealthLeaders AI in Clinical Care Mastermind forum this week, execs from 14 health systems discussed how the technology will – eventually – improve patient care.
In the simplest terms, AI is a complex technology that aims to make healthcare simpler.
For healthcare executives, however, the devil's in the details. While AI promises to ease workflows for stressed clinicians and improve patient care, getting to that point takes a lot of time and patience. And a good understanding of what generative technology should and shouldn't be doing.
C-Suite executives from 14 health systems tackled these issues at HealthLeaders' AI in Clinical Care Mastermind forum Wednesday at the Stein Eriksen Lodge in Deer Valley, Utah, the opening event to the HealthLeaders CMO Exchange. The Mastermind program, now in its third year, brings together executives for three virtual round-tables and one in-person forum to discuss and share ideas with their peers on a key healthcare topic.
In this case, it's how AI is being introduced to care management. Early wins in this area have centered on ambient tools and bots, some designed to capture the interaction between doctor and patient (often in an ambulatory setting) and transcribe that data into the EHR, others taking on administrative tasks or even handling queries from patients.
James Blum, MD, CDH-E, Chief Health Information Officer for University of Iowa Health Care, detailed his health system's successful roll-out of an ambient AI tool for doctors, along with the ongoing deployment of chart mining technology designed to pull relevant data from the patient's medical record to help clinicians determine care pathways.
Blum noted the tools not only help doctors, but could also improve prior authorizations and even reduce denials. The key, he said, is developing AI products that address both clinical and financial – particularly revenue cycle management – pain points. That's because a tool that addresses doctor stress and burnout also needs a robust ROI to appeal to CFOs and even CEOs and be sustainable.
ROI is indeed a tricky component of AI. In many cases the technology is expensive and uses considerable processing power and data, straining even the largest health systems and out of reach for smaller and rural providers. AI has to have value across departments to truly succeed.
The Struggle for Clinician Buy-In
At the same time, AI has to integrate seamlessly into clinical pathways. Many an ambient AI project has failed because the tools require doctors or nurses to take a couple of extra steps or adjust their workflows. For clinicians who have weathered EHR implementations, the idea of doing one more thing to add more technology to the process is a deal-breaker; they'll ignore the tool entirely or develop work-arounds.
Corey Cronrath, DO, MPH, MBA, FAAPL, FACOEM, CMO of the Mental Health Cooperative, noted his organization tried an ambient listening tool a few years ago and was forced to discontinue it because its doctors found it too complex and intrusive. He said it took almost three years to bring those doctors back on board.
Candace Robinson, MD, CMO of LCMC Health's Touro Hospital, pointed out that the perception of AI in healthcare is that it has to be perfect, and anything that's less than perfect won't be embraced. But executives noted that AI doesn't have to be perfect to have value, only that it can improve on what clinicians are doing now.
Beyond improving clinical workloads, executives attending the Mastermind program said the value of AI lies in giving them information they need to improve care. Tipu Puri, MD, PhD, CMO of the University of Chicago Medical Center; Roopa Foulger, Vice President of Digital Innovation and Development at OSF HealthCare; and Tom-meka Archinard, MD, MBA, FACEP, SVP and CMO at University of Maryland Capital Region Health, all talked about new tools that analyze data to give clinicians insights on specific medical concerns, such as sepsis or the onset of chronic disease. But those tools need to be carefully managed to ensure that the data they're using is accurate.
While the “human in the loop” strategy is designed to ensure continuous governance of AI before it affects patient care, all the executives pointed out that it's also important to ensure that a human being is still delivering care. Aside from the concerns that AI will replace doctors and nurses, there have been some concerns that clinicians could rely too much on AI to make their decisions for them.
Michael Fiorina, MD, CMO for the Independence Health System, questioned whether AI could have an impact on critical thinking, leading to a discussion on how AI should be included in medical education. There's no doubt that tomorrow's clinicians need to have a good understanding of AI as they enter the healthcare field, but the executives agreed that it's more important that they learn medicine before they learn AI so that they're using AI to augment care delivery rather than replace it.
How Will Patients Use AI?
That same question about relying on AI could be turned around to focus on patients. Some have worried that an AI-emboldened patient could overwhelm doctors, while others feel the technology will help patients become more knowledgeable about their health and be able to contribute more to the doctor-patient relationship.
That, in part, is why Blum said he expects health systems like his will be investing in AI for patient engagement, and why the patient portal could be the next big proving ground for new AI tools and platforms. That's why many health systems are trying out bots and other tools that use AI to communicate with patients.
It's a tricky field. Mark Kandrysawtz, MBA, SVP and Chief Innovation Officer at WellSpan Health, noted his health system's implementation of a bot called Anna, which helps steer patients to the right care provider, has actually evolved from an efficiency tool into a driver of growth. But there have been hiccups, from the bot using humor at an inappropriate time to convincing someone that it was an actual human.
At the same time, with some studies showing that patients prefer to talk to AI bots because they think the bots are more empathetic, some are wondering whether AI should prompt the industry to take a closer look at the doctor-patient relationship and teach its doctors and nurses how to be better care providers.
At the end of the day, and at the end of this Mastermind program, the consensus is that AI will improve healthcare, and that the industry has to weather the rough spots, learn how to best use the technology, and make sure there will always be a human being making that final care decision.
As Thomas Balcezak, MD, MPH, EVP and Chief Clinical Officer for Yale New Haven Health, put it, AI will eventually become so accepted and commonplace that we'll forget how much trouble we had putting it in place.