New technology is changing the economics for hospitals—lowering thresholds, shortening time to cash, and boosting success rates
Denials have become one of the most expensive choke points in the revenue cycle. Providers are facing growing volumes, more complex cases, and limited capacity to respond at scale. For hospital finance leaders, that leaves millions of dollars uncollected and exposes organizations to mounting risk.
“The economics of working every denial haven’t been successful in the past, but new technology is changing that,” says Spencer Allee, chief AI officer at Aspirion. He points to advances like large language models (LLMs) and other AI tools that are opening new ways to handle cases that once fell through the cracks.
In this interview, Allee shares how denials management is evolving, where AI is delivering the biggest lift, and what providers should weigh as they make build, buy, or partner decisions.
Q1: Where does AI deliver the biggest impact versus disappointment?
Allee: One of the clearest advantages is in addressing denials that historically went untouched. Roughly 60% of denials went unworked because overturning them demanded costly, high-level human expertise and time—including the long tail of low-dollar denials and downgrades. Writing an effective appeal means pulling together pieces from many different sources—the medical record, denial letters, coding or care guidelines, payer policies, and even managed care contracts. An expert would have to review all of that and then distill it into an argument, which could take hours or days.
AI advances in the last few years are changing this. LLMs have made it possible to parse unstructured information in minutes. Combined with human expertise, this opens the door to working a much wider array of denials. Cases like partial denials or downgrades, which often got left behind, can now be addressed effectively.
Disappointment comes when AI is treated as magic that works flawlessly. The reality is that LLMs make mistakes. ChatGPT might write something that looks like a perfect appeal letter, but it will miss a lot of crucial information. For us, what has worked is building a platform with the right guardrails—integrating code sets, payer policies, and clinical guidelines at the right time—so subject matter experts have the strongest foundation for their work.
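One way to picture the kind of guardrail Allee describes is a pre-draft completeness check: the system refuses to generate an appeal until every required source—the medical record, the denial letter, the relevant payer policy, and clinical guidelines—is actually in hand, rather than letting the model improvise around gaps. The sketch below is a hypothetical illustration under those assumptions, not Aspirion's platform; all names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AppealContext:
    """Everything an appeal draft should be grounded in (hypothetical schema)."""
    medical_record: str
    denial_letter: str
    payer_policy: str
    clinical_guidelines: str
    missing: list = field(default_factory=list)

REQUIRED_SOURCES = [
    "medical_record", "denial_letter", "payer_policy", "clinical_guidelines",
]

def assemble_context(sources: dict) -> AppealContext:
    """Collect sources and record any that are absent or empty."""
    missing = [k for k in REQUIRED_SOURCES if not sources.get(k)]
    return AppealContext(
        *(sources.get(k, "") for k in REQUIRED_SOURCES),
        missing=missing,
    )

def ready_for_draft(ctx: AppealContext) -> bool:
    """Guardrail: only hand the context to the LLM when nothing is missing;
    otherwise route the account back for document retrieval."""
    return not ctx.missing
```

The point of the design is that the model never sees an incomplete context, which is one plausible way to keep an LLM from producing a letter that "looks perfect" but omits crucial grounding.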
Q2: What realistic performance benchmarks are you seeing?
Allee: What’s realistic depends a lot on your strategy—whether you’re building, buying, or partnering. I would think about it in terms of your use cases. For us, it’s denials management and improving denial outcomes. From there, we measure results across four areas. The first is the claim threshold at which we can work effectively. Our model is based on providing real expertise in the form of clinicians, attorneys, and other experienced subject matter experts. Historically, this has meant we focused on higher-dollar claims with a minimum balance of $10,000.
Now, with AI, we’re able to automatically address that longer tail and pull in a much broader set of accounts that previously wouldn’t have made sense financially. DRG downgrades are a good example. They were often left untouched because they were partial denials with lower dollars, and now we’re able to work those very effectively for our clients.
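The threshold economics Allee describes can be reduced to a simple expected-value check: a denial is worth working when the balance times the likely overturn rate exceeds the cost of producing the appeal. The figures below are purely illustrative assumptions, not Aspirion's numbers; they show how a lower cost-to-work pulls the viable balance floor down and brings the long tail of low-dollar denials into reach.

```python
def worth_working(balance: float, overturn_rate: float, cost_to_work: float) -> bool:
    """A denied claim is economically viable when expected recovery
    (balance x overturn rate) exceeds the cost of appealing it."""
    return balance * overturn_rate > cost_to_work

# Illustrative assumptions only: hypothetical per-appeal costs and a
# flat 50% overturn rate, not actual Aspirion or industry figures.
overturn_rate = 0.5
manual_cost = 400.0       # expert clinician/attorney hours per appeal
ai_assisted_cost = 60.0   # AI-drafted appeal plus expert review

# The minimum balance worth pursuing falls as cost-to-work falls.
manual_floor = manual_cost / overturn_rate       # balances below this go unworked
ai_floor = ai_assisted_cost / overturn_rate      # far more accounts clear this bar
```

Under these toy numbers, a $500 partial denial is not worth an $400 manual appeal but is well worth a $60 AI-assisted one, which is the mechanism behind moving below a fixed high-dollar minimum.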
The second benchmark is time to cash. We’ve seen about a 40-day improvement from account placement through payment—a material gain for hospitals managing liquidity. The third is success rate, where we’ve consistently seen a 10 to 20 percentage point lift. And finally, we look at appeals per bill. With AI, we’ve reduced the need for second and third appeals by roughly 20 percent.
Q3: For build vs. buy vs. partner decisions, what hidden factors determine success?
Allee: An effective AI strategy starts with being clear on outcomes—what you’re trying to achieve and which use cases support those goals. For each use case, the decision may be to build, buy, or partner, and it doesn’t have to be the same answer across the board. What matters is working with people you trust who have a track record of success, because past performance is often a good predictor of future results.
The framework I use to make these decisions is capacity, capabilities, culture, and conviction. Capacity refers to whether you have the bandwidth and budget to develop and sustain real AI solutions over multiple years. That means competing for talent with pure-play AI companies. Capabilities are about whether you already have the in-house expertise to build and scale software. With large language models, the challenge isn’t training your own models anymore—it’s about developing the right applications to use vendor models. If you don’t have that capability internally, you’ll need to either build it out or look to vendors and service partners who do.
Culture is about whether your leadership embraces innovation and is willing to iterate and fail, then learn and try again. Without that culture, building in-house becomes more challenging, and it may be more effective to buy or partner to achieve ROI faster. And finally, conviction is about time horizon. If you want to build internally, you need the conviction to make a five-year commitment. If your organization needs results today, then buying or partnering is the more realistic path.
Q4: For providers who haven’t made the leap to AI yet, how long do they have before this technology gap becomes insurmountable?
Allee: The trends are clear: denial rates are only going up. Payers are pushing more denials to the back end and issuing more partial denials or downgrades, because they know most health systems don’t have the capacity to fight those at scale. From the provider side, the only way to address them is with technology. When people ask when the technology gap will become insurmountable, my view is that for providers still trying to fight denials without AI, you’re already there. The real question isn’t if or when—it’s: what is your AI strategy?