“I couldn’t have said it better myself.”
These words from a service center agent, praising a response, drafted by a large language model (LLM), to a patient billing question, provided powerful validation for Cedar’s technology. It wasn’t just about proving the capability to automate—it signaled a breakthrough in preserving the human touch within AI-driven support.
In this article, we pull back the curtain on developing an architecture to offer personalized patient billing support with AI. Specifically, we detail the steps taken to improve data quality and response accuracy, paving the way for better patient and agent experiences.
So much data, so little time
The saying “garbage in, garbage out” holds particularly true for LLMs. To get the best output, we needed to produce a high-quality dataset from chat transcripts that:
- Safeguards patient privacy by removing all Protected Health Information
- Guides the model to respond empathetically and professionally in various scenarios
- Provides a reliable and up-to-date source of provider-specific information
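The first criterion, removing Protected Health Information, can be illustrated with a toy redaction pass. This is a minimal sketch with hypothetical regex patterns, not Cedar's actual de-identification process; production PHI removal relies on dedicated de-identification tooling and human review, never regexes alone.

```python
import re

# Illustrative patterns only -- a real de-identification pass would use a
# dedicated PHI/PII tool plus human review, not this regex list.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace obvious identifiers with typed placeholders like [PHONE]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanket deletion) keep the transcript readable for labeling while removing the identifying values themselves.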
One of the main challenges was tagging and cleaning the data. This process required a clear and consistent set of labeling guidelines, continually updated as new issues and edge cases emerged. We also found that collaboration with subject matter experts—in this case Cedar’s service center agents—was crucial for improving accuracy and relevance.
After attempting to do this manually, we decided to put LLMs to the test. We used carefully selected, manually tagged data to prompt the models to act like an “unstructured data pipeline,” having them “think” step by step. These steps included data cleaning, segment understanding, segment parsing, and labeling.
Our experiment showed that LLMs possess a remarkable ability to complete these tasks, making unstructured data much easier to review, analyze, and later ingest.
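The step-by-step pipeline above can be sketched as a chain of focused prompts, each stage feeding its output into the next. The prompt wording and the `complete` callable (a stand-in for a real LLM API call) are hypothetical; the point is the structure, not the exact prompts.

```python
from typing import Callable

# One focused prompt per stage, mirroring the steps described above:
# cleaning, segment understanding, segment parsing, and labeling.
STEPS = [
    ("clean",   "Remove greetings, signatures, and filler from this transcript:\n{text}"),
    ("segment", "Split the cleaned transcript into one segment per topic:\n{text}"),
    ("parse",   "For each segment, extract the patient's question and the agent's answer:\n{text}"),
    ("label",   "Label each question/answer pair with a topic from the labeling guidelines:\n{text}"),
]

def run_pipeline(transcript: str, complete: Callable[[str], str]) -> str:
    """Chain the stages, feeding each stage's output into the next prompt."""
    text = transcript
    for _name, template in STEPS:
        text = complete(template.format(text=text))
    return text
```

Keeping each stage narrow makes intermediate outputs easy to spot-check against the labeling guidelines before the data is ingested.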
Fact, not fiction
A common concern with LLMs is the potential for hallucinations, which can arise when a model is uncertain about how to respond. To address this, our model needed access to detailed patient billing information as well as current provider and insurance policies. By embedding this contextual knowledge into LLMs, we aim to reduce uncertainty and maintain tight control over the outputs.
This is where Retrieval Augmented Generation (RAG) comes into play. RAG combines language modeling with real-time data retrieval, enriching responses with patient- and provider-specific information.
Think of it this way: a student is taking an exam. Recalling all the answers from memory is challenging, and flipping through an entire textbook isn’t the most effective strategy. With RAG, the student gets a concise, comprehensive reference guide—providing accurate answers without the need to recall everything.
Our LLM (“the student”) receives questions from the patient about their bill. Each question comes with a curated high-level summary (“reference guide”) of key information. This approach has nearly eliminated our model’s tendency to hallucinate, allowing it to craft accurate, personalized responses.
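The flow is easy to see in miniature. The sketch below uses a toy keyword-overlap scorer in place of a real embedding index, and its instruction text is illustrative, not Cedar's actual prompt; the shape, retrieve a small "reference guide" and attach it to the question, is the RAG pattern described above.

```python
def score(question: str, chunk: str) -> int:
    """Toy relevance score: shared lowercase words between question and chunk."""
    q = set(question.lower().split())
    return len(q & set(chunk.lower().split()))

def build_prompt(question: str, knowledge: list[str], k: int = 2) -> str:
    """Retrieve the top-k chunks and attach them as the 'reference guide'."""
    context = sorted(knowledge, key=lambda c: score(question, c), reverse=True)[:k]
    return (
        "Answer using ONLY the reference guide below. "
        "If the answer is not in it, say you are unsure.\n\n"
        "Reference guide:\n" + "\n".join(f"- {c}" for c in context) +
        f"\n\nPatient question: {question}"
    )
```

Instructing the model to answer only from the supplied context, and to admit uncertainty otherwise, is what keeps the output tightly controlled.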
Keeping humans in the loop
In our initial LLM use case, we believed it was important for agents to serve as a guardrail against inaccuracies and hallucinations.
We designed our tool to allow agents to:
- Send the suggested response as is
- Edit the suggested response and send a revised version
- Discard the suggested response and compose a new one
This also provided an opportunity to gather feedback on LLM-generated responses, helping us improve our product over time.
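One way to model the three agent actions and the feedback they generate is sketched below. The shape of Cedar's actual tool is not public, so these names and the validation rule are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    SEND_AS_IS = "send_as_is"        # suggestion sent unchanged
    EDIT_AND_SEND = "edit_and_send"  # agent revised the suggestion
    DISCARD = "discard"              # agent composed a new response

@dataclass
class Feedback:
    suggestion: str
    action: Action
    final_response: str

def record(suggestion: str, action: Action, final_response: str) -> Feedback:
    """Capture what the agent did with a suggestion, for later review."""
    if action is Action.SEND_AS_IS and final_response != suggestion:
        raise ValueError("send-as-is must keep the suggestion unchanged")
    return Feedback(suggestion, action, final_response)
```

Logging the suggestion alongside the agent's final response turns every edit or discard into a labeled example of where the model fell short.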
The future of AI-powered patient billing support is here
New technologies like LLMs always come with challenges, but our journey over the past year has provided valuable insights into their potential. We saw a 17% reduction in overall chat duration with our AI-powered tool at one client, equating to a 2.5-minute decrease per chat session. This indicates that we’re making agents more efficient and improving the patient billing support experience.
And this is just the beginning. Our goal is to enhance the end-to-end healthcare financial experience through the thoughtful application of AI, ensuring every interaction is not only streamlined but enriched with intelligent, compassionate support.