As population health programs grow across the industry, experts expect employers and health plans to demand analysis on costs and participation levels. In its most recent Outcomes Guidelines Report, DMAA: The Care Continuum Alliance looked to standardize methodology and measurement that will allow comparison of disease management (DM) and population health improvement programs.
Building on the previous two editions, which focused on financial goals and clinical outcome measurements, the 84-page Volume III explored the areas of small populations, medication adherence, selection criteria, and wellness evaluation methodology.
Volume III further delved into population health, which mirrors the change in DM from chronic disease to caring for patients across the care continuum, says Susan Jennings, PhD, cochair of the Outcomes Steering Committee and an independent healthcare consultant in Los Angeles. Moving from siloed DM programs to broader population health introduces complexities in comparing a wide variety of programs. “They are all different, so it’s got to be some kind of range of ways in which one can go about evaluating them,” says Jennings.
Seth Serxner, PhD, MPH, principal at Mercer in Los Angeles, participated in Volume III’s methodology and finance workgroup and its operational metrics workgroup. Serxner, who criticized the DMAA’s first volume because it set the bar too low, says Volume III’s recommendations showed progress, adding that although the Outcomes Guidelines Committee did not solve all issues, it did provide new approaches.
“I think there is a way to go here, but I think there is a tremendous acknowledgment that we need to move beyond the old pre-post aggregate population models in terms of measuring these things,” says Serxner.
DMAA president and CEO Tracey Moorhead said Volume III built on the previous editions and “honors” the “commitment to inclusiveness and transparency.”
“Reliable, validated outcomes measurement in chronic disease care shows the value of population-based interventions. Our guidelines make that possible,” Moorhead said in a prepared statement.
The Outcomes Guidelines authors noted that measurement of population health programs is an “imperfect discipline that balances suitability (rigor) and acceptability (practicality). This tension has been explicitly acknowledged from the outset of the project and is well-known to those involved in population health, but seems occasionally overlooked by those outside the industry.”
Donald Fetterolf, MD, executive vice president of health intelligence at Alere and cochair of the DMAA Outcomes Steering Committee, said that balance is an “overarching theme” in the process.
“Ultimately, your goal of improving care depends on real-world application of these guidelines,” Fetterolf said in a statement.
One area that the industry needs to move away from is the pre-post design to evaluate financial outcomes, Serxner says. Several industry experts have criticized the design, which measures total healthcare costs, saying the methodology overstates the benefits of DM programs.
Serxner wants population health companies to stop using pre-post analysis, but its supporters point to the ease of pre-post compared with a randomized controlled study.
“We could theoretically design and implement every instance of a population health improvement program as a randomized controlled study, but that would be impractical and unacceptable to many who sponsor and/or deliver these programs,” the authors stated.
Realizing that some companies will still demand pre-post analysis, Serxner says Volume III provided pre-post improvements and enhancements in the areas of building baselines and dealing with outliers.
“People still use it, and if you are going to use it, there are some things that can help,” Serxner says, such as focusing on utilization rather than total healthcare costs.
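The kind of pre-post enhancement Volume III discusses can be sketched in a few lines. The function names, the outlier cap, and the trend factor below are illustrative assumptions, not figures from the guidelines; the sketch simply shows how capping catastrophic outliers and trending the baseline forward change a naive pre-post savings estimate.

```python
def capped_mean(costs, cap=100_000):
    """Mean per-member cost after capping catastrophic outliers at `cap`."""
    return sum(min(c, cap) for c in costs) / len(costs)

def pre_post_savings(baseline_costs, program_costs, trend=1.05, cap=100_000):
    """Estimated per-member savings: the capped baseline cost, trended
    forward to the program year, minus the capped program-year cost."""
    expected = capped_mean(baseline_costs, cap) * trend
    actual = capped_mean(program_costs, cap)
    return expected - actual

# One catastrophic claim in each period is capped so it cannot dominate.
savings = pre_post_savings([4000, 5200, 250_000], [4100, 4800, 90_000])
```

Without the cap, the single $250,000 baseline member would inflate the apparent savings dramatically, which is exactly the kind of distortion critics of pre-post designs point to.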
Volume III provided recommendations for dealing with small populations, which are often difficult to measure because their size alone can skew outcomes. DMAA noted that a small population’s high variability can result in conflicting, misleading, or inaccurate results.
To tackle those problems in gauging small populations, Volume III suggested three alternatives:
- The first option is in line with standard actuarial processes: it blends customer-specific results with results from a larger, more stable population that is comparable on specifics such as severity, age, and sex.
- The second alternative uses group-level activity data, and possibly cost data, to derive savings, drawing on a more stable population.
- The third option uses group-level information to gauge savings, draws on a more stable population, and may build upon other studies, according to the authors.
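The first option resembles a standard actuarial credibility calculation. A minimal sketch, assuming the common square-root credibility rule (the specific rule and the full-credibility threshold below are illustrative, not taken from the guidelines):

```python
import math

def credibility_weight(n, full_credibility_n=1000):
    """Square-root rule: the weight given to the client's own experience
    grows with its population size n, reaching 1.0 at the threshold."""
    return min(1.0, math.sqrt(n / full_credibility_n))

def blended_estimate(client_value, book_value, n, full_credibility_n=1000):
    """Blend a small client's result with a larger, more stable
    'book' population's result, weighted by credibility."""
    z = credibility_weight(n, full_credibility_n)
    return z * client_value + (1 - z) * book_value

# A 250-member group gets weight sqrt(250/1000) = 0.5 on its own data.
estimate = blended_estimate(client_value=420.0, book_value=380.0, n=250)
```

The smaller the group, the more the estimate leans on the stable comparison population, which is precisely what damps the high variability DMAA warns about.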
To figure out an individual member’s engagement level and how programs affect outcomes, a company needs to create definitions, such as what constitutes an engaged member versus an enrolled member.
DMAA put forth a presentation cascade as a way to measure engagement levels. The presentation cascade diagram moves from who is “eligible” to what constitutes “participating.” This creates a baseline for companies when determining how an individual’s engagement level affects outcomes.
Volume III also offered recommended definitions for operational stages and measures. Operational stages included identified and targeted populations, whereas operation measures focused on enrolled populations (and defined opt-in and opt-out programs), as well as the definition of engaged and participating populations.
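The cascade of stages can be pictured as a sequence of shrinking member sets, each a subset of the one before. The member IDs and counts below are invented for illustration; only the stage names come from the article.

```python
# Hypothetical cascade: each operational stage is a subset of the prior one.
stages = {
    "identified":    {"A", "B", "C", "D", "E"},
    "targeted":      {"A", "B", "C", "D"},
    "enrolled":      {"A", "B", "C"},
    "participating": {"A", "B"},
    "engaged":       {"A"},
}

def cascade_rates(stages):
    """Fraction of the prior stage retained at each step of the cascade."""
    names = list(stages)
    return {
        names[i]: len(stages[names[i]]) / len(stages[names[i - 1]])
        for i in range(1, len(names))
    }

rates = cascade_rates(stages)
```

Tracking these step-to-step rates against shared definitions is what lets one program's "participating" population be compared with another's.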
“That model may be specific for disease management, but what we also try to advocate for is an individual-level model that documents who is participating in what program,” Serxner says.
This allows for apples-to-apples comparisons of individual and multiple programs—for example, comparing members who completed a health risk assessment (HRA) only against those who filled out an HRA and participated in a DM program.
Serxner says having the ability to separate programs allows for easier measurement of individuals and allows for companies to accurately compare program effectiveness.
“I think we need to be careful about claiming credit when maybe there’s more than one program that can account for some savings,” Serxner says.
Jennings says clear definitions for operational metrics, such as the definition of eligibility, are critical when comparing programs.
The authors of Volume III developed a detailed specification for the medication possession ratio (MPR) as a measure of adherence.
MPR is a population-based measure, reported as a percentage, that uses administrative pharmacy claims and eligibility data within a defined 12-month period.
In its Outcomes Guidelines, DMAA suggested using MPR by condition and drug classes applicable to that condition and counting individuals with multiple conditions for all the conditions and appropriate drug classes. The measurement is intended only for oral medications (not inhalers and liquids) and for more prevalent chronic conditions such as coronary artery disease, chronic heart failure, diabetes, hypertension, and hyperlipidemia.
Jennings says only oral medications were included because a population health company can track their usage via claims data. An item such as an inhaler, for example, is used on an emergency basis, so gauging use from claims is not feasible, she says.
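In its simplest per-member form, MPR is the total days' supply dispensed divided by the days in the measurement period, capped at 100%. The sketch below is a simplified illustration of that idea; DMAA's full specification (condition- and drug-class-level reporting, eligibility adjustments) is more detailed.

```python
def medication_possession_ratio(fills, period_days=365):
    """Simplified MPR for one member and drug class: total days' supply
    from pharmacy claims divided by the measurement period, capped at 1.0.

    `fills` is a list of (fill_date, days_supply) pairs from claims data.
    """
    supplied = sum(days_supply for _, days_supply in fills)
    return min(1.0, supplied / period_days)

# Three 90-day fills and one 30-day fill across a 12-month period.
mpr = medication_possession_ratio([
    ("2008-01-05", 90), ("2008-04-02", 90),
    ("2008-07-01", 90), ("2008-10-01", 30),
])  # roughly 0.82, i.e. 82% adherence
```

The cap matters because early refills can push the raw ratio above 100%; reporting such members at 100% keeps the population average interpretable.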
The DM industry has changed from caring for chronic illness to a wider population health model that seeks to improve health across the care continuum. With that in mind, Volume III included wellness evaluation methodology.
The methodology’s goal was to recommend evaluation strategies that are consistent with the DM recommendations yet appropriate for wellness programs. Volume III focused on comparing DM and wellness programs on key factors to find overlap.
Volume III is still a pretty early step in the process of evaluating wellness programs, says Jennings, who expects further exploration in Volume IV. “Now we are more broadly thinking about how you look at the full spectrum in an evaluation. That is challenging,” she says.
Volume IV and what’s next
DMAA has already started work on Volume IV and will publish white papers in 2009 on such topics as self-management, productivity, and behavior change.
The workgroups will consider approaches to identifying individuals with multiple conditions, as well as exclusions, including codes indicating residential treatment such as hospice and other exclusions that were noted in Volume I.
Jennings says those who read the Outcomes Guidelines should remember they are best practices and should not be used as a plug-and-play formula. It’s up to the end users to adapt the measurement approach to their specific populations and programs, she says.
DMAA is closer to providing measurement methods that allow payers to compare program results from different vendors, Jennings says, but more work is needed. “I think we’re much closer than we were [after Volume I],” she says.
Serxner says measurement tools that allow experts to compare programs have been lacking, adding that he has been disappointed that DM vendors have not published and presented findings to show methodology and outcomes. Instead, the vendors have focused on marketing, operations, and sales. This has sparked questions about the validity of DM, he says.
“I think it’s been a disservice to the industry that we haven’t done that, and in many cases, there is a tremendous skepticism out there about the marketing findings,” Serxner says. “I think we’re at a critical point now. If we don’t do it soon, we’re really going to lose credibility.”