
VBP Program Impact, Incentives Questioned

By cclark@healthleadersmedia.com | August 11, 2014

A pay-for-performance researcher says Medicare's value-based purchasing program has not shown improved hospital quality in its initial nine-month rollout period, probably because the financial incentives are too low. But Premier Inc.'s medical director challenges the findings.

In the initial nine-month rollout period for Medicare's value-based purchasing program, participating hospitals achieved quality scores that were no higher than those at ineligible hospitals, say Andrew Ryan, associate professor of healthcare policy and research at Weill Cornell Medical College, and his fellow researchers.

The culprit, he says, is the weakness of the financial incentives. Eligible hospitals appear to need much larger payment incentives than the 1% to 2% specified in the Patient Protection and Affordable Care Act.

"It's important to have stronger incentives, which is a level of payment that's enough to generate change that will improve value for the system," Ryan says. His paper was published online in a July issue of Health Services Research.

But Ryan's research was challenged by Richard Bankowitz, MD, medical director of Premier Inc., a large group purchasing and quality collaborative that years ago designed a demonstration project that served as the model for the VBP program. The hospitals Ryan used as a comparison group make it impossible to draw any conclusions from his research at all, Bankowitz says.

Negligible quality differences, trivial financial incentives

According to rules set by the Centers for Medicare & Medicaid Services, during that first nine-month performance period, which ended March 31, 2012, eligible hospitals relinquished 1% of their base operating Medicare payments to a pool that was redistributed to the best performers. In a few weeks—for fiscal year 2015, which begins October 1—the pool increases to 1.5%, and ultimately to 2% by October 1, 2017.

The first period of the federal VBP program evaluated 12 clinical process measures, such as how often antibiotics were given within one hour before surgical incision, and patient experience, as measured by responses to eight questions, such as how well patients thought doctors and nurses communicated with them and managed their pain.

Ryan wanted to know how the roughly 2,800 hospitals eligible for the program compared on those measures with 399 hospitals that were ineligible during that early period but voluntarily contributed data on their quality to CMS. There was no difference.

Ryan believes that's largely because the 1,427 hospitals that lost money didn't have enough at stake, nor was there sufficient upside.

The worst performers lost just 90 cents for every $100 in Medicare payments, 1,329 lost less than 50 cents, and of those, 463 lost less than 10 cents. The 1,557 hospitals that earned bonus payments got only between a fraction of a cent and 83 cents per $100 in Medicare payments, and only 117 hospitals got more than 50 cents.

About $850 million went into the pool, but most hospitals got it all back; only about $120 million was ultimately redistributed, Ryan says.
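To put those cents-per-$100 figures in perspective, here is a minimal Python sketch of the withhold-and-redistribute arithmetic described above. The hospital labels, payment totals, and most of the adjustment factors are hypothetical (only the 90-cent penalty and 83-cent bonus extremes come from Ryan's figures), chosen simply to show the dollar amounts at stake.

```python
# Minimal sketch of the VBP withhold-and-redistribute arithmetic described above.
# Hospital labels, payment totals, and middle adjustment factors are hypothetical;
# only the -90-cent and +83-cent per $100 extremes come from the article.

WITHHOLD_RATE = 0.01  # 1% of base operating Medicare payments in the first period

# Net adjustment: 0 = break even, negative = net penalty, positive = net bonus
hospitals = {
    "worst performer": {"base_payments": 100_000_000, "net_adjustment": -0.0090},  # -90 cents per $100
    "typical loser":   {"base_payments": 100_000_000, "net_adjustment": -0.0005},  # hypothetical
    "typical winner":  {"base_payments": 100_000_000, "net_adjustment": +0.0025},  # hypothetical
    "best performer":  {"base_payments": 100_000_000, "net_adjustment": +0.0083},  # +83 cents per $100
}

for name, h in hospitals.items():
    withheld = h["base_payments"] * WITHHOLD_RATE          # paid into the pool
    net_change = h["base_payments"] * h["net_adjustment"]  # final gain or loss
    returned = withheld + net_change                       # paid back out of the pool
    print(f"{name}: withheld ${withheld:,.0f}, returned ${returned:,.0f}, "
          f"net change ${net_change:,.0f} ({h['net_adjustment'] * 100:+.2f}%)")
```

Under these assumptions, even the worst performer's net loss is less than 1% of its base Medicare payments, which is the scale Ryan argues is too small to put improvement efforts in high gear.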

For his comparison group, Ryan looked at quality scores for hospitals in Maryland and for critical access hospitals. CAHs aren't eligible for the VBP program because they are not paid under Medicare's prospective payment system, but they nevertheless voluntarily reported data to CMS. He compared their scores with those from 2,873 hospitals paid prospectively, whose participation is mandated by the PPACA.

A questionable comparison group

But Premier's Bankowitz says Ryan's paper reveals nothing about the VBP program's effectiveness. "The jury is still out" on whether VBP works, he says, and more time is needed to study the question.

"He's asking an important question, about whether VBP makes a difference," Bankowitz says, "but this study makes it very difficult to draw any conclusion. That's because the control group used is not a valid control group … because he was limited to hospitals that voluntarily chose to report their data, and that's a very biased sample."

Those hospitals were "very confident that they had good results, or were committed to transparency and therefore most likely very committed to improving. Either way you have hospitals that are highly motivated to improve," Bankowitz says, "so it's not surprising" they did just as well as the hospitals mandated into the VBP program.

All of the measures used in that early phase of VBP were well known to hospitals because of prior rulemaking from CMS. The measures were already being reported on Hospital Compare, he says.

"Hospitals were well aware they were going to be measured on these, even in the pre-rule-making phase, when CMS put out the VBP plan, which was way before the Affordable Care Act" was written, he says.

Bankowitz takes issue with Ryan's paper for another reason: most of the hospitals in the control group were critical access hospitals, which have 25 beds or fewer and are paid on a cost plus percentage basis. "These are very small hospitals, in very remote areas, and they're hardly representative of the nation's hospitals," Bankowitz says. "Reaching this kind of conclusion based on that kind of a control group I think is very difficult."

Ryan acknowledges that the similar scores of participating and nonparticipating hospitals might be due to a number of other factors, but he dismisses those explanations.

One such factor: hospital quality leaders may have been unaware of, or ill prepared to tackle, specific VBP measures, since the final rule defining those measures was published May 6, 2011, just weeks before the first performance period began on July 1. The core measures also changed, from 17 in the proposed rule to 12 in the final rule.

It's also possible that hospital quality leaders weren't taking the program seriously because they suspected the U.S. Supreme Court would not uphold the PPACA when it decided that case on June 28, 2012. But Ryan says most hospital leaders should have known that another form of the VBP rule would have been passed if PPACA had failed. CMS had this power in annual rule-making and had signaled even before the PPACA passed that it was moving to use quality measures for payment.

All told, Ryan believes there are really just two main reasons why VBP hospitals did not outperform non-participating hospitals.

"The amount of money isn't enough to put improvement efforts in high gear, and it takes time for hospitals to figure out how to improve. Those two things probably have come together."

Change takes a long time, and even the most well-intentioned hospitals can't turn their Titanics quickly, especially when it comes to the hospitality skills of nurses and doctors as measured in patient experience scores, which account for 30% of a hospital's VBP score.
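For readers curious how that 30% weighting plays out, the following minimal sketch combines the two first-period domains into a single score. The 30% patient-experience weight comes from the article; treating the remaining 70% as the clinical process domain is an assumption based on the two domains described earlier, and the example domain scores are hypothetical.

```python
# Minimal sketch of a weighted total performance score for the first VBP period.
# The 30% patient-experience weight is from the article; the 70% clinical-process
# weight is assumed from the two-domain structure, and the scores are hypothetical.

PATIENT_EXPERIENCE_WEIGHT = 0.30
CLINICAL_PROCESS_WEIGHT = 1.0 - PATIENT_EXPERIENCE_WEIGHT  # assumed 0.70

def total_performance_score(clinical_process: float, patient_experience: float) -> float:
    """Weighted combination of the two domain scores, each on a 0-100 scale."""
    return (CLINICAL_PROCESS_WEIGHT * clinical_process
            + PATIENT_EXPERIENCE_WEIGHT * patient_experience)

# Example: strong clinical-process performance only partly offsets weak
# patient-experience scores, which is why slow-to-change bedside manner
# still drags down the overall number.
print(total_performance_score(clinical_process=90.0, patient_experience=50.0))  # 78.0
```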

Subsequent years of the VBP program add 30-day mortality outcomes, efficiency rates, hospital-acquired infections, and adverse events.
