Does Measuring Quality Really Ensure Patient Safety?

Analysis  |  By Tinker Ready  
   April 21, 2016

There's been a "striking" rise in the number of quality measures that are publicly reported, "but no standards on how accurate or inaccurate a measure needs to be," says Peter Pronovost, MD.

Does healthcare quality measurement ensure patient safety?  Sounds logical, unless you take a step back and ask whether healthcare quality measures truly measure quality.

That’s what hospitals have asked the Centers for Medicare & Medicaid Services to do. And this week, the agency stopped short of posting new hospital ratings on Hospital Compare as scheduled. The launch of the ratings has been rescheduled for July.

The delay has been welcomed by both providers and policy makers who say we just don’t know how well the measures work.  Peter Pronovost, MD, is a Johns Hopkins Medicine researcher and the man behind the much-touted checklist approach to patient safety. He is the director of Hopkins’ Armstrong Institute for Patient Safety and Quality.  

In an opinion piece in the current issue of JAMA, Pronovost notes that CMS and others are using publicly reported data “to make pronouncements about which clinicians and hospitals are safe and unsafe.” 

Some efforts to measure quality are better than others, he writes, but none is as good as it should be. Without standards for accuracy and timeliness of data, the metrics are only as good as the data that goes into them, he wrote with co-author Ashish Jha, MD, of the Harvard School of Public Health.

As a result, Pronovost told me, healthcare lacks valid patient safety measures even though much rides on them.

“What is striking is that there has been an increase in the number of measures that are publicly reported and [in] the amount of money at risk for performance on those, but no standards on how accurate or inaccurate a measure needs to be before you are paid,” Pronovost said in a telephone interview. 

Related: Processing Quality Measures Costs $40K Per Physician Per Year

It seems that efforts to identify and prevent medical errors and to ensure patient safety face some of the same challenges that have generated the outcry over the burden and benefit of quality measurement in general. Despite widespread deployment, critics argue that measures of both quality and patient safety are based on incomplete science.

And while much of the data needed to weigh quality and safety is available or within reach, the technology and related infrastructure needed to collect, validate, and analyze it isn’t, despite major investment in HIT.

Pronovost says we are in a period of frustrating debate: policymakers say the measures are good enough, while provider organizations say they are not, even as those providers bear the burden of complying with them.

“We’re talking past each other,” he says. “The real question is how accurate is the measure; how accurate does it need to be; and what does it cost to get more accurate data? Maybe this is the best we can do for the resources we want to spend.”

Another question: What do we need to measure? In the JAMA piece, the two authors offer suggestions for sorting all this out. For one thing, CMS needs to root out and eliminate unreliable metrics and develop good ones.  

Pronovost makes a radical suggestion:  CMS needs to define standards of what makes a good measure and set accuracy requirements before implementing measures in pay for performance and public reporting. It's a little late for that, but the recent CMS action represents a pause.

What's Measured Matters
So, while the search is on for measures that matter, what is measured also matters. Research has identified the most common causes of patient safety problems for hospitalized patients:  adverse drug events, hospital-acquired infections, blood clots, bedsores, falls, and surgical complications.

Pronovost notes, however, that nationally there is a validated approach to measuring quality for only one of them—hospital-acquired infections.

Dean Sittig, PhD, a biomedical informatics professor at The University of Texas Health Science Center at Houston, agrees that more research is needed to validate measures. The problem, he says, is that payers and regulators can’t really admit that the measures need to be refined if they are already using them.

"If you call for measures, you’ve got to act like the measures are perfect and we know exactly what to do with them,” he says. “If they say they are going to fund research in this area, they can’t really use the measure for a while.”

That would be fine with him.

Sittig echoes the very complaints that have been lobbed at CMS over the Hospital Compare data. “Most of the measures we have are not really for comparison across organizations, across facilities, [or] across physicians,” he says.

Look at readmissions. While not all readmissions are preventable, hospitals get penalized for them anyway. “So we have a measure that is very imprecise, and when we start paying hospitals for that, they start doing all kinds of crazy things to avoid readmissions,” he says.

The goal is not to get hospitals to optimize scores and ranking, but to get them to optimize quality and safety, he adds.

Health information technology could help and someday will. For example, rather than relying solely on billing data, researchers could more easily tap into richer clinical data. But challenges abound there too, including interoperability, inconsistent coding, and data validation. Currently, Sittig says, many quality measures are still reported manually, not electronically.

Related: Clinical Registry Groups Push for Greater Access to Medicare Claims Data

Pronovost agrees that HIT systems are not yet up to the job of using data to improve safety. He would like to see better-integrated IT systems. “Healthcare is unique among industries in that it has spent heavily on technology and has very little to show for it,” he says.

CMS’s decision to hold off on hospital rankings might be a sign that it is willing to slow down and heed all this advice. But it seems unlikely that the practice of issuing rankings will go away. The newly empowered healthcare consumer wants to comparison shop.

Plenty of third parties—the Leapfrog Group, US News & World Report, Healthgrades, Consumer Reports—rank hospitals. The investigative reporters at ProPublica even turned CMS data into surgeon scorecards. The project drew mostly jeers, and some cheers, from health policy types for taking data journalism to a new level. (Pronovost was critical. Jha called it a "step in the right direction.")

And then, there’s always Facebook and Yelp. Studies have shown that their rankings don’t fall far from the others. How’s that for validation?

Tinker Ready is a contributing writer at HealthLeaders Media.