Does Measuring Quality Really Ensure Patient Safety?
There's been a "striking" rise in the number of quality measures that are publicly reported, "but no standards on how accurate or inaccurate a measure needs to be," says Peter Pronovost, MD.
Does healthcare quality measurement ensure patient safety? Sounds logical, unless you take a step back and ask whether healthcare quality measures truly measure quality.
That’s what hospitals have asked the Centers for Medicare & Medicaid Services to do. And this week, the agency stopped short of posting new hospital ratings on Hospital Compare as scheduled. The launch of the ratings has been rescheduled for July.
The delay has been welcomed by both providers and policy makers who say we just don’t know how well the measures work. Peter Pronovost, MD, is a Johns Hopkins Medicine researcher and the man behind the much-touted checklist approach to patient safety. He is the director of Hopkins’ Armstrong Institute for Patient Safety and Quality.
In an opinion piece in the current issue of JAMA, Pronovost notes that CMS and others are using publicly reported data “to make pronouncements about which clinicians and hospitals are safe and unsafe.”
Some efforts to measure quality are better than others, he writes, but none is as good as it should be. Without standards for the accuracy and timeliness of data, the metrics are only as good as the data that goes into them, he writes with co-author Ashish Jha, MD, of the Harvard School of Public Health.
As a result, Pronovost told me, healthcare lacks valid patient safety measures even though much rides on them.
“What is striking is that there has been an increase in the number of measures that are publicly reported and [in] the amount of money at risk for performance on those, but no standards on how accurate or inaccurate a measure needs to be before you are paid,” Pronovost said in a telephone interview.
It seems that efforts to identify and prevent medical errors and to ensure patient safety face some of the same challenges that have generated the outcry over the burden and benefit of quality measurement in general. Despite widespread deployment, critics argue that measures of both quality and patient safety are based on incomplete science.
And while much of the data needed to weigh quality and safety is available or within reach, the technology and related infrastructure needed to collect, validate, and analyze it aren’t, despite major investment in HIT.
Pronovost says we are in a frustrating period of debate: policymakers say the measures are good enough, provider organizations say they’re not, and yet providers still bear the burden of complying with them.
“We’re talking past each other,” he says. “The real question is how accurate is the measure; how accurate does it need to be; and what does it cost to get more accurate data? Maybe this is the best we can do for the resources we want to spend.”
Another question: What do we need to measure? In the JAMA piece, the two authors offer suggestions for sorting all this out. For one thing, CMS needs to root out and eliminate unreliable metrics and develop good ones.
Pronovost makes a radical suggestion: CMS needs to define standards for what makes a good measure and set accuracy requirements before implementing measures in pay-for-performance and public reporting. It's a little late for that, but the recent CMS action represents a pause.