What's Wrong With Healthcare Quality Measures? Part I

Cheryl Clark, for HealthLeaders Media, November 14, 2013

In their article in this week's Journal of the American Medical Association, Robert Panzer, MD, chief quality officer and associate vice president for the University of Rochester Medical Center in Rochester, NY, and colleagues noted that when auditors compared objective clinical findings in the record with the billing code data, "21% of those positive for the claims-based Patient Safety Indicator 'postoperative pulmonary embolus or deep venous thrombosis' were miscoded."

They wrote, "These flaws are expected because claims data are primarily intended to communicate sufficient information for fair payment, not to accurately reflect the nuances of the clinical condition of the patient."

3. The ratings are based on old data.
On the Centers for Medicare & Medicaid Services' Hospital Compare website, and on many of the above-listed rating sites that rely on CMS data, performance periods in some cases began as long as five years ago and ended two or three years ago.

That delay allows hospital officials and front-line staff to make excuses for their poor scores, arguing that they're doing much better now. Of course, no one knows for sure, because today's data won't be out for another three or four years.

4. There's too much in the middle.
Here's something about Hospital Compare and a few other rating systems that most folks don't realize: only 2% to 5% of organizations are rated "better" or "worse" than average, with everyone else lumped together in the middle. If 90% to 96% of all hospitals are scored the same, why bother measuring?



6 comments on "What's Wrong With Healthcare Quality Measures? Part I"


Jim Reinertsen (11/19/2013 at 11:04 PM)
Fifteen years ago, Mosser, MacDonald, and Solberg wrote a brilliant paper describing three very different purposes of measurement in healthcare: research, accountability, and improvement. (Jt Comm J Qual Improv. 1997 Mar;23(3):135-47.) Cheryl, your post focuses on what Mosser et al. would term measurement for accountability, i.e., comparison to peers or to a standard, in which reports are typically in the form of decile rankings, or "above or below the median." The problem with this type of measurement is that for all those who look bad, most of the energy in response goes into self-defense rather than improvement. When we look bad on the comparisons, we say "the data are wrong." And the fact is, we're usually right. The data are always wrong in one way or another (bad risk adjustment, errors in the claims database, etc.). The classic recent example of this is the NYC hospital leaders who explained their poor HCAHPS scores by saying "Our patients are whinier." Give me a break.

And there's another problem. The top two deciles for most comparative measures of process quality (e.g., many Value-Based Purchasing measures) are currently at 99 or 100%. The bottom deciles are at 92-95%. There is absolutely no evidence of any clinical outcome difference between 95% and 100%, i.e., between the worst and the best deciles on these measures. It's largely a matter of better coding and documentation, not better clinical care.

My view is that healthcare leaders waste far too much energy asking "How do we compare to others?", i.e., what I call the "Healthcare BCS Rankings." Patients would be better served if we all focused on measurements that asked two questions: 1) Are we getting better? and 2) What's the gap between our current performance and the theoretical ideal? In Mosser's lexicon, that's called "Measurement for Improvement."

Naomi (11/15/2013 at 1:46 PM)
As a Kaiser patient, I have two complaints: (1) Kaiser uses sampling to game the system and say it has too few admissions to report results. (2) Or perhaps it's because Kaiser doesn't report on its Medicare Advantage patients (the vast, vast majority), skewing the data terribly. On that same point, without including MA, Kaiser's reporting is meaningless, and yet the government uses the data to rate KP for Medicare "stars." This isn't trivial. KP covers 7 million people; I don't know the percentage over 65.

Jacob Kuriyan (11/15/2013 at 9:57 AM)
Obama is ready to diss the healthcare IT vendors, and Ms. Clark is challenging the likes of JCAHO, the "Emperors" that rule healthcare. These are the unintended consequences of bringing free-market forces into healthcare. The first question I have is why "healthcare quality" is equated with "hospital quality." Sure, that's part of it, but don't we need to get away from measuring "procedures" and move toward measuring "health"? The second question is: why focus on hospital treatments? Why not discuss treatment protocols or even medications, especially since drug costs are much higher than hospital costs in many cases? As Dr. Berwick points out in his seminal paper in Health Affairs, we must measure healthcare performance using the three measures of the Triple Aim: patient experience, improvement in the health of populations served, and cost. So these rating services are of limited value. I look at them as a valuable way to spot the bad performers, the hospitals you may want to avoid. Of course, most people select a specialist and inherit the hospital based on the attending privileges of the specialist. Usually good doctors practice only in good hospitals, no matter what the rating of the hospital may be! I cannot wait to read her next column!