What's Wrong With Healthcare Quality Measures? Part II

Cheryl Clark, for HealthLeaders Media, November 21, 2013

We know that measuring healthcare quality helps healthcare systems improve. But we could and should be doing it a lot better.

The way we measure quality in healthcare is pretty darn primitive. Even how we think about what constitutes quality is flawed. Just look at the examples I've been compiling and tell me I'm wrong. You may share your own observations in the comments below. Or email me directly.

Part I covers items 1–6 on my list. Here are the rest:

7. Quality Measures Come and Go
What we call a legitimate measure for payment or reporting is usually endorsed through a long, negotiated process within the National Quality Forum, a 14-year-old organization contracted by CMS to develop valid ways to measure quality in most healthcare settings. But about 15% of the roughly 700 measures in use today have not received NQF endorsement.

What's more, the number of measures that receive its imprimatur changes drastically. According to Robert Panzer, MD, chief quality officer and associate vice president for the University of Rochester Medical Center in Rochester, NY, in the last year NQF withdrew endorsement from more than 100 measures and added another 90.

The NQF process is largely hidden from public view. How do hospitals keep up with all this?



1 comment on "What's Wrong With Healthcare Quality Measures? Part II"


robert plass (11/22/2013 at 11:44 AM)
Good summary. Expanding on item #4, there is a difference between statistically significant and clinically or administratively relevant. Patient satisfaction scores similarly exist within a very tight range. So the difference between 80 and 85 on a scale of 100 may have a significant impact on where that score falls in a percentile ranking (50th vs. 80th percentile, for example). But does the difference between an 80 and an 85 really mean anything to the patients receiving those services?

Currently, there is too much emphasis on process measures rather than outcomes. Giving patients education on stopping smoking seems like a good idea, but does it really improve health? What is actually being measured is whether the information is DOCUMENTED, not whether it was actually given, whether it was given in an effective manner, or whether there was any harm in failing to provide it or any benefit in providing it.

Relative to #10, many hospitals actually have a system that evaluates performance quite well, but like many HR functions, the results are not advertised. The emphasis needs to be on education and improvement, not punishment. Also, it generally takes a pattern of mistakes to indicate a problem, since any given doctor can and will have a bad outcome at some point; that makes them human, not a bad provider.

Many of the measures are simply THOUGHT to be a good idea without any data to back up whether that is actually true. Some measures come and go because what was thought to be a good idea is subsequently proven to be wrong, or even to cause more harm than good. There is also significant variability among the INDIVIDUALS entering the data, as well as the hospitals they work for. As strict as the criteria are, there is some room for judgment, and it can be hard to always follow the guidelines precisely and to the letter.

So, it sounds like a good idea to measure, and it can be. But even the medical literature that is supposed to supply the evidence that "evidence-based medicine" rests on can be quite flawed, misinterpreted, or not replicated in subsequent studies. So understanding the limitations of the data, and being cautious with interpretation, is paramount.
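[Editor's note: the commenter's percentile point can be sketched numerically. The distribution below is entirely hypothetical, not real HCAHPS or hospital data; it only illustrates how tightly clustered scores turn a small raw-score gap into a large percentile jump.]

```python
# Hypothetical illustration: when peer scores cluster tightly, a 5-point
# difference on a 100-point satisfaction scale can span a wide percentile range.
from statistics import NormalDist

# Assumed (made-up) peer distribution: scores centered at 82 with a
# standard deviation of only 3 points.
peers = NormalDist(mu=82, sigma=3)

def percentile_rank(score: float) -> float:
    """Share of peer hospitals scoring below `score`, as a percentage."""
    return 100 * peers.cdf(score)

for score in (80, 85):
    print(f"score {score} -> {percentile_rank(score):.0f}th percentile")
# Under these assumed parameters, 80 lands near the 25th percentile and
# 85 near the 84th, even though the raw-score gap is only 5 points.
```

Whether that percentile spread reflects any real difference in what patients experience is exactly the commenter's question.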