What's Wrong With Healthcare Quality Measures? Part II

Cheryl Clark, for HealthLeaders Media, November 21, 2013

8. We Look Only Where the Light Is
Measures force hospital teams to put their resources into improving what is paid for or penalized, and to ignore other parts of their operations that may cry out for attention.

According to Panzer and colleagues, "the total of the current and planned measures from different sources can be overwhelming, hence, the sense some organizations' leaders have of excessive and potentially overwhelming measurement and reporting requirements."

Mandates may "crowd out" initiatives that would have more relevance for a particular institution's patients, staff and leadership, they add.

"For example, a hospital may internally detect problems with the safety of transitions in care and be unable to focus sufficient attention to this important patient safety issue, due to the volume of other measures to which they must direct their attention."

9. Variables Are Inconsistent
I've learned from hospital leaders that healthcare quality measurement is a Tower of Babel. The point at which one hospital reports an infection or a severe pressure ulcer may vary from organization to organization.

I've heard that for some surgeons, a retained surgical object is not declared a serious adverse event if the lost object is identified while the patient is still under anesthesia and the surgical wound can be reopened in the same surgical session, with the patient none the wiser.

For others, the instance would be counted and reported.



1 comment on "What's Wrong With Healthcare Quality Measures? Part II"


robert plass (11/22/2013 at 11:44 AM)
Good summary. Expanding on item #4, there is a difference between statistically significant and clinically or administratively relevant. Patient satisfaction scores similarly exist within a very tight range. So the difference between 80 and 85 on a scale of 100 may have a significant impact on where that score falls in a percentile ranking (50th vs 80th percentile for example). But does the difference between an 80 and an 85 really mean anything relative to patients recieving those services? Currently, there is too much emphasis on process measures rather than outcome. Giving education to patients relative to stopping smoking seems like a good idea, but does it really have an impact that improves health? What is actually being measured is whether or not the information being provided is DOCUMENTED, not even whether or not it was given, given in an effective manner, nor whether or not it caused any harm to fail to provide that information or benefit from providing it. Relative to #10, many hospitals actually have a system that evaluates performance quite well, but like many HR functions, the results are not advertised. The emphasis needs to be on education and improvement, not punishment. Also, it generally takes a pattern of mistakes to indicate a problem, since any given doctor can and will have a bad outcome at some point, but that makes them human, not a bad provider. Many of the measures are simply THOUGHT to be a good idea without any data to back up whether or not that is actually true. Some measures come and go because what was thought to be a good idea is subsequently proven to be wrong or to even be causing more harm than good. There is also significant variability between the INDIVIDUALS who are entering the data as well as the hospitals they work for. As strict as the criteria are, there is some room for judgment and it can be hard to always follow the guidelines precisely and to the letter. So, it sounds like a good idea to measure, and it can be. 
But even the medical literature that is supposed to create the evidence that "evidence based medicine" is based on can be quite flawed, misinterpreted and may not be replicated in subsequent studies. So, understanding the limitations of the data, and being cautious with interpretation is paramount.