
Joint Commission 'Best' List Draws Skepticism from C-Suite

Cheryl Clark, for HealthLeaders Media, September 22, 2011

First, she says, there is wide disagreement among those who design and vet quality metrics about which ones add value. "There's still debate in Washington about which measures will actually improve outcomes," she says. "If it were easy, we would have agreed by now."

Her second concern is that getting a high score does not allow for "positive deviance." That's when clinicians discover that a particular process just isn't appropriate for a certain group of patients, and "try something else and find they get a better outcome."

Correcting blood glucose levels in patients during operations and giving beta-blockers to patients with suspected heart attacks are two such examples. In certain groups of patients, these interventions can result in unintended consequences, Conroy says.

She says that teaching hospitals do take these process measures seriously. "But once we get into composite measures and scoring, it becomes another contest around ratings and you have to wonder if you're really improving care," she says. She thinks that's one reason why so few academic medical centers made the list. 

Asked why academic medical centers were conspicuously absent from the list, Jerod Loeb, PhD, executive vice president of the Division of Research at the Joint Commission, replied in an e-mail:

"Academic health centers are very complex places and, often because of that complexity, routine processes can fall through the proverbial cracks. However, at the end of the day, the processes we measure are based on sound clinical evidence that the given process, if followed (and there are no contraindications), is essential for good patient outcomes. There are – or should be – no excuses."



4 comments on "Joint Commission 'Best' List Draws Skepticism from C-Suite"


C.L.Jones (9/23/2011 at 9:46 AM)
There are many elements of disconnect here. First, it's comparing apples to oranges to set these two very different rankings side by side and draw the same conclusion. The JC is trying to use basic and core measures to rank basic standards of care. USNWR is a publication, informative but not a peer-reviewed, medical-science-based journal. There are some interesting survey and measurement techniques in the USNWR methodology, which fortunately have improved over the years, but they rely on a lot of opinion-based information from research companies owned by physicians at these major "report" headliners. Bottom line: consumer beware.

Daniel Fell (9/22/2011 at 11:40 PM)
Sadly, it's the patient attempting to make informed decisions about his or her healthcare who is faced with how best to interpret another set of conflicting quality measures. While the lack of standards surely helps some hospitals compete in the marketplace, long-term it continues to erode consumer confidence and trust. The industry doesn't need more healthcare ratings, rankings, and awards; it needs more consensus on which ones matter.

mila michaels (9/22/2011 at 3:29 PM)
Not surprising that C. Van Gorder is perplexed. Scripps boasts the highest number of state licensing board fines in San Diego County. TJC has again proven that a true and unbiased rating shouldn't be bought.