Hospital Rankings Contradictory, Cryptic, Confusing

Analysis | By John Commins | November 02, 2016

Researchers challenge the usefulness of the plethora of perennial hospital rankings that are largely ignored beyond the executive suite and doctors' lounges.

Have hospital rankings reached the saturation point?

A research brief from the University of Michigan's Center for Healthcare Research & Transformation suggests that hospital rankings, ostensibly designed to enlighten healthcare consumers, have morphed into a confusing array of metrics and methodologies that are now largely ignored outside of the healthcare echo chamber.

"One of the overall messages we have is that we do have to take these rankings with a grain of salt because they measure different things and they do come out with different results when you compare them," says Kirsten Bondalapati, MPH, a co-author of the brief.

"Another thing we need to think about is how the consumer or patient perspective fits into this," Bondalapati says. "Some of the rankings are geared toward hospital quality improvements, and some are geared more toward patients, providing them information about what hospital to go to. One thing that could be improved is making those intentions a little clearer so we know what each ranking system is trying to do."

The research brief examined nine (yes, nine!) prominent hospital rankings that are published each year and found that individual hospitals' results were all over the map.

Among the findings:

  • In 2012, 37% of hospitals were highly ranked on at least one of nine hospital ranking systems;
  • In 2015, 53% of Michigan acute care hospitals received a high rank on at least one of nine hospital ranking systems, but only 22.5% received a high rank on at least two ranking systems;
  • Consumers in a CHRT focus group said they don't use rankings to choose a hospital because the rankings do not always include information that they're interested in and are not presented in a consumer-friendly manner.

Bondalapati believes much of the confusion could be cleared up if the ranking systems communicated with one another and standardized evaluation methods.

A Wide Variety of Ranking Systems

"It might be difficult for that to occur because all of these ranking organizations are different, with different missions and even in different markets. Some are nonprofits, some are for-profits, some are government organizations," she says.

"Another suggestion is to have a third-party unbiased entity that would look at the ranking and all the quality measures and do their own analysis to make this more understandable for patients," she says.

That makes perfect sense, but it's doubtful that will happen because there is little incentive to change.

In all likelihood, each of the ranking systems uses a different methodology specifically to differentiate itself from the other rankings. Each is carving out a niche because it knows some hospital somewhere will score well on it.

Hospital marketing departments understand that if they don't earn a top grade from Ranking A, they can shop around, find a high score from Ranking B, C, D, E, F, G, H, or I, and hang a self-congratulatory banner atop their website.

"One of our strongest recommendations is that we incorporate a patient perspective more into these hospital rankings so that we know who is catering to the hospitals and who is catering to the patients," Bondalapati says. "And if they are catering to the patients then they should be addressing patient needs instead of their own needs."

Sorry, but if you're waiting for the incentives to shift toward making things clearer for patients, don't hold your breath.

John Commins is a content specialist and online news editor for HealthLeaders, a Simplify Compliance brand.

Get the latest on healthcare leadership in your inbox.