More Rankings, Less Value?

Jacqueline Fellows, for HealthLeaders Media, August 12, 2014

Many third-party organizations rate hospital quality, but healthcare leaders are finding limited value in the plethora of grades, stars, and rankings.

This article appears in the July/August 2014 issue of HealthLeaders magazine.

Clarification: This story has been updated to address an editing error in which comments by Mark Chassin, MD, president and CEO of The Joint Commission, were misstated.

The crowded field of hospital rankings, ratings, lists, and grades elicits strong opinions from both the organizations attempting to measure and rate quality, and the organizations that are on the receiving end of letter grades, star designations, and appearances on top-10 lists.

Critics of these proliferating hospital evaluations have a laundry list of complaints: the methods aren't transparent enough, consumers don't pay attention, and the grade, rating, or ranking handed out doesn't match other public reports. But for every critic there is also a proponent, and pointing out statistical shortcomings is a losing battle, says Mark Chassin, MD, FACP, MPP, MPH, president and CEO of The Joint Commission, an Oakbrook Terrace, Illinois–based organization that accredits and certifies more than 20,000 healthcare organizations and programs in the United States.

"The constituencies that love this stuff love this stuff," says Chassin, who is a strong supporter of public reporting and an equally strong critic of the methods used by some of the well-known consumer-oriented evaluations, such as U.S. News & World Report's Best Hospitals list, The Leapfrog Group's Hospital Safety Score, and Healthgrades, a website that measures the performance of physicians, hospitals, and dentists, and issues annual reports identifying the nation's best hospitals in various specialties, and by state.

"The central problem is that the data in all of these reports have fatal flaws that render them invalid as measures of quality," says Chassin. "The research-supported fact that patients don't use these data to choose doctors or hospitals is, in many ways, a good thing, because those patients aren't being misled by faulty data."

But, according to Chassin, the reports are also problematic in another way. "Hospitals, doctors, nurses, and other caregivers devote a lot of time, energy, and resources to improve their numbers to be part of these reports," says Chassin. "The biggest harm is that trying to make invalid metrics look better diverts attention from far more productive improvement efforts."

A 2013 PricewaterhouseCoopers Health Research Institute study found that, of 1,000 individuals surveyed in November 2012, only 21% reported using publicly available evaluations to choose a doctor, and just 16% used them to choose a hospital. Consumers still relied heavily on personal recommendations from family, friends, and physicians. The study also concluded that part of the problem was overload: too much information simply confused consumers.

Measurement chaos

The methodologies these organizations use to determine an order, a grade, or a star rating come under fire from academics, specialty societies, and hospitals alike.

Concerned by the confusion that results when one hospital is named a best hospital on one list but receives an F on another, the Association of American Medical Colleges this year developed 25 guidelines to help hospital leaders gauge the value of public scorecards and similar tools. The guidelines rest on three overarching themes: purpose, transparency, and validity. The AAMC said that no single publicly available hospital performance evaluation met all of its guidelines.

The Healthcare Association of New York State took a similar approach in 2013, releasing a report card on well-known public raters that doled out one to three stars, with three stars being the highest score. Two recipients earned three stars: The Joint Commission, for its Quality Check website, and the Centers for Medicare & Medicaid Services, for its Hospital Compare website. Several well-known ratings organizations received a single star.

But even the HANYS approach shows how tricky it can be to measure quality, because neither Hospital Compare nor Quality Check is a list; rather, both are online measurement tools populated with publicly available data that consumers can use to compare hospitals.

The difference between hospital comparison tools and performance-based lists may be a fine distinction, but it's one that Evan Marks, chief strategy officer at Healthgrades, says is important for consumers and hospitals to understand. Denver-based Healthgrades has been evaluating hospital performance since 1998 and does issue various reports on top hospitals by state, specialty, and other indicators, such as patient experience, patient safety, and clinical quality. But Marks says publicly reported measures and ratings should not be lumped together; each should be weighed on its own terms.

"Healthgrades doesn't give hospitals report cards," says Marks. "We provide consumers information on our website. These kinds of 'best hospital' lists are accolades. I don't think anyone should solely base their decision on where to get care on a 'best hospital' list."

At issue is the lack of standard measurement across these public performance assessments. A quick glance at the best-known raters shows that the methods for attaining an honor vary widely. How can the Cleveland Clinic, for example, be a top-ranked hospital by U.S. News & World Report yet not make The Joint Commission's Top Performers list? It happened this year. Consumer Reports, which began issuing safety scores based on a 100-point scale in 2008, gives the renowned hospital a score of just 46. Leapfrog issues 11 different grades for the hospital, broken out by location, ranging from A to C. Meanwhile, Healthgrades has named the Cleveland Clinic among the best 100 hospitals for cardiac care, cardiac surgery, and patient experience.
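The incommensurability is easy to see with the figures above. Below is a minimal sketch, purely for illustration, that places two of the ratings cited here on a common 0–1 scale; the letter-grade mapping is an assumption invented for this example, not any rater's published methodology.

```python
# Illustrative only: place two of the divergent 2014 ratings cited above
# on a common 0-1 scale. The letter-grade mapping below is an assumption
# invented for this sketch; no rater publishes such a conversion.

def letter_to_score(grade: str) -> float:
    """Map an A-F letter grade onto 0-1, assuming evenly spaced steps."""
    return {"A": 1.0, "B": 0.75, "C": 0.5, "D": 0.25, "F": 0.0}[grade]

# From the article: Consumer Reports scored Cleveland Clinic 46 on a
# 100-point scale; Leapfrog's 11 location-level grades ranged from A to C.
consumer_reports = 46 / 100
leapfrog_low = letter_to_score("C")
leapfrog_high = letter_to_score("A")

print(f"Consumer Reports, normalized: {consumer_reports:.2f}")
print(f"Leapfrog range, normalized:   {leapfrog_low:.2f} to {leapfrog_high:.2f}")
# The same hospital spans 0.46 to 1.00 depending on the rater and on the
# assumed conversion -- the scales simply are not commensurable.
```

Even under this deliberately generous mapping, the same hospital lands anywhere from 0.46 to 1.00, which is exactly the reconciliation problem the AAMC guidelines and Chassin's critique describe.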

Michael Henderson, MD, chief quality officer for Cleveland Clinic Health System, which has more than 1,440 beds at its main campus and more than 4,450 total beds throughout the system, does not outright dismiss these external evaluations of hospital quality, but he says getting an A from Leapfrog or a perfect score from Consumer Reports is not going to change how the hospital operates.

"That data is not useful for driving performance-improvement change," says Henderson. "That depends on much more timely internal data, which inevitably looks different. It's having communication and understanding about the two types of quality-data sources: one to identify gaps and one to drive performance improvement. But what you're driving to improve isn't going to look the same as what's publicly out there."

The main campus of Cleveland Clinic fills out Leapfrog's hospital survey and the one from U.S. News & World Report, but like other large hospital systems, it also produces its own quality report on each of its 11 hospitals, as well as what are called outcomes books on 14 specialties. Both types of internal reports contain detailed outcomes and measures covering the same territory that Hospital Compare, Leapfrog, Healthgrades, Consumer Reports, and U.S. News & World Report track, such as heart attack, heart failure, and surgical care. But Cleveland Clinic's reports contain more information, and the hospital shows both good and bad outcomes.



1 comment on "More Rankings, Less Value?"


Timothy Lantz (8/19/2014 at 7:32 PM)
This article highlights many of the shortcomings of traditional performance assessments: the methods aren't transparent, the results are inconsistent, the data is not standardized. And most importantly, as noted by Michael Henderson, MD, chief quality officer of Cleveland Clinic Health System: "That data is not useful for driving performance-improvement change." While today's healthcare marketplace may demand that healthcare providers participate in these public assessment tools, they do little to help what Dan Varga, MD, chief clinical officer and senior executive vice president of Texas Health Resources calls the other "big, big audience" – which is the providers themselves. Many times, this data is there in your organization, but getting to it, seeing it presented in a meaningful way, and interpreting how to take action on it to move the bar in the right direction is a difficult, if not impossible, task. I'm impressed with the organizations taking charge and creating their own benchmarking tools and reports. For those providers that are aggressively benchmarking, how are they drilling down to root causes and subsequently using their data to generate an immediate impact on costs, quality and outcomes in their organizations?