
More Rankings, Less Value?

By Jacqueline Fellows  |  August 12, 2014

Many third-party organizations rate hospital quality, but healthcare leaders are finding limited value in the plethora of grades, stars, and rankings.

This article first appeared in the July/August 2014 issue of HealthLeaders magazine.

Clarification: This story has been updated to address an editing error in which comments by Mark Chassin, MD, president and CEO of The Joint Commission, were misstated.

The crowded field of hospital rankings, ratings, lists, and grades elicits strong opinions from both the organizations attempting to measure and rate quality, and the organizations that are on the receiving end of letter grades, star designations, and appearances on top-10 lists.

Critics of these proliferating hospital evaluations have a laundry list of complaints: The methods aren't transparent enough, consumers don't pay attention, and the grade, rating, or ranking given out doesn't match up with other public reports. But for every critic, there is also a proponent, and pointing out statistical shortcomings is a losing battle, says Mark Chassin, MD, FACP, MPP, MPH, president and CEO of The Joint Commission, an Oakbrook Terrace, Illinois–based organization that accredits and certifies more than 20,000 healthcare organizations and programs in the United States.

"The constituencies that love this stuff love this stuff," says Chassin, who is a strong supporter of public reporting and an equally strong critic of the methods used by some of the well-known consumer-oriented evaluations, such as U.S. News & World Report's Best Hospitals list, The Leapfrog Group's Hospital Safety Score, and Healthgrades, a website that measures the performance of physicians, hospitals, and dentists, and issues annual reports identifying the nation's best hospitals in various specialties, and by state.

"The central problem is that the data in all of these reports have fatal flaws that render them invalid as measures of quality," says Chassin. "The research-supported fact that patients don't use these data to choose doctors or hospitals is, in many ways, a good thing, because those patients aren't being misled by faulty data."

But, according to Chassin, the reports are also problematic in another way. "Hospitals, doctors, nurses, and other caregivers devote a lot of time, energy, and resources to improve their numbers to be part of these reports," says Chassin. "The biggest harm is that trying to make invalid metrics look better diverts attention from far more productive improvement efforts."

A PricewaterhouseCoopers Health Research Institute study in 2013 showed that out of the 1,000 individuals surveyed in November 2012, only 21% reported using the publicly available evaluations to choose a doctor; 16% used them to choose a hospital. Consumers still relied heavily on personal recommendations from family, friends, and physicians. The study also concluded that part of the problem was that too much information confused consumers.

Measurement chaos

The methodologies organizations use to determine the order, grade, or star ratings come under fire from academics, specialty societies, and hospitals.

Concerned by the confusion that results when a hospital is named a best hospital on one list but receives an F on another, the Association of American Medical Colleges this year developed 25 guidelines to help hospital leaders gauge the value of public scorecards and the like. The guidelines are based on three overarching themes: purpose, transparency, and validity. The AAMC said that no single publicly available hospital performance evaluation met all of its guidelines.

The Healthcare Association of New York State took a similar approach in 2013, releasing a report card on well-known public raters that awarded one to three stars, with three being the highest score. Two organizations earned three stars—The Joint Commission, for its Quality Check website, and the Centers for Medicare & Medicaid Services, for its Hospital Compare website—while several well-known ratings organizations received a single star.

But even the HANYS approach shows how tricky it can be to measure quality, because neither Hospital Compare nor Quality Check is a list; rather, both are online measurement tools populated with publicly available data that consumers can use to compare hospitals.

The difference between hospital comparison tools and performance-based lists may be a fine point of distinction, but it's one that Evan Marks, chief strategy officer at Healthgrades, says is important for consumers and hospitals to understand. Denver-based Healthgrades has been evaluating hospital performance since 1998 and does issue various reports on top hospitals by state, specialty, and other indicators, such as patient experience, patient safety, and clinical quality. But Marks says publicly reported measures and ratings should not be lumped together; each should be considered on its own terms.

"Healthgrades doesn't give hospitals report cards," says Marks. "We provide consumers information on our website. These kinds of 'best hospital' lists are accolades. I don't think anyone should solely base their decision on where to get care on a 'best hospital' list."

At issue is the lack of standard measurement across these public performance assessments. A quick glance at the best-known raters shows that the methods for attaining an honor vary widely. How can the Cleveland Clinic, for example, be a top-ranked hospital by U.S. News & World Report but not get named to The Joint Commission's Top Performers list? It happened this year. And Consumer Reports, which began issuing safety scores based on a 100-point scale in 2008, gives the renowned hospital a score of just 46. Leapfrog issues 11 different grades for the hospital, broken out by location, with grades ranging from A to C. Meanwhile, Healthgrades has named the Cleveland Clinic among the best 100 hospitals for cardiac care, cardiac surgery, and patient experience.

Michael Henderson, MD, chief quality officer for Cleveland Clinic Health System, which has more than 1,440 beds at its main campus and more than 4,450 total beds throughout the system, does not outright dismiss these external evaluations of hospital quality, but he says getting an A from Leapfrog or a perfect score from Consumer Reports is not going to change how the hospital operates.

"That data is not useful for driving performance-improvement change," says Henderson. "That depends on much more timely internal data, which inevitably looks different. It's having communication and understanding about the two types of quality-data sources: one to identify gaps and one to drive performance improvement. But what you're driving to improve isn't going to look the same as what's publicly out there."

The main campus of Cleveland Clinic fills out Leapfrog's hospital survey and the one from U.S. News & World Report, but like other large hospital systems, it also produces its own quality report on each of its 11 hospitals, as well as what are called outcomes books on 14 specialties. Both types of internal reports contain detailed outcomes and measures on the same types of information that Hospital Compare, Leapfrog, Healthgrades, Consumer Reports, and U.S. News & World Report use, such as heart attack, heart failure, and surgical care. But there is more information in Cleveland Clinic's reports. And the hospital shows both good and bad outcomes.

For example, at Cleveland Clinic's main campus, the hospital quality report shows the rate of central line–associated bloodstream infections in the ICU by quarter from mid-2011 to December 2013. Its goal is zero CLABSI events, which it has not been able to accomplish over that period, though the rate of CLABSIs has continued to fall after a spike in 2012. It came close to the goal of zero in fall 2013, when the rate fell to between 0.5 and 1.0. That's still better than the national average, which is all that Hospital Compare and Healthgrades report. Leapfrog represents Cleveland Clinic's ICU CLABSI events with two green bars out of four, meaning "some progress."

The hospital also lists information that is not easily found in the public domain, such as falls with injuries. Cleveland Clinic reports 28 patient falls with injuries during the 2013 calendar year. Its goal is zero patient injuries due to falls, and as of March 2014, a new protocol requires making frequent check-ins at the bedside, providing nonskid socks, offering bathroom assistance, and making sure the call light is within reach of patients.

The commitment to being transparent, says Henderson, comes directly from Cleveland Clinic President and CEO Delos "Toby" Cosgrove.

"Toby took the lead of saying, 'I want you guys to put our outcomes out there: good, bad or indifferent,' " says Henderson.

Most organizations that evaluate hospital quality for the public rely on Hospital Compare, which is maintained by CMS. It's a primary source of data collection because it is one of the largest repositories of information on the nation's Medicare-certified hospitals. CMS, The Joint Commission, the National Quality Forum, and the Agency for Healthcare Research and Quality all had a hand in picking the more than 100 measures on quality, safety, and patient experience available for the public to view on Hospital Compare. Medicare Provider Analysis and Review (MedPAR) data, information from specialty societies, state departments of health, state hospital associations, and the Centers for Disease Control and Prevention also are popular sources of data for organizations evaluating hospital care.

The variations in the results stem from the different combinations of data each organization uses, the different weights each places on individual measures to produce a composite score the public can understand, and the fact that some raters, such as Leapfrog and U.S. News & World Report, rely on self-reported or proprietary surveys.
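Why can the same hospital land in different places on different lists? A minimal, hypothetical sketch of weighted composite scoring makes the point: the hospitals, measures, and weights below are invented for illustration and reflect no rater's actual methodology, yet changing only the weights flips which hospital comes out on top.

```python
# Two made-up hospitals scored on three made-up measures (higher is better).
measures = {
    "Hospital A": {"infection_rate": 0.90, "readmissions": 0.70, "patient_experience": 0.95},
    "Hospital B": {"infection_rate": 0.80, "readmissions": 0.90, "patient_experience": 0.75},
}

# Two hypothetical raters weight the same measures differently.
rater_weights = {
    "Rater 1": {"infection_rate": 0.6, "readmissions": 0.2, "patient_experience": 0.2},
    "Rater 2": {"infection_rate": 0.2, "readmissions": 0.6, "patient_experience": 0.2},
}

def composite(scores: dict, weights: dict) -> float:
    """Weighted average of normalized measure scores."""
    return sum(scores[m] * w for m, w in weights.items())

for rater, weights in rater_weights.items():
    ranked = sorted(measures, key=lambda h: composite(measures[h], weights), reverse=True)
    print(rater, "ranks:", ranked)
# Rater 1 puts Hospital A first; Rater 2 puts Hospital B first.
```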

Transparency in methodology has become increasingly important, and most rating organizations detail step-by-step how they determine their rankings; but for patients and hospitals, the scores and methodology can add up to confusion.

"Consumers want to know one thing," says Marks, noting that while clinicians and academics may be interested in the details, the patient's attitude is "Don't bore me with how you risk-adjust the data; just tell me where to go."

Emphasizing internal scorecards

Similar to Cleveland Clinic's focus on being transparent about its quality performance, Arlington-based Texas Health Resources unveiled its own quality report for 14 of its wholly owned hospitals. THR operates 11 other hospitals in the Dallas-Fort Worth metroplex, either through affiliation or joint venture, but that data isn't available for scrutiny because THR doesn't own it.

THR's new quality report includes 300 measures on 16 indicators, many of which are the same ones found on the CMS Hospital Compare website and are part of the Joint Commission's core measures, which hospitals have to report on to receive Joint Commission accreditation. Using external organizations' standards was intentional, says Dan Varga, MD, chief clinical officer and senior executive vice president of THR. He says self-reported data can be viewed as suspect, and it's better to use measures and indicators that are already vetted through independent entities trusted by the healthcare industry.

"As opposed to THR creating a definition for what we think a healthcare-acquired condition is, Medicare has a definition, for example; it is publicly available, and that indicator has gone through the National Quality Forum for approval," says Varga. "What we're saying is THR won't invent its own indicator, with its own rules and own methodologies. We're going to look at a national consensus definition, and we'll be very transparent with it so people can understand where the opportunities to improve are or why a score is at a particular level."

THR's quality report is modeled after the one used by Louisville, Kentucky–based Norton Healthcare, where Varga was chief medical officer in 2005. That's the first year Norton published its quality report, and Varga had a front-row seat to its development. Initially, Norton—which includes five hospitals, 12 immediate care centers, and more than 90 physician practice locations—included approximately 200 indicators; now that number is approximately 800, says Kathleen Exline, system vice president of performance excellence and care continuum for Norton.

"This provides an internal benchmark that we use to either sustain or enhance excellent results or improve our process and outcomes where needed," she says. "We believe transparency is the right thing to do, even when the numbers aren't what we would like for them to be."

Exline also says the information in the public quality report is the same information that clinical and administrative staff start with before it is broken down to the service-line and unit level.

"It is the same data," says Exline. "For example, we will slice the data to show nursing unit performance, whereas on the website, we'll show hospital performance. The nurses look at unit-level and patient-level data for root-cause analysis. It's more specific because of our desire to find out the failure point in the process."

That is what Varga plans to do with frontline clinical staff at THR, and its usefulness to doctors is one of the reasons THR decided to publish its own quality report.

"It is an important thing to tell the community how we perform, but the other big, big audience is us," says Varga. "When we're able to show physicians data that's endorsed and standardized, they'll embrace that more than they will a five-star or three-star rating."

THR's push to issue its own quality report comes despite generally positive public rankings. The hospital system and its individual hospitals tend to score well across the external rating organizations. THR doesn't always pay the licensing fees that some organizations, such as Leapfrog, U.S. News & World Report, and Healthgrades, require before a hospital can highlight the honor online or in advertising. But Varga explains that those calculations and conclusions aren't transparent enough for THR.

"Healthgrades responds to the public's desire to simplify down to a score or grade, and when we ask them, 'How did you come up with that score or grade?' it basically exists in a black box; they consider it proprietary," says Varga. "So we can't really understand in many cases how they calculate the outcome they publish. If you're going to improve, which is one of the things we like about being transparent with this information, you have to understand how things are measured."

While Henderson from Cleveland Clinic agrees that the methodology isn't good for driving quality improvement, he points out that external reports serve a purpose. "We love U.S. News," says Henderson. "It's good in the sense that it profiles where to go if you've got the sickest of the sick patients. That is their stated goal."

Varga says he focuses on the internal operations and initiatives at THR.

"For us, we don't pay a whole lot of attention [to external scorecards], to be quite honest," says Varga. "We don't, at THR, teach to the test, or teach to multiple, different, specific tests. We try to set an incredibly high bar, which is the THR way, and then we go out and pursue it. If that gets us a five-star [rating], then we're excited about it, that's great, but it's not an intentional pursuit of ours."

THR's public quality report also isn't really new, at least among its employees and hospital leadership. Varga says the public report is an expanded version of the measures THR has been collecting for years. The report aggregates internal data and Hospital Compare data and puts it into a format the public can easily understand. Following the example set by Cleveland Clinic and Norton, THR says it will report the good and the bad, as well as compare its hospitals side by side and against state and national averages when applicable.

The value of public perception

Despite the lack of consensus on their usefulness, external decrees of hospital quality and performance continue to flourish. Consumers might not be using them to make healthcare decisions, but they are reading the results of these reviews.

Ben Harder, managing editor and director of U.S. News & World Report's healthcare analysis team, says the publisher's online hospital rankings garner millions of user views each month.

"They don't come in, look at a ranking, then leave," says Harder. "They're much more engaged—several pages per visit."

Harder also says that viewership of the hospital rankings and related health advice content has grown by 70% from last year, which is a greater increase in viewership than that of U.S. News & World Report's college rankings, its oldest and still most-read list.

But the list of best hospitals that U.S. News & World Report puts out is an annual punching bag for critics because of its methodology.

One such critic is The Joint Commission's Chassin. "I have problems with all the other measurement systems whether it's Healthgrades, Leapfrog, or U.S. News," he says.

In 2011, The Joint Commission began issuing its own scorecard of sorts, an annual Top Performer on Key Quality Measures list that recognizes hospitals that achieve 95% on its accountability measures taken together and 95% on each measure individually. They are the same measures for which hospitals must achieve an 85% performance score just to be accredited. Chassin says the science behind the list is strong because it is evidence-based and drawn from clinical data that show the severity of each case.
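A simplified sketch of the threshold logic described above, assuming the composite is a plain average of per-measure compliance rates; the measure names and rates are invented, and this is not The Joint Commission's full methodology.

```python
ACCREDITATION_FLOOR = 0.85  # performance score required just to be accredited
TOP_PERFORMER_BAR = 0.95    # required both overall and on every individual measure

def evaluate(measure_rates: dict) -> str:
    """Classify a hospital from its accountability-measure compliance rates."""
    composite = sum(measure_rates.values()) / len(measure_rates)
    if composite >= TOP_PERFORMER_BAR and all(r >= TOP_PERFORMER_BAR for r in measure_rates.values()):
        return "Top Performer"
    if composite >= ACCREDITATION_FLOOR:
        return "Meets accreditation threshold"
    return "Below accreditation threshold"

print(evaluate({"heart_attack_care": 0.99, "surgical_care": 0.97, "pneumonia_care": 0.96}))
# -> Top Performer
print(evaluate({"heart_attack_care": 0.99, "surgical_care": 0.97, "pneumonia_care": 0.93}))
# -> Meets accreditation threshold (one measure falls below 95%)
```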

While U.S. News does derive its rankings in part from publicly available, risk-adjusted data that is also used in other evaluations, the rankings also rely on the reputation a hospital has among surveyed physicians. The list ranks hospitals in 16 specialties; four of them—ophthalmology, psychiatry, rehabilitation, and rheumatology—rely solely on the reputation score from surveyed physicians, and the reputation score is a component of the rankings in the other 12.
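As a rough illustration of how a survey-based reputation score can stand alone in some specialties but be blended with outcome scores in others, consider the sketch below; the weights and numbers are invented and are not U.S. News & World Report's actual formula.

```python
def specialty_score(reputation: float, outcomes: float, reputation_only: bool) -> float:
    """Combine 0-100 reputation and outcome scores for one specialty (made-up blend)."""
    if reputation_only:
        # e.g., a specialty scored solely on physician-survey reputation
        return reputation
    # hypothetical blend: reputation as one component alongside risk-adjusted outcomes
    return 0.3 * reputation + 0.7 * outcomes

print(specialty_score(reputation=82.0, outcomes=0.0, reputation_only=True))    # 82.0
print(specialty_score(reputation=82.0, outcomes=74.0, reputation_only=False))  # 76.4
```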

Chad Smolinski, senior vice president for U.S. News & World Report, says he thinks critics sometimes lose sight of the fact that its hospital rankings evaluate care for the sickest patients.

While U.S. News & World Report's Best Hospitals list receives poor marks from some rankings critics, many hospitals nonetheless consider it a badge of honor.

DeAnn Marshall, senior vice president and chief development and marketing officer for Children's Hospital Los Angeles, used CHLA's recognition by U.S. News & World Report as one of the top 10 children's hospitals in the country to launch its major rebranding campaign in 2011.

"Within Los Angeles, everyone wants the best of everything. Every city has a culture, and that happens to be our culture," Marshall says. "We are keying into the mind-set, and what we did with our brand platform is we equated the best of L.A. with the best children's hospital."

CHLA, which has 347 active beds overall and treats more than 104,000 children annually, has a long-running history with U.S. News & World Report. The children's hospital has been ranked on the publisher's top children's hospitals list since 1990, and has been on its exclusive honor roll for six consecutive years, 2009–2014. For Marshall, that was enough to put the familiar blue badge on nearly every piece of advertising during the relaunch of the CHLA brand in 2011.

"From our perspective, it's a brand moniker that is important to our hospital, especially given the fact that we are on the honor roll, and we are the only children's hospital in the west to have that designation," says Marshall. "It's important to us."

CHLA has also received honors from other groups that evaluate hospitals. The Leapfrog Group in 2013 named CHLA as a Top Hospital. CHLA also received a pediatric safety award from Healthgrades in 2010. But the only designation other than the U.S. News & World Report ranking that Marshall promotes widely is its Magnet Recognition from the American Nurses Credentialing Center.

"I could literally put every single one of those designations on everything we send out," says Marshall, who explains that while it is an honor to be a Leapfrog Top Hospital, the recognition is like "insider baseball."

"Those of us who work in healthcare understand what those designations are; I don't think consumers generally do."

Even though CHLA has been ranked highly by U.S. News & World Report for nearly 25 years, it didn't promote the ranking prior to the brand's relaunch. For Marshall, the pre- and post-rebrand metrics show clearly that using the publication's ranking on hospital advertising is effective for its patient population.

"We look at hits to our website," she says. "For example in 2010, we had 542,000 visitors. In 2013, we had 1.4 million."

Marshall also says the hospital has seen a marked increase in online donations since the rebranding campaign.

"In 2010, we were at $1.1 million, and as of 2013, we were at $1.7 million," she says.

The U.S. News & World Report ranking is so important to CHLA that Marshall says a 30-member executive team that includes administrative and physician leaders reviews the survey.

"It's a very detailed process that our hospital goes through on an annual basis, and as an executive leadership team, we focus on that questionnaire and take it very seriously."

Fighting for transparency and more measures

The Leapfrog Group, an employer-based coalition and advocacy group based in Washington, D.C., is another hospital quality reporting organization that is no darling of critics but has found growing support among hospitals.

Since 2001, it has been publishing the results of surveys it developed to gauge safety at hospitals. In 2012, it began publishing a Hospital Safety Score, represented by a letter grade of A–F. The scores are based on a composite of 28 publicly reported measures, such as hospital-acquired conditions, ICU physician staffing, and patient safety indicators. The primary sources are CMS Hospital Compare and data from Leapfrog's own hospital survey, which it administers annually.
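To see how a composite built from many measures can be reduced to a single letter grade, here is a hypothetical sketch; the cut points are invented and do not reflect Leapfrog's actual scoring.

```python
def letter_grade(composite: float) -> str:
    """Map a 0-100 composite safety score to a letter grade using made-up cut points."""
    for threshold, grade in [(85, "A"), (75, "B"), (65, "C"), (55, "D")]:
        if composite >= threshold:
            return grade
    return "F"

for score in (92, 78, 60, 40):
    print(score, "->", letter_grade(score))
# 92 -> A, 78 -> B, 60 -> D, 40 -> F
```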

Leapfrog's purpose is to "shine a light on hospital safety and alert consumers and give them tools to protect themselves," says President and CEO Leah Binder. She believes that hospitals and other healthcare providers should supply more data beyond what the government or an accreditation organization requires so that consumers can see a full picture of a hospital's safety and quality track record.

Hospital participation in Leapfrog's surveys is voluntary—but Leapfrog attempts to measure hospital quality even for nonparticipating hospitals.

Rexburg, Idaho–based Madison Memorial Hospital, a 69-staffed-bed hospital, was rated by Leapfrog as one of the 25 worst hospitals in the country in 2012. Madison Memorial officials publicly voiced their disagreement with the score, saying the hospital was penalized for not participating in the survey.

When Madison Memorial officials finally did fill out the survey, the hospital's safety score rose three letter grades to a B.

Nolan Bybee, director of risk management and compliance at Madison Memorial, says the only thing that changed between 2012 and 2013 was filling out the survey.

"When we were labeled one of the worst hospitals in the U.S., they were using [data from] two years prior," says Bybee. "When we took the [Leapfrog] survey, we were 100% compliant. If you can go from one of the 25 worst to a B in less than a year, then there's something wrong with
the survey."

The bad press that resulted from being named as one of the worst hospitals is the primary reason Bybee says Madison Memorial continues to fill out Leapfrog's survey, though he's not sure how much longer he'll continue to do it.

But some hospitals that received failing grades from Leapfrog and have since turned their scores around are grateful.

Chicago-based Norwegian American Hospital also landed on Leapfrog's worst 25 hospitals list in 2012. Back then, the 200-licensed-bed hospital on the poor northwest side of Chicago was not at the top of any organization's list. Its Joint Commission accreditation was in jeopardy, CMS was monitoring the hospital for patient safety issues prior to 2010, and its finances and relationships with physicians were dismal.

President and CEO José R. Sánchez, LMSW, LCSW, has helped turn around the hospital to be a safer place for patients, and credits Leapfrog for helping the hospital focus on patient safety.

"Leapfrog assists us to reevaluate ourselves on an ongoing basis," says Sanchez. "It is important for this community to see we were ranked by an outside, objective entity."

Norwegian American went from an F in 2011 to a B in 2013 and this year. Sanchez says filling out Leapfrog's hospital survey takes time—a lot of time. Binder estimates hospitals spend anywhere from 40 to 80 hours completing its 65-page survey. Sanchez says it takes more than 80 hours.

"I do expect at some time we'll get an A, as we continue to progress on quality care and patient safety," says Sanchez. "It is important."

Whether hospital leaders are filling out Leapfrog's hospital survey because they're afraid of the bad press that could accompany a low score or, like Sanchez, they view it as an opportunity to identify areas to improve, the bottom line is more hospitals are participating voluntarily.

"We have had increases steadily over time, but we had a record year in 2013," says Binder. "The Leapfrog Hospital Survey has 1,439 hospitals participating, which is an all-time record. In 2012, we had around 1,200 hospitals."

That's still a far cry from the 5,723 hospitals in the United States, and the more than 4,500 that report to CMS. The survey's incomplete participation is a shortfall, but it does not stop Binder from publicly advocating for the need to improve patient safety.

She says the real shortcoming is the CMS data itself. Leapfrog, The Joint Commission, Healthgrades, Consumer Reports, and other raters rely on the CMS data published on Hospital Compare, but Binder is exasperated with the data's limits.

"On Hospital Compare, 90% of hospitals are rated 'average' on every single one of the things they measure," she says. "That's not going to give consumers what they need to know."

So Binder attempts to go further, using standards Leapfrog developed to measure patient safety and quality at hospitals. ICU physician staffing, evidence-based hospital referral, and computerized prescriber order entry are all Leapfrog metrics.

Binder says ICU physician staffing is intended to reduce errors in the ICU; EBHR use is aimed at lowering adverse outcomes for high-risk surgery patients; and CPOE use at a hospital is intended to reduce medication errors, among the most common mistakes made in hospitals.

But not all hospitals have these metrics in place, and if they don't fill out Leapfrog's hospital survey, Leapfrog has limited ability to verify these activities. Binder admits the method isn't perfect, but stands by it as a "pretty good" indicator of hospital safety, particularly its measurement for medication errors, CPOE.

"Leapfrog's standard on CPOE is quite rigorous," Binder says, explaining that for hospitals to meet Leapfrog's threshold for CPOE, they must take a two-hour Web-based test.

"We give the hospital a set of dummy orders for a set of dummy patients, then we ask them to enter those orders into their CPOE system, and then report back to us what the system does. We're testing the decision-support system underneath the CPOE system. It's a proxy measure. We don't have the actual number of errors made in a hospital."

Neither does anyone else. The Joint Commission does not include medication errors as part of its core measure sets available to hospitals. Chassin says that's because errors are all self-reported, and there is no current process by which they can be validated and therefore "would pass our accountability measure test."

Cleveland Clinic's Henderson, who serves on the leadership advisory council for The Joint Commission's Center for Transforming Healthcare, says that setting up a national system for reporting medication errors is a huge undertaking. "Medication errors have not risen into that space yet because of the difficulty of having a standardized, agreed-upon measure of approach," says Henderson. "We all struggle."

This is a sticking point for Binder and other healthcare leaders who point out not only the varying methodologies of report cards, lists, and rankings, but also key measures missing from these reports.

"There are lots of limitations," says Binder. "I'll be the first to tell you that. I spend 90% of my life trying to get better data and better measures. But we have some pretty good measures and some pretty good data. It's not perfect, but we have some pretty good understanding now about how hospitals are doing. And 'pretty good' is a lot when you're entrusting your life in a hospital."

Ultimately, however, the usefulness of consumer-oriented scorecards is questionable, and not just because the methodologies for calculating scores vary. It's because going to an acute care hospital is often not a planned event, explains Barclay Berdan, FACHE, chief operating officer and senior executive vice president at Texas Health Resources.

"More than half of the patients that are in the beds of our hospitals right now came through the emergency room," says Berdan. "I think people who are going to be using hospitals or doctors electively will find a quality and safety report like the one we're putting up can be very useful. But do I think that a patient who is worried they're experiencing signs of a stroke is going to take the time to pull out their laptop or smartphone and advise the ambulance driver they ought to go here instead of there because there is a better score? I don't think so. What I think it will do over time is that EMS providers will pay attention to better outcomes."

Clinical leadership at hospitals doesn't view public scorecards the way that ratings organizations would like, Berdan says. But hospitals' participation, whether by filling out Leapfrog's surveys or by using the U.S. News & World Report badge as a marketing element, shows these popular scorecards aren't completely ignored by hospital leadership, particularly because hospital boards notice them.

"They certainly pay attention to the various reports that come out in the media and ask us a lot of questions," says Berdan. "It's given us the opportunity to help educate the board about the various natures of these various ranking systems, and really I think is, in part, the impetus for management to propose—as well as the quality committee of the board to embrace—creating our own report that we'll look to first."

Reprint HLR0814-2

Jacqueline Fellows is a contributing writer at HealthLeaders Media.
