
Joint Commission 'Best' List Draws Skepticism from C-Suite

By cclark@healthleadersmedia.com
   September 22, 2011

The Joint Commission last week touted the first issue of its list of the nation's 405 "best" hospitals, identified as those that achieved top scores on a composite of process measures.

But based on my conversations with a few healthcare executives, the TJC project has left many scratching their heads.

During a news briefing to launch the report, commission president Mark Chassin, MD, said he had high confidence in the measures "because they pass a very rigorous set of tests that when hospitals improve on these measures, outcomes for patients get better directly because of that work."

Those hospitals that made the list checked all the right boxes indicating the correct procedures and processes were performed 95% of the time for each and every appropriate patient in 2010.  That's a tall order.

The Joint Commission is taking these metrics so seriously that hospitals falling below 85% on their composite scores will have to raise them. 

Beginning January 1, 2012, organizations cited for compliance scores below 85% will have a period of time to come into compliance before their accreditation is at risk, according to a Joint Commission spokeswoman.
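The arithmetic behind such a composite is simple to sketch. What follows is a minimal, illustrative Python example of one common way to pool a composite process-measure score across measures and compare it against the 95% and 85% cutoffs; the measure names, counts, and pooling method here are assumptions for illustration only, not The Joint Commission's actual methodology.

# Illustrative sketch only -- not The Joint Commission's actual scoring method.
# Assumes the composite is the pooled share of "checked boxes": the number of times
# a required process was performed, divided by the number of eligible patients,
# summed across measures.

# Hypothetical counts: (times process performed, eligible patients) per measure
measures = {
    "aspirin_at_arrival": (198, 200),
    "antibiotic_timing": (145, 150),
    "discharge_instructions": (88, 95),
}

performed = sum(done for done, _ in measures.values())    # 431
eligible = sum(total for _, total in measures.values())    # 445
composite = performed / eligible                           # about 0.969

print(f"Composite score: {composite:.1%}")
print("Meets the 95% 'best hospitals' bar:", composite >= 0.95)
print("Above the 85% accreditation floor:", composite >= 0.85)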

Here's the rub: Some health system officials whose hospitals made the list say they're questioning its worth, in part because they aren't totally on board with these process measures, and in part because the list also includes hospitals not known for high-quality performance in their respective communities.

And hospitals with national reputations for quality, as measured by Thomson Reuters or U.S. News & World Report – such as Johns Hopkins University, Stanford University Medical Center, Mayo Clinic, Cleveland Clinic, Geisinger Medical Center, Duke University Medical Center, Massachusetts General Hospital – are not on the list.

Instead, many hospitals that made the cut are smaller, and many are in rural areas.

That could be because, as one cynical healthcare executive observed last week, they don't have nearly as many patients. It's a lot easier to check all the boxes for 95% or more of patients when you have only 100 beds than when you have 1,000.

And the score does not measure outcomes, such as whether a hospital's patients are more likely to die within 30 days of coming through the door, fall, have a surgical object left inside them, or require readmission.

The other big concern these executives express is that this list puts on a pedestal those hospitals that may be merely "teaching to the test," not measuring what ultimately matters to the patient, which is – again – the outcome. Did the patient get better? 

Chris Van Gorder, American College of Healthcare Executives' immediate past president and CEO of Scripps Health, whose 312-bed flagship Scripps Memorial in La Jolla made the list, is one who isn't jumping with joy.

"I'm pleased to see that Scripps La Jolla made the cut but frankly, I looked at some of the others who did and I don't think all are very good hospitals – the type of hospital where I would refer a friend or loved one," he says.

"But I'm not going to lose any sleep over this list -- being on it or not on it."

Officials with Premier healthcare alliance, a large hospital quality and purchasing group, also have concerns about TJC's list.

“The uncertainty in the methodology of these types of rating programs is an example of the problem with our healthcare system," said Blair Childs, Premier's senior VP of Public Affairs. "There is no true, consistent way to measure top performance in healthcare. Each of these top 'lists' use different measures, different scoring systems, etc. They say they have the best way, but the true best way to define top performance is to utilize measures that providers themselves developed and agreed upon, and then make that information and outcomes transparent for all to see and learn from. This is what we do in our QUEST program and the results have been tremendous.”

Joanne Conroy, MD, chief healthcare officer for the Association of American Medical Colleges – most of whose teaching hospital members did not make the list – has concerns about The Joint Commission's methodology.

First, she says, there is wide disagreement among those who design and vet quality metrics about which ones add value. "There's still debate in Washington about which measures will actually improve outcomes," she says. "If it were easy, we would have agreed by now."

Her second concern is that getting a high score does not allow for "positive deviance." That's when clinicians discover that a particular process just isn't appropriate for a certain group of patients, and "try something else and find they get a better outcome."

Correcting blood glucose levels in patients during operations or giving beta blockers to patients with suspected heart attacks are two such examples. When performed in certain groups of patients, they can result in unintended consequences, Conroy says.

She says that teaching hospitals do take these process measures seriously. "But once we get into composite measures and scoring, it becomes another contest around ratings and you have to wonder if you're really improving care," she says. She thinks that's one reason why so few academic medical centers made the list. 

Asked why academic medical centers were prominently absent from the list, Jerod Loeb, PhD, executive vice president of The Joint Commission's Division of Research, replied in an e-mail:

"Academic health centers are very complex places and, often because of that complexity, routine processes can fall through the proverbial cracks. However, at the end of the day, the processes we measure are based on sound clinical evidence that the given process, if followed (and there are no contraindications), is essential for good patient outcomes. There are – or should be – no excuses."

To be clear, no one is saying process measures aren't important. They have one critical advantage: they're not affected by severity of illness or economic demographics, so they don't require a complex risk-adjustment algorithm. The question is simply whether the thing was done or not.

Also, if these process measures tracked with outcomes, we would expect the same hospitals to score well on both Hospital Compare and the new Joint Commission list. That's generally not the case.

I think hospital executives have another reason to be confused by The Joint Commission's report. The Centers for Medicare & Medicaid Services is signaling, both with reimbursement policies scheduled to take effect in 2014 and with the release of data on its Hospital Compare website, that it plans to aggressively emphasize outcomes, such as 30-day readmission rates for heart attack, pneumonia, and heart failure, and 30-day mortality rates for those same conditions.

Consumers, employers, payers, and providers alike are in the very early stages of understanding how to measure quality. Patients are just beginning to realize that their decisions can be based on real metrics rather than the beautiful artwork in the hospital lobby or the place where a neighbor volunteers.

The Robert Wood Johnson Foundation now counts 224 reputable state, federal, and consumer-group reports that rank hospitals in a variety of ways. If healthcare executives are confused by all of this, imagine how confusing it is for the person who ultimately has to choose where to seek care: the patient.
