
Stop Using In-Hospital Mortality Rates to Judge Quality

By cclark@healthleadersmedia.com  |  January 05, 2012

How should hospitals count patient deaths to measure quality?

Medicare now counts only deaths that occur within 30 days of admission for three disease categories, and posts those scores on Hospital Compare. It will soon use the same numbers to determine Value-Based Purchasing incentive payments starting in 2014.

A death on day 29, and the hospital might look bad. A death on day 31, and the organization shines. Still, 30 days seems a reasonable cutoff, since most deaths among heart attack, congestive heart failure, and pneumonia patients happen within that window.

But a paper published Tuesday in the Annals of Internal Medicine, which examined nearly 3.5 million Medicare admissions, highlights a stunning weakness in the method by which many, if not most, other payers measure quality through fatal outcomes.

It turns out they don't look at 30-day mortality at all. Instead, health plans and some states that measure quality by fatal outcomes count only the patients who died within the hospital's walls during their stay. That number has dropped rapidly as mean length of stay has declined significantly across the country, by 25% or more in recent years.

And as you can probably surmise, hospitals with policies, practices, and procedures that shorten lengths of stay (or that are quicker to transfer or discharge certain patients with poor prognoses) record fewer in-hospital deaths than hospitals that keep patients longer. And yes, those organizations look better when their death rates are compared with those of hospitals that keep patients longer.
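To see the mechanism concretely, consider a deliberately simplified, hypothetical example (the numbers and the code are illustrative, not drawn from the study): two hospitals with identical patients and identical deaths, differing only in how quickly they discharge.

```python
# Hypothetical illustration: two hospitals whose patients have identical
# outcomes, but whose discharge practices differ. Each patient record is
# (day_of_death or None, day_of_discharge), counted from admission.

def mortality_rates(patients):
    """Return (in_hospital_rate, thirty_day_rate) for a cohort."""
    n = len(patients)
    # In-hospital measure: only deaths on or before the discharge day count.
    in_hospital = sum(
        1 for death, discharge in patients
        if death is not None and death <= discharge
    )
    # 30-day measure: any death within 30 days counts, wherever it occurs.
    thirty_day = sum(
        1 for death, _ in patients
        if death is not None and death <= 30
    )
    return in_hospital / n, thirty_day / n

# Ten patients each; in both cohorts, two patients die on day 7.
hospital_a = [(7, 10), (7, 10)] + [(None, 10)] * 8   # keeps patients ~10 days
hospital_b = [(7, 4), (7, 4)] + [(None, 4)] * 8      # discharges by day 4

print(mortality_rates(hospital_a))  # (0.2, 0.2): both measures agree
print(mortality_rates(hospital_b))  # (0.0, 0.2): same deaths, after discharge
```

Hospital B's care is no better, yet its in-hospital mortality is zero simply because its patients died at home or in post-acute care. The 30-day rate is identical for both, which is the paper's point.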

The paper's authors, Elizabeth Drye, MD, and Harlan Krumholz, MD, of the Yale-New Haven Hospital Center for Outcomes Research and Evaluation, and their colleagues are the experts on this sort of thing. They suggest that this huge variability in hospital length of stay produces wide divergence between in-hospital and 30-day mortality rates across the country.

And that, they say, presents a troublesome problem for measuring quality in death.

Instead, they argue, hospitals and payers should use the same metric: 30-day mortality, wherever the patient dies. That period uniformly captures most of the patients with these three illnesses who will die after, and perhaps as a function of, hospital care.

"Our results argue against using in-hospital measures," the authors write, adding that "in-hospital measures favor hospitals with shorter mean LOS (length of stay) and transfer rates."

In a phone interview, I asked lead author Drye whether hospitals might prefer scoring in-hospital deaths for non-Medicare patients. Those patients tend to be younger, and hospitals can deftly move them home or to hospice or skilled nursing facilities, perhaps hiding poor quality in cases where death might have been prevented with better care.

"When you measure quality, you have to worry that hospitals might, as we call it, 'game the system.' And some might behave that way," she replied. "But that's not so much our concern," she said.

"Our main point is that you should always be counting mortality within the same number of days for every patient. You should not be looking at in-hospital mortality because lengths of stay (at hospitals around the country) are so variable."

Drye tells me the authors hope their paper and research will inform the National Quality Forum, which has endorsed this and other quality measures that rely on in-hospital mortality, and urge it to "reassess."

"Right now, the NQF, which is supposed to be the keeper of good quality measures, is approving both kinds of measures," Drye says. "I would be happier if they put more weight on this particular downside of this hospital measure," although for some other conditions or procedures in which patients almost always die in a hospital, using in-hospital mortality statistics remains appropriate.

Drye and Krumholz found that mean length of stay varied widely across individual hospitals: from 2.3 to 3.7 days for patients with acute myocardial infarction, 3.5 to 11.9 days for those with congestive heart failure, and 3.8 to 14.8 days for those with pneumonia.

The researchers also found that, over time, patients are dying inside a hospital's walls much less often: the rate fell from 8.2% to 4.5% between 1993-1994 and 2005-2006.

Drye notes that "the good thing about using 30-day measures is that it encourages the hospital to pay attention to what will happen to the patient afterwards. Hospitals have a big role in setting up that care: They decide where the patient is going to go, arrange follow-up appointments, communicate with the primary care provider and specialists like cardiologists, reconcile medications, and make sure the patient understands their plan."

"It's reasonable to say that hospitals have some responsibility for what happens when the patient leaves."

That responsibility will only intensify as accountable care models take hold and as regulations implementing various sections of the Affordable Care Act incentivize the kind of after-care attention that improves patients' chances of survival.

"As we build more of these outcome measures," Drye added, "I think we'll see not just 30 days, but we'll be looking at other time frames as well, that will encourage the healthcare system to work more like a [cohesive] system."

Using 30-day mortality rates to compare hospital care for all non-Medicare patients with heart attack, congestive heart failure, and pneumonia seems fair, and it avoids the variability that can disguise quality problems at hospitals with quicker discharge practices.

Not only will consumers get a more honest assessment of how well their hospitals care for them, but hospitals will get a better idea of what they need to improve, both while the patient is there and wherever the patient ends up, at least for 30 days.
