Two sets of researchers find that hospitals spending gobs of money to enroll in the National Surgical Quality Improvement Program don't have better patient outcomes than hospitals that don't participate.
War broke out this week in the world of surgical quality.
Two groups of researchers used large data sets to undress NSQIP, the American College of Surgeons' 600-hospital improvement collaborative. In separate JAMA reports, they concluded, in a nutshell, that participation in the program doesn't improve outcomes over time relative to non-NSQIP hospitals.
That's because all hospitals—prompted by numerous financial penalties, incentives, and special hospital networks funded by the Patient Protection and Affordable Care Act—have shown improvements in surgical outcomes all by themselves in recent years.
"NSQIP [National Surgical Quality Improvement Program] officials have made statements, based on their own data over a two-year span of time, that hospitals that engaged in NSQIP showed a measurable improvement in outcomes," says lead author of one of the two JAMA papers, David Etzioni, MD, a colon and rectal surgeon at the Mayo Clinic in Phoenix.
"But if you thump your chest about it, saying that your hospitals are improving because of something you did, you have to at least consider what other hospitals are doing," Etzioni says. "My response is to ask, what exactly will the hospitals be doing differently the day after reading their reports" that reflect poorer outcomes? "NSQIP may be a tool to quality, but it isn't necessarily a straight line to quality."
As they said in the old shoot-'em-ups, them's fightin' words, especially considering the large number of patients in each study.
Nicholas Osborne, MD, of the Center for Healthcare Outcomes and Policy at the University of Michigan in Ann Arbor, who led the second JAMA study, quipped that his and Etzioni's papers "do shake things up quite a bit."
"The bottom line is that we didn't see an association between NSQIP participation and improved outcomes above and beyond what we see in all hospitals over time, because all hospitals have improved surgical outcomes. Mortality did drop in both sets of hospitals, but it didn't drop more in NSQIP hospitals."
Besides, many hospitals might learn they aren't as good as they thought, yet lack the leadership, tools, culture, and programmatic consistency to implement change, Osborne says.
"Garbage In, Garbage Out"
The studies looked at a large number of patients. Etzioni's examined 14 complications, such as sepsis, aspiration pneumonia, or reopening of a surgical site, across 345,357 hospitalizations in 113 academic hospitals, half of them participating in NSQIP.
Osborne's looked at more than 1.2 million patients undergoing surgery at 263 NSQIP hospitals and 526 non-NSQIP hospitals, and analyzed data for serious complications, reoperations, readmissions, and mortality.
Expectedly, NSQIP officials are shooting back.
They've issued several statements saying there are flaws in both studies, which NSQIP director Clifford Ko, MD, characterized in a phone conversation as "garbage in and garbage out." That's because both Osborne's and Etzioni's reports relied on administrative claims data, whose complication and outcome measures, Ko says, are notoriously inaccurate and lack clinical detail.
Claims data is so inaccurate, Ko says, that the Centers for Medicare & Medicaid Services is moving away from using it in pay-for-performance programs such as the physician quality reporting program, and is moving instead to physician specialty registries like NSQIP's.
Claims data is based on hospital coding rules and coders' interpretations of clinician orders based on a search for key words, Ko says. "Let's say someone has had colon surgery, and four or five days after the operation gets a fever. The doctor will write, 'please order CT scan to rule out infection.' The coder sees that word, and under ICD-9 rules, will write 'there's an infection,' unless somewhere in the chart it's specifically written 'no infection,'" which Ko says doctors rarely follow up to do. "That's why there's such a huge false positive rate," he says.
Proud of NSQIP
The ACS is extremely proud of NSQIP, a program that grew out of a VA hospital initiative in 2001. With more than 600 member hospitals, it's now the largest multi-specialty quality collaborative.
Its popularity is based largely on its audit process. NSQIP requires participating hospitals to pay for NSQIP-trained quality reviewers who investigate charts and even call patients at home to record outcomes, something administrative claims data doesn't reflect. Hospitals pay the ACS upwards of $10,000 to participate, and additional amounts to hire data abstractors.
Surgeons and hospitals sign up for certain modules, for example colorectal surgery. With uniform data collection, NSQIP can compare surgeons' outcomes with those from other NSQIP participants. When surgeons see they do relatively poorly, they can refocus their efforts on improvements, learning from those who scored better.
But that's where Osborne and Etzioni say the NSQIP program may be failing its participants.
"We all feel that anything that gets us around a table talking about quality is a good thing," Etzioni says. "But what happens to the surgeon or hospital that gets a negative quality report on some outcome? Ideally, there would be a list of things we could and should be doing, and it would maybe be some things we don't do every time but should.
"Unfortunately, that list doesn't really exist. We don't know the right way to take that report and turn it into a clear pathway for quality improvement success."
Osborne adds that even when they know they need to improve, hospitals often can't, because it's tough to change surgical routines. It requires "complex, sustained, multifaceted interventions, and most hospitals may not have the expertise or resources" to undertake them, he wrote.
"Our interpretation is that without a control group, you can't confidently say that NSQIP participation improves outcomes," he says.
Etzioni says that surgical quality across the country has improved all by itself: doctors are selectively referring patients to specialists at higher-volume centers, and more surgeons are taking pains to reduce time in the operating room.
Berwick Steps Into the Fray
Former CMS Administrator Don Berwick, MD, writing in an accompanying editorial, stepped in to referee. He wrote that authors of both sets of papers "struggle to explain their findings, troubled as any sensible person must be by the suggestion that knowing results would not help caring, committed clinicians and organizations improve their results."
But he disagrees with their conclusion that, as Osborne wrote, one needs a true comparison group to tell if NSQIP hospitals improved more. "In the pursuit of improvement, capturing local individual stories and within-organization trends is as important for learning as is calculating P values for relative differences between groups." He added that some of the hospitals probably did use ACS NSQIP for improvement.
"The most likely explanation for the findings of these two studies is that end-results information, although necessary for improvement, is not sufficient, and that the skills necessary to make effective changes in processes and cultures do not yet pervade US hospitals, to say the least."
This is all terribly depressing. Nobody wins in a war. And common sense tells us sharing information should lead to improvement. But then you hear about yet another friend getting a hospital infection from surgery, and you know that we still have a long way to go to make surgery safer.