First, he says, many of the measures used in both the HEN project and Premier's Quest "are of low validity, with data varying among sites and limited quality control." And, he points out, there's no peer review of the results, and "no public reporting of how accurate they are. There's a reason why peer review is important; it assures us that the science passes the smell test."
Second, the projects lack control groups. "Hospitals all over the country are working on this stuff," with dozens of other projects, Pronovost says, "so that makes it difficult to attribute the results to one particular intervention."
For example, while the Quest program receives no federal funds, a large number of its participating hospitals are also in Premier's HEN, which does get federal money.
In planning the design of the HEN programs, which target 10 types of hospital harms, officials with the Centers for Medicare & Medicaid Services "didn't standardize the collection of data," Pronovost says. "It was like 'let 1,000 flowers bloom.' I recall pushing back on that, saying, 'If you do that, you'll never be able to say how big an impact these really had.' "
Ashish Jha, MD, professor of health policy and management at Harvard School of Public Health, quipped, "I'm fine with letting 1,000 flowers bloom. But we'd love to know which flowers actually bloomed, and which ones didn't."
A huge amount of this is taxpayer money, after all.