"Other clinicians argue this approach is neither clinically necessary nor cost-effective and therefore do not routinely screen for DVT in trauma patients," they wrote.
"This clinical uncertainty leads to variability in the use of screening duplex ultrasound, creating variability in rates of DVT identified and reported – a typical example of surveillance bias."
The result, they wrote, is "a clear incentive to avoid diagnosing this event." This surveillance bias "may become extensive, leading to erroneous, undeserved payments for these biased measures of higher or lower quality."
Incentives to improve outcomes "may encourage clinicians to avoid appropriate diagnostic testing to minimize reported complications. Because performance measures do not specify surveillance, outcomes that are not sought ordinarily will not be detected."
The answer, the researchers suggest, is that when measures for reporting are established, they must be grounded in principles from clinical research. "To do otherwise would be reckless and unjust."
And since surveillance is expensive, "key decisions involve who should pay these expenses and whether certain measurements are worth the financial investments."
They identified three steps to improve measurement.
1. Guidelines should specify which patients and events are at risk, the sensitivity and specificity of the screening tests, and the net risks and benefits associated with false-positive results.