Transparency in methodology and at the case, patient, and service line levels is essential to guide performance improvement initiatives.
Depending on context, the word transparency has different meanings. Outside the scope of medicine, it conveys a sense of invisibility. In health care – which is built on the scientific method, requiring evidence for decision-making – it bespeaks the ability to see into areas that were previously obscured. For administrators and physicians to make changes that improve outcomes, they need not only data but also transparent compare groups that ensure apples-to-apples comparisons. Armed with this transparency, they have the evidence they need to lead change that results in process improvement.
Transparency at hospital and service line level
A common rebuttal I hear when talking to physicians about data and compare groups is, “My patients are sicker, so expecting similar outcomes without appropriate adjustments in compare groups is not meaningful or helpful.” In many cases, this is a valid argument. A hospital that accepts a large number of high-acuity cases shouldn’t be compared, on some outcomes, to a hospital that appropriately transfers its high-acuity cases to a referral center.
When looking at inpatient mortality between hospitals, the patient populations can be very different. Even using filters such as “academic medical center” as a compare group is not transparent enough without identifying who is in the compare group, because not all teaching hospitals are alike. Some have large transfer volumes; some are Level 1 trauma centers; some do bone marrow transplants; and procedure mix and volume vary. Physicians want transparent compare groups that name the hospitals included, so when they see differences in their outcomes, they know the comparison is valid.
Transparency is not only important at the hospital level but also at the service line level. How one hospital defines orthopedics may vary from another hospital’s definition. Spine cases can be handled by orthopedics or neurosurgery, and in some cases, both. Comparing orthopedic outcomes from a hospital that doesn’t include spine cases to a hospital that does isn’t an appropriate comparison. And in cardiology, for example, ensuring a comparable mix of invasive and noninvasive cases is important.
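The idea of a named, definition-matched compare group can be sketched in a few lines. This is a hypothetical illustration only: the hospital names, the `includes_spine_in_ortho` field, and the matching rule are invented for the sake of the example, not drawn from any real benchmarking system.

```python
# Hypothetical sketch: build a transparent compare group of *named* peers
# and flag service-line definition mismatches (e.g., whether "orthopedics"
# includes spine cases). All names and fields are illustrative.

def comparable_hospitals(my_profile, candidates):
    """Keep only named peers whose orthopedics definition matches ours."""
    peers = []
    for hospital in candidates:
        if hospital["includes_spine_in_ortho"] == my_profile["includes_spine_in_ortho"]:
            peers.append(hospital["name"])
    return peers

my_profile = {"includes_spine_in_ortho": True}
candidates = [
    {"name": "Hospital A", "includes_spine_in_ortho": True},
    {"name": "Hospital B", "includes_spine_in_ortho": False},
    {"name": "Hospital C", "includes_spine_in_ortho": True},
]

# The result is a visible list of hospital names, not an anonymous aggregate.
print(comparable_hospitals(my_profile, candidates))  # ['Hospital A', 'Hospital C']
```

The point of the sketch is the output: a list of names a physician can inspect, rather than an opaque label like “academic medical centers.”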
Transparency at case or patient level
The ability to drill down and see what went into the performance calculation at the case or patient level is also important. Patient attribution is an ongoing problem. Attribution of the surgeon who performed a procedure is usually accurate, but for medical cases, or cases managed by numerous physicians, attribution becomes tricky. Oftentimes the physician of record is the discharge physician, but that person may have taken care of the patient only on the last day of a hospital stay. Physicians want to be able to drill down to the case level to verify they were responsible for the majority of the care, and if they weren’t, to have that case excluded from the analysis. With appropriate transparency this can be done, and the physician can use the adjusted analysis rather than discounting the entire result.
Transparency in risk adjustment and methodology
In addition to transparent compare groups, transparent risk adjustment is essential for performance improvement. A transparent risk-adjustment methodology helps at the patient level to ensure accurate and relevant comparison, differentiating expected outcomes based on differences in acuity.
For example, a 21-year-old with acute appendicitis without any other medical conditions would have a different expected length of stay and resource utilization than an 80-year-old with acute appendicitis who also has decompensated congestive heart failure, renal insufficiency and poorly controlled diabetes. It’s critical that clinicians can transparently see the factors used in the modeling to understand the drivers that alter the expected outcome. Again, physicians are evidence-driven. Bringing outcomes data that has been gleaned from a “black box approach,” where one has no insight into what influenced the expected values, leads to poor acceptance by physicians.
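The appendicitis example can be made concrete with a minimal sketch of a transparent expected-value model. The coefficients below are invented purely for illustration; a real risk model would be fitted to data and validated. What matters is that every factor contributing to the expected value is visible.

```python
# A minimal sketch of a *transparent* risk model for expected length of stay.
# All coefficients and variable names are invented for illustration.

EXPECTED_LOS_MODEL = {
    "baseline_days": 2.0,          # uncomplicated acute appendicitis
    "age_over_65": 1.5,            # additional expected days per factor
    "decompensated_chf": 3.0,
    "renal_insufficiency": 1.0,
    "uncontrolled_diabetes": 1.0,
}

def expected_los(risk_factors):
    """Sum the baseline and each applicable risk-factor contribution."""
    days = EXPECTED_LOS_MODEL["baseline_days"]
    contributions = {"baseline_days": days}
    for factor in risk_factors:
        contributions[factor] = EXPECTED_LOS_MODEL[factor]
        days += EXPECTED_LOS_MODEL[factor]
    return days, contributions

# The healthy 21-year-old vs. the 80-year-old with comorbidities:
young_los, _ = expected_los([])
older_los, drivers = expected_los(
    ["age_over_65", "decompensated_chf", "renal_insufficiency", "uncontrolled_diabetes"]
)
print(young_los)  # 2.0
print(older_los)  # 8.5
print(drivers)    # every driver of the expected value is visible -- no black box
```

Because `drivers` itemizes each factor’s contribution, a clinician can see exactly why the two expected values differ, which is precisely what a black-box model withholds.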
Lastly, transparency in metrics and methodology is essential to guide performance improvement initiatives. For example, there are a number of composite metrics, such as AHRQ’s PSI 90, that are built from numerous weighted component metrics. This makes for a simple score, but the score is not actionable by physicians or administrators because there is no insight into which specific component of the composite is driving the performance. As a clinician, if I were told I had an opportunity to improve on PSI 90, I would not know which of the multitude of components to address first. With data transparency, I would be able to see that I need to address hospital-acquired postoperative deep vein thrombosis, a component of PSI 90.
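Decomposing a weighted composite into its parts can be sketched briefly. The component names below echo PSI 90’s structure, but the weights and observed/expected rates are invented for illustration and are not AHRQ’s actual specification.

```python
# Illustrative decomposition of a weighted composite score. Weights and
# rates are invented; this is not AHRQ's actual PSI 90 specification.

components = {
    "postoperative_dvt_pe":  {"weight": 0.30, "observed": 1.8, "expected": 1.0},
    "pressure_ulcer":        {"weight": 0.25, "observed": 0.9, "expected": 1.0},
    "postop_sepsis":         {"weight": 0.25, "observed": 1.0, "expected": 1.0},
    "accidental_laceration": {"weight": 0.20, "observed": 1.0, "expected": 1.0},
}

def composite_and_drivers(components):
    """Composite = weighted sum of observed/expected ratios, with each part shown."""
    parts = {
        name: c["weight"] * (c["observed"] / c["expected"])
        for name, c in components.items()
    }
    worst = max(
        components,
        key=lambda n: components[n]["observed"] / components[n]["expected"],
    )
    return sum(parts.values()), parts, worst

score, parts, worst = composite_and_drivers(components)
print(round(score, 3))  # 1.215 -- worse than expected, but *why*?
print(worst)            # postoperative_dvt_pe -- the actionable target
```

The composite alone (1.215) says only “worse than expected”; the itemized `parts` and the worst-performing component turn that score into a specific improvement target, which is the argument above in code form.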
Health care data is very complex, and transparency is paramount in any effort aimed at reducing clinical variation and improving hospital safety and patient outcomes. Tough conversations about change are much easier with physicians and administrators if you have data from peers to drive improvement than if you have aggregated data from 150 random and non-transparent hospitals. If you have transparent data and work in collaboration with peer organizations, your organization will improve. It’s really that simple.
David Levine, MD, FACEP, Senior Vice President, Advisory Solutions for Vizient, Inc.