Whose Data Is It, Anyway?

By cclark@healthleadersmedia.com | April 18, 2013

Deep down, we knew it would come to this.

If quality of care is ever going to improve, providers must make the big leap from merely tracking process measures, like whether a hospitalized patient got an appropriate drug, to logging whenever complications resulted from their interventions.

Did patients develop surgical infections? Did they require a ventilator for more than 48 hours? Did operations have to be repeated? Were the hospital's rates of renal failure or blood clots higher than average? Did patients require longer lengths of stay? And of course, did any doctor's patients have a higher rate of dying?

This information should be transparent to payers and patients, not closely guarded secrets that only a hospital's insiders get to know. Unfortunately, that's pretty much the way it is today.

Beyond the care team, other hospitals' physicians, payers, employers who buy health plans, and the general public don't get to know that stuff.

But everything, in all likelihood, is about to change in a way that leads to much greater transparency. A few poorly publicized paragraphs in the New Year's Fiscal Cliff law are poised to speed it along this year.

Today, these surgical and procedural complications and their details are gathered by a growing number of physician specialty societies that designed data registries specifically for members who volunteer their data. The idea, as it began, was noble, says Robert Wachter, MD, Chairman of the American Board of Internal Medicine and an expert on adverse medical events.

These registries have been used for "internal quality improvement, to give members data that might help them improve, assuming the data don't go public and assuming that the data are fed back to the clinicians," says Wachter, Chief of Hospital Medicine at UCSF.

As they've evolved, the registries have become "the vehicle for physicians to begin to dip their toe into the measurement pool."

The Society of Thoracic Surgeons and the American College of Cardiology may have the largest such databases. Transplant surgeons, oncologists, general and vascular surgeons, nephrologists performing dialysis, and gastroenterologists also have them, and there are many others. In some cases registry participation may be a condition of certification or credentialing; in others it is voluntary.

Some are great. Others are just beginning their journey.

But tucked into the New Year's American Taxpayer Relief Act is a provision that is making many leaders of these specialty societies quite nervous. The new law directs the Obama administration to set standards for what is a "qualified" clinical data registry, for purposes of fulfilling federal physician quality reporting requirements.

Eventually, most observers believe, this means pay-for-reporting and, down the line, pay-for-performance.

The law directs the Secretary of the U.S. Department of Health and Human Services to consider whether any registry has appropriate mechanisms for transparency of data elements and specifications, risk models and measures, whether it provides timely performance reports "to participants at the individual participant level," and whether it supports quality improvement initiatives for its participants.

These few paragraphs, to those who know they exist, are exposing some tension inside the societies that run these registries. "Remember," Wachter says, "these are member organizations. And they're therefore trying to support their members in providing good care. But I think some have a hard time sometimes making hard decisions that might make some of their members unhappy."

The outcomes being measured, for example, "may have some wiggle room or bias. Or the data aren't audited."

To the degree that registries are seen as the answer to physician accountability that leads to bonus payments, as appears to be the case, "then you have a formula that may not add up," he says.

"We have to rethink this," Wachter says, "whether the societies are the right organizations to run these registries, and if they are—and they very well might be because they have buy-in from their members—we probably need to make sure the measures are unambiguously good measures, that their definitions are unambiguous, and that there's a believable audit strategy… because there are some conflicts that are baked into this formula."

Some say there also should be some mechanism to assure these registries produce meaningful quality improvement lessons, perhaps by sharing tips from peers with better results.

This week's study by Harvard researchers Sunil Eappen, MD, and Atul Gawande, MD, in the Journal of the American Medical Association illustrates why careful tracking of the details of adverse events and complications is something these registries must prove they can do well.

The Harvard project evaluated coded charges at a 12-hospital healthcare system in Texas and discovered that when privately insured patients developed complications, the system made three times as much money as it did when insured patients' procedures went without a hitch.

The paper is being seen as a call for health plans to renegotiate how they pay for care when something goes wrong. It also is being seen—incorrectly, I believe—as evidence that hospitals and doctors are purposefully causing complications to increase profits.

Wachter emphatically insists that, despite the implications of the Harvard report, "I do not believe there is any hospital or doctor in the country who is trying to harm people to make more money. Nobody sits there at a board meeting or in the C-suite and says 'We're okay with these complications because we're being paid for them.'

"But what they do say is, 'Boy, we need to build a new OR, or we need to do more marketing or hire another cardiac surgeon and that will cost a lot of money. Do we do a teamwork training program or buy a bar coding system? Well no. This year we can't afford it.'

"If the business case were stronger, then (quality improvement) would rise higher in the priority list," Wachter says.

Some hospital officials and doctors will argue that quality measurement, as they're reporting it to these registries, is fine for internal improvement but not as a platform on which to base how they're paid and publicly reported.

Wachter, an advisor for The Leapfrog Group, which ranks hospitals from A to F based on how safe they are for patient care, saw this firsthand last summer and fall when the first rankings came out and several hospitals received Ds and Fs.

"The hospitals that did poorly are very unhappy. And they say they feel like the measures are not very good, though I can say they are the best out there, the best available from publicly available data.

"But there's no question that while in one part of the organization people moan about the (measurement) system, another part of the organization is working on quality improvement with a passion that may previously have been lacking."

Earlier this week, I wrote about the dozens of specialty societies that wrote the Centers for Medicare & Medicaid Services with their concerns about federal oversight of these registries.

These registries and the societies that run them are now under pressure to produce.

As Wachter says, "this is changing the dynamics of the equation. The degree to which these specialty society-run registries are being touted as the answer to quality measurement, as we move into high-stakes measurement for pay-for-performance and public reporting, I think is creating a tension" that we'll be hearing a lot more about in coming months.
