84% of Medical Specialties Lack Clinical Registry Affiliation

By Alexandra Wilson Pecci | May 18, 2015

A study of 153 existing clinical registries found "there's no governing body, there's no standards, [and] there's no official central clearinghouse," says one researcher.

Of the clinical registries that exist in the United States, only a handful are up to snuff, and a lot of care isn't being tracked at all, according to a study published in the Journal for Healthcare Quality.

It not only found that most U.S. clinical registries that collect data on patient outcomes are substandard, but also that the vast majority of recognized medical specialties in the United States have no national clinical registry.

"One fifth of the US economy [is] largely unmeasured in terms of patient outcomes," says Martin A. Makary MD, MPH, surgeon and professor of health policy and management Johns Hopkins and senior author of the study. "This effort has been underrecognized, underappreciated, and underfunded."

Clinical registries—defined by the researchers as databases of patient outcomes developed and maintained by medical organizations and medical specialty groups—are neglected, and "there's no governing body, there's no standards, there's no official central clearinghouse," Makary says.

So he and his team set out to create a "registry of registries" to identify the ones that exist and describe what they're doing—and not doing. They evaluated 153 U.S. clinical registries containing health service and disease outcomes data and found that:

  • Among the 117 AMA specialty societies, just 16.2% were affiliated with a registry
  • Government funding was associated with only 26.1% of registries
  • Only 23.5% of registries risk adjusted outcomes
  • Only 18.3% of registries audited data
  • Mandatory public reporting of hospital outcomes for all participating hospitals was associated with just 2% of registries

"What's better than the current standard of care? We don't know because we by and large don't measure it," Makary says. "Ninety-nine percent of care delivered has an outcome that will be untracked." Part of the problem, he adds, is that hospitals are required to track and report massive amounts of data, "including a fair amount of junk."

"In fact, the only outcome that anyone's been able to describe in the value-based healthcare conversation have been the ones that are easy to study and have a minimal impact on quality," Makary says.

"We're talking about patient outcomes here, we're not just talking about patient satisfaction scores and bloodstream infection and central line infection rates… Those are very easy to measure and they are poorly associated with overall hospital quality."

His comments echo those of Deeb N. Salem, MD, physician-in-chief and chairman of the Department of Medicine at Tufts Medical Center in Boston, and first author of the Journal of General Internal Medicine paper "Quantity Over Quality: How the Rise in Quality Measures is Not Producing Quality Results." He told HealthLeaders last month that "we may not be scrutinizing quality measures as well as we should."

Makary says that a measure such as patient satisfaction is important and should be tracked. But "you can have a completely unnecessary surgery and be totally satisfied with it," he says. "We should not fool ourselves into thinking that the easy-to-collect metrics are the ones that are comprehensive and tell the whole story about a medical center's performance."

For all of the untracked care and the haphazard, sporadic registries that exist, there are a handful of exemplary ones, the researchers say, such as the cystic fibrosis registry, which is, "in my opinion, the model of American registries," Makary says. It tracks nearly every patient with cystic fibrosis in the United States and allows scientists to analyze what works, what doesn't work, and ultimately, what's best for patients.

He also points to the organ transplant registry as a success story.

"A huge body of scientific literature has come out of that registry," he says, adding that its data has helped researchers have a better understand science of rejection, transplantation, and immunotherapies, and has also changed laws that govern transplant surgery.

The best registries, Makary says, are not too onerous to use, and have data collection standards that are sound yet feasible.

"So how do you resist the urge to make data collection perfect, when the real goal should be to have something that is very good and generalizable?" he says. "The real goal is finding the sweet spot of what's feasible and yet scientifically sound."

Provider Participation

In addition to sound data collection and adequate funding, registries need participation from hospitals. Makary's team often heard that registries had trouble recruiting hospitals, and cost was the top reason why.

"For a hospital to choose not to measure its performance because there's no business case that's obvious to them represents a conflict between what policy makers and quality leaders talk about when they talk about the importance of moving toward value based healthcare," Makary says.

Since the study was published, Makary says he's had a lot of feedback. "I've had members of Congress contact me and say, 'Hey, how can we support registries?'" he says. He tells them to recognize registry participation: if hospitals are paying money for national benchmarking, they should be supported rather than punished financially for trying to evaluate their performance.

In short, registries need more funding, better standards, more attention, and more participation.

"It represents a tremendous opportunity to advance medical science," he says. "It's important work. And it probably represents the greatest uncharted territory in American medicine today."

Alexandra Wilson Pecci is an editor for HealthLeaders.
