Existing Databases Make Tracking Comparative Effectiveness Research Difficult
The 15-member Federal Coordinating Council for Comparative Effectiveness Research, which held its final listening session this week in Washington, heard from many healthcare providers about the challenges of working effectively with data. The council will now evaluate testimony from all three of its sessions and prepare recommendations this month for the White House and Congress on where comparative effectiveness research efforts should be focused.
Polly Pittman, PhD, executive vice president of AcademyHealth in Washington, DC, said her organization "knows firsthand what challenges can result" from the lack of a common definition of comparative effectiveness research, a problem it encountered while compiling its annual report on the volume and cost of comparative effectiveness research across the United States.
The database sources in the study suggested that cancer treatment was the most common focus of comparative research in clinical trials. The study also found, however, that tracking comparative effectiveness was not an easy matter: existing databases made it difficult to track research by study design.
John Cuddeback, MD, the chief medical information officer of Anceta, the collaborative data warehouse of the American Group Practice Association in Alexandria, VA, suggested that attention should be paid to how physicians can use their electronic health record systems to better extract the "wealth of detailed clinical and process of care data and patient outcome data across the continuum." About 85% of his group's membership now has EHRs, he said.
He said his group supports "a view of comparative effectiveness that goes beyond simply comparing medications, devices, and existing guidelines"—especially when it comes to patients with multiple conditions or comorbidities. Instead, he would like to see movement toward using real world data "in the context of collaborative, rapid cycle improvement" to expand the evidence base for "costly and vulnerable patient populations."
Mark Roberts, MD, an internist, professor of medicine at the University of Pittsburgh Medical Center, and president of the Society for Medical Decision Making, supported continuing investment in the "development and advancement of comparative effectiveness methods themselves and the rigorous training in their use."
But he also cautioned, "We cannot rely solely on randomized controlled trials to answer complex clinical questions. The best treatment for an individual patient with a special need or disease simply cannot be determined from the knowledge of the average effect of that treatment in a narrowly defined randomized controlled trial."
For instance, a particular therapy that has a higher five-year survival rate may be "irrelevant" to an ailing grandmother who wants a therapy that maximizes her ability to be alive at her granddaughter's wedding in two months, Roberts said.
"Comparative effectiveness research must develop the ability to account for the important individual differences in physiology and risk faced by patients making decisions about their care," he said. "And it also must account for individual patient preferences."
Janice Simmons is a senior editor and Washington, DC, correspondent for HealthLeaders Media Online. She can be reached at firstname.lastname@example.org.