The Federal Coordinating Council for Comparative Effectiveness Research took its second listening session on the road from Washington on Wednesday.
At the session in Chicago, the council, authorized under the American Recovery and Reinvestment Act (ARRA) to assist federal agencies in coordinating and comparing the effectiveness of health services research, heard requests for addressing disparities and creating better transparency for research.
The goal of comparative effectiveness research (CER) is to provide information on the relative strengths and weaknesses of various medical interventions. As at the previous session last month, individuals participating in the listening session—representing provider, patient, research, medical education, and other healthcare organizations—suggested new ideas to consider.
Neva Lubin-Johnson, MD, a general internist on the National Medical Association's board of trustees, told the council that African-Americans have rarely been represented in clinical trials in numbers reflective of the general population.
Since much of comparative effectiveness data is retrospective, current and future data will continue to be "flawed"—to the detriment of African-Americans—if data issues are not addressed now, she said. "Evidence-based research has led to conclusions that are not necessarily relevant."
This can be problematic when examining conditions such as prostate cancer, which occurs four times more often in African-American men, she said. In such cases, oversampling the population might be appropriate for comparative effectiveness research, she suggested.
Thomas Wilson, PhD, an epidemiologist speaking on behalf of the American Board of Quality Assurance and Utilization Review Physicians and the Population Health Impact Institute, called for more transparency when it came to comparative effectiveness research. This includes disclosure of a researcher's potential conflict of interest—especially in peer-reviewed journals.
"Our concern is that the traditional reliance on expert and anonymous peer review to ferret out these problems is not working," Wilson said. "The tried and true way forward is to provide detailed, timely, and clearly written disclosures of the methods used. This will enable the users of comparative effectiveness research findings to trust but verify."
He suggested that those researching comparative effectiveness should pledge "to reduce bias that will rarely, if ever, be totally eliminated, and to prominently state in clear language the usefulness and the limitations of their findings." Researchers should also show results before and after adjustments, and research papers should disclose in detail the methods and metrics used.
Naomi Aronson, PhD, executive director of the Technology Evaluation Center of the Blue Cross Blue Shield Association, said it will be important to translate knowledge of what works into care that will work. Healthcare must find out which interventions can "improve clinician and patient adoption" and should use evidence-based care at multiple levels, she said. "We want to know—must know—how knowledge of what works can be translated to healthcare that does work."