Common Claims by Physicians on OPPE Data

Analysis | By Credentialing Resource Center
October 12, 2020

When hospitals and their medical staffs establish expectations for physicians, it is critical that the organization be able to provide physician-specific measurements with reasonable assurance of accuracy in an individual physician performance report.

A version of this article was first published October 12, 2020, by HCPro's Credentialing Resource Center, a sibling publication to HealthLeaders.

Physicians may claim that the data used to carry out peer review are invalid.

This is a claim that many medical staff leaders have heard. It is true that the discovery of even the slightest inaccuracy will invalidate the entire performance report in the minds of some physicians.

They will assume, and no one would blame them, that if the report includes one inaccuracy, there are likely others. The problem is that data are often imperfect, and waiting for perfect data may mean waiting indefinitely.

It is imperative, however, that data be as accurate as possible. When hospitals and their medical staffs establish expectations for physicians, the organization must be able to provide physician-specific measurements with reasonable assurance of accuracy in each physician's individual performance report.

By way of example, the organization may wish to report compliance with an evidence-based protocol. If the electronic health record (EHR) is robust, is used by the vast majority of admitters, and can produce such a report automatically, this would be a good measure.

On the other hand, if the system is poorly utilized and manual abstraction is required to determine compliance, this measure may be a poor basis for gauging physician competence and performance.
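
To make the contrast concrete, here is a minimal sketch, in Python, of how a physician-level compliance report might be tabulated automatically from a well-populated EHR extract. The record layout and field names (physician_id, protocol_followed) are illustrative assumptions, not the fields of any particular system.

    # Minimal sketch: per-physician protocol compliance from an EHR extract.
    # The record layout is an assumption for illustration only.
    from collections import defaultdict

    def compliance_report(encounters):
        """Summarize protocol compliance per attending physician.

        Each encounter is assumed to be a dict with 'physician_id' and
        'protocol_followed' (True/False), as an automated EHR report
        extract might supply.
        """
        counts = defaultdict(lambda: {"eligible": 0, "compliant": 0})
        for enc in encounters:
            row = counts[enc["physician_id"]]
            row["eligible"] += 1
            if enc["protocol_followed"]:
                row["compliant"] += 1
        return {
            doc: {**row, "rate": row["compliant"] / row["eligible"]}
            for doc, row in counts.items()
        }

    # Example: one admitter compliant on 2 of 3 encounters, another on 1 of 1.
    sample = [
        {"physician_id": "MD-001", "protocol_followed": True},
        {"physician_id": "MD-001", "protocol_followed": False},
        {"physician_id": "MD-001", "protocol_followed": True},
        {"physician_id": "MD-002", "protocol_followed": True},
    ]
    print(compliance_report(sample))

When the same fields must instead be abstracted by hand, each step adds delay and opportunity for error, which is why a poorly supported measure is a weaker basis for judging performance.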

You may also hear the following claims from physicians in regard to the accuracy of performance data:

  • Attribution (“It’s not my patient”): Accuracy of attribution is critical to the credibility of the peer review process. If a question is raised about the quality of care delivered, it is important that the correct practitioner be identified. For example, a patient may have a surgical procedure by a gynecologist and then be transferred to an internist for continued medical care in the hospital. If there is a question about a direct complication from the surgical procedure itself, the gynecologist should get the query, not the internist. No system has perfect data or is able to always accurately attribute care to the appropriate physician; however, acknowledging this and working with physicians to improve the process will lend increased credibility to peer review efforts at your organization.

  • Risk adjustment (“My patients are sicker!”): The use of severity-adjusted data, when available, can help halt this objection at the start; a simple sketch of the observed-to-expected idea follows this list. Likewise, using national benchmarks, even if imperfect, is often better than some in-house-defined standard.

  • Sample size (“The ‘n’ isn’t big enough”): Sometimes we need to remind our colleagues that measuring performance is not the same as conducting a statistical study. Medical staff leaders are not using performance reports to decide when corrective action is necessary; these reports are part of the organization’s efforts to improve physician performance. If the physician follows national practice guidelines and delivers high-quality care to eight out of 10 patients, the concern should not be whether “n” is sufficient. The concern is that the physician did not deliver best-practice care to two of the 10 patients. Therefore, any “n” is sufficient.

  • Incomplete data: All medical data are incomplete to some extent. The easiest data to obtain, billing data, are frequently the most incomplete: they depend on what the physician wrote, what the coder could read of what the physician wrote, and what finally came out of the coding system after all of the data were entered. This system filters data at many levels. Data abstracted by nurses, such as core measure data, tend to be more complete, though a small amount of filtering occurs here as well. The most complete data are those gathered by physicians in the peer review process (if unbiased). But all of these data are valid for performance improvement purposes.
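
To illustrate the risk-adjustment point above in the simplest terms, the sketch below compares a physician's observed complication count with the count a severity-adjusted benchmark would predict for the same patients, yielding an observed-to-expected (O/E) ratio. This is an assumption-based illustration, not a method described in the source; the field names and probabilities are invented.

    # Minimal sketch: severity adjustment via an observed-to-expected (O/E)
    # ratio. Each case carries an expected complication probability from a
    # hypothetical national benchmark model; all values here are invented.

    def observed_to_expected(cases):
        """Return (observed, expected, O/E ratio) for a list of cases.

        Each case is assumed to be a dict with 'complication' (True/False)
        and 'expected_risk' (benchmark probability of a complication given
        the patient's severity of illness).
        """
        observed = sum(1 for c in cases if c["complication"])
        expected = sum(c["expected_risk"] for c in cases)
        return observed, expected, (observed / expected) if expected else None

    # A 2-of-4 raw complication rate looks high, but these are high-risk
    # patients: the benchmark expects about 2.1 complications, so the O/E
    # ratio is roughly 0.95 -- consistent with "my patients are sicker."
    cases = [
        {"complication": True,  "expected_risk": 0.50},
        {"complication": True,  "expected_risk": 0.60},
        {"complication": False, "expected_risk": 0.45},
        {"complication": False, "expected_risk": 0.55},
    ]
    print(observed_to_expected(cases))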

Source: The Medical Staff Leader's Practical Guide: Survival Tips for Navigating Your Leadership Role by William K. Cors, MD, MMM, FAAPL


