Helping independent providers form stronger collaborations
Healthcare has entered an age of collaboration in which growing numbers of independent organizations and physician practices are partnering to form clinically integrated networks (CINs), accountable care organizations (ACOs), and other collaborative entities. They are entering into risk-sharing agreements and coordinating the management of the health of patient populations while maintaining independent ownership. These entities lack the central IT governing structure that, for example, a health system leverages to rationalize and consolidate electronic health record technology onto a single vendor. As these complex partnerships continue to evolve, it is critical that they have a strategic data and technology plan that accommodates the inherent IT independence of their membership. Not only do they need a strategy to connect and acquire this distributed data, but they also need a data policy that provides a clear roadmap for how information is accessed, shared, and reused.
Solving Key Data Challenges
Indeed, it is essential to have a data policy that ensures the right organizations and the right individuals access data for the right purpose. Because these collaborating entities are independent, each with its own EMR, health records, and data, trust considerations arise around how to share data. This is compounded by the fact that a CIN or ACO often has an anchoring health system sponsoring the collaboration, which thus partners with independent practices it competes against in other areas. As a result, independent practices may want to protect their information and, for example, share data only for a specifically designated population of patients. This presents a considerable IT challenge: how do the practices identify this specific patient population and ensure that only this subset of patients is shared with the collaborative entity? A data policy framework can work through this and other complex issues of collaboration, competition, and data protection.
Key to such a framework is ensuring that each practice can confidently share all of its data without implementing (at the practice) the complex and ever-changing patient matching and filtering logic that otherwise enforces data protection.
Indeed, these collaborative entities change continuously as organizations join and leave the networks and as health plan contracts redefine the populations of eligible patients. Market dynamics, affiliations, and ownership structures change often. The right data policy can accommodate this by providing flexible guidelines on who can access specific information and data -- managing the changing rules that control the chaos.
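To make the central-filtering idea concrete, here is a minimal Python sketch; the record shapes, contract name, and roster are all hypothetical. The practice submits its full feed, and the neutral platform applies the current eligibility roster -- so the ever-changing filtering logic lives at the platform, not at the practice.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    patient_id: str
    practice: str

# Hypothetical eligibility rosters: contract -> eligible patient IDs.
# Health plan contracts redefine these populations continuously, so the
# rosters live (and change) at the neutral platform.
ROSTERS = {
    "example_aco_2024": {"p1", "p3"},
}

def share_for_contract(records, contract):
    """Filter a practice's full feed down to the contracted population."""
    eligible = ROSTERS.get(contract, set())
    return [r for r in records if r.patient_id in eligible]

feed = [Record("p1", "Main Street Pediatrics"),
        Record("p2", "Main Street Pediatrics")]
shared = share_for_contract(feed, "example_aco_2024")
# only the contracted patient ("p1") is released to the collaborative
```

When the contract's population changes, only the platform's roster is updated; the practice's submission process is untouched.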
Creating a Data Policy Solution – Technical Considerations
When developing a comprehensive data policy solution and platform, there are three key tenets to consider: neutrality, scale, and flexibility.
Neutrality is central to efficiently managing policy considerations. The platform must treat each independent organization as an equal peer when it comes to acquiring, managing, protecting and reusing data. A traditional health system’s IT platform is enterprise-based and falls short of this requirement. It doesn’t account for different ownership structures and the need to protect competitive information and even specific patient records from the other partner organizations.
In other words, if an anchoring health system installs data platform technology that they own and manage behind the health system firewalls or in the cloud, a policy layer is somewhat moot. From the perspective of the independent practices participating in the collaboration, data is going to the health system with whom they are otherwise competing.
To that end, the ideal architecture is a multi-tenant cloud solution, which treats each entity – from a five-hospital health system to a two-physician practice – as an equal peer and a separate, independent tenant. Each tenant has its own IT capabilities and security considerations, and derives value from the solution independent of the collaborative.
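The peer-tenant idea can be sketched as a toy access model; the class and organization names are hypothetical, and a real platform would enforce isolation at the infrastructure level, not in application code. Every organization, large or small, is an isolated tenant, and cross-tenant reads exist only where an explicit sharing grant has been recorded.

```python
class TenantPlatform:
    """Toy model of a neutral, multi-tenant data platform."""

    def __init__(self):
        self._data = {}       # tenant_id -> list of records
        self._grants = set()  # (owner, grantee) sharing agreements

    def write(self, tenant_id, record):
        self._data.setdefault(tenant_id, []).append(record)

    def grant(self, owner, grantee):
        # An explicit, revocable agreement -- no tenant is privileged by default.
        self._grants.add((owner, grantee))

    def read(self, requester, owner):
        if requester != owner and (owner, requester) not in self._grants:
            raise PermissionError(f"{owner} has not granted {requester} access")
        return list(self._data.get(owner, []))

platform = TenantPlatform()
platform.write("two_physician_practice", {"patient_id": "p1"})
# The anchoring health system sees nothing until the practice grants access.
platform.grant("two_physician_practice", "anchor_health_system")
records = platform.read("anchor_health_system", "two_physician_practice")
```

Note the contrast with an enterprise platform owned by the anchor: here the anchor is just another requester subject to the same grants.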
Scale is also crucial in a typical network that includes a wide range of EMR technology. Data policy specifies how the organization will handle ongoing transactions in a multi-stakeholder environment with multiple IT departments. An approach that attempts to “manually” reverse engineer these multiple EMR databases to extract clinical information does not scale effectively compared with an approach that leverages document-based transactions from these systems. Integration through documents -- reliance on the EMRs’ interfaces -- is far more efficient to build and maintain given that the data structures of EMRs are inherently complex and evolve over time.
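A brief sketch of why document-based integration scales: the consumer parses the document interface, not the EMR's internal tables. The XML below is a radically simplified, hypothetical stand-in for a clinical summary (real exchange documents such as C-CDA are far richer), but nothing in the parsing code depends on any vendor's database schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified clinical summary document.
doc = """<summary>
  <patient id="p1"/>
  <problem system="ICD-10" code="E11.9"/>
  <problem system="ICD-10" code="I10"/>
</summary>"""

def extract_problems(xml_text):
    """Pull (patient, code system, code) tuples out of a summary document."""
    root = ET.fromstring(xml_text)
    pid = root.find("patient").get("id")
    return [(pid, p.get("system"), p.get("code"))
            for p in root.findall("problem")]

problems = extract_problems(doc)
```

If the EMR vendor restructures its database in the next release, this integration is unaffected so long as the document interface stays stable.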
And finally, flexibility in data storage and management is essential. Legacy SQL or enterprise data warehouse (EDW) approaches typically require rigorous data typing and a complex data structure at the level of data storage. These independent networks, however, employ a wide variety of vendors and systems at the point of care to generate the data, and the legacy approach implies a complex process of rationalizing this variance to understand the semantics of the data in the aggregate: vocabulary mappings, crosswalks, syntactical translations, data lookups, plus all the detailed operational processes that generate and manage this metadata. The challenge of legacy SQL or EDW approaches is that this work must happen BEFORE the data can be stored – or the system must employ a complex and error-prone approach that cascades updates to this normalization metadata throughout the solution.
A big data architecture takes a different approach to this problem, leveraging a data lake that stores the data close to its raw form. In this architecture, the normalization metadata is applied as data is pulled out of the lake. The data semantics are normalized at read rather than at write, which inherently accommodates the constant change and the expensive operational processes associated with creating and evolving the metadata.
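The normalize-at-read idea reduces to a small sketch; the codes and crosswalk below are hypothetical. Raw records land in the lake untouched, and the current crosswalk is applied only on the way out -- so correcting or extending the metadata never requires reprocessing stored data.

```python
# Raw records land in the lake untouched; a schema-on-write system
# would have forced the mapping below to run before storage.
LAKE = [
    {"source": "emr_a", "code": "250.00"},  # legacy ICD-9-style code
    {"source": "emr_b", "code": "E11.9"},   # ICD-10 code
]

# Hypothetical vocabulary crosswalk. Because it is applied at read, it
# can evolve independently of everything already stored in the lake.
CROSSWALK = {"250.00": "E11.9"}

def read_normalized(lake, crosswalk):
    """Normalize semantics on the way out of the lake (schema-on-read)."""
    return [{**rec, "code": crosswalk.get(rec["code"], rec["code"])}
            for rec in lake]

normalized = read_normalized(LAKE, CROSSWALK)
```

Contrast with schema-on-write: a crosswalk fix there means cascading updates through already-stored, already-normalized data.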
Also, a data lake architecture is far more adept at data portability and reuse than a typical EDW, which is designed to be the final destination of data. This is where data policy fits into the ideal architecture: data is pulled out of the lake according to policy rules. This data is normalized into a data mart associated with a HIPAA-based Purpose of Use and specifically designed either to directly drive analytics or to transfer the data into existing analytics technologies such as an EDW.
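As a final sketch, a purpose-of-use policy can be modeled as a projection applied when a mart is built from the lake; the policy table and field names here are hypothetical, and real policies would be far more granular than a field whitelist.

```python
# Hypothetical policy table: which fields a mart may carry for each
# HIPAA-based Purpose of Use.
PURPOSE_POLICIES = {
    "TREATMENT":  {"patient_id", "code", "practice"},
    "OPERATIONS": {"code", "practice"},  # no direct patient identifier
}

def build_mart(lake_records, purpose):
    """Project lake records into a data mart scoped to one Purpose of Use."""
    allowed = PURPOSE_POLICIES[purpose]
    return [{k: v for k, v in rec.items() if k in allowed}
            for rec in lake_records]

records = [{"patient_id": "p1", "code": "E11.9", "practice": "Main St"}]
ops_mart = build_mart(records, "OPERATIONS")
```

Each mart can then feed analytics directly or be handed off to an existing EDW, with the policy enforced at extraction rather than at every consuming system.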
A neutral, cloud-based, multi-tenant data lake architecture that scales through interface documents and interprets data at read is the ideal architecture to accommodate the multiple stakeholders, IT infrastructures, business entities, contracts, and data rights of a set of independent and competing providers collaborating around multiple populations of overlapping patients. Legacy architectures such as EDWs are useful as an enterprise-based destination for aggregate data but are insufficient for the multi-stakeholder networks and complex data policies of a typical CIN or set of ACOs.
To learn more about Mastering Data Policy in a New Era and download our white paper, click here.