
Optum's Chief AI Officer: AI is Driving Value-Based Care Improvements

Analysis | By Scott Mace
September 28, 2022

In the first of a two-part interview, Dennis Chornenky, Optum's senior vice president and chief AI officer, describes how he is driving AI tech for 15 million members of the UnitedHealth Group subsidiary.

At the start of 2022, Dennis Chornenky, MPH, became chief artificial intelligence officer and senior vice president at UnitedHealth Group health plan subsidiary Optum Health, after having served as a senior advisor and presidential innovation fellow in the White House in both the Trump and Biden Administrations.

The chief AI officer is one of the newest titles in the C-suite. Only a handful exist, in places such as the US Department of Health and Human Services, health insurer Elevance Health, and technology companies like IBM and eBay.

In this two-part HealthLeaders interview, Chornenky describes just what a chief AI officer does, how it dovetails with pressing needs in Optum and all of healthcare, and what AI means for the future of healthcare.

HealthLeaders: Optum has pushed for value-based healthcare. What role is AI playing in driving that?

Dennis Chornenky: At Optum and UnitedHealth Group we’re driving healthcare transformation toward comprehensive value-based care, and AI is playing a big role in that. It’s a key focus of our growth strategy, helping more patients and care providers transition from fee-for-service to value-based approaches. We’re applying advanced technologies to drive better and more consistent care outcomes at lower overall cost.

Dennis Chornenky, MPH, senior vice president and chief AI officer at Optum Health. Photo courtesy Optum Health.

We have around 15 million members participating in value-based arrangements with over 1,000 hospitals and over 100,000 providers. Through our OptumCare delivery organizations we're leading the industry in the proportion of the patients we serve who participate in value-based arrangements. I think we're also expanding that at a higher rate than any other care delivery organization in the US.

The way AI can help us accelerate this expansion into value-based care is by leveraging data to identify the patients and members best fit for value-based care models, and the clinical innovations and operational efficiencies most important in driving that transformation. There is a spectrum of data-driven insights that help us better understand which patients may benefit most from which types of interventions and care plans. That breaks down into a whole lot of different things, whether we're looking at disease prevention or surveillance, or integrating telehealth and virtual encounters into care modalities.

We are applying supervised machine learning techniques to improve our ability to predict disease progression and enable earlier interventions, and unsupervised techniques like clustering to help us better understand the natural cohorts in our patient and member populations and advance more personalized care models. Overall, we are looking at anything that can help us improve patient outcomes, advance clinical innovation, and reduce costs.
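For readers unfamiliar with the clustering technique Chornenky mentions, the core idea can be sketched in a few lines. This is an illustrative example only, not Optum's system: the "patients," the two features (age and annual visit count), and the cluster count are invented for demonstration.

```python
# Illustrative k-means clustering sketch -- not Optum's actual implementation.
# Each "patient" is an invented (age, annual_visit_count) pair.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as initial centers
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

patients = [(25, 1), (30, 2), (28, 1),    # younger, low-utilization cohort
            (72, 9), (68, 11), (75, 8)]   # older, high-utilization cohort
centers, clusters = kmeans(patients, k=2)
print(sorted(len(c) for c in clusters))   # -> [3, 3]: two natural cohorts
```

In a real setting the features would be richer (diagnoses, utilization history, social determinants) and a production library would replace this hand-rolled loop, but the goal is the same: discover natural cohorts that can be matched to tailored care models.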

HL: How can healthcare audit the AI it's starting to consume and use that to drive improvement?

Chornenky: You're right to make the connection that the way we approach the risks involved in deploying AI applications can be an important opportunity to drive improvement. AI governance is an emerging field that can leverage industry frameworks like Responsible AI to facilitate auditability and mitigate regulatory and reputational risks. A more technical framework, MLOps, can help mitigate technical and model lifecycle risks. When done correctly, AI governance in healthcare helps to improve access to care and advance health equity. Without it, AI applications run the risk of actually amplifying existing healthcare disparities.

I’m really encouraged that healthcare leaders are starting to understand that AI governance is an important area of investment and that it can help enterprises identify and mitigate the technical, regulatory, and financial risks posed by AI. 

It’s also fascinating how rapidly innovation has been evolving in this field, with more and more startups and AI enterprise companies launching new offerings for Responsible AI, MLOps, and bias and fairness assessments. The more forward-thinking health systems are also making investments in internal processes. Mayo Clinic, for example, has recently stood up a governance model they refer to as the “AI Translation Assessment” process, led by a distinguished group of experts.

At the federal level, the FDA is developing regulatory guidelines for Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. NIST is developing a new AI Risk Management Framework. There is a growing body of emerging legislation across the US and in the EU that will have a significant impact on how we develop and deploy AI. I participate in several industry groups focused on developing best practices and standards for AI governance in healthcare and strategies for regulatory engagement.

HL: What learnings did you take from your time at the White House?

Chornenky: Serving in a non-political role across Republican and Democratic administrations, especially during the pandemic, gave me a lot of perspective on how our federal government works and how to successfully formulate and advance national policy, particularly in the healthcare and technology sectors.

As a senior advisor and a presidential innovation fellow, I was initially focused on advising our US chief technology officer on national AI strategy and our federal chief information officer on federal AI strategy. National AI strategy is how we think about cultivating growth and innovation around AI/ML technologies in the private sector and the markets, scaling up investment in R&D and academic partnerships, and building trust in these technologies among the American people. This is also where we start getting into AI ethics and Responsible AI, or trustworthy AI.

Federal AI strategy is about how we stand up better data science capabilities across the federal government. This is everything from vanilla IT cloud migration, to upskilling the existing workforce with data literacy and analytics curricula, to thinking about data science as a career path: creating new job codes with OPM, and new processes and programs for recruiting, engaging, and retaining data science talent. I had a portfolio of agencies I worked with to advance innovation, AI governance models, and AI capability maturity roadmaps.

As part of this work, we launched a new federal AI community of practice that was meant to bring together leaders and practitioners from across federal agencies to share best practices and do the type of collaborative work that they might not always be able to do within their usual, perhaps more constrained, agency environments. I also helped manage a government coordination committee that produced the executive order promoting the use of trustworthy AI in the federal government. This was a very important initiative that was bipartisan in nature, and the government is implementing the provisions of that executive order today.

When the pandemic hit, I was able to apply my training as an epidemiologist to help coordinate response efforts across federal agencies and our private sector partners, including technology companies, health systems, and payers. 

What turned out to be more consequential for me, however, was my background in telehealth. I previously had an AI-driven telehealth and smart-scheduling company out of Palo Alto. Through that work I got to know everybody in the industry, the CEOs of the larger telehealth companies, the different industry associations and who led them, and top law firms working on telehealth regulatory issues around the country. So I really ended up in a unique position to pull all of that together and very quickly formulate and advance a national strategy on telehealth and how we were going to work across federal agencies with our private sector partners to make telehealth accessible to as many Americans as quickly as possible. 

I think probably the biggest silver lining, if you will, of the pandemic was that it accelerated telehealth adoption and access to virtual care for Americans across the board. It not only mitigated the risk of spreading infectious disease, but also helped ensure continuity of care for non-COVID-related cases.

There was a tremendous amount of work done from an administrative and a policy perspective. Within just a few weeks, we put out over 50 waivers to enable telehealth, a couple dozen new billing codes, and a new, modern website to help patients and providers adopt telehealth in safe ways. We also convened a telehealth innovation summit, which was a great way to celebrate a lot of the work that had been done, particularly out of the deputy secretary's office at HHS and with our private sector partners, but more importantly, to align on next steps to continue advancing telehealth adoption and expanding access to care for all Americans.

Editor's note: Part 2 of this HealthLeaders interview with Optum SVP and chief AI officer Dennis Chornenky will be posted on Thursday, Sept. 29.

“Overall, we are looking at anything that can help us improve patient outcomes, advance clinical innovation, and reduce costs.”

Scott Mace is a contributing writer for HealthLeaders.


Optum is applying data-driven insights from AI to prevent disease through earlier interventions and more personalized care models.

AI governance involves new technical frameworks, such as Responsible AI and MLOps, to mitigate the technical and lifecycle risks of using AI in healthcare.

In his previous role serving under both Presidents Trump and Biden, Chornenky helped coordinate pandemic response and advanced national strategies on telehealth and AI.
