More transparency, better human curation, and standards to control privacy were some of the solutions presented at the Precision Medicine World Conference to address healthcare systems' hesitancy to implement AI.
Amid advances in precision medicine, healthcare faces the twin challenges of curating patient data and tailoring its use to drive genomics-powered breakthroughs.
That was the takeaway from the AI & data sciences track of last week’s Precision Medicine World Conference in Santa Clara, California.
"There aren't a lot of physicians saying, 'Bring me more AI,' " said John Mattison, MD, emeritus CMIO and assistant medical director of Kaiser Permanente. "Every physician is saying bring me a safer and more efficient way to deliver care."
Mattison recalled his prolonged conversations with the original developers of IBM's Watson AI technology. "Initially they had no human curation whatsoever," he said. "As Stanford has published over and over again, most published medical literature is subsequently refuted or ignored, because it's wrong. The original Watson approach was pure machine curation of reported literature without any human curation."
But human curation is not without its own biases. Watson's value to Kaiser was further eroded by Watson's focus on oncology patient data from Memorial Sloan Kettering Cancer Center and MD Anderson Cancer Center, Mattison said.
"I don't really want curation from those two institutions, because they're fee for service, and you get all these biases. The amount of money the drug companies spend on lobbying doctors to use their more expensive novel drugs is remarkably influential. If you're involved in clinical care, you want to take the best output of machine learning and you want to make sure that you have good human curation," which in Kaiser's case, emphasizes value-based care over fee-for-service, he added.
A key issue in the human curation of machine learning and AI is how transparent that curation is, and how accessible the authoring environment for it is, so that clinicians can make appropriate substitutions for their own requirements, Mattison said.
Revealing how patient data will be used
Health systems currently face a challenge: machine learning and AI companies approach them while still in stealth mode, without being up-front about how and where their technology will share patient data, making it difficult for chief data officers to introduce the technology to the health system.
"Using [the patient data] for some commercial, unexpected purpose is very different than using it for the purpose that you have agreed with the health system that you're going to be using it with," said Cora Han, JD, chief health data officer with UC Health, the umbrella organization for UCSF, UCLA, UC Irvine, UC Davis, UC San Diego, and UC Riverside health systems.
A recurring theme during the conference was the need for a third party to provide trusted certification that machine learning and AI algorithms are free from bias, such as confirmation bias or ascertainment bias, the latter arising when algorithms are based on a cohort of patients that does not represent the entire population served by the health system.
"We have no certification groups right now that certify these things as being fair," said Atul Butte, MD, director of UCSF's Bakar Computational Health Sciences Institute. "Imagine a world in five to 10 years where we're only going to buy or license methods or algorithms that have been certified as being fair in our population, in the University of California."
UCLA Health has met or exceeded the goal of representing its own demographics within Atlas, the system's community health initiative that "aims to recruit 150,000 patients across the health system with the goal of creating California's largest genomic resource that can be used for translational medicine," according to the UCLA Health website.
"We are a far cry from [meeting] L.A. county" demographics, said Clara Lajonchere, PhD, deputy director of the UCLA Institute for Precision Health. Currently, 15% of Atlas patients are Latino, and 6%–7% are African-American. "While those rates exceed that of some of the other large-scale studies, it still really underlies how critical diversity is going to be."
Alliances drive machine learning and AI-fueled innovation
Recent alliances, such as the Google/Ascension agreement and the Mayo Clinic partnership with startup nference for drug development, are further enabling the volume, velocity, and variety of data that will drive machine learning and AI innovations in healthcare, Han said.
HIPAA, which has enabled business associates such as nference to safely enter patient data-sharing relationships with providers such as Mayo, can work against the principle of transparency. "If a tech company signs a business associate agreement (BAA) with a hospital system, [outsiders] don't get to see that contract," Butte said. "We could take it on faith that all the right terms were put in that contract, but sometimes just naming two entities in a sentence seems sinister and ominous in some ways."
Health systems with more than 100 years of trust associated with their brand find themselves partnering with startups that have little or no such trust, creating additional tension in the healthcare system.
In addition, concerns linger that deidentified data could be reidentified through the course of its use and sharing by innovative startups.
"Whole genomes, it's hard to deidentify those," Han said. "These are issues that we will be working through."
“We just need to develop a set of standards about how privacy is controlled,” said Brook Byers, founder and partner with Kleiner Perkins, a Silicon Valley venture capital firm.
Scott Mace is a contributing writer for HealthLeaders.