
New Partnership Explores Nexus of Digital Health and Patient Safety

Analysis | By Christopher Cheney | May 09, 2019

Artificial intelligence and natural language processing have the potential to boost patient safety.

A million-dollar partnership between University of California San Francisco and The Doctors Company is set to explore the intersection of digital health and patient safety.

The shift from paper-based information systems to digital formats has generated reams of data that digital health tools can use to augment clinical judgment and improve patient safety.

"Artificial intelligence and algorithms can be used to help physicians and nurses select the right assessment information to gather and guide selection of medicine or therapy," Kerin Bashaw, senior vice president of patient safety and risk management at The Doctors Company, told HealthLeaders this week.

"The evidence indicates that—on the whole—we are practicing safer care because we have digital tools in place," Julia Adler-Milstein, PhD, an associate professor at UCSF School of Medicine, told Healthleaders.

The partners are well-matched, Bashaw said. "The Doctors Company is a leader in medical malpractice, so we have been a thought leader in patient safety, and UCSF is a leader in medicine and medical education."

"Malpractice claims are the ultimate data when things have gone wrong," Adler-Milstein said. "This allows us to try to help solve the problems that involve high patient risk, where there is actual harm. That is data that is very hard to come by."

Digital health safety opportunities

Bashaw and Adler-Milstein said artificial intelligence (AI) is presenting several opportunities to improve patient safety.

  • Using natural language processing (NLP) to review clinical charts from the previous day and check for omissions in patient assessments (illustrated in the sketch after this list)
  • Using AI to review notes and predict risk of harm
  • Using technology to boost clinical documentation with chart reviews, information integrity, and diagnosis support
  • Designing digital tools that accommodate the complexity of care while also supporting the ways teams communicate and interact with each other
  • Embedding AI algorithms into frontline clinical decision making
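
To make the first item above concrete, here is a minimal, hypothetical sketch of how a previous-day chart review might flag notes that lack documentation of required assessments. The checklist terms, function name, and sample note are illustrative assumptions rather than part of the UCSF or The Doctors Company work, and a production system would rely on a clinical NLP pipeline instead of simple phrase matching.

```python
# Hypothetical illustration only: flag prior-day notes that lack
# documentation of required nursing assessments using phrase matching.
# Real chart-review tools would use clinical NLP, not raw keywords.

REQUIRED_ASSESSMENTS = {
    "pain": ("pain score", "pain assessment"),
    "fall risk": ("fall risk", "morse fall scale"),
    "skin integrity": ("skin assessment", "braden scale"),
}

def find_missing_assessments(note_text: str) -> list[str]:
    """Return the required assessments with no matching phrase in the note."""
    text = note_text.lower()
    return [
        name
        for name, phrases in REQUIRED_ASSESSMENTS.items()
        if not any(phrase in text for phrase in phrases)
    ]

if __name__ == "__main__":
    note = "Pain score 3/10 this morning. Braden scale 18, skin intact."
    print(find_missing_assessments(note))  # -> ['fall risk']
```

Even this toy version reflects the workflow described here: the tool surfaces a possible documentation gap for a clinician to review rather than making the judgment itself.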

Limits of artificial intelligence

Technology is not going to replace physician judgment, Adler-Milstein said.

"We'll probably never get to a state where we would rely wholly on algorithms. There is always going to be a combination of algorithmic input and clinical judgment. We are not headed toward a healthcare system where we won't have doctors anymore."

It is crucial to strike the right balance between clinical judgment and algorithmic input, she said. The key is weighing algorithmic evidence alongside all the other factors a clinician considers.

"If you think about the number of clinical decisions that are made today and how many have had input from artificial intelligence or an algorithm, we're probably at less than 1%. We are in the early days of finding ways clinical decision making can be supplemented or augmented with algorithms. Where you see it most often today is in image analysis; for example, detection of pulmonary embolism."

Christopher Cheney is the CMO editor at HealthLeaders.


KEY TAKEAWAYS

The shift from paper-based information systems to digital formats has generated a wealth of data.

Technology can support patient assessment and clinical decision making.

For the foreseeable future, artificial intelligence will not replace clinical judgment.

