A new study suggests that automated methods can be used to identify findings in radiology reports.
Before physicians and researchers earned their degrees and titles, they all had to do the same thing: learn. That's also true for artificial intelligence (AI) systems.
If AI is to live up to its potential for performing tasks such as helping radiologists interpret imaging studies, researchers must determine the best ways for machines to learn how to do so.
A group of researchers has just published a study in the journal Radiology that examined the best ways for computer software to be "taught" the difference between normal and abnormal X-ray, CT scan, or MRI findings. Such a building block is needed to eventually develop AI tools to interpret scans and diagnose conditions.
The researchers used machine learning techniques, including natural language processing algorithms, to identify clinical concepts in radiologist reports for CT scans.
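The article does not describe the study's pipeline in detail, but a common starting point for labeling reports is a rule-based pass over the text. The sketch below is a hypothetical, simplified illustration (the term lists, negation handling, and function name are assumptions, not the study's method): it flags a report as abnormal if it mentions a finding term outside a negated phrase.

```python
import re

# Hypothetical term lists -- a real system would use a curated clinical
# lexicon and a proper negation detector, not these toy patterns.
ABNORMAL_TERMS = re.compile(
    r"\b(hemorrhage|fracture|mass|edema|infarct|lesion)\b", re.IGNORECASE
)
NEGATIONS = re.compile(r"\b(no|without|negative for)\b[^.]*", re.IGNORECASE)


def label_report(text: str) -> str:
    """Label a report 'abnormal' if it mentions a finding term
    outside of a negated phrase, else 'normal'."""
    # Drop negated phrases (e.g., "no evidence of hemorrhage") first.
    cleaned = NEGATIONS.sub("", text)
    return "abnormal" if ABNORMAL_TERMS.search(cleaned) else "normal"


print(label_report("No evidence of hemorrhage or mass. Normal study."))
print(label_report("Acute infarct in the left MCA territory."))
```

Rules like these are brittle, which is why the study turned to machine learning and natural language processing to learn the labeling from the reports themselves.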
Developing good labels
"The necessary, foundational step is to have good labels," senior author Eric Oermann, MD, instructor in the department of neurosurgery at the Icahn School of Medicine at Mount Sinai in the New York metropolitan area, tells HealthLeaders Media.
"Normally in computer science we can get a lot of images really easily," Oermann says. The question is, "How do we get good labels for them?"
To answer that question, the study examined natural language processing as a way to generate good labels for images.
They trained the computer software using 96,303 radiologist reports associated with head CT scans performed at The Mount Sinai Hospital and Mount Sinai Queens between 2010 and 2016.
According to the study, "To characterize the 'lexical complexity' of radiologist reports, researchers calculated metrics that reflected the variety of language used in these reports and compared these to other large collections of text: thousands of books, Reuters news stories, inpatient physician notes, and Amazon product reviews."
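The study does not spell out its metrics in this article, but "variety of language" is often measured with simple statistics such as vocabulary size and type-token ratio. The sketch below is an assumed illustration of that kind of metric, not the study's actual calculation:

```python
from collections import Counter


def lexical_metrics(text: str) -> dict:
    """Compute simple lexical-variety statistics for a text:
    token count, vocabulary size (unique words), and type-token ratio."""
    tokens = text.lower().split()
    types = Counter(tokens)
    return {
        "tokens": len(tokens),
        "vocabulary": len(types),
        # Higher ratio = more varied wording; lower = more repetitive.
        "type_token_ratio": len(types) / len(tokens) if tokens else 0.0,
    }


# Radiology reports tend to reuse a small set of stock phrases,
# so their type-token ratio is typically low relative to prose.
print(lexical_metrics("no acute hemorrhage no mass effect no midline shift"))
```

Comparing such numbers across corpora (reports vs. books, news, or reviews) gives a rough sense of how constrained radiology language is.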
Alexandra Wilson Pecci is an editor for HealthLeaders.