A new study suggests that automated methods can be used to identify findings in radiology reports.
Before physicians and researchers earned their degrees and titles, they all had to do the same thing: Learn. That's also true for artificial intelligence (AI) systems.
If AI is to live up to its potential for performing tasks such as helping radiologists interpret imaging studies, researchers must determine the best ways for machines to learn how to do so.
A group of researchers has just published a study in the journal Radiology that examined the best ways for computer software to be "taught" the difference between normal and abnormal X-ray, CT scan, or MRI findings. Such a building block is needed to eventually develop AI tools to interpret scans and diagnose conditions.
The researchers used machine learning techniques, including natural language processing algorithms, to identify clinical concepts in radiologist reports for CT scans.
Developing good labels
"The necessary, foundational step is to have good labels," senior author Eric Oermann, MD, instructor in the department of neurosurgery at the Icahn School of Medicine at Mount Sinai in the New York metropolitan area, tells HealthLeaders Media.
"Normally in computer science we can get a lot of images really easily," Oermann says.
The question is, "how do we get good labels for them?" he says.
To answer that question, their study examined natural language processing as a way to get good labels for images.
They trained the computer software using 96,303 radiologist reports associated with head CT scans performed at The Mount Sinai Hospital and Mount Sinai Queens between 2010 and 2016.
"To characterize the 'lexical complexity' of radiologist reports, researchers calculated metrics that reflected the variety of language used in these reports and compared these to other large collections of text: thousands of books, Reuters news stories, inpatient physician notes, and Amazon product reviews," the study states.
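The study does not spell out its exact metrics here, but a common, simple measure of lexical variety is the type-token ratio: unique words divided by total words. A minimal sketch (the sample texts below are illustrative, not from the study):

```python
def lexical_diversity(text):
    """Type-token ratio: distinct words divided by total words.

    Lower values indicate more repetitive, constrained language,
    as is typical of radiology reports.
    """
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Repetitive, report-like text scores lower than varied everyday prose.
report = "no acute hemorrhage no acute fracture no acute findings"
prose = "the quick brown fox jumps over the lazy sleeping dog"
```

Real corpus comparisons would normalize for text length (type-token ratio falls as documents grow), but the intuition is the same.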
Oermann says that the language of radiology is different from everyday English. Not only is its syntactic structure highly regular, but its lexicon is significantly smaller than that of everyday English.
For a machine, "having a smaller lexicon [makes it] easier for you to learn predictive labels for the written text because the text is simpler," he says.
For example, typical words and phrases in radiology reports are things like, "no abnormal findings noted," "acute," or "right-sided subdural hematoma." When something is going wrong with a patient, "the whole report is reflective of that," Oermann notes.
"When things are abnormal they're really abnormal," he says. "You don't need the most cutting-edge machine learning to get decent results because the signal is pretty strong."
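That "strong signal" point can be illustrated with a deliberately simple model. The sketch below is not the study's method — it is a minimal bag-of-words Naive Bayes classifier, with made-up example reports, showing how distinctive vocabulary alone can separate normal from abnormal findings:

```python
from collections import Counter
import math

class TinyNB:
    """Minimal multinomial Naive Bayes over bag-of-words, add-one smoothing."""

    def fit(self, texts, labels):
        self.counts = {}            # label -> Counter of word frequencies
        self.priors = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.counts.setdefault(label, Counter()).update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        best, best_score = None, float("-inf")
        for label, wc in self.counts.items():
            total = sum(wc.values())
            # Log prior plus smoothed log likelihood of each word
            score = math.log(self.priors[label])
            for w in words:
                score += math.log((wc[w] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

# Toy training reports (illustrative only, not from the study's corpus)
reports = [
    "no abnormal findings noted",
    "no acute findings",
    "acute right-sided subdural hematoma",
    "acute hemorrhage noted",
]
labels = ["normal", "normal", "abnormal", "abnormal"]
model = TinyNB().fit(reports, labels)
```

Even this toy model flags "acute subdural hematoma" as abnormal, because words like "hematoma" and "hemorrhage" appear almost exclusively in abnormal reports. The study's actual pipeline was more sophisticated, but the underlying signal is similarly lopsided.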
Machine learning can also automatically learn some of the meaning of words based on the frequency with which they appear next to other words, such as "baloney" plus "cheese" equals "sandwich."
"You can get a kind of semantic algebra to your words," Oermann says.
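The idea behind that "semantic algebra" is distributional semantics: words that occur in similar contexts end up with similar vector representations (the principle underlying methods like word2vec, which the study's word embeddings build on). A bare-bones sketch using raw co-occurrence counts and cosine similarity, with an invented miniature corpus:

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of its neighbors within a window."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        toks = sent.lower().split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i:
                    vecs[w][toks[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tiny illustrative corpus: "hemorrhage" and "bleed" share contexts.
corpus = [
    "acute subdural hemorrhage noted",
    "acute subdural bleed noted",
    "no abnormal findings noted",
]
vecs = cooccurrence_vectors(corpus)
```

Here "hemorrhage" and "bleed" get identical context vectors, so their cosine similarity is 1.0, while "findings" shares only one neighbor with them. Trained embeddings refine this same intuition enough to support analogy arithmetic over word vectors.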
Ultimately, the techniques used in this study achieved an accuracy of 91%, demonstrating that it is possible to automatically identify clinical concepts in radiology reports.
"The ultimate goal is to create algorithms that help doctors accurately diagnose patients," first author John Zech, a medical student at the Icahn School of Medicine at Mount Sinai, said in a statement.
Oermann says showing that the machines can generate the labels needed for using AI techniques in radiology means that physicians don't have to label the reports manually. That's important, since doing so would be expensive and time-consuming.
Instead, "you suddenly have this scalable technique for labeling CAT scans," he says.
He also adds that he sees such technology as augmenting physicians' workflow in a positive way, such as taking a "quick and dirty first pass" at radiology results and helping to "bring the most urgent things to your attention first."
"A lot of people recognize that the future of medicine lies with machine learning," Oermann says.
Alexandra Wilson Pecci is an editor for HealthLeaders.