
Artificial Intelligence Tool Gets High Marks in Reading Mammograms

— Breast cancer screening results show AI's mettle in real-world dataset

MedpageToday

CHICAGO -- An artificial intelligence program designed to assist radiologists in reading breast cancer screening scans proved to be almost as good as trained doctors in a 3-year test, a researcher reported.

Among the 12,120 cases read at a single center during the 2016-2017 period, 100 of which were positives detected by human readers, the artificial intelligence program achieved a sensitivity of 91%, a specificity of 91.11%, and a recall rate of 9.56%, reported Gerald Lip, MD, of the University of Aberdeen in Scotland, at the annual meeting of the Radiological Society of North America.

The findings were similar for the 2017-2019 period, which included 229 positives among 27,824 screens: the artificial intelligence algorithm achieved a sensitivity of 88.21%, a specificity of 90.92%, and a recall rate of 9.73%.
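For readers unfamiliar with how these figures relate to one another, the sketch below shows how sensitivity, specificity, and recall rate fall out of the case-level counts in a screening round. The counts used here are invented for illustration and are not the study's underlying numbers.

```python
# Minimal sketch: how sensitivity, specificity, and recall rate relate to
# case-level counts in a screening cohort. The counts below are made up for
# illustration; they are NOT the study's data.

def screening_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute standard screening metrics from a 2x2 confusion matrix."""
    total = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),      # share of true cancers flagged
        "specificity": tn / (tn + fp),      # share of cancer-free screens passed
        "recall_rate": (tp + fp) / total,   # share of all screens called back
    }

# Hypothetical counts loosely shaped like a screening round
# (roughly 1% cancer prevalence, about 10% of screens recalled).
example = screening_metrics(tp=88, fn=12, fp=960, tn=8940)
for name, value in example.items():
    print(f"{name}: {value:.2%}")
```

Note that the recall rate counts every screen flagged for further workup, whether or not cancer is ultimately confirmed, which is why a tool can post high sensitivity and specificity while still recalling roughly one in ten women.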

"The performance of this artificial intelligence tool shows a high degree of sensitivity, specificity, and an acceptable recall rate," Lip concluded. "The findings demonstrate how an artificial intelligence tool would perform as a stand-alone reader and its potential to contribute to the double reading workflow."

The researchers attempted to demonstrate how the artificial intelligence tool would work in a real-world environment. To do that, the research team interrogated two sets of mammographic images stored within a trusted research environment at a university. The dataset comprised 3 years of consecutively acquired breast cancer screening activity at a mid-sized center, Lip explained.

Data linkage was performed with a paperless breast screening reporting system, which allowed for complete anonymization of data to facilitate external analysis.

The artificial intelligence tool analysis followed a standard machine-learning format with a validation set of 12,120 cases (2016-2017) and a final test set of 27,824 cases (2017-2019). "No model alterations or re-training were conducted during or prior to the artificial intelligence evaluation," Lip reported.
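As a rough illustration of what stand-alone evaluation of a frozen model looks like in practice, the sketch below scores each screening episode once, applies a fixed decision threshold, and tallies the confusion-matrix counts. The stub model, the 0.5 threshold, and the case fields are hypothetical placeholders, not the actual tool or data pipeline used in the study.

```python
# Rough sketch of a stand-alone evaluation of a frozen screening model.
# The stub model, 0.5 threshold, and Case fields are hypothetical placeholders;
# the actual tool and data pipeline used in the study are not described here.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Case:
    study_id: str
    images: List[str]              # paths to the mammographic views for one episode
    screen_detected_cancer: bool   # pathology-confirmed; interval cancers excluded

class StubModel:
    """Toy stand-in so the sketch runs; a real tool would score the images."""
    def predict(self, images: List[str]) -> float:
        return 0.2                 # fixed suspicion score for illustration

def evaluate_standalone(model, cases: List[Case], threshold: float = 0.5) -> Dict[str, int]:
    """Score every case once (inference only, no re-training) and tally the 2x2 counts."""
    counts = {"tp": 0, "fn": 0, "fp": 0, "tn": 0}
    for case in cases:
        flagged = model.predict(case.images) >= threshold
        if case.screen_detected_cancer:
            counts["tp" if flagged else "fn"] += 1
        else:
            counts["fp" if flagged else "tn"] += 1
    return counts

demo = [Case("demo-1", ["LCC.dcm", "LMLO.dcm"], screen_detected_cancer=False)]
print(evaluate_standalone(StubModel(), demo))   # {'tp': 0, 'fn': 0, 'fp': 0, 'tn': 1}
```

Sensitivity, specificity, and recall rate then follow from these counts as in the earlier sketch, with the model used exactly as shipped for both the validation and test sets.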

The researchers defined positive cases as pathologically confirmed screen-detected breast cancers, not including cancers detected between screening rounds -- so-called interval cases.

"Use of a trusted research environment enabled close collaboration between academic, industry, and clinical teams to enable a large-scale, real-world evaluation of an artificial intelligence tool," Lip said.

"We are coming to the point in development of computer-assisted medical technology that we soon will not even mention that 'there is AI inside,'" commented Elliot Fishman, MD, of Johns Hopkins University in Baltimore. "AI is just becoming ubiquitous. At a minimum, artificial intelligence provides a second reader, and I think we will see that your accuracy will surely increase."

While "artificial intelligence is always going to have limitations," Fishman told 51˶, "as a helper, I think that artificial intelligence is spectacular."

He suggested that studies such as the work in Scotland will continue to push artificial intelligence into common radiological practice.

Ed Susman is a freelance medical writer based in Fort Pierce, Florida, USA.

Disclosures

Lip disclosed no relationships with industry.

Fishman disclosed relationships with Siemens and GE.

Primary Source

Radiological Society of North America

Lip G, et al "Screening analysis with mammographic AI of a full three year round: Standalone performance in a real world study in a novel trusted research environment" RSNA 2021.