Predicting AI Performance

How can you predict whether an AI algorithm will fall short for a particular clinical use case, such as detecting cancer? Researchers writing in Radiology took a crack at this conundrum by developing what they call an “uncertainty quantification” metric to predict when an AI algorithm might be less accurate.

AI is rapidly moving into wider clinical use, with a number of exciting studies published in just the last few months showing how AI can help radiologists interpret screening mammograms or direct which women should get supplemental breast MRI.

But AI isn’t infallible. And unlike a human radiologist, who can flag when they’re less confident in a particular diagnosis, an AI algorithm doesn’t have a built-in hedging mechanism.

So researchers from Denmark and the Netherlands decided to build one. They took publicly available AI algorithms and tweaked their code so they produced “uncertainty quantification” scores with their predictions. 
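
The paper’s exact scoring method isn’t spelled out here, so the following is only a minimal sketch of one common uncertainty quantification approach – predictive entropy over a model’s output probabilities – combined with the 80/20 certain/uncertain split the researchers describe below. The probabilities and cutoff are illustrative assumptions, not the study’s code.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Uncertainty score per case: entropy of the predicted class probabilities.
    probs has shape (n_cases, n_classes); higher entropy means less certainty."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Hypothetical softmax outputs from a modified AI model (benign vs. malignant)
probs = np.array([
    [0.95, 0.05],  # confidently benign
    [0.55, 0.45],  # borderline call
    [0.10, 0.90],  # confidently malignant
])

uncertainty = predictive_entropy(probs)

# Mirror the study's split: the 80% of predictions with the lowest uncertainty
# are labeled "certain," the remaining 20% "uncertain."
cutoff = np.percentile(uncertainty, 80)
is_certain = uncertainty <= cutoff
```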

They then tested how well the scores predicted AI performance in a dataset of 13k images for three common tasks covering some of the deadliest types of cancer:

1) detecting pancreatic ductal adenocarcinoma on CT
2) detecting clinically significant prostate cancer on MRI
3) predicting pulmonary nodule malignancy on low-dose CT 

Researchers classified the 80% of AI predictions with the highest certainty as “certain” and the remaining 20% as “uncertain,” then compared AI’s accuracy in the two groups, finding … 

  • AI accuracy was significantly higher in the “certain” group than the “uncertain” group for pancreatic cancer (80% vs. 59%), prostate cancer (90% vs. 63%), and pulmonary nodule malignancy prediction (80% vs. 51%)
  • AI accuracy was comparable to clinicians when its predictions were “certain” (80% vs. 78%, P=0.07), but much worse when “uncertain” (50% vs. 68%, P<0.001)
  • Using AI to triage “uncertain” cases produced overall accuracy improvements for pancreatic and prostate cancer (+5%) and lung nodule malignancy prediction (+6%) compared to a no-triage scenario

How would uncertainty quantification be used in clinical practice? It could play a triage role, deprioritizing radiologist review of easier cases while helping them focus on more challenging studies. It’s a concept similar to the MASAI study of mammography AI.

The Takeaway

Like MASAI, the new findings present exciting new possibilities for AI implementation. They also present a framework within which AI can be implemented more safely by alerting clinicians to cases in which AI’s analysis might fall short – and enabling humans to step in and pick up the slack.  

Tipping Point for Breast AI?

Have we reached a tipping point when it comes to AI for breast screening? This week another study was published – this one in Radiology – demonstrating the value of AI for interpreting screening mammograms. 

Of all the medical imaging exams, breast screening probably could use the most help. Reading mammograms has been compared to looking for a needle in a haystack, with radiologists reviewing thousands of images before finding a single cancer. 

AI could help in multiple ways, either at the radiologist’s side during interpretation or by reviewing mammograms in advance, triaging the ones most likely to be normal while reserving suspicious exams for closer attention by radiologists (indeed, that was the approach used in the MASAI study in Sweden in August).

In the new study, UK researchers in the PERFORMS trial compared the performance of Lunit’s INSIGHT MMG AI algorithm to that of 552 radiologists in 240 test mammogram cases, finding that …

  • AI was comparable to radiologists for sensitivity (91% vs. 90%, P=0.26) and specificity (77% vs. 76%, P=0.85). 
  • There was no statistically significant difference in AUC (0.93 vs. 0.88, P=0.15)
  • AI and radiologists were also statistically comparable on the study’s other performance metrics

Like the MASAI trial, the PERFORMS results show that AI could play an important role in breast screening. To that end, a new paper in European Journal of Radiology proposes a roadmap for implementing mammography AI as part of single-reader breast screening programs, offering suggestions on prospective clinical trials that should take place to prove breast AI is ready for widespread use in the NHS – and beyond. 

The Takeaway

It certainly does seem that AI for breast screening has reached a tipping point. Taken together, PERFORMS and MASAI show that mammography AI works well enough that “the days of double reading are numbered,” at least where it is practiced in Europe, as noted in an editorial by Liane Philpotts, MD.

While double-reading isn’t practiced in the US, the PERFORMS protocol could be used to supplement non-specialized radiologists who don’t see that many mammograms, Philpotts notes. Either way, AI looks poised to make a major impact in breast screening on both sides of the Atlantic.

Radiation and Cancer Risk

New research on the cancer risk of low-dose ionizing radiation could have disturbing implications for those who are exposed to radiation on the job – including medical professionals. In a new study in BMJ, researchers found that nuclear workers exposed to occupational levels of radiation had a cancer mortality risk that was higher than previously estimated.

The link between low-dose radiation and cancer has long been controversial. Most studies on the radiation-cancer connection are based on Japanese atomic bomb survivors, many of whom were exposed to far higher levels of radiation than most people receive over their lifetimes – even those who work with ionizing radiation. 

The question is whether that data can be extrapolated to people exposed to much lower levels of radiation, such as nuclear workers, medical professionals, or even patients. To that end, researchers in the International Nuclear Workers Study (INWORKS) have been tracking low-dose radiation exposure and its connection to mortality in nearly 310k people in France, the UK, and the US who worked in the nuclear industry from 1944 to 2016.

INWORKS researchers previously published studies showing low-dose radiation exposure to be carcinogenic, but the new findings in BMJ offer an even stronger link. For the study, researchers tracked workers’ radiation exposure with dosimetry badges, followed cancer mortality over time, and calculated rates of death from solid cancer relative to cumulative exposure, finding: 

  • Excess mortality risk from solid cancer was 52% per Gy of cumulative exposure
  • Individuals who received the occupational radiation limit of 20 mSv per year for five years would have a 5.2% increase in solid cancer mortality (the arithmetic behind this figure is sketched after this list)
  • The association between low-dose radiation exposure and cancer mortality was linear, meaning excess risk was present even at lower levels of exposure 
  • The dose-response association seen in the study was even steeper than in studies of atomic bomb survivors (52% vs. 32% per Gy)
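
The 5.2% figure is a straight linear extrapolation from the 52%-per-Gy estimate, treating five years at the 20 mSv annual limit as roughly 0.1 Gy of low-LET dose (an approximation for illustration, not a calculation taken from the paper):

$$
20~\mathrm{mSv/yr} \times 5~\mathrm{yr} = 100~\mathrm{mSv} \approx 0.1~\mathrm{Gy},
\qquad
0.1~\mathrm{Gy} \times 52\%/\mathrm{Gy} = 5.2\%
$$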

The Takeaway

Even though the INWORKS study was conducted on nuclear workers rather than medical professionals, the findings could have implications for those who might be exposed to medical radiation, such as interventional radiologists and radiologic technologists. The study will undoubtedly be examined by radiation protection organizations and government regulators; the question is whether it leads to any changes in rules on occupational radiation exposure.

How Vendors Sell AI

Better patient care is the main selling point used by AI vendors when marketing neuroimaging algorithms, followed closely by time savings. Farther down the list of benefits are lower costs and increased revenue for providers. 

So says a new analysis in JACR that takes a close look at how FDA-cleared neuroimaging AI algorithms are marketed by vendors. It also includes several warning signs for both AI developers and clinicians.

AI is the most exciting technology to arrive in healthcare in decades, but questions persist about whether AI developers are overhyping the technology. In the new analysis, researchers focused on marketing claims made for 59 AI neuroimaging algorithms cleared by the FDA from 2008 to 2022. They analyzed FDA summaries and vendor websites, finding:

  • For 69% of algorithms, vendors highlighted an improvement in quality of patient care, while time savings for clinicians were touted for 44%. Only 16% of algorithms were promoted as lowering costs, while just 11% were positioned as increasing revenue
  • 50% of cleared neuroimaging algorithms were related to detection or quantification of stroke; of these, 41% were for intracranial hemorrhage, 31% for stroke brain perfusion, and 24% for detection of large vessel occlusion 
  • 41% of the algorithms were intended for use with non-contrast CT scans, 36% with MRI, 15% with CT perfusion, 14% with CT angiography, and the rest with MR perfusion and PET
  • 90% of the algorithms studied were cleared in the last five years, and 42% since last year

The researchers further noted two caveats in AI marketing: 

  • There is a lack of publicly available data to support vendor claims about the value of their algorithms. Better transparency is needed to create trust and clinician engagement.
  • The single-use-case nature of many AI algorithms raises questions about their economic viability. Many different algorithms would have to be implemented at a facility to ensure “a reasonable breadth of triage” for critical findings, and the financial burden of such integration is unclear.

The Takeaway

The new study offers intriguing insights into how AI algorithms are marketed by vendors, and how these efforts could be perceived by clinicians. The researchers note that financial pressure on AI developers may cause them to make “unintentional exaggerated claims” to recoup the cost of development; it is incumbent upon vendors to scrutinize their marketing activities to avoid overhyping AI technology.

Mammography AI’s Leap Forward

A new study out of Sweden offers a resounding vote of confidence in the use of AI for analyzing screening mammograms. Published in The Lancet Oncology, researchers found that AI cut radiologist workload almost by half without affecting cancer detection or recall rates.

AI has been promoted as the technology that could save radiology from rising imaging volumes, growing burnout, and pressure to perform at a higher level with fewer resources. But many radiology professionals remember similar promises made in the 1990s around computer-aided detection (CAD), which failed to live up to the hype.

Breast screening presents a particular challenge in Europe, where clinical guidelines call for all screening exams to be double-read by two radiologists – leading to better sensitivity but also imposing a higher workload. AI could help by working as a triage tool, enabling radiologists to only double-read those cases most likely to have cancer.

In the MASAI study, researchers are assessing AI for breast screening in 100k women in a population-based screening program in Sweden, with mammograms analyzed by ScreenPoint’s Transpara version 1.7.0 software. In an interim analysis, they looked at results for 80k mammography-eligible women ages 40-80. 

The Transpara software applies a 10-point score to mammograms; in MASAI those scored 1-9 are read by a single radiologist, while those scored 10 are read by two breast radiologists. This technique was compared to double-reading, finding that:

  • AI reduced the mammography reading workload by almost 37k screening mammograms, or 44%
  • AI had a higher cancer detection rate per 1k screened participants (6.1 vs. 5.1) although the difference was not statistically significant (P=0.052)
  • Recall rates were comparable (2.2% vs. 2.0%)

The results demonstrate the safety of using AI as a triage tool, and the MASAI researchers plan to continue the study until it reaches 100k participants so they can measure the impact of AI on detection of interval cancers – cancers that appear between screening rounds.
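
As a rough illustration of the triage rule described above – not ScreenPoint’s or MASAI’s actual code, and assuming the risk score arrives as a simple 1-10 integer – the routing logic amounts to:

```python
def route_mammogram(transpara_score: int) -> str:
    """MASAI-style triage sketch: exams the AI scores 1-9 go to a single reader,
    while the highest-risk exams (score 10) are double-read by two breast radiologists."""
    if not 1 <= transpara_score <= 10:
        raise ValueError("Expected a Transpara risk score from 1 to 10")
    return "double read" if transpara_score == 10 else "single read"

# Example: a screening exam scored 7 goes to one radiologist
print(route_mammogram(7))   # -> "single read"
print(route_mammogram(10))  # -> "double read"
```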

The Takeaway

It’s hard to overestimate the MASAI study’s significance. The findings strongly support what AI proponents have been saying all along – that AI can save radiologists time while maintaining diagnostic performance. The question is the extent to which the MASAI results will apply outside of the double-reading environment, or to other clinical use cases.

Does ‘Automation Neglect’ Limit AI’s Impact?

Radiologists ignored AI suggestions in a new study because of “automation neglect,” a phenomenon in which humans are less likely to trust algorithmic recommendations. The findings raise questions about whether AI really should be used as a collaborative tool by radiologists. 

How radiologists use AI predictions has become a growing area of research as AI moves into the clinical realm. Most use cases see radiologists employing AI in a collaborative role as a decision-making aid when reviewing cases. 

But is that really the best way to use AI? In a paper published by the National Bureau of Economic Research, researchers from Harvard Medical School and MIT explored the effectiveness of radiologist performance when assisted by AI, in particular its impact on diagnostic quality.

They ran an experiment in which they manipulated radiologist access to predictions from the CheXpert AI algorithm for 324 chest X-ray cases, and then analyzed the results. They also assessed radiologist performance with and without clinical context. The 180 radiologists participating in the study were recruited from US teleradiology firms, as well as from a health network in Vietnam. 

It was expected that AI would boost radiologist performance, but instead accuracy remained unchanged:

  • AI predictions were more accurate than two-thirds of the radiologists
  • Yet, AI assistance failed to improve the radiologists’ diagnostic accuracy, as readers underweighted AI findings by 30% compared to their own assessments
  • Radiologists took 4% longer to interpret cases when either AI or clinical context was added
  • Adding clinical context to cases had a bigger impact on radiologist performance than adding AI interpretations

The findings show automation neglect can be a “major barrier” to human-AI collaboration. Interestingly, the new article seems to run counter to a previous study finding that radiologists who received incorrect AI results were more likely to follow the algorithm’s suggestions – against their own judgment. 

The Takeaway

The authors themselves admit the new findings are “puzzling,” but they do have intriguing ramifications. In particular, the researchers suggest that there may be limitations to the collaborative model in which humans and AI work together to analyze cases. Instead, it may be more effective to assign AI exclusively to certain studies, while radiologists work without AI assistance on other cases.

Mayo’s AI Model

SAN DIEGO – What’s behind the slow clinical adoption of artificial intelligence? That question permeated the discussion at this week’s AIMed Global Summit, an up-and-coming conference dedicated to AI in healthcare.

Running June 4-7, this week’s meeting saw hundreds of healthcare professionals gather in San Diego. Radiology figured prominently as the medical specialty with the lion’s share of the over 500 FDA-cleared AI algorithms available for clinical use.

But being available for use and actually being used are two different things. A common refrain at AIMed 2023 was slow clinical uptake of AI, a problem widely attributed to difficulties in deploying and implementing the technology. One speaker noted that less than 5% of practices are using AI today.

One way to spur AI adoption is the platform approach, in which AI apps are vetted by a single entity for inclusion in a marketplace from which clinicians can pick and choose what they want. 

The platform approach is gaining steam in radiology, but Mayo Clinic is rolling the platform concept out across its entire healthcare enterprise. First launched in 2019, Mayo Clinic Platform aims to help clinicians enjoy the benefits of AI without the implementation headache, according to Halim Abbas, senior director of AI at Mayo, who discussed Mayo’s progress on the platform at AIMed. 

The Mayo Clinic Platform has several main features:

  • Each medical specialty maintains its own internal AI R&D team with access to its own AI applications 
  • At the same time, Mayo operates a centralized AI operation that provides tools and services accessible across departments, such as data de-identification and harmonization, augmented data curation, and validation benchmarks
  • Clinical data is made available outside the -ologies, but the data is anonymized and secured, an approach Mayo calls “data behind glass”

Mayo Clinic Platform gives different -ologies some ownership of AI, but centralizes key functions and services to improve AI efficiency and smooth implementation. 

The Takeaway 

Mayo Clinic Platform offers an intriguing model for AI deployment. By removing AI’s implementation pain points, Mayo hopes to ramp up clinical utilization, and Mayo has the organizational heft and technical expertise to make it work (see below for news on Mayo’s new generative AI deal with Google Cloud). 

But can Mayo’s AI model be duplicated at smaller health systems and community providers that don’t have its IT resources? Maybe we’ll find out at AIMed 2024.

Understanding AI’s Physician Influence

We spend a lot of time exploring the technical aspects of imaging AI performance, but little is known about how physicians are actually influenced by the AI findings they receive. A new Scientific Reports study addresses that knowledge gap, perhaps more directly than any other research to date. 

The researchers provided 233 radiologists (experts) and internal and emergency medicine physicians (non-experts) with eight chest X-ray cases each. The CXR cases featured correct diagnostic advice, but were manipulated to show different advice sources (generated by AI vs. by expert rads) and different levels of advice explanations (only advice vs. advice w/ visual annotated explanations). Here’s what they found…

  • Explanations Improve Accuracy – When the diagnostic advice included annotated explanations, both the IM/EM physicians’ and the radiologists’ accuracy improved (+5.66% and +3.41%, respectively).
  • Non-Rads with Explainable Advice Rival Rads – Although the IM/EM physicians performed far worse than rads when given advice without explanations, they were “on par with” radiologists when their advice included explainable annotations (see Fig 3).
  • Explanations Help Radiologists with Tough Cases – Radiologists gained “limited benefit” from advice explanations with most of the X-ray cases, but the explanations significantly improved their performance with the single most difficult case.
  • Presumed AI Use Improves Accuracy – When advice was labeled as AI-generated (vs. rad-generated), accuracy improved for both the IM/EM physicians and radiologists (+4.22% & +3.15%).
  • Presumed AI Use Improves Expert Confidence – When advice was labeled as AI-generated (vs. rad-generated), radiologists were more confident in their diagnosis.

The Takeaway

This study provides solid evidence supporting the use of visual explanations, and bolsters the increasingly popular theory that AI can have the greatest impact on non-experts. It also revealed that physicians trust AI more than some might have expected, to the point where physicians who believed they were using AI made more accurate diagnoses than they would have if they were told the same advice came from a human expert.

However, more than anything else, this study seems to highlight the underappreciated impact of product design on AI’s clinical performance.

Acute Chest Pain CXR AI

Patients who arrive at the ED with acute chest pain (ACP) syndrome end up receiving a series of often-negative tests, but a new MGB-led study suggests that CXR AI might make ACP triage more accurate and efficient.

The researchers trained three ACP triage models using data from 23k MGH patients to predict acute coronary syndrome, pulmonary embolism, aortic dissection, and all-cause mortality within 30 days. 

  • Model 1: Patient age and sex
  • Model 2: Patient age, sex, and troponin or D-dimer positivity
  • Model 3: CXR AI predictions plus Model 2 (a sketch of one possible combination follows this list)
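
The study doesn’t publish its modeling code, but conceptually Model 3 is a clinical-plus-AI classifier. Below is a minimal sketch assuming a logistic regression over age, sex, biomarker positivity, and a CXR AI risk score; the data and variable names are stand-ins, not the MGB dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Stand-in features: age, sex (1 = male), troponin/D-dimer positivity, CXR AI risk score
X = np.column_stack([
    rng.normal(60, 15, n),   # age
    rng.integers(0, 2, n),   # sex
    rng.integers(0, 2, n),   # biomarker positivity
    rng.random(n),           # CXR AI prediction (0-1)
])
y = rng.integers(0, 2, n)    # stand-in 30-day ACP outcome labels

model3 = LogisticRegression(max_iter=1000).fit(X, y)
risk = model3.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, risk), 3))

# In deployment, a probability cutoff would be chosen on validation data to hold
# sensitivity at 99%; patients scoring below it could skip further testing.
```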

In internal testing with 5.7k MGH patients, Model 3 predicted which patients would experience any of the ACP outcomes far more accurately than Models 2 and 1 (AUCs: 0.85 vs. 0.76 vs. 0.62), while maintaining performance across patient demographic groups.

  • At a 99% sensitivity threshold, Model 3 would have allowed 14% of the patients to skip additional cardiovascular or pulmonary testing (vs. Model 2’s 2%).

In external validation with 22.8k Brigham and Women’s patients, poor AI generalizability caused Model 3’s performance to drop dramatically, while Models 2 and 1 maintained their performance (AUCs: 0.77 vs. 0.76 vs. 0.64). However, fine-tuning with BWH’s own images significantly improved the performance of the CXR AI model (AUC: 0.67 to 0.74) and Model 3 (AUC: 0.77 to 0.81); the general site-adaptation idea is sketched after the next bullet.

  • At a 99% sensitivity threshold, the fine-tuned Model 3 would have allowed 8% of BWH patients to skip additional cardiovascular or pulmonary testing (vs. Model 2’s 2%).
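
The paper doesn’t detail its fine-tuning recipe here, so the following is a hedged sketch of the general site-adaptation idea: keep a pretrained CXR backbone frozen and retrain only the classification head on local images. The backbone choice and training loop are assumptions for illustration, not the study’s procedure.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a pretrained CXR model: freeze the feature extractor,
# then fine-tune a fresh classification head on the local site's images.
backbone = models.densenet121(weights="IMAGENET1K_V1")
for param in backbone.parameters():
    param.requires_grad = False
backbone.classifier = nn.Linear(backbone.classifier.in_features, 1)

optimizer = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of local images and binary 30-day outcome labels."""
    optimizer.zero_grad()
    logits = backbone(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```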

The Takeaway

Acute chest pain is among the most common reasons for ED visits, but it’s also a major driver of wasted ED time and resources. Considering that most ACP patients undergo CXR exams early in the triage process, this proof-of-concept study suggests that adding CXR AI could improve ACP diagnosis and significantly reduce downstream testing.

CXR AI’s Screening Generalizability Gap

A new European Radiology study detailed a commercial CXR AI tool’s challenges when used for screening patients with low disease prevalence, bringing more attention to the mismatch between how some AI tools are trained and how they’re applied in the real world.

The researchers used an unnamed commercial AI tool to detect abnormalities in 3k screening CXRs sourced from two healthcare centers (2.2% w/ clinically significant lesions), and had four radiology residents read the same CXRs with and without AI assistance, finding that the AI:

  • Produced a far lower AUROC than in its other studies (0.648 vs. 0.77–0.99)
  • Achieved 94.2% specificity, but just 35.3% sensitivity
  • Detected 12 of 41 pneumonia cases, 3 of 5 tuberculosis cases, and 9 of 22 tumors 
  • Only “modestly” improved the residents’ AUROCs (0.571–0.688 vs. 0.534–0.676)
  • Added 2.96 to 10.27 seconds to the residents’ average CXR reading times

The researchers attributed the AI tool’s “poorer than expected” performance to differences between the data used in its initial training and validation (high disease prevalence) and the study’s clinical setting (high-volume, low-prevalence, screening).

  • More notably, the authors pointed to these results as evidence that many commercial AI products “may not directly translate to real-world practice,” urging providers facing this kind of training mismatch to retrain their AI or change their thresholds, and calling for more rigorous AI testing and trials (one way to re-tune a threshold is sketched below).
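
As one concrete reading of the “change their thresholds” advice (a sketch under assumed names, not a vendor API): a site could re-select the AI’s operating point on its own low-prevalence validation set so that a target sensitivity is preserved, then check what specificity that leaves.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pick_threshold(y_true: np.ndarray, ai_scores: np.ndarray,
                   target_sensitivity: float = 0.9) -> float:
    """Return the highest score cutoff that still reaches the target sensitivity
    on a local validation set (roc_curve returns thresholds in decreasing order)."""
    _, tpr, thresholds = roc_curve(y_true, ai_scores)
    meets_target = tpr >= target_sensitivity
    if not meets_target.any():
        raise ValueError("Target sensitivity is not reachable on this validation set")
    return float(thresholds[np.argmax(meets_target)])
```

At the new cutoff, the site would still need to verify that specificity remains workable for a high-volume screening setting before deploying.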

These results also inspired lively online discussions. Some commenters cited the study as proof of the problems caused by training AI with augmented datasets, while others contended that the AI tool’s AUROC still rivaled the residents and its “decent” specificity is promising for screening use.

The Takeaway

We cover plenty of studies about AI generalizability, but most have explored bias due to patient geography and demographics, rather than disease prevalence mismatches. Even if AI vendors and researchers are already aware of this issue, AI users and study authors might not be, placing more emphasis on how vendors position their AI products for different use cases (or how they train them).
