Nuclear Medicine’s AI Uptake

Nuclear medicine is one of the more venerable medical imaging technologies. Artificial intelligence is one of the newest. How are the two getting on? That question is explored in new point-counterpoint articles in AJR.

Nuclear medicine was an early adopter of computerized image processing, for tasks like image analysis, quantification, and segmentation, giving rise to a cottage industry of niche software developers.

  • But this early momentum hasn’t carried over into the AI age: on the FDA’s list of 694 cleared AI medical applications through July 2023, 76% of the listed devices are classified as radiology, while just four address nuclear medicine and PET.

In the AJR articles, the position that AI in nuclear medicine is more hype than reality is taken by Eliot Siegel, MD, and Michael Morris, MD, who note that software has already been developed for most of the image analysis tasks that nuclear medicine physicians need. 

  • At the same time, Siegel and Morris say the development of AI-type algorithms like convolutional neural networks and transformers has been “relatively slow” in nuclear medicine. 

Why the slow uptake? One big reason is the lack of publicly available nuclear medicine databases for algorithm training. 

  • Also, nuclear medicine’s emphasis on function rather than anatomy means fewer tasks that require detecting subtle structural changes.

On the other side of the coin, Babak Saboury, MD, and Munir Ghesani, MD, take a more optimistic view of AI in nuclear medicine, particularly thanks to the booming growth in theranostics. 

  • New commercial AI applications to guide the therapeutic use of radiopharmaceuticals are being developed, and some have received FDA clearance. 

As for the data shortage, groups like SNMMI are collaborating with agencies and institutions to create registries – such as for theranostics – to help train algorithms. 

  • They note that advances are already underway for AI-enhanced applications such as improving image quality, decreasing radiation dose, reducing imaging time, quantifying disease, and aiding radiation therapy planning. 

The Takeaway
The AJR articles offer a fascinating perspective on an area of medical imaging that’s often overlooked. While nuclear medicine may never have the broad impact of anatomy-based modalities like MRI and CT, growth in exciting areas like theranostics suggests that it will attract AI developers to create solutions for delivering better patient care.

AI Speeds Up MRI Scans

In our last issue, we reported on a new study underscoring the positive return on investment when deploying radiology AI at the hospital level. This week, we’re bringing you additional research that confirms AI’s economic value, this time when used to speed up MRI data reconstruction. 

While AI for medical image analysis has garnered the lion’s share of attention, AI algorithms are also being developed for behind-the-scenes applications like facilitating staff workflow or reconstructing image data. 

  • For example, software developers have created solutions that enable scans to be acquired faster or with less input data (such as at a lower radiation dose) and then upscaled to resemble full-resolution images. 

In the new study in European Journal of Radiology, researchers from Finland focused on whether accelerated data reconstruction could help their hospital avoid the need to buy a new MRI scanner. 

  • Six MRI scanners currently serve their hospital, but the radiology department will be losing access to one of them by the end of the year, leaving them with five. 

They calculated that a 20% increase in capacity per remaining scanner could help them achieve the same MRI throughput at a lower cost; to test that hypothesis they evaluated Siemens Healthineers’ Deep Resolve Boost algorithm. 

  • Deep Resolve Boost uses raw-data-to-image deep learning reconstruction to denoise images and sharply accelerate scan times; a total knee MRI exam can be performed in just two minutes. 

Deep Resolve Boost was applied to 3T MRI scans of 78 patients acquired in the fall of 2023, with the researchers finding that deep learning reconstruction… 

  • Reduced annual exam costs by 399k euros compared to acquiring a new scanner
  • Enabled an overall increase in scanner capacity of 20-32%
  • Had an acquisition cost equal to 10% of the price of a new MRI scanner, leading to a cost reduction of 19 euros per scan
  • Was a lower-cost option than operating five scanners and adding a Saturday shift
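
The exact numbers depend on the Finnish hospital’s own volumes and cost structure, but the underlying arithmetic is simple enough to sketch. Here’s a minimal back-of-the-envelope calculation in Python; the exam volumes, scanner price, and software cost are illustrative assumptions, not figures from the study:

```python
# Hypothetical comparison: buy a new MRI scanner vs. license deep learning
# reconstruction to boost capacity of the remaining scanners.
# All numbers below are illustrative assumptions, not values from the study.

annual_exams_needed = 18_000     # exams/year the department must deliver (assumed)
exams_per_scanner = 3_000        # baseline annual capacity per scanner (assumed)
scanners_remaining = 5

new_scanner_cost = 1_500_000     # euros, new scanner purchase price (assumed)
dl_recon_cost = 150_000          # euros, software license, ~10% of scanner (assumed)

# Capacity gain each remaining scanner needs to cover the lost one
required_boost = annual_exams_needed / (scanners_remaining * exams_per_scanner) - 1
print(f"Required capacity increase per scanner: {required_boost:.0%}")  # -> 20%

# Cost difference spread over the annual exam volume
savings = new_scanner_cost - dl_recon_cost
print(f"Up-front savings: {savings:,} euros "
      f"({savings / annual_exams_needed:.0f} euros per exam in year one)")
```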

The Takeaway

As with last week’s study, the new research demonstrates that AI’s real value comes from helping radiologists work more efficiently and do more with less, rather than from direct reimbursement for AI use. It’s the same argument that was made to promote the adoption of PACS some 30 years ago – and we all know how that turned out.

Study Shows AI’s Economic Value

One of the biggest criticisms of AI for radiology is that it hasn’t demonstrated its return on investment. Well, a new study in JACR tackles that argument head on, demonstrating AI’s ability to both improve radiologist efficiency and drive new revenues for imaging facilities. 

Adopting AI into radiology workflow on a broad scale will require significant investment in both money and IT resources. 

  • So far, there have been few studies showing that imaging facilities will get a payback for these investments, especially as Medicare and private insurance reimbursement for AI under CPT codes is limited to fewer than 20 algorithms. 

The new paper analyzes the use of an ROI calculator developed for Bayer’s Calantic platform, a centralized architecture for radiology AI integration and deployment. 

  • The calculator provides an estimate of AI’s value to an enterprise – such as by generating downstream procedures – by comparing workflow without AI to a scenario in which AI is integrated into operations.
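
Bayer hasn’t published the calculator’s internals, but the comparison it describes – a workflow without AI versus one with AI folded into operations, including downstream revenue and time savings – can be sketched in a few lines. The following is a purely hypothetical illustration; every variable name and number is an assumption, not part of the Calantic model:

```python
# Hypothetical sketch of an AI ROI comparison: baseline workflow vs. workflow
# with AI integrated. Inputs and values are illustrative assumptions only.

years = 5
annual_platform_cost = 200_000       # licensing + IT integration (assumed)
extra_diagnoses_per_year = 300       # additional findings flagged by AI (assumed)
downstream_revenue_per_dx = 1_200    # follow-up imaging, procedures, admissions (assumed)
radiologist_hours_saved = 500        # per year (assumed)
hourly_rate = 150                    # fully loaded cost of radiologist time (assumed)

costs = years * annual_platform_cost
revenue = years * extra_diagnoses_per_year * downstream_revenue_per_dx
time_savings_value = years * radiologist_hours_saved * hourly_rate

print(f"Payback: ${revenue / costs:.2f} per $1 invested (revenue only)")
print(f"ROI incl. time savings: {(revenue + time_savings_value - costs) / costs:.0%}")
```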

The study included inputs for 14 AI algorithms covering thoracic and neurology applications on the Calantic platform, with researchers finding that over five years … 

  • The use of AI generated $3.6M in revenue versus $1.8M in costs, representing payback of $4.51 for every $1 invested
  • Use of the platform generated 1.5k additional diagnoses, resulting in more follow-up scans, hospitalizations, and downstream procedures
  • AI’s ROI jumped to 791% when radiologist time savings were considered
  • These time savings included reductions of 15 eight-hour working days in waiting time, 78 days in triage time, 10 days in reading time, and 41 days in reporting time  

Although AI led to additional hospitalizations, it’s possible that length of stay was shorter: for example, reprioritization of stroke cases resulted in 264 fewer hospital days for patients with intracerebral hemorrhage. 

  • Executives with Bayer told The Imaging Wire that while the calculator is not publicly available, the company does use it in consultations with health systems about new AI deployments. 

The Takeaway

This study suggests that examining AI through the lens of direct reimbursement for AI-aided imaging services might not be the right way to assess the technology’s real economic value. Although it won’t settle the debate over AI’s economic benefits, the research is a step in the right direction.

Report from ECR 2024

ECR 2024 kicked off yesterday in Vienna, Austria, with European radiology professionals gathering to celebrate the field and demonstrate the latest in medical imaging research and technology. 

As we noted in last year’s coverage, ECR has bounced back strongly from the disruptions of the COVID-19 pandemic. 

  • While this year’s attendance numbers aren’t in yet, the rooms and halls of Austria Center Vienna appear to be just as crowded as in the pre-pandemic days.

In particular, the show’s opening ceremony on Wednesday evening was standing room only, with attendees delighting in friendly banter on the future of AI and radiology between congress president Carlo Catalano, MD, and Ameca, an AI-powered animatronic robot. 

From a content perspective, this year’s meeting continues a strong focus on AI.

Some highlights from the first few days are as follows:

The Takeaway
Based on the first two days, ECR 2024 is off to a great start. We’ll be featuring additional coverage in upcoming issues, so be sure to come back, and check out our YouTube channel and LinkedIn page for video highlights from the conference.

Real-World AI Experiences

Clinical studies showing that AI helps radiologists interpret medical images are great, but how well does AI work in the real world – and what do radiologists think about it? These questions are addressed in a new study in Applied Ergonomics that takes a deep dive into the real-world implementation of a commercially available AI algorithm at a German hospital. 

A slew of clinical studies supporting AI were published in 2023, from the MASAI study on AI for breast screening to smaller studies on applications like opportunistic screening or predicting who should get lung cancer screening.

  • But even an AI algorithm with the best clinical evidence behind it could fall flat if it’s difficult to use and doesn’t integrate well with existing radiology workflow.

To gain insight into this issue, the new study tracked University Hospital Bonn’s implementation of Quantib’s Prostate software for interpreting and documenting prostate MRI scans (Quantib was acquired by RadNet in January 2022). 

  • Researchers described the solution as providing partial automation of prostate MRI workflow, such as helping segment the prostate, generating heat maps of areas of interest, and automatically producing patient reports based on lesions it identifies. 

The Quantib Prostate software was installed at the hospital in the spring of 2022, and nine radiology residents and three attending physicians were interviewed before and after implementation. The researchers found…

  • All but one radiologist had a positive attitude toward AI before implementation, with the remaining radiologist undecided 
  • After implementation, seven said their attitudes were unchanged, one was disappointed, and one saw their opinion shift positively
  • Use of the AI was inconsistent, with radiologists adopting different workflows; some used it all the time while others used it only occasionally
  • Major concerns cited included workflow delays due to AI use, additional steps required such as sending images to a server, and unstable performance

The findings prompted the researchers to conclude that AI is likely to be implemented and used differently in the real world than in clinical trials, and that radiologists should be included in AI algorithm development to provide insight into the workflows in which the tools will be used.

The Takeaway

The new study is unique in that – rather than focusing on AI algorithm performance – it concentrated on the experiences of radiologists using the software and how they changed following implementation. Such studies can be illuminating as AI developers seek broader clinical use of their tools. 

AI Models Go Head-to-Head in Project AIR Study

One of the biggest challenges in assessing the performance of different AI algorithms is the varying conditions under which AI research studies are conducted. A new study from the Netherlands published this week in Radiology aims to correct that by testing a variety of AI algorithms head-to-head under similar conditions. 

There are over 200 AI algorithms on the European market (and even more in the US), many of which address the same clinical condition. 

  • Therefore, hospitals looking to acquire AI can find it difficult to assess the diagnostic performance of different models. 

The Project AIR initiative was launched to fill the gap in accurate assessment of AI algorithms by creating a Consumer Reports-style testing environment that’s consistent and transparent.

  • Project AIR researchers have assembled a validated database of medical images for different clinical applications, against which multiple AI algorithms can be tested; to ensure generalizability, images have come from different institutions and were acquired on equipment from different vendors. 

In the first test of the Project AIR concept, a team led by Kicky van Leeuwen of Radboud University Medical Centre in the Netherlands invited AI developers to participate, with nine products from eight vendors validated from June 2022 to January 2023: two models for bone age prediction and seven algorithms for lung nodule assessment (one vendor participated in both tests). Results included:

  • For bone age analysis, both of the tested algorithms (Visiana and Vuno) showed “excellent correlation” with the reference standard, with an r correlation coefficient of 0.987-0.989 (1 = perfect agreement)
  • For lung nodule analysis, there was a wider spread in AUC across the algorithms and human readers, with humans posting a mean AUC of 0.81
  • Researchers found superior performance relative to the human readers for Annalise.ai (0.90), Lunit (0.93), Milvue (0.86), and Oxipit (0.88)
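
For readers curious how results like these are scored, the two headline metrics are simple to compute once model outputs and the reference standard are in hand. A minimal sketch on made-up data (the arrays below are placeholders, not Project AIR’s dataset):

```python
# Minimal sketch of the two metrics reported above, on made-up data:
# Pearson r for a continuous task (bone age) and AUC for a binary task
# (lung nodule assessment). Arrays are placeholders, not Project AIR data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

# Bone age: model predictions vs. reference-standard ages (years)
reference_age = np.array([5.2, 8.1, 11.4, 13.0, 15.7])
predicted_age = np.array([5.0, 8.4, 11.1, 13.3, 15.5])
r, _ = pearsonr(reference_age, predicted_age)
print(f"Bone age correlation r = {r:.3f}")

# Lung nodules: malignancy labels vs. model risk scores
labels = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.3, 0.8, 0.2, 0.6, 0.9, 0.4, 0.7])
print(f"Lung nodule AUC = {roc_auc_score(labels, scores):.2f}")
```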

What’s next on Project AIR’s testing agenda? Van Leeuwen told The Imaging Wire that the next study will involve fracture detection. Meanwhile, interested parties can follow along on leaderboards for both bone age and lung nodule use cases. 

The Takeaway

Head-to-head studies like the one conducted by Project AIR may make many AI developers squirm (several that were invited declined to participate), but they are a necessary step toward building the clinician confidence in AI performance that widespread adoption will require. 

Lunit’s Deal for Volpara and AI Consolidation

Is the long-awaited consolidation of the healthcare AI sector gaining steam? In a deal valued at close to $200M, South Korean AI developer Lunit announced a bid to acquire Volpara Health, a developer of software for calculating breast density and cancer risk. 

At first glance, the alliance seems to be a match made in heaven. Lunit is a well-regarded AI developer that has seen impressive results in clinical trials of its Insight family of algorithms for indications ranging from mammography to chest imaging. 

  • Most recently, Lunit received FDA clearance for its Insight DBT software, marking its entry into the US breast screening market, and it also raised $150M in a public stock offering. 

Volpara has a long pedigree as a developer of breast imaging software, although it has shied away from image analysis applications to instead focus on breast center operations and risk assessment, in particular by calculating breast density. 

  • Thus, pairing Lunit’s strength in image analysis with Volpara’s focus on operations and risk assessment would enable the combined company to offer a broader range of products to breast centers.

Lunit will also be able to take advantage of the marketing and sales structure that Volpara has built in the US mammography sector (97% of Volpara’s sales come from the US, where it has an installed base of 2k sites). Volpara expects 2024 sales of $30M and is cash-flow positive.

The question is whether the acquisition is a sign of things to come in the AI market. 

  • As commercial AI sales have been slow to develop, AI firms have largely funded their operations through venture capital firms – which are notoriously impatient in their quest for returns.

In fact, observers at the recent RSNA 2023 meeting noted that there were very few new start-up entrants into the AI space, and many AI vendors had smaller booths. 

  • And previous research has documented a slowdown in VC funding for AI developers that is prompting start-up firms to seek partners to provide more comprehensive offerings while also focusing on developing a road to profitability. 

The Takeaway

It’s not clear yet whether the Lunit/Volpara deal is a one-off combination or the start of a renewed consolidation trend in healthcare AI. Regardless of what happens, this alliance unites two of the stronger players in the field and has exciting potential for the years to come. 

AI Powers Opportunistic Screening

The growing power of AI is opening up new possibilities for opportunistic screening – the detection of pathology using data acquired for other clinical indications. The potential of CT-based opportunistic screening – and AI’s role in its growth – was explored in a session at RSNA 2023.

What’s so interesting about opportunistic screening with CT? 

  • As one of imaging’s most widely used modalities, CT scans are already being acquired for many clinical indications, collecting body composition data on muscle, fat, and bone that can be biomarkers for hidden pathology. 

What’s more, AI-based tools are replacing many of the onerous manual measurement tasks that previously required radiologist involvement. There are four primary biomarkers for opportunistic screening, each tied to a major pathology, said Perry Pickhardt, MD, of the University of Wisconsin-Madison, who led off the RSNA session:

  • Skeletal muscle density (sarcopenia)
  • Hard calcified plaque, either coronary or aortic (cardiovascular risk)
  • Visceral fat (cardiovascular risk)
  • Bone mineral density (osteoporosis and fractures) 

But what about the economics of opportunistic screening? 

  • A recent study in Abdominal Radiology found that in a hypothetical cohort of 55-year-old men and women, AI-assisted opportunistic screening for cardiovascular disease, osteoporosis, and sarcopenia was more cost-effective than both “no-treatment” and “statins for all” strategies – even assuming a $250/scan charge for use of AI.
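
Cost-effectiveness in studies like this is usually framed as an incremental cost-effectiveness ratio (ICER) – added cost per quality-adjusted life year (QALY) gained – judged against a willingness-to-pay threshold. Here’s a minimal sketch of that framing with entirely hypothetical per-patient figures (not values from the Abdominal Radiology paper):

```python
# Hypothetical ICER comparison of opportunistic-screening strategies.
# Costs and QALYs per patient are illustrative assumptions only.
strategies = {
    # name: (lifetime cost per patient, QALYs per patient)
    "no treatment": (10_000, 14.0),
    "statins for all": (11_000, 14.3),
    "AI-assisted CT screening": (11_500, 14.6),  # includes assumed $250/scan AI charge
}

wtp_threshold = 100_000  # willingness to pay, $ per QALY gained (assumed)

base_cost, base_qaly = strategies["no treatment"]
for name, (cost, qaly) in strategies.items():
    if name == "no treatment":
        continue
    icer = (cost - base_cost) / (qaly - base_qaly)
    verdict = "cost-effective" if icer < wtp_threshold else "not cost-effective"
    print(f"{name}: ICER = ${icer:,.0f}/QALY ({verdict})")
```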

But there are barriers to opportunistic screening, despite its potential. In a follow-up talk, Arun Krishnaraj, MD, of UVA Health in Virginia said he believes fully automated AI algorithms are needed to avoid putting the burden on radiologists. 

And the regulatory environment for AI tools is complex and must be navigated, said Bernardo Bizzo, MD, PhD, of Mass General Brigham.

Ready to take the plunge? The steps for setting up a screening program using AI were described in another talk by John Garrett, PhD, Pickhardt’s colleague at UW-Madison. They include (with a rough code sketch after the list): 

  • Normalizing your data for AI tools
  • Identifying the anatomical landmarks you want to focus on
  • Automatically segmenting areas of interest
  • Making the biomarker measurements
  • Plugging your data into AI models to predict outcomes and risk-stratify patients
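
Here’s the rough code sketch promised above – a deliberately crude but runnable toy that stands in for trained segmentation and risk models with simple Hounsfield-unit thresholds; the thresholds, cutoffs, and synthetic data are illustrative only:

```python
# Toy opportunistic-screening pipeline following the steps above, using crude
# Hounsfield unit (HU) thresholds on a CT slice. Real programs use trained
# segmentation and prediction models; everything here is a placeholder.
import numpy as np

def measure_biomarkers(ct_slice_hu: np.ndarray) -> dict:
    # Steps 1-3 collapsed: "segment" tissue classes by HU range
    muscle = ct_slice_hu[(ct_slice_hu >= -29) & (ct_slice_hu <= 150)]
    fat = ct_slice_hu[(ct_slice_hu >= -190) & (ct_slice_hu <= -30)]
    bone = ct_slice_hu[ct_slice_hu >= 300]
    # Step 4: biomarker measurements (mean densities, pixel counts)
    return {
        "muscle_mean_hu": float(muscle.mean()) if muscle.size else float("nan"),
        "visceral_fat_pixels": int(fat.size),
        "bone_mean_hu": float(bone.mean()) if bone.size else float("nan"),
    }

def risk_flags(biomarkers: dict) -> list:
    # Step 5: toy risk-stratification rules (placeholders, not validated cutoffs)
    flags = []
    if biomarkers["muscle_mean_hu"] < 30:
        flags.append("possible sarcopenia")
    if biomarkers["bone_mean_hu"] < 350:
        flags.append("possible low bone mineral density")
    return flags

# Usage with a synthetic "CT slice" of random HU values
rng = np.random.default_rng(0)
fake_slice = rng.integers(-200, 600, size=(64, 64))
bm = measure_biomarkers(fake_slice)
print(bm, risk_flags(bm))
```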

The Takeaway

Opportunistic screening has the potential to flip the script in the debate over radiology utilization, making imaging exams more cost-effective while detecting additional pathology and paving the way to more personalized medicine. With AI’s help, radiologists have the opportunity to place themselves at the center of modern healthcare. 

AI’s Impact on Breast Screening

One of the most exciting radiology use cases for AI is in breast screening. At last week’s RSNA 2023 show, a paper highlighted the technology’s potential for helping breast imagers focus on cases more likely to have cancer.

Looking for cancers on screening mammography has been compared to finding a needle in a haystack, and as such it’s considered to be one of the areas where AI can best help. 

  • One of the earliest use cases was in identifying suspicious breast lesions during radiologist interpretation (remember computer-aided detection?), but more recently researchers have focused on using AI as a triage tool that identifies cases most likely to be normal so they can be removed from the radiologist’s urgent worklist. Studies have found that 30-40% of breast screening cases could be read by AI alone or triaged to a low-suspicion list.

But what impact would AI-based breast screening triage have on radiologist metrics such as recall rate? 

  • To answer this question, researchers from NYU Langone Health prospectively tested their homegrown AI algorithm for analyzing DBT screening cases.

The algorithm was trained to identify extremely low-risk cases that could be triaged off the worklist, while more complex cases where the AI was uncertain were sent to radiologists, who knew in advance that the cases they were reading were more complicated. Across 11.7k screening mammograms, researchers compared recall rates over two periods, one before AI triage and one after, finding: 

  • The overall recall rate went from 13% before the triage period to 15% after 
  • Recall rates for complex cases went from 17% to 20%
  • Recall rates for extremely low-risk studies went from 6% to 5%
  • There were no statistically significant differences in any of the comparisons
  • No change in median self-reported perceived difficulty of reading from the triage list compared to the non-triage list, regardless of years of experience
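
The “no statistically significant difference” finding is the kind of claim that can be checked with a standard two-proportion z-test. A minimal sketch using statsmodels, with hypothetical counts that mirror the 13% and 15% recall rates above (not the actual NYU Langone data):

```python
# Minimal sketch of comparing recall rates before vs. after AI triage with a
# two-proportion z-test. Counts are hypothetical, not the NYU Langone data.
from statsmodels.stats.proportion import proportions_ztest

recalls = [130, 150]   # recalled exams before and after triage (assumed)
exams = [1000, 1000]   # total screening exams in each period (assumed)

stat, p_value = proportions_ztest(count=recalls, nobs=exams)
print(f"Recall rate before: {recalls[0]/exams[0]:.1%}, after: {recalls[1]/exams[1]:.1%}")
print(f"z = {stat:.2f}, p = {p_value:.3f}")  # with these counts, p > 0.05 -> not significant
```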

In future work, the NYU Langone researchers will continue their study to look at AI’s impact on cancer detection rate, biopsy rate, positive predictive value, and other metrics.

The Takeaway

The NYU Langone study puts a US spin on research like MASAI from Sweden, in which AI was able to reduce radiologists’ breast screening workload by 44%. Given the differences in screening protocols between the US and Europe, it’s important to assess how AI affects workload in each region.

Further work is needed in this ongoing study, but early results indicate that AI can triage complex cases without having an undue impact on recall rate or self-perceived difficulty in interpreting exams – a surrogate measure for burnout.

AI’s Incremental Revolution

So AI dominated the discussion at last week’s RSNA 2023 meeting. But does that mean it’s finally on the path to widespread clinical use? 

Maybe not so much. For a technology that’s supposed to have a revolutionary impact on medicine, AI is taking a frustratingly long time to arrive. 

Indeed, there was plenty of skepticism about AI in the halls of McCormick Place last week. (For two interesting looks at AI at RSNA 2023, also see Hugh Harvey, MD’s list of takeaways in a post on X/Twitter and Herman Oosterwijk’s post on LinkedIn.) 

But as one executive we talked to pointed out, AI’s advance to routine clinical use in radiology is likely to be more incremental than all at once. 

  • And from that perspective, last week’s RSNA meeting was undoubtedly positive for AI. Scientific sessions were full of talks on practical clinical applications of AI, from breast AI to CT lung screening.

Researchers also discussed the use of AI apart from image interpretation, with generative AI and large language models taking on tasks from answering patient questions about their reports to helping radiologists with dictation.

It’s fine to be a skeptic (especially when it comes to things you hear at RSNA), but for perspective look at many of the past arguments casting doubt on AI: 

  • AI algorithms don’t have FDA clearance (the FDA authorized 171 algorithms in just the past year)
  • You can’t get paid for using AI clinically (16 algorithms have CPT codes, with more on the way) 
  • There isn’t enough clinical evidence backing the use of AI (tell that to the authors of MASAI, PERFORMS, and a number of other recent studies with positive findings)
  • The AI market is overcrowded with companies and ripe for consolidation (what exciting new growth market isn’t?)

The Takeaway

Sure, it’s taking longer than expected for AI to take hold in radiology. But last week’s conference showed that AI’s incremental revolution is not only advancing but expanding in ways no one expected when IBM Watson was unveiled to an RSNA audience a mere 6-7 years ago. One can only imagine what the field will look like at RSNA 2030.

Looking for more coverage of RSNA 2023? Be sure to check out our videos from the technical exhibit floor, which you can find on our new Shows page.
