Teleradiology AI’s Mixed Bag

An AI algorithm that examined teleradiology studies for signs of intracranial hemorrhage had mixed performance in a new study in Radiology: Artificial Intelligence. AI helped detect ICH cases that might have been missed, but false positives slowed radiologists down. 

AI is being touted as a tool that can detect unseen pathology and speed up the workflow of radiologists facing an environment of limited resources and growing image volume.

  • This dynamic is particularly evident at teleradiology practices, which frequently see high volumes during off-hour shifts; indeed, a recent study found that telerad cases had higher rates of patient death and more malpractice claims than cases read by traditional radiology practices.

So teleradiologists could use a bit more help. In the new study, researchers from the VA’s National Teleradiology Program assessed Avicenna.ai’s CINA v1.0 algorithm for detecting ICH on STAT non-contrast head CT studies.

  • AI was used to analyze 58.3k CT exams processed by the teleradiology service from January 2023 to February 2024, with a 2.7% prevalence of ICH.

Results were as follows:

  • AI flagged 5.7k studies as positive for acute ICH and 52.7k as negative
  • Final radiology reports confirmed that 1.2k exams were true positives, for a sensitivity of 76% and a positive predictive value of 21%
  • There were 384 false negatives (missed ICH cases); specificity was 92% and negative predictive value was 99.3% (the arithmetic is reconstructed in the sketch after this list)
  • The algorithm’s performance at the VA was a bit lower than in previously published literature
  • Cases that the algorithm falsely flagged as positive took over a minute longer to interpret than prior to AI deployment
  • Overall, case interpretation times were slightly lower after AI than before
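
For readers who want to check the math, here's a minimal sketch reconstructing the confusion matrix from the rounded counts above (exact per-cell counts weren't published, so the inputs are approximations):

```python
# Reconstruct the ICH confusion matrix from the study's rounded counts.
total = 58_300        # STAT head CTs analyzed
flagged = 5_700       # exams AI called positive for acute ICH
tp = 1_200            # AI-positive exams confirmed by the final report
fn = 384              # ICH cases the AI missed

fp = flagged - tp               # ~4,500 false positives
tn = total - tp - fn - fp       # ~52,200 true negatives

print(f"sensitivity = {tp / (tp + fn):.2f}")    # ~0.76
print(f"specificity = {tn / (tn + fp):.2f}")    # ~0.92
print(f"PPV         = {tp / (tp + fp):.2f}")    # ~0.21
print(f"NPV         = {tn / (tn + fn):.3f}")    # ~0.993
print(f"prevalence  = {(tp + fn) / total:.3f}") # ~0.027
```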

One issue to note is that the CINA algorithm is not intended for small hemorrhages with volumes < 3 mL; the researchers did not exclude these cases from their analysis, which could have reduced the algorithm's measured performance.

  • Also, the VA teleradiology program's 2.7% ICH prevalence was well below the 10% prevalence Avicenna has used to benchmark the algorithm's performance.
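
That prevalence gap matters because PPV falls with prevalence even when sensitivity and specificity are unchanged. A minimal Bayes' rule check, assuming the study's operating point (76% sensitivity, 92% specificity) holds at both prevalence levels:

```python
# PPV as a function of prevalence at a fixed operating point (Bayes' rule).
def ppv(sens: float, spec: float, prev: float) -> float:
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

for prev in (0.027, 0.10):  # VA prevalence vs. Avicenna's benchmark
    print(f"prevalence {prev:.1%} -> PPV {ppv(0.76, 0.92, prev):.0%}")
# prevalence 2.7% -> PPV 21%
# prevalence 10.0% -> PPV 51%
```

At the 10% benchmark prevalence, the same operating point would yield a PPV around 51%, well above the 21% observed at the VA.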

The Takeaway

The new findings aren’t exactly a slam dunk for AI in the teleradiology setting, but real-world results like these are precisely what’s needed to assess the technology’s true value compared to outcomes in more tightly controlled environments.

AI Detects Incidental PE

In one of the most famous quotes about radiology and artificial intelligence, Curtis Langlotz, MD, PhD, once said that AI will not replace radiologists, but radiologists with AI will replace those without it. A new study in AJR illustrates his point, showing that radiologists using a commercially available AI algorithm detected incidental pulmonary embolism on CT scans at higher rates. 

AI is being applied to many clinical use cases in radiology, but one of the more promising is for detecting and triaging emergent conditions that might have escaped the radiologist’s attention on initial interpretations.

  • Pulmonary embolism is one such condition. PE can be life-threatening and occurs in 1.3-2.6% of routine contrast-enhanced CT exams, but radiologist miss rates range from 10-75% depending on patient population.

AI can help by automatically analyzing CT scans and alerting radiologists to PEs so they can be treated quickly; the FDA has authorized several algorithms for this clinical use. 

  • In the new paper, researchers conducted a prospective real-world study of Aidoc’s BriefCase for iPE Triage at the University of Alabama at Birmingham. 

Researchers tracked rates of PE detection in 4.3k patients before and after AI implementation in 2021, finding … 

  • Radiologists saw their sensitivity for PE detection go up after AI implementation (80% vs. 96%) 
  • Specificity was unchanged (99.1% vs. 99.9%, p=0.58)
  • The PE incidence rate went up (1.4% vs. 1.6%; see the back-of-envelope arithmetic after this list)
  • There was no statistically significant difference in report turnaround time before and after AI (65 vs. 78 minutes, p=0.26)
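
For a rough sense of scale – and it is only rough, since the pre/post group sizes aren't broken out above – applying the two incidence rates to the full 4.3k cohort suggests how many additional PEs the detection gain represents:

```python
# Back-of-envelope: extra incidental PEs implied by the incidence change.
# Applying both rates to the full cohort is an illustrative assumption,
# not the paper's method (pre/post group sizes aren't given here).
patients = 4_300
pre_rate, post_rate = 0.014, 0.016

extra = patients * (post_rate - pre_rate)
print(f"~{extra:.0f} additional incidental PEs per {patients:,} patients")
# ~9 additional incidental PEs per 4,300 patients
```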

The study echoes findings from 2023, when researchers from UT Southwestern also used the Aidoc algorithm for PE detection, in that case finding that AI cut times for report turnaround and patient waits. 

The Takeaway

While studies showing AI’s value to radiologists are commonplace, many of them are performed under controlled conditions that don’t translate to the real world. The current study is significant because it shows that with AI, radiologists can achieve near-perfect detection of a potentially life-threatening condition without a negative impact on workflow.

Advances in AI-Automated Echocardiography with Us2.ai

Echocardiography is a pillar of cardiac imaging, but it is operator-dependent and time-consuming to perform. In this interview, The Imaging Wire spoke with Seth Koeppel, Head of Business Development, and José Rivero, MD, RCS, of echo AI developer Us2.ai about how the company’s new V2 software moves the field toward fully automated echocardiography. 

The Imaging Wire: Can you give a little bit of background about Us2.ai and its solutions for automated echocardiography? 

Seth Koeppel: Us2.ai is a company that originated in Singapore. The first version of the software (Us2.V1) received FDA clearance a little over two years ago for an AI algorithm that automates the analysis and reporting of 23 key echocardiographic measurements for the evaluation of diastolic and systolic function. 

In April 2024 we received an expanded regulatory clearance for more measurements – now a total of 45 measurements are cleared. Counting measurements derived from those core 45, almost 60 measurements are fully validated and automated, and with that Us2.V2 is bordering on full automation for echocardiography.

The application is vendor-agnostic – we basically can ingest any DICOM image and in two to three minutes produce a full report and analysis. 

The software replicates what the expert human does during the traditional 45-60 minutes of image acquisition and annotation in echocardiography. Typically, echocardiography involves acquiring images and video at 40 to 60 frames per second, resulting in some cases in up to 100 individual images from a two- or three-second loop. 

The human expert then scrolls through these images to identify the best end-diastolic and end-systolic frames, manually annotating and measuring them, which is time-consuming and requires hundreds of mouse clicks. This process is very operator-dependent and manual.

And so the advantage the AI has is that it will do all of that in a fraction of the time, it will annotate every image of every frame, producing more data, and it does it with zero variability. 
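
To make that concrete, here is an editorial sketch of the frame-selection step described above: end-diastole roughly corresponds to the largest left-ventricle cavity area in a cine loop, end-systole to the smallest. The function and the synthetic area curve are invented for illustration; this is not Us2.ai's implementation:

```python
import numpy as np

def pick_ed_es_frames(lv_area_per_frame: np.ndarray) -> tuple[int, int]:
    """Pick end-diastolic and end-systolic frames from a per-frame LV area curve."""
    ed = int(np.argmax(lv_area_per_frame))  # end-diastole: maximum cavity area
    es = int(np.argmin(lv_area_per_frame))  # end-systole: minimum cavity area
    return ed, es

# Toy cine loop: 60 frames covering one synthetic cardiac cycle
t = np.linspace(0, 2 * np.pi, 60)
lv_area = 30 + 8 * np.cos(t)               # cm^2, made-up values
print(pick_ed_es_frames(lv_area))          # -> (0, 30) for this toy curve
```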

The Imaging Wire: AI is being developed for a lot of different medical imaging applications, but it seems like it’s particularly important for echocardiography. Why would you say that is? 

José Rivero: It’s well known that healthcare institutions and providers are dealing with a larger number of patients and more complex cases. Echo is basically a pillar of cardiac imaging and really touches every patient throughout the path of care. We bring efficiency to the workflow and clinical support for diagnosis and treatment and follow-ups, directly contributing to enhanced patient care.

Additionally, the variability is a huge challenge in echo, as it is operator-dependent. Much of what we see in echo is subjective, certain patient populations require follow-up imaging, and for such longitudinal follow-up exams you want to remove the inter-operator variability as much as possible.

Seth Koeppel: Echo is ripe for disruption. We are faced with a huge shortage of cardiac sonographers. If you simply go on Indeed.com and type in “cardiac sonographer,” there are over 4,000 positions open today in the US. Most of those offer signing bonuses of $10,000, $15,000, up to $20,000. It is an acute problem.

We’re very quickly approaching a situation where we’re running huge backlogs – months in some situations – just to get a baseline echo. The gold standard for diagnosis is an echocardiogram, and if you can’t perform them, you have patients falling by the wayside. 

In our current system today, the average tech will do about eight echoes a day. An echo takes 45 to 60 minutes, because it’s so manual and it relies on expert humans. For the past 35 years echo has looked the same; there has been little innovation other than better image quality, while at the same time more parameters were added, resulting in more things to analyze in that same 45 or 60 minutes. 

This is the first time that we can think about doing echo in less than 45 to 60 minutes, which is a huge enhancement in throughput because it addresses both that shortage of cardiac sonographers and the increasing demand for echo exams. 

It also represents a huge benefit to sonographers, who often suffer repetitive stress injuries due to the poor ergonomics of echo – one hand holding the probe tightly pressed against the patient’s chest, the other on the cart scrolling, clicking, and measuring – resulting in a high incidence of injuries to the neck, shoulders, and wrists. 

Studies have shown that 20-30% of techs leave the field due to work-related injury. If the AI can take on the role of making the majority of the measurements, in essence turning the sonographer into more of an “editor” than a “doer,” it has the potential to significantly reduce injury. 

Interestingly, we saw many facilities move to “off-cart” measurements during COVID to reduce the time the tech was exposed to the patient, and many realized the benefits and maintained this workflow, which we also see in pediatrics, as kids have a hard time lying on the table for 45 minutes. 

So with the introduction of AI in the echo workflow, the technicians acquire the images in 15-20 minutes and, in real time, the AI software automatically labels, annotates, and measures the images as they are processed. Within 2-3 minutes, a full report is available for the tech to review, adjust (our measurements are fully editable), and sign off on. 

You can immediately see the benefits of reducing the time the tech has the probe in their hand and the patient spends on the table, and the tech then gets to sit at an ergonomically correct workstation (proper keyboard, mouse, large monitors, chair, etc.) and do their reporting versus on-cart, which is where the injuries occur. 

It’s a worldwide shortage, not just here in the US; we see this in other parts of the world, where waitlist times for an echo can be eight, 10, 12, or more months, which is just not acceptable.

The OPERA study in the UK demonstrated that the introduction of AI echo can tackle this issue. In Glasgow, the wait time for an echo was reduced from 12 months to under six weeks. 

The Imaging Wire: You just received clearance for V2, but your V1 has been in the clinical field for some time already. Can you tell us more about the feedback on the use of V1 by your customers?

José Rivero: Clinically, the focus of V1 was heart failure and pulmonary hypertension. This is a critical step, because with AI, we could rapidly identify patients with heart failure or pulmonary hypertension. 

One big step has been pairing the AI with mobile devices, which takes echocardiography out of the hospital. So you can just go everywhere with this technology. 

We demonstrated the feasibility of new clinical pathways using AI echo out of the hospital, in clinics or primary care settings, including novice screening [1, 2] (operators with no previous experience in echocardiography, supported by point-of-care ultrasound with AI guidance and Us2.ai analysis and reporting).

Seth Koeppel: We’re addressing the efficiency problem. Most people are pegging the time savings for the tech on the overall echo somewhere around 15 to 20 minutes, which is significant. In a recent cardiologist-led study from Japan using the Us2.ai software, published in the Journal of Echocardiography, they saw a 70% reduction in overall time for analysis and reporting [3]. 

The Imaging Wire: Let’s talk about version 2 of the software. When you started working on V2, what were some of the issues that you wanted to address with that?

Seth Koeppel: Version 1, version 2, it’s never changed for us, it’s about full automation of all echo. We aim to automate all the time-consuming and repetitive tasks the human has to do – image labeling and annotation, the clicks, measurements, and the analysis required.

Our medical affairs team works closely with the AI team and with feedback from our users to set the roadmap for the development of our software, prioritizing developments to meet clinical needs and expectations. In V2, we now cover valve measurements and further enhance our performance on HFpEF, as demonstrated in comparison to the gold standard, pulmonary capillary wedge pressure (PCWP) [4].

A new version is really about collaborating with leading institutions and researchers, acquiring excellent datasets for training the models until they reach a level of performance producing robust results we can all be confident in. Beyond the software development and training, we also engage in validation studies to further confirm the scientific efficiency of these models.

With V2 we’re also moving into introducing different protocols – for example, contrast-enhanced imaging, which in the US is significant. We see in some clinics upwards of 50% to 60% use of contrast-enhanced imaging, whereas we don’t see that in other parts of the world. Our software is now validated for use with ultrasound-enhancing agents, and the measures correlate well.

Stress echo is another big application in echocardiography. So we’ve added that into the package now, and we’re starting to get into disease detection or disease prediction. 

V2 also addresses cardiac amyloidosis (CA): it is aligned with guideline-based measurements for identifying CA, and it reports those measurements when found, along with the relevant guideline recommendations, to support identification of a condition that could otherwise be missed. 

José Rivero: We are at a point where we can now go into more depth in the clinical environment – into the echo lab itself, where everything is done and where the higher volumes are. Before we had 23 measurements; now we are up to 45. 

And again, it can even be a screening tool. If we start thinking about subdividing the things we do in echocardiography with AI, this again expands to the mobile environment. There are a lot of different disease-based assessments that we do. We are now a more complete AI echocardiography assessment tool.

The Imaging Wire: Clinical guidelines are so important in cardiac imaging and in echocardiography. Us2.ai integrates and refers to guideline recommendations in its reporting. Can you talk about the importance of that, and how you incorporate this in the software?

José Rivero: Clinical guidelines play a crucial role in imaging for supporting standardized, evidence-based practice, as well as minimizing risks and improving quality for the diagnosis and treatment of patients. These are issued by experts, and adherence to guidelines is an important topic for quality of care and GDMT (guideline-directed medical therapies).

We are a scientifically driven company, so we recognize that international guidelines and recommendations are of utmost importance; hence, guideline indications are systematically visible and discrepant measurement values are clearly highlighted.

Seth Koeppel: The beautiful thing about AI in echo is that echo is so structured that it just lends itself so perfectly to AI. If we can automate the measurements, and then we can run them through all the complicated matrices of guidelines, it’s just full automation, right? It’s the ability to produce a full echo report without any human intervention required, and to do it in a fraction of the time with zero variability and in full consideration for international recommendations.
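
As an editorial illustration of that “measurements through guideline matrices” idea, here is a minimal sketch that runs one automated measurement (LVEF) through a guideline-style lookup. The cutoffs paraphrase the widely used universal definition of heart failure categories; they are not Us2.ai's actual rules:

```python
def classify_ef(lvef_pct: float) -> str:
    """Map a left-ventricular ejection fraction to a guideline-style category.

    Cutoffs follow the (simplified) universal definition of heart failure;
    illustrative only, not Us2.ai's implementation.
    """
    if lvef_pct <= 40:
        return "reduced EF"
    if lvef_pct < 50:
        return "mildly reduced EF"
    return "preserved EF"

print(classify_ef(47.3))  # -> "mildly reduced EF"
```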

José Rivero: This is another level of support we provide: the sonographer only has to focus on image acquisition, and the cardiologist overreading and checking the data will have these references brought to his or her attention.

With echo you need to consider every point in the workflow, so the sonographer can really focus on image acquisition and the cardiologist on overreading and checking the data. But in the end, those two come together when the cardiologist and the sonographers realize that there’s efficiency on both ends. 

The Imaging Wire: V2 has only been out for a short time now, but has there been research published on the use of V2 in the field, and what are clinicians finding?

Seth Koeppel: In V1, our software included a section labeled “investigational,” and some AI measurements were accessible for research purposes only as they had not yet received FDA clearance.

Opening access to these as investigational, research-only features has enabled users to test them out and confirm the performance of the AI measurements in independently led publications and abstracts. This is why you are already seeing these studies out … and it is wonderful to see users’ interest in publishing on AI echo – a “trust and verify” approach.

With V2 and the FDA clearance, these measurements, our new features and functionalities, are available for clinical use. 

The Imaging Wire: What about the economics of echo AI?

Seth Koeppel: Reimbursement is still front and center in echo, and people don’t realize how robust it is, partially because echo is so manual and time-consuming. Hospital echo still reimburses nearly $500 under HOPPS (Hospital Outpatient Prospective Payment System). Compare that to CT, where today you might get $140 global, or MRI at $300-$350 – an echo still pays $500. 

When you think about the dynamic, echo still relies on an expert human who typically makes $100,000 or more a year with benefits. And it takes 45 to 60 minutes. So the economics are such that the reimbursement is held very high. 

But imagine if you can do two or three more echoes per day with the assistance of AI – you can immediately see the ROI. If you can simply do two incremental echoes a day, and there are 254 working days in a year, that’s an incremental 500 echoes. 

If there are 2,080 hours in a year, and we average about an echo every hour, most places are producing about 2,000 echoes; now you’re taking them to 2,500 or more at $500 each, and that’s an additional $100k per tech. Many hospitals have 8-10 techs scanning on any given day, so it’s a really compelling ROI. 

This is an AI that has both a clinical benefit and a huge ROI. There’s this whole debate out there about who pays for AI and how it gets paid for – this one’s a no-brainer.

The Imaging Wire: If you could step back and take a holistic view of V2, what benefits do you think that your software has for patients as well as hospitals and healthcare systems?

Seth Koeppel: It goes back to just the inefficiencies of echo – you’re taking something that is highly manual, relies on expert humans that are in short supply. It’s as if you’re an expert craftsman, and you’ve been cutting by hand with a hand tool, and then somebody walks in and hands you a power tool. We still need the expert human, who knows where to cut, what to cut, how to cut. But now somebody has given him a tool that allows him to just do this job so much more efficiently, with a higher degree of accuracy. 

Let’s take another example. Strain is something that has been particularly difficult for operators because every vendor, every cart manufacturer, has their own proprietary strain. You can’t compare strain results done on a GE cart to a Philips cart to a Siemens cart. It takes time, you have to train the operators, you have human variability in there. 

In V2, strain is now included, it’s fully automated, and it’s vendor-neutral. You don’t have to buy expensive upgrades to carts to get access to it. So many, many problems are solved just in that one simple set of parameters. 

If we put it all together and look at the potential of AI echo, we can address the backlog and allow for more echo to be done not only in the echo lab but also in primary care settings and clinics, where AI echo opens new pathways for screening and detection of heart failure and heart disease at an early stage – early detection for more efficient treatment.

This helps facilities facing the increasing demand for echo support and creates efficient longitudinal follow-up for oncology patients or populations at risk.

In addition, we can open access to echo exams in parts of the world that have neither the expensive carts nor the expert workforce available, and deliver on our mission to democratize echocardiography.

José Rivero: I would say that V2 is a very strong release, which includes contrast, stress echo, and strain. I would love to see all three – plus everything we had in V1 – become mainstream, and to see customer satisfaction with it, because I think it brings a big solution to the echo world. 

The Imaging Wire: As the year progresses, what else can we look forward to seeing from Us2.ai?

José Rivero: In the clinical area, we will continue our work to expand the range of measurements and validate our detection models, but we are also very keen to start looking into pediatric echo.

Seth Koeppel: Our user interface has been greatly improved in V2, and this is something we really want to keep our focus on. We are also working on refining our automated reporting to include customization features, perfecting the report output to further support the clinicians reviewing it, and integrating LLMs to make reporting accessible to non-expert HCPs and to patients themselves. 

REFERENCES

  1. Tromp, J., Sarra, C., Bouchahda, N., Ben Messaoud, M., Zouari, F., Hummel, Y., Mzoughi, K., Kraiem, S., Fehri, W., Gamra, H., Lam, C. S. P., Mebazaa, A., & Addad, F. (2023). Nurse-led home-based detection of cardiac dysfunction by ultrasound: results of the CUMIN pilot study. European Heart Journal – Digital Health.
  2. Huang, W., Lee, A., Tromp, J., Teo, L. Y., Chandramouli, C., Ng, C. T., Huang, F., Lam, C. S. P., & Ewe, S. H. (2023). Point-of-care AI-assisted echocardiography for screening of heart failure (HANES-HF). Journal of the American College of Cardiology, 81(8), 2145.
  3. Hirata, Y., Nomura, Y., Saijo, Y., Sata, M., & Kusunose, K. (2024). Reducing echocardiographic examination time through routine use of fully automated software: a comparative study of measurement and report creation time. Journal of Echocardiography.
  4. Yaku, H., Komtebedde, J., Silvestry, F. E., & Shah, S. J. (2024). Deep learning-based automated measurements of echocardiographic estimators of invasive pulmonary capillary wedge pressure perform equally to core lab measurements: results from REDUCE LAP-HF II. Journal of the American College of Cardiology, 83(13), 316.

Is Radiology’s AI Edge Fading?

Is radiology’s AI edge fading, at least when it comes to its share of AI-enabled medical devices being granted regulatory authorization by the FDA? The latest year-to-date figures from the agency suggest that radiology’s AI dominance could be declining. 

Radiology was one of the first medical specialties to go digital, and software developers have targeted the field for AI applications like image analysis and data reconstruction.

  • Indeed, FDA data from recent years shows that radiology makes up the vast majority of agency authorizations for AI- and machine learning-enabled medical devices, ranging from 86% in both 2020 and 2022 to 79% in 2023

But in the new data, radiology devices made up only 73% of authorizations from January-March 2024. Other data points indicate that the FDA …

  • Authorized 151 new devices since August 2023
  • Reclassified as AI/ML-enabled 40 devices that were previously authorized 
  • Authorized a total of 882 devices since it began tracking the field 

      In an interesting wrinkle, many of the devices on the updated list are big-iron scanners that the FDA has decided to classify as AI/ML-enabled devices. 

      • These include CT and MRI scanners from Siemens Healthineers, ultrasound scanners from Philips and Canon Medical Systems, an MRI scanner from United Imaging, and the recently launched Butterfly iQ3 POCUS scanner. 

      The additions could be a sign that imaging OEMs increasingly are baking AI functionality into their products at a basic level, blurring the line between hardware and software. 

      The Takeaway

      It should be no cause for panic that radiology’s share of AI/ML authorizations is declining as other medical specialties catch up to the discipline’s head start. The good news is that the FDA’s latest figures show how AI is becoming an integral part of medicine, in ways that clinicians may not even notice.

      Fine-Tuning AI for Breast Screening

      AI has shown in research studies that it can help radiologists interpret breast screening exams, but for routine clinical use many questions remain about the optimal AI parameters for catching the most cancers while generating the fewest callbacks. Fortunately, a massive new study out of Norway in Radiology: Artificial Intelligence provides some guidance. 

      Recent research such as the MASAI trial has already demonstrated that AI can help reduce the number of screening mammograms radiologists have to review, and for many low-risk cases eliminate the need for double-reading, which is commonplace in Europe. 

      • But growing interest in breast screening AI is tempered by the field’s experience with computer-aided detection, which was introduced over 20 years ago but generated many false alarms that slowed radiologists down. 

      Fast forward to 2024. The new generation of breast AI algorithms seems to have addressed CAD’s shortcomings, but it’s still not clear exactly how they can best be used. 

      • Researchers from Norway’s national breast screening program tested one mammography AI tool – Lunit’s Insight MMG – in a study with data obtained from 662k women screened with 2D mammography from 2004 to 2018. 

      Researchers tested AI at a variety of sensitivity and specificity settings based on AI risk scores; in one scenario, exams with risk scores in the top 50% were classified as positive for cancer, while in another that threshold was set to the top 10% (a toy sketch of the threshold mechanics appears after the results list). The group found …

      • At the 50% cutoff, AI would correctly identify 99% of screen-detected cancers and 85% of interval cancers
      • At the 10% cutoff, AI would detect 92% of screen-detected cancers and 45% of interval cancers 
      • AI understandably performed better at classifying false-positive cases as negative at the 10% threshold than at the 50% threshold (69% vs. 17%)
      • AI had a higher AUC than double-reading for screen-detected cancers (0.97 vs. 0.88)
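
      To make the cutoff mechanics concrete, here's a toy sketch of the percentile-threshold approach: score every exam, call the top X% positive, and measure sensitivity per cancer type. The score distributions below are synthetic stand-ins (the Norwegian data isn't public), so the printed numbers won't match the study's:

```python
# Toy percentile-threshold triage over synthetic AI risk scores.
import numpy as np

rng = np.random.default_rng(0)
negatives = rng.normal(0.2, 0.15, 100_000)        # exams without cancer
screen_detected = rng.normal(0.8, 0.15, 300)      # cancers found at screening
interval = rng.normal(0.5, 0.20, 100)             # cancers surfacing between rounds

scores = np.concatenate([negatives, screen_detected, interval])
for top_pct in (50, 10):                          # the study's two scenarios
    cutoff = np.percentile(scores, 100 - top_pct) # top-X% operating point
    print(f"top {top_pct}%: "
          f"screen-detected sens {(screen_detected >= cutoff).mean():.0%}, "
          f"interval sens {(interval >= cutoff).mean():.0%}, "
          f"negatives correctly below cutoff {(negatives < cutoff).mean():.0%}")
```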

      How generalizable is the study? It’s worth noting that the research relied on AI analysis of 2D mammography, which is prevalent in Europe (most mammography in the US employs DBT). In fact, Lunit is targeting the US with its recently cleared Insight DBT algorithm rather than Insight MMG. 

      The Takeaway

      As with MASAI, the new study offers an exciting look at AI’s potential for breast screening. Ultimately, it may turn out that there’s no single sensitivity and specificity threshold at which mammography AI should be set; instead, each breast imaging facility might choose the parameters they feel best suit the characteristics of their radiologists and patient population. 

      Headwinds Slow AI Funding

      Venture capital funding of medical imaging AI developers continues to slow. A new report from Signify Research shows that funding declined 19% in 2023, and is off to a slow start in 2024 as well. 

      Signify tracks VC funding on an annual basis, and previous reports from the UK firm showed that AI investment peaked in 2021 and has been declining ever since. 

      • The report’s author, Signify analyst Ellie Baker, sees a variety of factors behind the decline, chief among them macroeconomic headwinds such as tighter access to capital due to higher interest rates. 

      Total Funding Value Drops – Total funding for 2023 came in at $627M, down 19% from $771M in 2022. Funding hit a peak in 2021 at $1.1B.

      Deal Volume Declines – The number of deals in 2023 fell to 35, down 30% from 50 the year before. Deal volume peaked in 2021 at 63. And 2024 isn’t off to a great start, with only five deals recorded in the first quarter. 

      Deals Are Getting Bigger – Despite the declines, the average deal size grew last year, to $19M, up 23% versus $15M in 2022. 

      HeartFlow Rules the Roost – HeartFlow raised the most in 2023, fueled by a massive $215M funding round in April 2023, while Cleerly held the crown in 2022.

      US Funding Dominates – On a geographic basis, funding is shifting away from Europe (-46%) and Asia-Pacific (no 2023 deals) and back to the Americas, which generated over 70% of the funding raised last year. This may be due to the US providing faster technology uptake and more routes to reimbursement.

      Early Bird Gets the Worm – Unlike past years in which later-stage funding dominated, 2024 has seen a shift to early-stage deals with seed funding and Series A rounds, such as AZmed’s $16M deal in February 2024. 

      $100M Club Admits New Members – Signify’s exclusive “$100M Club” of AI developers has expanded to include Elucid and RapidAI. 

      The Takeaway

      Despite the funding drop, Signify still sees a healthy funding environment for AI developers ($627M is definitely a lot of money). That said, AI software developers are going to have to make a stronger case to investors regarding revenue potential and a path to ROI. 

      Nuclear Medicine’s AI Uptake

      Nuclear medicine is one of the more venerable medical imaging technologies. Artificial intelligence is one of the newest. How are the two getting on? That question is explored in new point-counterpoint articles in AJR.

      Nuclear medicine was an early adopter of computerized image processing, for tasks like image analysis, quantification, and segmentation, giving rise to a cottage industry of niche software developers.

      • But this early momentum hasn’t carried over into the AI age: on the FDA’s list of 694 cleared AI medical applications through July 2023, 76% of the listed devices are classified as radiology, while just four address nuclear medicine and PET.

      In the AJR articles, the position that AI in nuclear medicine is more hype than reality is taken by Eliot Siegel, MD, and Michael Morris, MD, who note that software has already been developed for most of the image analysis tasks that nuclear medicine physicians need. 

      • At the same time, Siegel and Morris say the development of AI-type algorithms like convolutional neural networks and transformers has been “relatively slow” in nuclear medicine. 

      Why the slow uptake? One big reason is the lack of publicly available nuclear medicine databases for algorithm training. 

      • Also, nuclear medicine’s emphasis on function rather than anatomical changes means fewer tasks requiring detection of subtle changes.

      On the other side of the coin, Babak Saboury, MD, and Munir Ghesani, MD, take a more optimistic view of AI in nuclear medicine, particularly thanks to the booming growth in theranostics. 

      • New commercial AI applications to guide the therapeutic use of radiopharmaceuticals are being developed, and some have received FDA clearance. 

      As for the data shortage, groups like SNMMI are collaborating with agencies and institutions to create registries – such as for theranostics – to help train algorithms. 

      • They note that advances are already underway for AI-enhanced applications such as improving image quality, decreasing radiation dose, reducing imaging time, quantifying disease, and aiding radiation therapy planning. 

      The Takeaway

      The AJR articles offer a fascinating perspective on an area of medical imaging that’s often overlooked. While nuclear medicine may never have the broad impact of anatomy-based modalities like MRI and CT, growth in exciting areas like theranostics suggests that it will attract AI developers to create solutions for delivering better patient care.

      Real-World AI Experiences

      Clinical studies showing that AI helps radiologists interpret medical images are great, but how well does AI work in the real world – and what do radiologists think about it? These questions are addressed in a new study in Applied Ergonomics that takes a deep dive into the real-world implementation of a commercially available AI algorithm at a German hospital. 

      A slew of clinical studies supporting AI were published in 2023, from the MASAI study on AI for breast screening to smaller studies on applications like opportunistic screening and predicting who should get lung cancer screening.

      • But even an AI algorithm with the best clinical evidence behind it could fall flat if it’s difficult to use and doesn’t integrate well with existing radiology workflow.

      To gain insight into this issue, the new study tracked University Hospital Bonn’s implementation of Quantib’s Prostate software for interpreting and documenting prostate MRI scans (Quantib was acquired by RadNet in January 2022). 

      • Researchers described the solution as providing partial automation of prostate MRI workflow, such as helping segment the prostate, generating heat maps of areas of interest, and automatically producing patient reports based on lesions it identifies. 

      Quantib Prostate was installed at the hospital in the spring of 2022, with nine radiology residents and three attending physicians interviewed before and after implementation, finding…

      • All but one radiologist had a positive attitude toward AI before implementation; the remaining one was undecided 
      • After implementation, seven said their attitudes were unchanged, one was disappointed, and one saw their opinion shift positively
      • Use of the AI was inconsistent, with radiologists adopting different workflows – some used it all the time, others only occasionally
      • Major concerns cited included workflow delays due to AI use, additional steps required such as sending images to a server, and unstable performance

      The findings prompted the researchers to conclude that AI is likely to be implemented and used in the real world differently than in clinical trials, and that radiologists should be included in AI algorithm development to provide insight into the workflows where the tools will be used.

      The Takeaway

      The new study is unique in that – rather than focusing on AI algorithm performance – it concentrated on the experiences of radiologists using the software and how they changed following implementation. Such studies can be illuminating as AI developers seek broader clinical use of their tools. 

      AI Models Go Head-to-Head in Project AIR Study

      One of the biggest challenges in assessing the performance of different AI algorithms is the varying conditions under which AI research studies are conducted. A new study from the Netherlands published this week in Radiology aims to correct that by testing a variety of AI algorithms head-to-head under similar conditions. 

      There are over 200 AI algorithms on the European market (and even more in the US), many of which address the same clinical condition. 

      • Therefore, hospitals looking to acquire AI can find it difficult to assess the diagnostic performance of different models. 

      The Project AIR initiative was launched to fill the gap in accurate assessment of AI algorithms by creating a Consumer Reports-style testing environment that’s consistent and transparent.

      • Project AIR researchers have assembled a validated database of medical images for different clinical applications, against which multiple AI algorithms can be tested; to ensure generalizability, images have come from different institutions and were acquired on equipment from different vendors. 

      In the first test of the Project AIR concept, a team led by Kicky van Leeuwen of Radboud University Medical Centre in the Netherlands invited AI developers to participate, with nine products from eight vendors validated from June 2022 to January 2023: two models for bone age prediction and seven algorithms for lung nodule assessment (one vendor participated in both tests). Results included:

      • For bone age analysis, both of the tested algorithms (Visiana and Vuno) showed “excellent correlation” with the reference standard, with an r correlation coefficient of 0.987-0.989 (1 = perfect agreement)
      • For lung nodule analysis, there was a wider spread in AUC between the algorithms and human readers, with humans posting a mean AUC of 0.81
      • Researchers found performance superior to the human mean for Annalise.ai (0.90), Lunit (0.93), Milvue (0.86), and Oxipit (0.88)
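
      For reference, the two headline metrics here – Pearson's r for continuous outputs like bone age, and AUC for binary tasks like nodule classification – can be computed as below. The arrays are invented toy data, not Project AIR's:

```python
# Compute Pearson r and AUC on toy data (not Project AIR's).
import numpy as np
from sklearn.metrics import roc_auc_score

bone_age_ref = np.array([5.0, 7.5, 10.0, 12.5, 15.0])  # reference standard, years
bone_age_ai = np.array([5.2, 7.4, 10.3, 12.1, 15.2])   # hypothetical model output
r = np.corrcoef(bone_age_ref, bone_age_ai)[0, 1]       # 1 = perfect agreement

nodule_label = np.array([0, 0, 1, 0, 1, 1, 0, 1])      # 1 = malignant
nodule_score = np.array([0.1, 0.3, 0.8, 0.2, 0.6, 0.9, 0.4, 0.7])
auc = roc_auc_score(nodule_label, nodule_score)

print(f"Pearson r = {r:.3f}, AUC = {auc:.2f}")
```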

      What’s next on Project AIR’s testing agenda? Van Leeuwen told The Imaging Wire that the next study will involve fracture detection. Meanwhile, interested parties can follow along on leaderboards for both bone age and lung nodule use cases. 

      The Takeaway

      Head-to-head studies like the one conducted by Project AIR may make many AI developers squirm (several that were invited declined to participate), but they are a necessary step toward building the clinician confidence in AI algorithm performance that widespread adoption will require. 

      Top 12 Radiology Trends for 2024

      What will be the top radiology trends for 2024? We talked to key opinion leaders across the medical imaging spectrum to get their opinions on the technologies, clinical applications, and regulatory developments that will shape the specialty for the next 12 months.

      AI – Generative AI to Reduce Radiology’s Workload: “New generative AI methods will summarize complex medical records, draft radiology reports from images, and explain radiology reports to patients using language they understand. These innovative systems will reduce our workload and will provide more time for us to connect with our colleagues and our patients.” — Curtis Langlotz, MD, PhD, Stanford University and president, RSNA 2024

      AI – Generative AI Will Get Multimodal: “In 2024, we can expect continued innovations in generative AI with a greater emphasis on integrating GenAI into existing and new radiology and patient-facing applications with growing interests in retrieval-augmented generation, fine-tuning, smaller models, multi-model routing, and AI assistants. Medicine being multimodal, the term ‘multimodal’ will become more ubiquitous.” — Woojin Kim, MD, CMIO at Rad AI

      AI – Will AI Really Reduce Radiology Burnout? “Burnout will continue to be a huge issue in radiology with no solution in sight. AI vendors will offer algorithms as solutions to burnout with catchy slogans such as ‘buy our lung nodule detector and become the radiologist your parents wanted you to be.’ Their enthusiasm will cause even more burnout.” — Saurabh Jha, MBBS, AKA RogueRad, Hospital of the University of Pennsylvania

      Breast Imaging – Prepare Now for Density Reporting: “The FDA ‘dense breast’ reporting standard to patients becomes effective on September 10, 2024, and breast imaging centers should be prepared for new patient questions and conversations. A plan for a consistent approach to recommending supplemental screening and facilitating ordering of additional imaging from referring providers should be put into action.” — JoAnn Pushkin, executive director, DenseBreast-info.org

      Breast Imaging – Density Reporting to Spur Earlier Detection: “In March 2023, FDA issued a national requirement for reporting breast density to patients and referring providers after mammography. Facilities performing mammograms must meet the September 2024 deadline incorporating breast density type and associated breast cancer risk in their reporting. This change can lead to earlier breast cancer detection as these patients will be informed of supplemental screening as it relates to their breast density and [will] choose to pursue it.” — Stamatia Destounis, MD, Elizabeth Wende Breast Care and chair, ACR Breast Imaging Commission

      CT – Lung Cancer Screening to Build Momentum: “Uptake of LDCT screening for lung cancer will increase in the US and worldwide. AI-enabled cardiac evaluation, even on non-gated scans, will allow for prediction of illnesses such as AFib and heart failure.  Quantifying measurement error across platforms will become an important aspect of nodule management.” — David Yankelevitz, MD, Icahn School of Medicine at Mount Sinai Health System

      CT – Photon-Counting CT to Expand: “In 2024, we will continue to see many papers published on photon-counting CT, strengthening the body of scientific evidence as to its many strengths. Results from clinical trials involving multiple manufacturers’ systems will also increase in number, perhaps leading to more commercial systems entering the market.” — Cynthia McCollough, PhD, director, CT Clinical Innovation Center, Mayo Clinic

      Enterprise Imaging – Time is Ripe for Cloud and AI: “Healthcare has an opportunity for change in 2024, and imaging is ripe for disruption, with burnout, staffing challenges, and new technology needs. Many organizations are expanding their enterprise imaging strategy and are asking how and where they can take the plunge into cloud and AI. Vendors have got the message; now it’s time to push the gas and deliver.” — Monique Rasband, VP of strategy & research, imaging/oncology at KLAS

      Imaging IT – Data Brokerage to Go Mainstream: “A new market will hit the mainstream in 2024 – radiology data brokerage. As data-hungry LLMs scale up and the use of companion diagnostics in lifesciences proliferates, health systems will look to cash in on curated radiology data. This will also be an even bigger driver for migration to cloud-based imaging IT.” — Steve Holloway, managing director, Signify Research     

      MRI – Prostate MRI to Reduce Biopsies: “Prostate MRI in conjunction with PSMA PET will explode in 2024 and reduce the number of unnecessary biopsies for patients.” — Stephen Pomeranz, MD, CEO of ProScan Imaging and chair, Naples Florida Community Hospital Network 

      Theranostics – New Radiotracers to Drive Diagnosis & Treatment: “Through 2024, nuclear medicine theranostics will increasingly be integrated into standard global practice. With many new radiopharmaceuticals in development, theranostics promise early diagnosis and precision treatment for a broadening range of cancers, expanding options for patients resistant to traditional therapies. Treatments will be enhanced by personalized dosimetry, artificial intelligence, and combination therapies.” — Helen Nadel, MD, Stanford University and president, SNMMI 2023-2024

      Radiology Operations – Reimbursement Challenges Continue: “In 2024, we will continue to experience recruitment challenges coupled with decreases in reimbursement. Now, more than ever, every radiologist needs to be diligent in advocating for the specialty, focus on business plan diversification, and ensure all services rendered are optimally documented and billed.” — Rebecca Farrington, chief revenue officer, Healthcare Administrative Partners 

      The Takeaway
      To paraphrase Robert F. Kennedy, radiology is indeed living in interesting times – times of “danger and uncertainty,” but also times of unprecedented creativity and innovation. In 2024, radiology will get a much better glimpse of where these trends are taking us.
