Mammography AI Predicts Cancer Before It’s Detected

A new study highlights the predictive power of AI for mammography screening – before cancers are even detected. Researchers in a study in JAMA Network Open found that risk scores generated by Lunit’s Insight MMG algorithm predicted which women would develop breast cancer – years before radiologists found it on mammograms.

Mammography image analysis has always been one of the most promising use cases for AI – even dating back to the days of computer-aided detection in the early 2000s. 

  • Most mammography AI developers have focused on helping radiologists identify suspicious lesions on mammograms, or triage low-risk studies so they don’t require extra review.

But a funny thing has happened during clinical use of these algorithms – radiologists found that AI-generated risk scores appeared to predict future breast cancers before they could be seen on mammograms. 

  • Insight MMG marks areas of concern and generates a risk score of 0-100 for the presence of breast cancer (higher numbers are worse). 

Researchers decided to investigate the risk scores’ predictive power by applying Insight MMG to screening mammography exams acquired in the BreastScreen Norway program over three biennial rounds of screening from 2004 to 2018. 

  • They then correlated AI risk scores to clinical outcomes in exams for 116k women for up to six years after the initial screening round.

Major findings of the study included … 

  • AI risk scores were higher for women who later developed cancer, 4-6 years before the cancer was detected.
  • The difference in risk scores increased over three screening rounds, from 21 points in the first round to 79 points in the third round.
  • Risk scores had very high accuracy by the third round (AUC=0.93 – see the sketch after this list).
  • AI scores were more accurate than existing risk tools like the Tyrer-Cuzick model.
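
For context, AUC has a concrete interpretation: it’s the probability that a randomly chosen woman who later developed cancer received a higher score than a randomly chosen woman who didn’t. Here’s a minimal sketch of that calculation, using made-up score distributions rather than the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0-100 risk scores for illustration only
cancer_scores = rng.normal(65, 15, 500).clip(0, 100)
normal_scores = rng.normal(33, 15, 5000).clip(0, 100)

# AUC via the Mann-Whitney formulation: the probability that a
# cancer case outscores a normal case (ties count as half)
diff = cancer_scores[:, None] - normal_scores[None, :]
auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
print(f"AUC = {auc:.2f}")  # ~0.93 for these toy distributions
```

An AUC of 0.5 would mean the scores carry no signal; 0.93 means the score ordering is right the vast majority of the time.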

How could AI risk scores be used in clinical practice? 

  • Women without detectable cancer but with high scores could be directed to shorter screening intervals or screening with supplemental modalities like ultrasound or MRI.

The Takeaway
It’s hard to overstate the significance of the new results. While AI for direct mammography image interpretation still seems to be having trouble catching on (just like CAD did), risk prediction is a use case that could direct more effective breast screening. The study is also a major coup for Lunit, continuing a string of impressive clinical results with the company’s technology.

AI Recon Cuts CT Radiation Dose

Artificial intelligence got its start in radiology as a tool to help medical image interpretation, but much of AI’s recent progress is in data reconstruction: improving images before radiologists even get to see them. Two new studies underscore the potential of AI-based reconstruction to reduce CT radiation dose while preserving image quality. 

Radiology vendors and clinicians have been remarkably successful in reducing CT radiation dose over the past two decades, but there’s always room for improvement. 

  • In addition to adjusting CT scanning protocols like tube voltage and current, data reconstruction protocols have been introduced to take images acquired at lower radiation levels and “boost” them to look like full-dose images. 

The arrival of AI and other deep learning-based technologies has turbocharged these efforts. 

In the first study, researchers compared GE HealthCare’s DLIR (Deep Learning Image Reconstruction) algorithm operating at high strength to the company’s older ASiR-V protocol in CCTA scans with lower tube voltage (80 kVp), finding that deep learning reconstruction led to … (the percentages are derived in the sketch after this list)

  • 42% reduction in radiation dose (2.36 vs. 4.07 mSv).
  • 13% reduction in contrast dose (50 mL vs. 58 mL).
  • Better signal- and contrast-to-noise ratios.
  • Higher image quality ratings.
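
As a quick sanity check, those percentages follow directly from the reported values – a trivial sketch:

```python
def pct_reduction(new, old):
    """Relative reduction of `new` versus `old`, in percent."""
    return (old - new) / old * 100

print(f"radiation dose: {pct_reduction(2.36, 4.07):.1f}% lower")  # ~42%
print(f"contrast dose:  {pct_reduction(50, 58):.1f}% lower")      # ~14%; the study reports 13%
```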

In the second study, researchers from China including two employees of United Imaging Healthcare used a deep learning reconstruction algorithm to test ultralow-dose CT scans for coronary artery calcium scoring. 

  • They wanted to see if CAC scoring could be performed with lower tube voltage and current (80 kVp/20 mAs) and how the protocol compared to existing low-dose scans.
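
For context, CAC scoring conventionally means Agatston scoring: each calcified lesion’s area on an axial slice is multiplied by a weight based on its peak attenuation, and the products are summed. A minimal sketch of that convention – a generic illustration, not United Imaging’s implementation:

```python
import numpy as np

def agatston_score(hu_slices, lesion_masks, pixel_area_mm2):
    """Approximate Agatston scoring: for each calcified lesion on each
    axial slice, multiply its area by a density weight taken from the
    lesion's peak attenuation, then sum over all lesions and slices."""
    def density_weight(peak_hu):
        if peak_hu >= 400: return 4
        if peak_hu >= 300: return 3
        if peak_hu >= 200: return 2
        if peak_hu >= 130: return 1  # conventional calcium threshold
        return 0

    total = 0.0
    for hu, mask in zip(hu_slices, lesion_masks):
        values = hu[mask]                    # HU values inside one lesion
        if values.size == 0:
            continue
        area_mm2 = values.size * pixel_area_mm2
        total += area_mm2 * density_weight(values.max())
    return total

# Toy usage: one slice with a single three-pixel lesion
slice_hu = np.array([[450, 200], [135, 40]])
print(agatston_score([slice_hu], [slice_hu >= 130], pixel_area_mm2=0.25))  # 3.0
```

The deep learning reconstruction sits upstream of this arithmetic: the idea is to denoise the ultralow-dose images enough that lesion segmentation and peak HU values – and therefore the score – are unchanged.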

In tests with 156 patients, they found the ultralow-dose protocol produced …

  • Lower radiation dose (0.09 vs. 0.49 mSv).
  • No difference in CAC scoring or risk categorization. 
  • Higher contrast-to-noise ratio.

The Takeaway

AI-based data reconstruction gives radiologists the best of both worlds: lower radiation dose with better-quality images. These two new studies illustrate AI’s potential for lowering CT dose to previously unheard-of levels, with major benefits for patients.

Imaging News from ESC 2024

The European Society of Cardiology annual meeting concluded on September 2 in London, with around 32k clinicians from 171 countries attending some 4.4k presentations. Organizers reported that attendance finally rebounded to pre-COVID numbers. 

While much of ESC 2024 focused on treatments for cardiovascular disease, diagnosis with medical imaging still played a prominent role. 

  • Cardiac CT dominated many ESC sessions, and AI showed it is nearly as hot in cardiology as it is in radiology. 

Major imaging-related ESC presentations included…

  • A track on cardiac CT that underscored CT’s prognostic value:
    • Myocardial revascularization patients who got FFR-CT had lower hazard ratios for MACE and all-cause mortality (HR=0.73 and 0.48).
    • Incidental coronary artery anomalies appeared on 1.45% of CCTA scans for patients with suspected coronary artery disease.
  • AI flexed its muscles in a machine learning track:
    • AI analysis of low-dose CT scans had an AUC of 0.95 for predicting pulmonary congestion, a sign of acute heart failure. 
    • Echocardiography AI identified HFpEF with a higher AUC than clinical models (0.75 vs. 0.69).
    • AI analysis of transthoracic echo detected hypertrophic cardiomyopathy with AUC=0.85.

Another ESC hot topic was CT for calculating coronary artery calcium (CAC) scores, a possible predictor of heart disease. Sessions found … 

  • AI-generated volumetry of cardiac chambers based on CAC scans better predicted cardiovascular events than Agatston scores over 15 years of follow-up in an analysis of 5.8k patients from the MESA study. 
  • AI-CAC with CT was comparable to cardiac MRI read by humans for predicting atrial fibrillation (0.802 vs. 0.798) and stroke (0.762 vs. 0.751) over 15 years, which could give an edge to AI-CAC given its automated nature.
  • An AI algorithm enabled opportunistic CAC quantification from non-gated chest CT scans of 631 patients, finding high CAC scores in 13%. Many got statins, while 22 got additional imaging and two got interventions.
  • AI-generated CAC scores were also highlighted in a Polish study, detecting CAC on contrast CT at a rate comparable to humans on non-contrast CT (77% vs. 79%), possibly eliminating the need for an additional non-contrast scan.  

The Takeaway

This week’s ESC 2024 sessions demonstrate the vital role of imaging in diagnosing and treating cardiovascular disease. While radiologists may not manage cardiac patients directly, they can always apply advances from other disciplines to their own work.

AI Detects Interval Cancer on Mammograms

In yet another demonstration of AI’s potential to improve mammography screening, a new study in Radiology shows that Lunit’s Insight MMG algorithm detected nearly a quarter of interval cancers missed by radiologists on regular breast screening exams. 

Breast screening is one of healthcare’s most challenging cancer screening programs, and for decades it has been under attack by skeptics who question its life-saving benefit relative to “harms” like false-positive biopsies.  

  • But AI has the potential to change the cost-benefit equation by detecting a higher percentage of early-stage cancers and improving breast cancer survival rates. 

Indeed, 2024 has been a watershed year for mammography AI. 

U.K. researchers used Insight MMG (also used in the BreastScreen Norway trial) to analyze 2.1k screening mammograms, of which 25% were interval cancers (cancers occurring between screening rounds) and the rest normal. 

  • The AI algorithm generates risk scores from 0-100, with higher scores indicating greater likelihood of malignancy. In this study the threshold was set for 96% specificity, equivalent to the average 4% recall rate in the U.K. national breast screening program.
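
In practice, such an operating point is just a percentile of the score distribution on normal exams: pick the cutoff below which 96% of normals fall, and roughly 4% of cancer-free women get recalled. A minimal sketch with hypothetical scores (not Lunit’s actual cutoff):

```python
import numpy as np

rng = np.random.default_rng(1)
normal_scores = rng.beta(2, 8, 100_000) * 100  # hypothetical scores on normal exams

# 96% specificity operating point = 96th percentile of normal-exam scores,
# so ~4% of normal exams score above it and would be recalled
threshold = np.percentile(normal_scores, 96)
recall_rate = (normal_scores > threshold).mean()
print(f"threshold ~{threshold:.0f}, normal recall rate {recall_rate:.1%}")
```

Lowering the specificity target lowers the cutoff and flags more exams – exactly the trade-off the researchers explored at the 90% threshold described below.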

In analyzing the results, researchers found … 

  • AI flagged 24% of the interval cancers and correctly localized 77%.
  • AI localized a higher proportion of node-positive than node-negative cancers (24% vs. 16%).
  • Invasive tumors had higher median risk scores than noninvasive (62 vs. 33), with median scores of 26 for normal mammograms.

Researchers also tested AI at a lower specificity threshold of 90%. 

  • AI detected more interval cancers at this level, but in real-world practice this would bump up recall rates.  

It’s also worth noting that Insight MMG is designed for the analysis of 2D digital mammography, which is more common in Europe than DBT. 

  • For the U.S., Lunit is emphasizing its recently cleared Insight DBT algorithm, which may perform differently.  

The Takeaway

As with the MASAI and BreastScreen Norway results, the new study points to an exciting role for AI in making mammography screening more accurate with less drain on radiologist resources. But as with those studies, the new results must be interpreted against Europe’s double-reading paradigm, which differs from the single-reading protocol used in the U.S. 

FDA Keeps Pace on AI Approvals

The FDA has updated its list of AI- and machine learning-enabled medical devices that have received regulatory authorization. The list is a closely watched barometer of the health of the AI sector, and the update shows the FDA is keeping a brisk pace of authorizations.

The FDA has maintained double-digit growth of AI authorizations for the last several years, a pace that reflects the growing number of submissions it’s getting from AI developers. 

  • Indeed, data compiled by regulatory expert Bradley Merrill Thompson show how the number of FDA authorizations has been growing rapidly since the dawn of the medical AI era around 2016. 

The new FDA numbers show that …

  • The FDA has now authorized 950 AI/ML-enabled devices since it began keeping track.
  • Device authorizations are up 15% for the first half of 2024 compared to the same period the year before (107 vs. 93).
  • The pace could grow even faster in late 2024 – in 2023, the FDA authorized 126 devices in the second half, up 35% over the first half.
  • At that pace, the FDA should hit just over 250 total authorizations in 2024 (see the sketch after this list).
  • This would represent 14% growth over the 220 authorizations in 2023, and compares to growth of 14% in 2022 and 15% in 2021.
  • As with past updates, radiology makes up the lion’s share of AI/ML authorizations, but it had a 73% share in the first half, down from 80% for all of 2023.
  • Siemens Healthineers led all H1 2024 clearances with 11, bringing its total to 70 (66 for Siemens and four for Varian). GE HealthCare remains the overall leader with 80 total clearances after adding three in H1 2024 (GE’s total includes companies it has acquired, like Caption Health and MIM Software). There’s a big drop-off after GE and Siemens, with Canon Medical (30), Aidoc (24), and Philips (24) next.
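
The 250-authorization projection is simple arithmetic on the figures above – a quick sketch:

```python
h1_2023, h2_2023 = 93, 126   # 2023 half-year authorization counts
h1_2024 = 107                # first half of 2024
total_2023 = 220             # FDA's full-year 2023 count

h2_uplift = h2_2023 / h1_2023           # 2023's second-half acceleration, ~1.35x
proj_2024 = h1_2024 * (1 + h2_uplift)   # ~252 total devices
growth = proj_2024 / total_2023 - 1     # ~14-15% over 2023

print(f"projected 2024 total: {proj_2024:.0f} ({growth:.1%} growth)")
```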

The FDA’s list includes both software-only algorithms as well as hardware devices like scanners that have built-in AI capabilities, such as a mobile X-ray unit that can alert users to emergent conditions. 

  • Indeed, many of the authorizations on the FDA’s list are for updated versions of already-cleared products rather than brand-new solutions – a trend that tends to inflate radiology’s share of approvals.

The Takeaway

The new FDA numbers on AI/ML regulatory authorizations are significant not only for revealing the growth in approvals, but also because the agency appears to be releasing the updates more frequently – perhaps a sign it is practicing what it preaches when it comes to AI openness and transparency. 

Better Prostate MRI with AI

A homegrown AI algorithm was able to detect clinically significant prostate cancer on MRI scans with the same accuracy as experienced radiologists. In a new study in Radiology, researchers say the algorithm could improve radiologists’ ability to detect prostate cancer on MRI, with fewer false positives.

In past issues of The Imaging Wire, we’ve discussed the need to improve on existing tools like PSA tests to make prostate cancer screening more precise with fewer false positives and less need for patient work-up.

  • Adding MRI to prostate screening protocols is a step forward, but MRI is an expensive technology that requires experienced radiologists to interpret.

Could AI help? In the new study, researchers tested a deep learning algorithm developed at the Mayo Clinic to detect clinically significant prostate cancer on multiparametric (mpMRI) scans.

  • In an interesting wrinkle, the Mayo algorithm does not indicate tumor location, so a second algorithm – called Grad-CAM – was employed to localize tumors.
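
Grad-CAM is a widely used technique for asking a classification network where it looked: gradients of the class score are pooled into per-channel weights on the final convolutional feature maps, and the weighted, rectified sum becomes a heatmap. A minimal PyTorch sketch of the technique, using a generic ResNet as a stand-in since the Mayo model isn’t public:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()  # stand-in for the Mayo classifier
feats, grads = {}, {}

# Hooks capture the last conv stage's activations and their gradients
model.layer4.register_forward_hook(lambda m, i, o: feats.update(maps=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(maps=go[0]))

x = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed MRI slice
model(x)[0, 1].backward()         # backprop the "cancer" logit (assumed index 1)

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU
weights = grads["maps"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["maps"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

Thresholding the heatmap yields a candidate tumor region – the kind of localization the study checked against its true-positive exams.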

The Mayo algorithm was trained on a population of 5k patients with a cancer prevalence similar to a screening population, then tested in an external test set of 204 patients, finding …

  • No statistically significant difference in performance between the Mayo algorithm and radiologists based on AUC (0.86 vs. 0.84, p=0.68)
  • The highest AUC was with the combination of AI and radiologists (0.89, p<0.001)
  • The Grad-CAM algorithm was accurate in localizing 56 of 58 true-positive exams

An editorial noted that the study employed the Mayo algorithm on multiparametric MRI exams.

  • Prostate cancer imaging is moving from mpMRI toward biparametric MRI (bpMRI) due to its faster scan times and lack of contrast, and if validated on bpMRI, AI’s impact could be even more dramatic.

The Takeaway
The current study illustrates the exciting developments underway to make prostate imaging more accurate and easier to perform. It also supports a technology evolution that could one day make prostate cancer screening a more widely accepted test.

US + Mammo vs. Mammo + AI for Dense Breasts

Artificial intelligence may represent radiology’s future, but for at least one clinical application traditional imaging seems to be the present. In a new study in Radiology, ultrasound was more effective than AI for supplemental imaging of women with dense breast tissue. 

Dense breast tissue has long presented problems for breast imaging specialists. 

  • Women with dense breasts are at higher risk of breast cancer, but traditional screening modalities like X-ray mammography don’t work very well (sensitivity of 30-48%), creating the need for supplemental imaging tools like ultrasound and MRI.

In the new study, researchers from South Korea tested the use of Lunit’s Insight MMG mammography AI algorithm in 5.7k women without symptoms who had breast tissue classified as heterogeneously dense (63%) or extremely dense (37%). 

  • AI’s performance was compared to both mammography alone as well as to mammography with ultrasound, one of the gold-standard modalities for imaging women with dense breasts. 

All in all, researchers found …

  • Mammography with AI had lower sensitivity than mammography with ultrasound but slightly better than mammography alone (61% vs. 97% vs. 58%)
  • Mammography with AI had a lower cancer detection rate per 1k women but higher than mammography alone (3.5 vs. 5.6 vs. 3.3)
  • Mammography with AI missed 12 cancers detected with mammography with ultrasound
  • Mammography with AI had the highest specificity (95% vs. 78% vs. 94%)
  • And the lowest abnormal interpretation rate (5% vs. 23% vs. 6%)

The results show that while AI can help radiologists interpret screening mammography for most women, at present it can’t compensate for mammography’s low sensitivity in women with dense breast tissue.

In an editorial, breast radiologists Gary Whitman, MD, and Stamatia Destounis, MD, observed that supplemental imaging of women with dense breasts is getting more attention as the FDA prepares to implement breast density notification rules in September. 

  • They recommended follow-up studies with other AI algorithms, more patients, and a longer follow-up period. 

The Takeaway

As with a recent study on AI and teleradiology, the current research is a good step toward real-world evaluation of AI for a specific use case. While AI in this instance didn’t improve mammography’s sensitivity in women with dense breast tissue, it could carve out a role reducing false positives for these women who get mammography and ultrasound.

AI Detects Incidental PE

In one of the most famous quotes about radiology and artificial intelligence, Curtis Langlotz, MD, PhD, once said that AI will not replace radiologists, but radiologists with AI will replace those without it. A new study in AJR illustrates his point, showing that radiologists using a commercially available AI algorithm had higher rates of detecting incidental pulmonary embolism on CT scans. 

AI is being applied to many clinical use cases in radiology, but one of the more promising is for detecting and triaging emergent conditions that might have escaped the radiologist’s attention on initial interpretations.

  • Pulmonary embolism is one such condition. PE can be life-threatening and occurs in 1.3-2.6% of routine contrast-enhanced CT exams, but radiologist miss rates range from 10% to 75% depending on the patient population.

AI can help by automatically analyzing CT scans and alerting radiologists to PEs so they can be treated quickly; the FDA has authorized several algorithms for this clinical use. 

  • In the new paper, researchers conducted a prospective real-world study of Aidoc’s BriefCase for iPE Triage at the University of Alabama at Birmingham. 

Researchers tracked rates of PE detection in 4.3k patients before and after AI implementation in 2021, finding … 

  • Radiologists saw their sensitivity for PE detection go up after AI implementation (80% vs. 96%) 
  • Specificity was unchanged (99.1% vs. 99.9%, p=0.58)
  • The PE incidence rate went up (1.4% vs. 1.6%)
  • There was no statistically significant difference in report turnaround time before and after AI (65 vs. 78 minutes, p=0.26)

The study echoes findings from 2023, when researchers from UT Southwestern also used the Aidoc algorithm for PE detection, in that case finding that AI cut times for report turnaround and patient waits. 

The Takeaway

While studies showing AI’s value to radiologists are commonplace, many of them are performed under controlled conditions that don’t translate to the real world. The current study is significant because it shows that with AI, radiologists can achieve near-perfect detection of a potentially life-threatening condition without a negative impact on workflow.

Better Prostate MRI Tools

In past issues of The Imaging Wire, we’ve discussed some of the challenges to prostate cancer screening that have limited its wider adoption. But researchers continue to develop new tools for prostate imaging – particularly with MRI – that could flip the script. 

Three new studies focusing on prostate MRI were published in just the last week, two of them involving AI image analysis.

In a new study in The Lancet Oncology, researchers presented results from AI algorithms developed for the Prostate Imaging—Cancer Artificial Intelligence (PI-CAI) Challenge.

  • PI-CAI pitted teams from around the world in a competition to develop the best prostate AI algorithms, with results presented at recent RSNA and ECR conferences. 

Researchers measured the ensemble performance of top-performing PI-CAI algorithms for detecting clinically significant prostate cancer against 62 radiologists who used the PI-RADS system in a population of 400 cases, finding that AI …

  • Had performance superior to radiologists (AUROC=0.91 vs. 0.86)
  • Generated 50% fewer false-positive results
  • Detected 20% fewer low-grade cases 

Broader use of prostate AI could reduce inter-reader variability and the need for experienced radiologists to diagnose prostate cancer.

In the next study, in the Journal of Urology, researchers tested Avenda Health’s Unfold AI cancer mapping algorithm to measure the extent of tumors by analyzing their margins on MRI scans, finding that compared to physicians, AI … 

  • Had higher accuracy for defining tumor margins compared to two manual methods (85% vs. 67% and 76%)
  • Reduced underestimations of cancer extent with a significantly higher negative margin rate (73% vs. 1.6%)

AI wasn’t used in the final study, but this one could be the most important of the three due to its potential economic impact on prostate MRI.

  • Canadian researchers in Radiology tested a biparametric prostate MRI protocol that avoids the use of gadolinium contrast against multiparametric contrast-based MRI for guiding prostate biopsy. 

They compared the protocols in 1.5k patients with prostate lesions undergoing biopsy, finding…

  • No statistically significant difference in PPV between bpMRI and mpMRI for all prostate cancer (55% vs. 56%, p=0.61) 
  • No difference for clinically significant prostate cancer (34% vs. 34%, p=0.97). 

They concluded that bpMRI offers lower costs and could improve access to prostate MRI by making the scans easier to perform.

The Takeaway

The advances in AI and MRI protocols shown in the new studies could easily be applied to prostate cancer screening, making it more economical, accessible, and clinically effective.  

Advances in AI-Automated Echocardiography with Us2.ai

Echocardiography is a pillar of cardiac imaging, but it is operator-dependent and time-consuming to perform. In this interview, The Imaging Wire spoke with Seth Koeppel, Head of Business Development, and José Rivero, MD, RCS, of echo AI developer Us2.ai about how the company’s new V2 software moves the field toward fully automated echocardiography. 

The Imaging Wire: Can you give a little bit of background about Us2.ai and its solutions for automated echocardiography? 

Seth Koeppel: Us2.ai is a company that originated in Singapore. The first version of the software (Us2.V1) received FDA clearance a little over two years ago for an AI algorithm that automates the analysis and reporting of 23 key measurements on echocardiograms for the evaluation of diastolic and systolic function. 

In April 2024 we received an expanded regulatory clearance for more measurements – now a total of 45 measurements are cleared. Counting derived measurements based on those core 45, almost 60 measurements are now fully validated and automated, and with that Us2.V2 is bordering on full automation for echocardiography.

The application is vendor-agnostic – we basically can ingest any DICOM image and in two to three minutes produce a full report and analysis. 

The software replicates what the expert human does during the traditional 45-60 minutes of image acquisition and annotation in echocardiography. Typically, echocardiography involves acquiring images and video at 40 to 60 frames per second, resulting in some cases in up to 100 individual frames from a two- or three-second loop. 

The human expert then scrolls through these images to identify the best end-diastolic and end-systolic frames, manually annotating and measuring them, which is time-consuming and requires hundreds of mouse clicks. This process is very operator-dependent and manual.

And so the advantage the AI has is that it will do all of that in a fraction of the time, it will annotate every image of every frame, producing more data, and it does it with zero variability. 

The Imaging Wire: AI is being developed for a lot of different medical imaging applications, but it seems like it’s particularly important for echocardiography. Why would you say that is? 

José Rivero: It’s well known that healthcare institutions and providers are dealing with a larger number of patients and more complex cases. Echo is basically a pillar of cardiac imaging and really touches every patient throughout the path of care. We bring efficiency to the workflow and clinical support for diagnosis and treatment and follow-ups, directly contributing to enhanced patient care.

Additionally, the variability is a huge challenge in echo, as it is operator-dependent. Much of what we see in echo is subjective, certain patient populations require follow-up imaging, and for such longitudinal follow-up exams you want to remove the inter-operator variability as much as possible.

Seth Koeppel: Echo is ripe for disruption. We are faced with a huge shortage of cardiac sonographers. If you simply go on Indeed.com and you type in “cardiac sonographer,” there’s over 4,000 positions open today in the US. Most of those have somewhere between a $10,000, $15,000, up to $20,000 signing bonus. It is an acute problem.

We’re very quickly approaching a situation where we’re running huge backlogs – months in some situations – to get just a baseline echo. The gold standard for diagnosis is an echocardiogram. And if you can’t perform them, you have patients who are going by the wayside. 

In our current system today, the average tech will do about eight echoes a day. An echo takes 45 to 60 minutes, because it’s so manual and it relies on expert humans. For the past 35 years echo has looked the same – there has been no real innovation other than better image quality, while at the same time more parameters were added, resulting in more things to analyze in that same 45 or 60 minutes. 

This is the first time that we can think about doing echo in less than 45 to 60 minutes, which is a huge enhancement in throughput because it addresses both that shortage of cardiac sonographers and the increasing demand for echo exams. 

It also represents a huge benefit to sonographers, who often suffer repetitive stress injuries due to the poor ergonomics of echo, holding the probe tightly pressed against the patient’s chest in one hand, and the other hand on the cart scrolling/clicking/measuring, etc., which results in a high incidence of repetitive stress injuries to neck, shoulder, wrists, etc. 

Studies have shown that 20-30% of techs leave the field due to work-related injury. If the AI can take on the role of making the majority of the measurements, in essence turning the sonographer into more of an “editor” than a “doer,” it has the potential to significantly reduce injury. 

Interestingly, we saw many facilities move to “off-cart” measurements during COVID to reduce the time the tech was exposed to the patient, and many realized the benefits and maintained this workflow, which we also see in pediatrics, as kids have a hard time lying on the table for 45 minutes. 

So with the introduction of AI in the echo workflow, the technicians acquire the images in 15-20 minutes and, in real time, the images processed via the AI software are all automatically labeled, annotated, and measured. Within 2-3 minutes, a full report is available for the tech to review, adjust (our measurements are fully editable), confirm, and sign off on. 

You can immediately see the benefits of reducing the time the tech has the probe in their hand and the patient spends on the table, and the tech then gets to sit at an ergonomically correct workstation (proper keyboard, mouse, large monitors, chair, etc.) and do their reporting versus on-cart, which is where the injuries occur. 

It’s a worldwide shortage, it’s not just here in the US, we see this in other parts of the world, waitlist times to get an echo could be eight, 10, 12, or more months, which is just not acceptable.

The OPERA study in the UK demonstrated that the introduction of AI echo can tackle this issue. In Glasgow, the wait time for an echo was reduced from 12 months to under six weeks. 

The Imaging Wire: You just received clearance for V2, but your V1 has been in the clinical field for some time already. Can you tell us more about the feedback on the use of V1 by your customers?

José Rivero: Clinically, the focus of V1 was heart failure and pulmonary hypertension. This is a critical step, because with AI, we could rapidly identify patients with heart failure or pulmonary hypertension. 

One big step that has been taken by having the AI hand-in-hand with the mobile device is that you are taking echocardiography out of the hospital. So you can just go everywhere with this technology. 

We demonstrated the feasibility of new clinical pathways using AI echo out of the hospital, in clinics or primary care settings, including novice screening [1, 2] (no previous experience in echocardiography but supported by point-of-care ultrasound including AI guidance and Us2.ai analysis and reporting).

Seth Koeppel: We’re addressing the efficiency problem. Most people are pegging the time savings for the tech on the overall echo somewhere around 15 to 20 minutes, which is significant. In a recent cardiologist-led study from Japan using the Us2.ai software, published in the Journal of Echocardiography, overall time for analysis and reporting fell by 70% [3]. 

The Imaging Wire: Let’s talk about version 2 of the software. When you started working on V2, what were some of the issues that you wanted to address with that?

Seth Koeppel: Version 1, version 2, it’s never changed for us, it’s about full automation of all echo. We aim to automate all the time-consuming and repetitive tasks the human has to do – image labeling and annotation, the clicks, measurements, and the analysis required.

Our medical affairs team works closely with the AI team and the feedback from our users to set the roadmap for the development of our software, prioritizing developments to meet clinical needs and expectations. In V2, we are now covering valve measurements and further enhancing our performance on HFpEF, as demonstrated in comparison to the gold standard, pulmonary capillary wedge pressure (PCWP) [4].

A new version is really about collaborating with leading institutions and researchers, acquiring excellent datasets for training the models until they reach a level of performance producing robust results we can all be confident in. Beyond the software development and training, we also engage in validation studies to further confirm the scientific efficiency of these models.

With V2 we’re also moving now into introducing different protocols, for example, contrast-enhanced imaging, which in the US is significant. We see in some clinics upwards of 50% to 60% use of contrast-enhanced imaging, where we don’t see that in other parts of the world. Our software is now validated for use with ultrasound-enhancing agents, and the measures correlate well.

Stress echo is another big application in echocardiography. So we’ve added that into the package now, and we’re starting to get into disease detection or disease prediction. 

V2 also addresses cardiac amyloidosis (CA): it is aligned with guideline-based measurements for identifying CA, reporting such measurements when found along with the relevant guideline recommendations, to support identification of a condition that could otherwise be missed. 

José Rivero: We are at a point where we are now able to really go into more depth into the clinical environment, going into the echo lab itself, to where everything is done and where the higher volumes are. Before we had 23 measurements, now we are up to 45. 

And again, that can be even a screening tool. If we start thinking about even subdividing things that we do in echocardiography with AI, again, this is expanding to the mobile environment. So there’s a lot of different disease-based assessments that we do. We are now a more complete AI echocardiography assessment tool.

The Imaging Wire: Clinical guidelines are so important in cardiac imaging and in echocardiography. Us2.ai integrates and refers to guideline recommendations in its reporting. Can you talk about the importance of that, and how you incorporate this in the software?

José Rivero: Clinical guidelines play a crucial role in imaging for supporting standardized, evidence-based practice, as well as minimizing risks and improving quality for the diagnosis and treatment of patients. These are issued by experts, and adherence to guidelines is an important topic for quality of care and GDMT (guideline-directed medical therapies).

We are a scientifically driven company, so we recognize that international guidelines and recommendations are of utmost importance; hence, guideline indications are systematically visible, and discrepant measurement values are clearly highlighted.

Seth Koeppel: The beautiful thing about AI in echo is that echo is so structured that it just lends itself so perfectly to AI. If we can automate the measurements, and then we can run them through all the complicated matrices of guidelines, it’s just full automation, right? It’s the ability to produce a full echo report without any human intervention required, and to do it in a fraction of the time with zero variability and in full consideration for international recommendations.
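
To make that concrete, here’s a toy sketch of what running automated measurements through guideline cutoffs can look like – hypothetical field names and outputs, with widely used EF bands and a common diastolic cutoff standing in for the real logic (this is not Us2.ai’s implementation):

```python
# Toy illustration only – not Us2.ai's implementation
measurements = {"lvef_pct": 38, "e_over_e_prime": 15}  # hypothetical AI outputs

def ef_band(lvef):
    """Widely used heart-failure ejection fraction bands."""
    if lvef <= 40: return "reduced EF (HFrEF range)"
    if lvef <= 49: return "mildly reduced EF (HFmrEF range)"
    return "preserved EF"

flags = [ef_band(measurements["lvef_pct"])]
if measurements["e_over_e_prime"] > 14:  # common cutoff suggesting elevated filling pressure
    flags.append("E/e' above guideline cutoff")

print("; ".join(flags))  # reduced EF (HFrEF range); E/e' above guideline cutoff
```

Once the measurements are automated and structured, the guideline logic is deterministic – which is why echo lends itself so well to this kind of automation.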

José Rivero: This is another level of support we provide: the sonographer only has to focus on image acquisition, and the cardiologist doing the overreading and checking the data will have these references brought to his or her attention.

With echo you need to consider every point in the workflow: the sonographer really focusing on image acquisition, and the cardiologist doing the overreading and checking the data. But in the end, those two come together when the cardiologist and the sonographer realize that there’s efficiency on both ends. 

The Imaging Wire: V2 has only been out for a short time now, but has there been research published on its use in the field, and what are clinicians finding?

Seth Koeppel: In V1, our software included a section labeled “investigational,” and some AI measurements were accessible for research purposes only as they had not yet received FDA clearance.

Opening access to these as investigational-research-only has enabled the users to test these out and confirm performance of the AI measurements in independently led publications and abstracts. This is why you are already seeing these studies out … and it is wonderful to see the interest of the users to publish on AI echo, a “trust and verify” approach.

With V2 and the FDA clearance, these measurements, our new features and functionalities, are available for clinical use. 

The Imaging Wire: What about the economics of echo AI?

Seth Koeppel: Reimbursement is still front and center in echo, and people don’t realize how robust it is, partially due to echo being so manual and time-consuming. Hospital echo still reimburses nearly $500 under HOPPS (Hospital Outpatient Prospective Payment System), whereas for a CT today you might get $140 global and an MRI $300-$350 – an echo still pays $500. 

When you think about the dynamic, echo still relies on an expert human who typically makes $100,000-plus a year with benefits. And it takes 45 to 60 minutes. So the economics are such that the reimbursement is held very high. 

But imagine if you can do two or three more echoes per day with the assistance of AI – you can immediately see the ROI. If you can simply do two incremental echoes a day, and there are 254 days in a working year, that’s an incremental 500 echoes. 

If there are 2,080 hours in a year, and we average about an echo every hour, most places are producing about 2,000 echoes – now you’re taking them to 2,500 or more at $500, and that’s an additional $100k per tech. Many hospitals have 8-10 techs scanning in any given day, so it’s a really compelling ROI. 

This is an AI that has both a clinical benefit and a huge ROI. There’s this whole debate out there about who pays for AI and how it gets paid for – this one’s a no-brainer.

The Imaging Wire: If you could step back and take a holistic view of V2, what benefits do you think that your software has for patients as well as hospitals and healthcare systems?

Seth Koeppel: It goes back to just the inefficiencies of echo – you’re taking something that is highly manual, relies on expert humans that are in short supply. It’s as if you’re an expert craftsman, and you’ve been cutting by hand with a hand tool, and then somebody walks in and hands you a power tool. We still need the expert human, who knows where to cut, what to cut, how to cut. But now somebody has given him a tool that allows him to just do this job so much more efficiently, with a higher degree of accuracy. 

Let’s take another example. Strain is something that has been particularly difficult for operators because every vendor, every cart manufacturer, has their own proprietary strain. You can’t compare strain results done on a GE cart to a Philips cart to a Siemens cart. It takes time, you have to train the operators, you have human variability in there. 

In V2, strain is now included, it’s fully automated, and it’s vendor-neutral. You don’t have to buy expensive upgrades to carts to get access to it. So many, many problems are solved just in that one simple set of parameters. 

If we put it all together and look at the potential of AI echo, we can address the backlog and allow for more echo to be done in the echo lab, but also in primary care settings and clinics, where AI echo opens new pathways for screening and detection of heart failure and heart disease at an early stage – early detection for more efficient treatment.

This helps facilities facing the increasing demand for echo support and creates efficient longitudinal follow-up for oncology patients or populations at risk.

In addition, we can open access to echo exams in parts of the world that have neither the expensive carts nor the expert workforce available, and deliver on our mission to democratize echocardiography.

José Rivero: I would say that V2 is a very strong release, which includes contrast, stress echo, and strain. I would love to see all three, along with everything we had in V1, become mainstream, and to see the customer satisfaction with this, because I think it brings a big solution to the echo world. 

The Imaging Wire: As the year progresses, what else can we look forward to seeing from Us2.ai?

José Rivero: In the clinical area, we will continue our work to expand the range of measurements and validate our detection models, but we are also very keen to start looking into pediatric echo.

Seth Koeppel: Our user interface has been greatly improved in V2, and this is something we really want to keep focusing on. We are also working on refining our automated reporting to include customization features, perfecting the report output to further support the clinicians reviewing it, and integrating LLM models to make reporting accessible for non-expert HCPs and the patients themselves. 

REFERENCES

  1. Tromp J, et al. Nurse-led home-based detection of cardiac dysfunction by ultrasound: results of the CUMIN pilot study. European Heart Journal – Digital Health. 2023.
  2. Huang W, Lee A, Tromp J, et al. Point-of-care AI-assisted echocardiography for screening of heart failure (HANES-HF). Journal of the American College of Cardiology. 2023;81(8):2145.
  3. Hirata Y, Nomura Y, Saijo Y, Sata M, Kusunose K. Reducing echocardiographic examination time through routine use of fully automated software: a comparative study of measurement and report creation time. Journal of Echocardiography. 2024.
  4. Yaku H, Komtebedde J, Silvestry FE, Shah SJ. Deep learning-based automated measurements of echocardiographic estimators of invasive pulmonary capillary wedge pressure perform equally to core lab measurements: results from REDUCE LAP-HF II. Journal of the American College of Cardiology. 2024;83(13):316.